AI used to be a tight, quirky community. Having the brain as inspiration led to all sorts of anthropomorphizing, and that was fine: researchers understood what was meant by "learning", "intelligence", and "to perceive" in the context of AI. Nowadays it is almost irresponsible to talk this way, not because you'll confuse your co-researchers, but because popular tech articles will write about chatbots inventing their own language and having to be shut down.
Still, as a business research lab, it is good to get your name out there, so all the wrong incentives are in place: careful researchers avoid anthropomorphizing and lose their source of inspiration -- you cannot be careful with difficult unsolved problems, you need to be a little crazy and "out there". Meanwhile, profit-seeking business engineers and their PR departments obfuscate their progress and basic techniques, all to get that juicy article with "an AI taught itself to X and you won't believe what happened next".
The researchers actually busy solving the hard problems of vision, natural language understanding, and common sense do not have time to write books about how AI is not yet general. Nobody from the research community ever claimed that it was; nobody came forward to claim they've solved these decades-old problems. It is people selling books who rail against the popular reporting of AI. Boring, self-serving, and predictable, and you do not need to fit a curve to see that.
All this quarreling about definitions and Venn diagrams and well-known limitations is dust in the wind. Go figure out what to call it on your PowerPoint presentation by yourself, and quit bothering the community.
I’ve noticed at least as many people under-anthropomorphize as over-anthropomorphize: people who seem obsessed with human exceptionalism and are personally offended at the idea that plants and animals (and computers!) might have subjective experiences like our own.
But to me it seems obvious we are far more like "lower" species than we are unlike them. I would say genuine cases of human exceptionalism are actually extremely rare. The main source of our uniqueness is that we amalgamate other species, not that we have transcended them.
My theory is that we are terrified that we might be simpler than we think, because socially we behave as if we are so singular. If we are simple, and animals and machines are like us, then maybe we should be treating them with more reverence.
But being afraid of that is OK for a random person. For a machine learning researcher I would hope they are more careful about what we have evidence for (the similarities between us) and what we don’t (that there is some ineffable magic about humans).
It's sort of like the color perception problem. Dogs and machines do see colors, but what do they see?
Probably the most cited paper regarding this debate is by Marc Bekoff, "Cognitive Ethology: Slayers, Skeptics, and Proponents" (http://cogprints.org/160/1/199709005.html). Your original comment would be categorized as a "slayer", a position which is widely criticized. In fact, Bekoff's focus is on canines, and he used your exact example with dogs, but to opposite effect.
I do wonder about the theoretical bird scientist trying to figure out the "fixed action patterns" of other animals. If anthropomorphism is the way to go, surely it goes in the other direction in some way.
I don't see a test that majorly distinguishes it from a human. It appears to be following the same process with a few tweaks around the edges. There are some exceptions in the 2-5 situations in Go where a human can actually use optimised logic to determine what will happen; but they aren't the meat of the game.
I don't recall ever reading in a technical paper, or in an interview, a leader in the field of ANNs claim they were thinking. If you have, I'd like to see a reference. Most are fairly honest about the differences between artificial neurons and real ones, and between human cognition and what ANNs are doing with data.
But ML, AFAIK, is so simple; it's literally a glorified polynomial function. The only thing it has going for it is the large data sets we can train it on. It cannot "learn" anything from a small data set and extract any information out of it without a human imposing his/her knowledge on it.
For instance, take the concept of an even number. This simple piece of knowledge is so powerful in solving algorithmic problems, but it's very hard to make a machine learn this concept in general.
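A minimal sketch of the point (my own illustration, not the commenter's; the model, ranges, and hyperparameters are arbitrary): an off-the-shelf network fit on raw integers can track evenness on its training range yet typically collapses to chance just outside it, because nothing in the fitted curve captures "divisible by 2".

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Train on integers 0..999, labeled even/odd; test on an unseen range.
X_train = np.arange(1000).reshape(-1, 1)
y_train = X_train.ravel() % 2
X_test = np.arange(1000, 1100).reshape(-1, 1)
y_test = X_test.ravel() % 2

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

# Accuracy on the unseen range is typically near chance (~0.5): the model
# memorized a wiggly curve rather than learning the concept "even".
print(clf.score(X_test, y_test))
```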
I think the problem is really overestimating how "intelligent" humans are. We are only as intelligent as our imagination allows. It's possible that there is an entire class of intelligence outside of our imagination whose nature we cannot fully grasp. Similarly, I am only conscious with respect to my own consciousness, but there may be another class of consciousness that is unimaginable to this monkey's brain.
My intuition is that there's a lot of important work to be done using logical representations of models and transforming them back and forth using well-understood semantic operators. Deep functions will be part of said models, but the whole model does not necessarily need to be deep. We can already see hints of the field going in this direction in deep generative models.
I personally do not believe in AGI since I also do not believe in psychology, sociology or neurobiology being anywhere near understanding the holistic nature of our own intelligence. We are getting better at emulating human traits for specific tasks with ML. We lack the specific knowledge of what the algorithm should mimic to become equal to us in terms of our intellect though.
All this resulted from evolutionary processes. Any approximation of AI which will deal with other agents will develop something like that and more in order to be competitive, collaborate and survive.
How can we assume that a simulated evolutionary process of a simple mathematical model or some arbitrarily sized multi-dimensional matrices yields similar evolutionary results?
Just think of the ongoing debate about quantum entanglement effects inside neural signaling. On a rather ontological level, we are still unable to formulate even a definition of our consciousness, or of things like creativity, that lasts longer than a few academic decades.
Hi, I work at one of the intersections of machine learning with certain schools of thought in neuroscience. The following is based entirely on my own understanding, but is at least based on an understanding.
Your list here really only has three problems in it: causal reasoning, theory of mind, and "emotional intelligence". Emotional intelligence works in the service of "drive and desire", considered broadly. Creativity likewise works for the emotions. To be creative, you need aesthetic criteria.
Most of that, we're still really working on putting into mathematical and computational terms.
As a take on your interpretation of creativity: I would argue that the act of forming new and valuable propositions is not related to emotion or aesthetics per se.
Aesthetic theory is observing a very narrow subset of creative processes. And even there, our transition from modernism into the uncertainty of the post-modernist world defies any sound definition of the "aesthetic criteria". Yet we perceive aesthetic human-creativity all the time.
In a similar vein is the application of generative machine learning that spurs debate about computational aesthetics today. Nothing proves the incapability of modern ML to form real creativity better than the imitative nature of adversarial networks spitting out (quite beautiful) permutations of the simplified data structures underlying the body of Bach's compositions.
Now we could start on the assumed role of complex neurotransmitters in the creative process of the brain and the trivial way reinforcement learning rewards artificial agents, but that would exceed the scope of this comment.
You can't really separate emotion and aesthetics from the neurotransmitters helping to implement them! They're considerably more complex than anyone usually gives credit for.
Likewise, to form a valuable proposition, you need a sense of value, which is rooted in the same neurological functionality that creates emotion and aesthetics.
I've come to terms with the hype. There are still researchers doing the hard theoretical work, and they will still be toiling away after the next economic downturn. We can all choose every day whether to find fulfillment through seeking attention from other people, money, or satisfying our curiosity to solve problems.
> Nobody from the research community ever claimed that [AGI], nobody came forward to claim they've solved these decades-old problems. It is people selling books railing against the popular reporting of AI. Boring, self-serving, and predictable, and you do not need to fit a curve to see that.
Hear hear! That said, this is a good article by a respected researcher. Here's what LeCun had to say about it,
> ...In general, I think a lot of people who see the field from the outside criticize the current state of affairs without knowing that people in the field actively work on fixing the very aspects they criticize.
> That includes causality, learning from unlabeled data, reasoning, memory, etc. 
Stuart Russell recently published a non-technical book on AI. I really hope tech journalists take note.
You can get involved in this, but it takes real work (i.e. time taken away from your research area) and an honest understanding that the policy issues are their own deep specialty, and that you are likely to be quite naive about them going in.
On the plus side, it makes it fairly easy to ask cocktail-party-caliber questions and quickly suss out whether your conversation partner knows what the hell they're talking about.
You haven't proven this statement. It's possible that within your own brain there is nothing more than a rudimentary curve fitting algorithm that allowed you to see this pattern.
It should be fairly obvious that ‘curve fitting’ is a misleading category—these models are clearly learning highly meaningful latent spaces that no prior approaches ever did. But I would agree that the actual high-level ability to make causal inferences seems to be lacking.
Where I disagree with Pearl is simply with the idea that these stronger models won't emerge through future research. It's too early to say this, after barely a decade of large-scale AI research that has been undergoing continual rapid progress. Greater generality and more powerful models are some of the most well-established goals of the field.
That should be expected. Humans also lack the ability to make causal inference. The vast majority of us have extremely primitive causal reasoning abilities and get even simple causality wrong. Reasoning about causality in complex systems still isn't a solved problem for humans, and we have entire fields within philosophy trying to make sense of the hundreds of paradoxes within causal reasoning. It's not a solved problem, and it's not clear that it ever will be.
In terms of causal reasoning for computers, it's more of a "common sense" problem than a reasoning one. In nice, closed systems we can do symbolic computation and automated theorem proving without mistakes. The only reason this doesn't work in the real world is the lack of axioms and consistency.
Maybe if we had some way of abstracting out the things a machine learning system implicitly learns so we could deal with them in a more classical AI-like way?
I agree with your comment about System 2 like reasoning not being common right now. I am not an expert in the field but the closest thing I have seen to learned planning is: https://arxiv.org/pdf/1911.08265.pdf
* Issue #1: There is a tremendous amount of hype, noise, and snake oil surrounding the moniker "AI." Pretty much everyone agrees with this statement. (And anyone who doesn't agree with it... is probably selling snake oil.)
* Issue #2: Is intelligence just a form of "curve fitting," i.e., is it just finding solutions to very complicated, high-dimensional problems with more and more computation via search and self-play? (Note that DL via SGD is a form of learning via search, RL via state-action-reward mechanisms is a form of learning via search, and multi-model/multi-agent DL/RL are forms of learning via search with self-play.) There is sharp disagreement on the answer to question #2. Is that really all there is to intelligence?
The OP believes the answer to #2 is no: intelligence ought to be more than "curve fitting" via search and self-play.
Others believe the answer to #2 is yes. For a typical example of their thinking, here's Rich Sutton, Distinguished Research Scientist at DeepMind: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
The author is someone called Kurt Mako.
Others believe the success of our program to teaching pigs to fly will greatly improve as we build taller towers.
From that perspective, intelligence is indeed just curve fitting.
I really enjoyed "The Measure of Intelligence" by François Chollet. https://arxiv.org/abs/1911.01547
"We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power."
He argues that we should move towards evaluating "Intelligence as skill-acquisition efficiency".
I agree with him. We should move away from benchmarks that involve training and evaluating algorithms on the same datasets; this is indeed more or less "curve fitting". Instead, we should focus on benchmarking how efficient algorithms are at solving tasks involving completely new datasets, preferably ones unknown even to the developers. For example, the language model GPT-2 was trained to predict the next word given some previous words. After that training, GPT-2 was able to do unrelated things like question answering, translation, etc. GPT-2 does these things very badly, of course, and requires gigabytes of training data, but it is a step towards skill-acquisition efficiency and away from what everyone sees as curve fitting.
We should design benchmarks so that we select for models that are able to solve tasks they were not built to solve.
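A minimal sketch of what that looks like in practice, using the Hugging Face transformers library (my choice of toolkit, not the commenter's): GPT-2 was only ever trained on next-word prediction, yet a prompt can coax it into a crude translation task it was never built for.

```python
from transformers import pipeline  # pip install transformers

generator = pipeline("text-generation", model="gpt2")

# GPT-2 was trained only to predict the next word, yet a few in-context
# examples turn it into a (bad, but nonzero) English-to-French translator.
prompt = ("English: sea otter\nFrench: loutre de mer\n"
          "English: cheese\nFrench:")
out = generator(prompt, max_new_tokens=4, do_sample=False)
print(out[0]["generated_text"])
```

The completion is often wrong, which is exactly the point: the interesting measure is not the skill itself but how cheaply a skill the model wasn't built for can be acquired.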
Do you mean "all computation that can be done, can be done using a Turing machine"? Or do you mean "no one has proven Church-Turing wrong"?
If it's the second - yes, that's so. But so what?
If it's the first, then many people in the quantum computing community will be quite upset. P=BQP? You have proof?
A common issue you find would be confounding. Then, because you haven't identified the latent connection, you may try to increase level A to have an effect on output B, and be disappointed.
This is basically the main point of Judea Pearl's Book of Why: that E(Y|X) != E(Y|do(X)), where do(X) means we set X by intervention.
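A toy simulation of that inequality (a sketch under a structural model of my own choosing, with a hidden confounder Z driving both X and Y):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)            # hidden confounder
x = (z + rng.normal(size=n) > 0)  # Z raises both X...
y = 2 * z + rng.normal(size=n)    # ...and Y; X itself has no effect on Y

# Observational: E[Y | X=1] looks large, because X=1 flags high Z.
print("E[Y | X=1]     =", y[x].mean())   # ~ +1.1

# Interventional: do(X=1) cuts the Z -> X arrow; Y is unaffected.
y_do = 2 * z + rng.normal(size=n)        # Y still doesn't depend on X
print("E[Y | do(X=1)] =", y_do.mean())   # ~ 0
```

Increasing "level A" here (setting X=1 for everyone) does nothing to B, exactly the disappointment described above.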
General intelligence is primarily about developing useful conceptual categories (not mapping to existing ones) and drawing cause-and-effect inferences that assist us in achieving goals.
Curve fitting is just another name for pattern recognition, mapping to previously defined categories. I would personally argue there's no intelligence there whatsoever. Intelligence can't exist without a foundation of pattern recognition, but it isn't the same thing.
Intelligence is fundamentally goal-directed and able to reason, while curve-fitting is fundamentally not.
(There is also unsupervised learning in deep learning, which doesn't use previously defined categories, but since it is similarly non-goal-directed, I would still argue that this is merely dimension reduction as opposed to intelligence -- useful for sure, but not the same.)
One of my favourite examples is the New Caledonian crows who have learned to use traffic to crack nuts. Here, a crow had no pre-defined objective function apart from "eat food to stay alive" and accomplished something remarkable. It found a food source that it had never had access to before, it developed a complex model of its urban environment, it combined its knowledge of the problem (the hard nut shell) with its knowledge of its environment (cars crush small objects), and it constructed a sophisticated strategy for using cars to crack open the nuts and fetching the contents when the traffic lights indicated it was safe to do so.
This is general intelligence!
There are some algorithms, like https://en.wikipedia.org/wiki/K-means_clustering, that take a set of data and try to create the categories that best classify it. There are many such algorithms, and their results don't always agree. But this is an open-ended task, like the classification of biological species in animals. (Plants are more difficult, and bacteria even more so.)
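For instance, a minimal k-means sketch (toy data of my own): the algorithm is handed unlabeled points and invents the categories itself, and the carving changes if you ask for a different number of clusters.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two unlabeled blobs; no categories are given in advance.
data = np.vstack([rng.normal(0, 1, (100, 2)),
                  rng.normal(5, 1, (100, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(np.bincount(labels))  # roughly 100 points per invented category

# Re-running with n_clusters=3 yields a different, equally "valid" carving,
# which is the open-endedness the comment describes.
```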
Second, deep learning models, however much we'd like to think they do, aren't capable of doing proper causal inference in a general setting (they can only operate within the confines of the model) and are therefore far from capable of doing what humans do, and will remain so limited for a long time to come.
AGI will require the curve-fitting of deep learning, a general model of the world, and the causal inference capabilities of something like AlphaGo, but in a general setting, not the super limited world AlphaGo operates in.
So no, AGI will require much more than just curve-fitting abilities.
This is a field called Model-based Reinforcement learning, and it's quite advanced already -- there are indeed models that have an internal state reflecting the world state.
A good recent example:
> deep learning models, however much we'd like to think they do, aren't capable of doing proper causal inference in a general setting
This is also addressed by recent models, somewhat. Once you have an abstract world model, searching for a high reward can be just a matter of running a Markovian simulation on it using high-reward heuristics (given by a network, of course), like AlphaGo does. This line of work is also very active right now; one example is the recent MuZero.
Inference at its core really isn't much more than an artful curve fitting (or an artful model search if you like), and it's one of the building blocks of intelligence.
But you need the right prior structure such that learning and producing action sequences is efficient, or even feasible/reachable. You can see any additional program structure that aids e.g. the generation and recall of memories and planning (production of output targeted at solving a goal) as prior structure that limits and defines the searched program space. You can even regard a planning module as part of the curve fitting, as it simply concerns the last step of producing the output.
Therefore, intelligence is "curve fitting".
So the actual question is: How much additional structure over just a large number of simple repeated units is necessary? Nobody knows. Possibly not much. Possibly quite a bit.
So an ML model running an input through a collection of other models to see if it gets a reasonable answer.
Or to bring it down to Earth in another way, consider just the act of writing a program in the modern world. If you work really, really hard, you can define a space in which our act of programming is just "curve fitting"... but it's far from obvious that that is even remotely a sensible way to look at the world. (See "differentiable programming" for the best counterpoint I know to that: https://en.wikipedia.org/wiki/Differentiable_programming but it's a very small niche right now.) When I'm debugging a program there is almost never any utility at all in trying to think about it as a "curve" and trying to get it closer to the "correct" curve. A Turing-complete-complex space can be described as curves, but those curves are just awfully complicated and I don't see how it would be a help.
My personal suspicion is that while our cognition involves rather less of this "Turing complete" thinking than we'd like to fancy ourselves using, we do irreducibly use elements of it, and as long as our best AI models are incapable of representing Turing-complete computations there is simply no chance of them being the answer to true human-scale cognition. (We do have models that can do it, e.g., evolutionary computation, but we lack any sensible idea of how to "update" such models like a neural net. Neural nets themselves in the simplest case aren't Turing complete, and none of the hybrid models seem to get there to me either, though I welcome correction on that point.)
Evidence: I don't think we could program Turing-complete machines if we were incapable of thinking that way ourselves. We aren't necessarily great at it; our engineering techniques are deeply characterized by the fact that we can't really manipulate very many things at once in this manner, so we have no choice but to break things up into very small modules and combine them in a way that means at any given time we have only a very small number of things to keep track of locally. But we are still doing non-trivially more than zero of the Turing-complete style of thinking. It isn't a hard guess from there to think that even if we aren't all that great at a full mathematical manifestation of this style of thinking, we may indeed be doing something somewhere between what our current neural nets do and this full TC-style thinking at a larger scale, and the inability to capture this in our neural nets is a currently-fatal flaw.
So, you can argue something like our brains are nothing but curve fitting machines with enough parameters, but then you are probably forced to argue that consciousness is very closely related to computation, to the point where a coin flip, or a hello world program, has some sliver of consciousness.
There are of course two possible ways around that: either one can argue for p-zombies, that is, intelligent but not conscious beings, which then seems to require a supernatural explanation for consciousness; or you can argue that the brain is different, and that this gives rise to consciousness and to general intelligence, which is the explanation that at least corresponds most closely to my subjective experience (but that is precisely what a soulless machine would write, isn't it?)
I don't much like this reverent way of thinking about consciousness, as if it were from another world, or of a different essence.
I believe consciousness is the ability of the agent to adapt to the environment in order to protect itself and maximise rewards. It's not just in the matrix multiplication, but in the embodiment, the environment-agent loop. Consciousness is not something that transcends the world and matrices, it's just a power to adapt and survive.
And it feels like something because that feeling has a survival utility, so the agent has a whole neural network to model future possible rewards and actions, which impacts behaviour and outcomes.
Why not? Those things have zero-consciousness that is conscious only of itself and which correctly reflects their lack of self-model.
Obviously other living humans have the highest similarity, so they are automatically deemed conscious. Next are other primates, followed by other domesticated mammals, and other animals.
Furthest from the status of conscious are creatures we see as automata like dung beetles rolling their food, or jellyfish ... jellyfishing.
Presumably we'd apply a similar process to hypothetical AGIs.
It is more likely that consciousness is just a property of brains inside bodies. The interesting question to me is the level of complexity of brain and body required to produce something like what we experience and what is it like in other arrangements of brains and bodies.
I also don't know that intelligence is the great thing that we think it is. It's an adaptation. It's an adaptation that lots of other organisms survive just fine without.
We are all p-zombies. Problem solved.
The longer we refuse to acknowledge that consciousness is nothing special, the longer it will take to tackle this topic. The only reason we cling to the idea that our minds are somehow special compared to those of other animals of various complexity is that we refuse to acknowledge that consciousness might exist in something we can't communicate with, and that consciousness is a sliding scale rather than a binary property. Ascribing consciousness only to ourselves is hubris.
If one spends a bit of time observing humans, they will inevitably realise that some humans are more 'conscious' than others also.
tl;dr: we're all p-zombies. The fact that we think that each one of us isn't individually doesn't detract from that.
Just as natural sciences left less and less hiding places for god to exist, ML is leaving less and less hiding places for this borderline magical version of human-unique consciousness to exist. Answering this question in any more detail requires a much more rigorous definition of consciousness which is a big can of worms in itself.
If I made a list of everything in order of how certain I am that the item on the list exists, consciousness would be at the top by far. Everything else could just be a nice illusion.
To put it another way, if AGI is computable, then we are all p-zombies. And evidence is starting to strongly hint that AGI is computable.
Why do you think the burden of proof should be inverted? The mere fact that most humans intuitively feel "something" doesn't count for much of anything, especially once you stipulate that p-zombies would vote the same way.
He then places causality on a 3-rung scale. The bottom rung is association in data, which is where he says AI is stuck. Then there's intervention, "what if I do this thing?", and then there's counterfactuals, "what if I had done some other thing?"
He then makes a case for what intelligence actually is, and unsurprisingly it's getting up those three rungs. The method revolves around directed graphs, which have certain unintuitive properties. For instance, if A and B can both cause C, knowing that A is unlikely makes B more likely, given that C happened. There are a few other stock situations in various graphs that he walks through as well.
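That "explaining away" effect is easy to verify numerically (a toy simulation of my own, with A and B as rare independent causes of C):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
a = rng.random(n) < 0.1   # rare cause A
b = rng.random(n) < 0.1   # independent rare cause B
c = a | b                 # C happens if either cause fires

print("P(B)            =", b.mean())           # ~0.10
print("P(B | C)        =", b[c].mean())        # ~0.53: C raises belief in B
print("P(B | C, not A) =", b[c & ~a].mean())   # 1.0: ruling out A forces B
```

Given that C happened, the less likely A is, the more likely B becomes, even though A and B are unconditionally independent.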
In the end the point seems to be that we could have a causal machine if we'd spend some more time on it. It would take data and try out some potential graphs, and some of the graphs would be ruled out by the data. And then some algorithm would tell you things like whether a randomized trial would even be necessary (yes, this is another revelation).
I think there's also an argument that this is how people actually think, which makes sense because the graphs are not terribly large and they need to fit in your meat hardware. I haven't finished it but I would guess that you could take it in another direction and say this is why some animals sorta have intelligence in that they learn patterns, but they don't know the higher rungs.
Really interesting ideas, and at least clarifies what we mean by causality.
- There's nothing wrong with curve fitting per se. NNs fit hundreds of curves in parallel and many of them may contain cues about the causal structure of the data.
- Deep learning has become part of reinforcement learning, which is trying to learn a causal structure. The primary determinant of causality is the temporal order of cause and effect. The question is, do humans use other hints apart from time for causal inference?
- There is also not much evidence from neuroscience that wet brains are causality-inference machines; most of the evidence is that they are decision-making machines. Humans are also pretty bad at inferring causality when it's not obvious, but we're pretty good at associations/patterns.
- Reasoning (conscious) is often considered to act on a meta-level, which observes the internal action of the human brain itself and vocalizes what it sees. What the brain sees at this level is not the external world but a representation, and we don't have evidence there is a model of the world in there (except perhaps temporary maps of space in the hippocampus). Assuming this is true, it's not impossible that current methods can be extended so that self-explaining an NN ends up being causal reasoning.
The more important question is whether any of these methods can lead to the remarkable ability of brains to generate extremely intricate and improbable causal chains. Can we get a CNN to start from a photo of Maxwell's equations and output the theory of relativity? Who knows.
I feel that anything we develop for AI will fundamentally always be inspired by our own experiences, and hence curve fitting is what we understand to be the best objective to optimize for.
No, the book actually makes an important point about this.
For instance, let's take a counterfactual. How do you know what would have happened if Barcelona had played Lionel Messi in goal over the latest season?
They've never done this, there are no data points for you to fit. The situation almost never arises that an outfield player plays in goal, and when they do it is always in a situation where someone's been sent off or injured, which is also rare.
And yet you and I and everyone else who can think knows this would result in Barcelona having a much worse season.
Just to be clear, it's not only because you'd be taking a top player out of offense where he is worth a lot, which you can surely fit some curves to show.
We can all guess that he'll be a worse keeper than Ter Stegen, but what curve would you fit that shows this? There's no data about Messi in goal.
Pearl does give a way to work it out via counterfactual analysis though.
The fact is that everything in the world can be reduced down to curves. It's just a matter of your perspective.
That's kinda the problem here. Once you have a model, fitting curves is fine. But you need the structure from somewhere.
> players require practice to excel in their position
The problem with this is there's no data. Nobody gets to play a position they haven't practiced for. Yet you still somehow came to the right conclusion.
Anyway Pearl is much better at explaining this than I am.
I don’t even think educators mistake that for intelligence. It’s, at best, a proxy for intelligence mixed with other factors.
We have been trying to manage governments, public services, and education the same as corporations, creating numerical targets for institutions to optimize for. Education itself was formulated as an optimization problem about how to create more jobs. Public services like healthcare were privatized and became targets for profit optimization. Half of the stock market is controlled by high-frequency trading supercomputers, which will do virtually anything to gain an upper hand in profits. Those methods were all inherited from the management styles of corporations that began in the neoliberal era.

As fundamental parts of our society are replaced by those systems, society now curve-fits the systems rather than the systems curve-fitting society. We hyperoptimize ourselves to fit this neoliberal landscape; our time is parceled out between work, socializing, exercising, and self-improving, with no space for "actual free time" of our own. We go to college not to learn but to pass exams and get ourselves a good job that can sustain us. And the faults of our systems are now blamed on individuals: "You didn't optimize towards the current trends of the job market, it's your fault." From the view of the corporations, we are just AI agents waiting to be optimized for cash, and we're becoming exactly that by fitting our bodies and minds to social media that tries to maximize engagement and ad revenue no matter the real societal cost.
Now, the real problem of AI compared to the algorithms of the past is its data-driven nature: it can only learn from what data you give it for training. We can only accumulate data from the past and never from the future, so the AI systems will just keep repeating the past, no matter what unseen change will come. We will lose the ability to imagine new political, economic alternatives, we will just be feeding ourselves the status quo, and societal advancement will stagnate at the hands of automated systems. The cancellation of the future: this is what I'm ultimately afraid of.
That said, I think if you made its context recurrent in some way between responses and queries, you could probably build a really interesting chatbot that could ramble about just about anything and nothing, but do so in a sufficiently coherent way to scare up venture capital. Just a guess, but it is my guess.
This has the advantage of recognizing how human intelligence can be automated and aggregated into system processes. And the disadvantage that the boundaries of the concept start exploding.
I like cybernetics for providing a clear model of what constitutes intelligence -- a feedback loop between perception and action that achieves goals or lowers local entropy.
And cybernetic systems can be artificial, natural, or a mix.
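As a minimal sketch of that definition (a toy example of my own, with arbitrary constants): a thermostat-style loop where the "intelligence" is nothing more than perceiving, comparing to a goal, and acting to shrink the error.

```python
import random

# A minimal cybernetic loop: perceive, compare to goal, act on the error.
goal, temp = 21.0, 15.0
for step in range(20):
    perceived = temp + random.gauss(0, 0.1)  # noisy perception
    error = goal - perceived
    temp += 0.3 * error                      # action proportional to error
print(round(temp, 2))  # settles near the goal: local entropy lowered
```

Whether you call this intelligence is exactly the "boundaries exploding" problem above, but the loop structure is the same one a far grander system would have.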
I think the end state for large corporations will be to automate away so much of the human input that they end up looking like what we think of as "AI" in a broader cultural context. But we already live in a world controlled by AI, and we have for 3+ decades.
Are corporations AI, or rather superintelligences, because they are groups of people bound by bylaws? If so, the whole damn world is AI through and through and machine learning is really just the tip of the iceberg.
I think this is why late-stage capitalism feels so bad to live under: these artificial systems (corporations), by design, stand in direct opposition to our humanity. They exist to prevent our humanity from getting in the way of their perpetual growth by abstracting any ethical problems away so that no human is faced with an ethical dilemma. Which explains why the world so often feels like a dystopian nightmare.
Instead I saw it as a change of basis from the space of previous purchases to the space of recent purchases.
That approach led to an entire AI framework, and it still makes Amazon quite a bit of money annually, apparently, if Jeff Wilke's 2019 re:MARS speech is to be believed.
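A sketch of that change of basis (textbook item-to-item collaborative filtering, not Amazon's actual implementation; the toy matrix is mine): co-purchase data re-expressed as item-item similarities, then scored against a recent purchase.

```python
import numpy as np

# Toy user-item purchase matrix (rows: users, cols: items).
R = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 1, 1, 1]], dtype=float)

# Item-item cosine similarity: the "change of basis" from the space of
# users/purchase-histories to the space of items.
norms = np.linalg.norm(R, axis=0)
S = (R.T @ R) / np.outer(norms, norms)

# Score items by similarity to the user's most recent purchase.
recent_item = 2
scores = S[recent_item]
print(np.argsort(-scores))  # ranked items (the item itself trivially first)
```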
I'd love to work on something more ambitious like AlphaGo or AlphaFold but those require tremendous resources and I'm really focused on bang for buck. But even then I'd see it as the marriage of classical search with modeling the probability of victory.
If someone says that the AGI is almost upon us I pretty much bozo bit them no matter how prestigious or fancy they may be otherwise.
That would still make sense.
I have been paid to work in the field of AI since 1982 so I have experienced AI winters. I almost hope we have another to act sort-of like simulated annealing to get us out of the highly effective local minimum of deep learning. I have been paid to work with deep learning for about the last five years (except I retired six months ago) and I love it, a big fan, but it won’t get us to where I want to be. Perhaps hybrid symbolic AI and deep learning? I don’t know.
A prime example of this is "Adam," an autonomous mini laboratory that uses computers, robotics and lab equipment to conduct scientific experiments, automatically generate hypotheses to explain the resulting data, test these hypotheses, and then interpret the results.
I postulate (but cannot at the present moment prove!) the following:
a) AI fits curves
b) That all polynomials (N*X^M + ... + A*X^2 + B*X^1 + C*X^0) are in fact curves...
c) Perhaps there is a link between polynomials (as curves) and AI; that is, perhaps every AI can be thought of as a function with a polynomial representation, such that F(X) = N*X^M + ... + A*X^2 + B*X^1 + C*X^0, with F(X) being the AI in question...
I leave it to mathematicians, logicians, and people who do dimensional reductions/transformations -- to prove or disprove this...
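For what it's worth, the mechanical half of this is easy to exhibit in miniature with an ordinary polynomial fit (a sketch; the target function and degree are arbitrary choices of mine, and it says nothing about whether all of AI reduces to this):

```python
import numpy as np

# Fit a cubic to noisy samples of an unknown function: "AI as curve
# fitting" in miniature. The coefficients play the role of N, ..., A, B, C.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = np.sin(3 * x) + 0.1 * rng.normal(size=x.size)

coeffs = np.polyfit(x, y, deg=3)  # highest-degree coefficient first
f = np.poly1d(coeffs)             # F(X), the fitted "AI"
print(f(0.5), np.sin(1.5))        # approximation vs. ground truth
```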
I'm surprised it's not more generally accepted as _the_ definition. Note that Marcus Hutter, the main author, is currently working at DeepMind.
I've always eyed companies and startups that boast predictive AI for the financial markets with suspicion. I just always assumed whatever they were doing was a glorified regression model... But there are a lot of shameless people who will gladly slap some AI jargon into their business plan just to get eyeballs.
Is not cause and effect a temporal, acyclic, directed graph with mutating state (wrt CompSci)? Anyway, I think ML needs to look closer at simple cases, like sea squirts, which have 8617 neurons. Try to answer how and why they work better than some of our algorithms.
Newsflash: we're likely to continue seeing gains from ML tech for decades. Basic techniques may become part of core CS. With all the private siloed data out there, ML will need to be applied uniquely over and over.
In decades to come, if marketing continues as-is, are salesmen going to keep laying claim to the moon (AGI) the whole time? It's hard to believe we can remain on this tilt for so long.
Taking a model trained on one subject and reapplying it to build out a new vertical is pretty much how children learn new things.
We'll have models using models to train new models sooner rather than later.
If a program can create its own curve functions and decide which one to use to evaluate a problem, is it that much different from a brain?
You don't start from zero when you learn to paint if you already know how to draw. You take another model of behavior and use it as a baseline to develop your painting behavior.
The fundamental thing that you can do that the computer can't right now is decide when you need a new model, when to use an existing one, and when to transfer off of something.
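A crude sketch of the "decide which model to use" step (a toy example of my own using scikit-learn; cross-validation is a very weak stand-in for the judgment described above, but it shows the shape of the decision):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

# Nonlinear toy data: which "curve function" should the program pick?
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)

# Naive automated model selection by cross-validated score.
candidates = [LinearRegression(), DecisionTreeRegressor(max_depth=4)]
best = max(candidates, key=lambda m: cross_val_score(m, X, y, cv=5).mean())
print(type(best).__name__)  # the tree wins on this nonlinear data
```

Deciding when you need a model *not on the candidate list* is the part nothing like this can do, which is the commenter's point.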
In math, via a reduction, human life is just a function of time with many inputs and many outputs. I.e. you can do curve fitting there as well...
There was no curve to fit.
Current AIs seem more like specific, fairly simple, brain regions. Maybe we need a level up.
C(x) = alpha * A(x) + (1 - alpha) * B(x)
But I think that it's a bit "hit and hope"; as an idea it doesn't really address any questions unless/until it is realised and works.
Deep neural nets are not AI; they're just one powerful idea amongst many other ideas within AI.
1) Try to prove that human intelligence is not computable.
2) Give up and go to another research field, like particle physics or artificial sweeteners or whatever.
3) Try to see how close to human intelligence can you get with the current technology. Some small areas may have a good approximation, like image classification, playing chess, …
4) Try to make a fully functional bug compatible human simulation.
Most people are in 2 or 3.
For example, currently human intelligence accomplishes much more with much less resource usage (many many orders of magnitude) than any AI algorithm we've developed. Someone could look at the trend, and get an idea of the extent of improvement we'd have to reach parity with human intelligence, and see if it seems at all feasible. A simple quantitative analysis of what we know so far.
At any rate, I'm unaware of any sort of significant effort at #1.
In practice you can work on AI-the-engineering-discipline without taking a position on this. Perhaps this is why you feel that researchers don't talk about it. Disciplined scientists tend not to take public positions on things they don't know for sure.
This has significant implications for the engineering discipline. For example, if the mind is a halting oracle, then you can get much more performant algorithms by incorporating human interaction.
Experimentalists may have verified these ideas using some kind of curve fitting, but thinking in abstractions, aka "a ball rolled out in the street, maybe a child will follow," is one of the things curve fitting can't do.
Planck's law for the black body (https://en.wikipedia.org/wiki/Planck%27s_law) was the most successful curve fit in history. It took about 30 years to discover quantum mechanics and understand the details.