I can understand academia & researchers hyping it up; they need to chase the funding. But code-bootcamp "ML" engineers over-hyping everything is really, really annoying and makes the problem far worse.
These people are easy to spot: go to an ML/AI meetup or community that also shares the latest tech. Since they don't understand the underlying math, they can't follow the paper, the talk, or the discussion, so they quickly try to shift the discussion to "AI ethics" or "AI will take over the world" hypothetical, useless discussions. They also seem to be the first to run out and promise big things with "AI".
I also don't like the blanket term "automation" because it is like AI: too vague and often used by someone who doesn't fully grasp HOW hard automation tasks are (and how the complexity scales and differs from task to task). Just as optimism about self-driving cars is now on a much lower flame.
The over-promise of ML is something akin to people expecting it "to just get better", similar to Moore's law. If you look at what direction the algorithms are improving in and what incremental advancements are being made, the realistic pace becomes much clearer; and just because cars get faster doesn't mean one will one day fly to space like a rocket. If we do see great improvement in the short-term future, it will probably not come from ML itself but from external factors (hardware, network speed, etc.).
For many companies this is not a problem because they can afford not to care. When you operate at the scale of Google, you can not afford not to use some form of ML to detect abuse, and you don't care about false positives because they're meaningless at this scale - you just get a couple of grumpy folks complaining they lost their accounts. But when you do something more meaningful, for example translation, you can't depend on machine translation - you can very well use it for the first draft, but if you delivered it to your client in that form, you'd be laughed at and you'd quickly be out of business. So yes, great advancements, still far from being there.
I really enjoyed the take (I think it was Joscha Bach's) on the idea that companies are basically AI with humans in the loop.
The more our life and work is online and digital the more computers and algorithms will control what we see and experience, and what opinions we have.
A company has a cost function to optimize (like revenue), something that machines can optimize towards as well.
So maybe AI is just like a company but with humans out of the loop. It might also be that the human way of "intelligence" is not the most optimal approach.
I mean we can't even define what it means to be intelligent, which is why we are having these discussions.
That's one of the reasons I use the downvote button without replying.
(I have no idea who in heck you are. Your name does not ring a bell for me. I'm just saying there are other explanations for downvoting without replying than the one you want to assert here.)
If your requirement for believing something is on the path to AGI is for it to practically already be AGI, it's no longer predictively meaningful. This sort of argument suffices to dismiss, for example, the possibility that today's humans evolved from apes, or even that today's humans could have come from pre-language, pre-civilization humans.
I don't see what might spur anyone to claim this. It's obviously ridiculous.
This is literally the watchmaker argument with the nouns swapped out.
As I said, this is literally the watchmaker argument with the nouns swapped out. Ant minds are not general intelligences, they are programs created by natural evolution. Their pathfinding and identification and anthill-making and communication are all fixed programs in their genetics, not learnt during life. They cannot execute other programs.
The fact that you don't consider ants general intelligences means you agree that we aren't making progress towards general intelligence, since ants' intelligence is way closer to general intelligence than any computer program we have.
> Their pathfinding and identification and anthill-making and communication are all fixed programs in their genetics, not learnt during life.
They are only born with some basics; they intelligently come up with the fine-tuned parameters that make those work in practice. Hence they learn. If you don't call this "intelligence", then you can't consider the "AI" we do today "intelligent" either. I mean, every model a computer executes was invented by a human; the computer only optimized the parameters.
I mean, insects can learn to recognize arbitrary objects without anyone giving them training data or a reward function; they just do it on their own. That is a part of general intelligence. The way we do AI can never achieve that, since it always relies on human intelligence to decide all of those things, meaning we just get out some optimized function that can fuzzy-match input data.
Please don't argue like this. It's your job to explain what you think is true, and my job to explain what I think is true. Up to this point in this conversation I explicitly haven't taken a stance about how generally intelligent our best AI are, because it's pointless to argue that until the epistemics are valid. In fact I didn't intend to argue past that point at all; I'm objecting to bad arguments, not the position per se.
> The way we do AI can never achieve that
This is not meaningfully true. We have unsupervised image recognition, for example, and there's plenty of work on unsupervised learning of all sorts of things.
The idea that insects ‘don't have a reward function’ is not really valid; a reward function is just the function you're optimizing on.
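To make that concrete, here's a toy sketch (nothing to do with insects, purely illustrative): the "reward function" below is just an arbitrary scalar, and the optimization loop neither knows nor cares what we call it.

    import random

    def reward(x):
        # An arbitrary scalar objective; calling it a "reward function"
        # just means it is the thing being optimized.
        return -(x - 3.0) ** 2

    def random_search(reward_fn, steps=1000, step_size=0.1):
        x, best = 0.0, reward_fn(0.0)
        for _ in range(steps):
            candidate = x + random.gauss(0.0, step_size)
            r = reward_fn(candidate)
            if r > best:  # keep the move only if the reward improved
                x, best = candidate, r
        return x, best

    x, r = random_search(reward)
    print("best x:", round(x, 3), "reward:", round(r, 4))  # converges near x = 3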
So the way I see it, until AI means only models that update themselves as they evaluate input, it is a pointless term. Currently basically every single instance of AI you see people talk about is just a dumb immutable function.
Edit: And to clarify, I believe it is impossible to reach human-level intelligence without a model that mutates itself as it tries to solve the problem. Humans are smart because we learn while doing, not because we first take a long course before we start doing.
It's easiest to train on IID data, and also common to train in specific RL environments, because those paper over some issues with catastrophic forgetting and overfitting, and it's also preferable to do all the compute-intensive updates during training for efficiency. However, lots of people agree that online learning is fundamental to cracking AGI, and it's neither neglected nor unsuccessful.
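For concreteness, a minimal online-learning sketch (toy data, a single weight, plain SGD): the model updates after every incoming sample instead of over a fixed IID training set, which is the distinction at issue here.

    import random

    w = 0.0    # single weight, updated online
    lr = 0.1   # learning rate

    for _ in range(10_000):
        # Toy stream: y = 2*x plus noise, arriving one sample at a time.
        x = random.uniform(-1, 1)
        y = 2.0 * x + random.gauss(0, 0.05)

        pred = w * x
        grad = 2 * (pred - y) * x  # gradient of squared error w.r.t. w
        w -= lr * grad             # update immediately; no separate training phase

    print("learned w:", round(w, 3))  # ends up near 2.0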
For some definition of plenty. I'd argue there isn't, since almost all resources poured into AI are spent getting better at optimizing immutable functions. I don't see progress in optimizing immutable functions as progress towards general intelligence, hence I don't see almost anything people do in AI today as progress towards general intelligence.
Do you understand now? I'm not saying nobody works on the things I think are important, just that they are mostly neglected compared to the enormous amount of resources spent on dumb models.
I don't really agree with your stance on it, but I was not objecting to the claim that people should focus more on online learning. I was objecting to the claims “super computers can't even compete with the intelligence of an ant”, which is false, since the impressive behaviours an ant displays are not learned, and the learned aspects aren't competitive with what computers can do, and that “[the] way we do AI can never achieve that, since it always relies on human intelligence to decide all of those things”, which is also false, because online learning research exists and works, if imperfectly.
I disagree with this assertion; treating online learning like offline learning means you aren't making any progress. The main difference is that online learning lets you interact with the problem and seek more information, while offline learning doesn't. Offline learning doesn't help you at all in solving that problem.
To be really frank, I doubt AGI is more than a teleological narrative.
Of course, for example when you use the phrase "ML/AI".
Sure, they found a new trick to recognize cat pictures. But this technique was discovered decades ago. It just took this long for computers to speed up enough for it to be usable.
Instead, they should change the moniker from Artificial Intelligence to Artificial Insight.
Because all it is doing is combing through all that random data, looking for interesting signals. But the algorithm doesn't even know whether what it found is interesting. That's up to the human to decide.
Hence, the algorithm can provide Insight. But it is the human in the loop that will provide the intelligence.
Several times when I spoke to people about this point I got blank faces. Some seem to like the idea of automating things more than augmenting human thought with better tools.
Related article about Moldable Development:
>We can trace both models back to the early years of the first electronic computers. However, the struggle between these two worlds is still present today. For instance, Intelligence Augmentation (IA), is all about empowering humans with tools that make them more capable and more intelligent, while Artificial Intelligence (AI) has been about removing humans fully from the loop. You can read an excellent essay (Shan Carter and Michael Nielsen) about this topic.
This isn't really true. Old techniques don't work well even with modern compute. Try replacing sigmoid with ReLU in old networks and watch them improve.
Having a million or more times the processing power really makes or breaks modern ML.
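To illustrate why the activation swap matters (a toy sketch, not a full training run): sigmoid gradients saturate for large inputs while ReLU's don't, and that difference compounds across layers.

    import math

    def sigmoid_grad(x):
        s = 1.0 / (1.0 + math.exp(-x))
        return s * (1.0 - s)          # at most 0.25, and ~0 for large |x|

    def relu_grad(x):
        return 1.0 if x > 0 else 0.0  # stays 1 for any positive input

    for x in (0.5, 2.0, 5.0, 10.0):
        print(f"x={x:5.1f}  sigmoid'={sigmoid_grad(x):.6f}  relu'={relu_grad(x):.1f}")
    # Stack several sigmoid layers and those tiny gradients multiply together --
    # the vanishing-gradient problem that ReLU largely sidesteps.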
As a field, artificial intelligence has always been on the border of respectability, and therefore on the border of crackpottery
Most AI workers are responsible people who are aware of the pitfalls of a difficult field and produce good work in spite of them. However, to say anything good about anyone is beyond the scope of this paper.
I'm not opposed to the hypothesis, but the number of people I've encountered who really believe current hierarchical neural nets are the end-all tech is truly head-scratching. Especially when so many technologies, such as the Internet, have failed to produce certain expected results and completely surprised us with unexpected ones.
Unlike with blockchain, the "AI" brigade seems to have too many people who benefit from keeping it over-hyped, so I guess the correction won't come any time soon.
Seems pretty accurate given the broadness of the topic and variety of use cases.
Nothing special about ML except applying it to large(r) datasets, particularly in areas where it works well (various recommendation based models for example).
Nothing really to get one’s panties in a knot about though.
Why would it be OK to call it learning in a human or an animal, but not in software?
A machine would struggle to answer pure math questions with machine learning alone.
Although the M1 chip from Apple.... ;)
If I learn how to boil water, for example, there is close to 100% causation that heating water causes it to boil (notwithstanding that we do not know everything about physics; e.g., we used to say the earth was flat because we learnt it so, but that was incorrect, i.e. not everything we learn is true, it's just a super-high level of correlation).
However, if a machine learning model says the probability of water boiling is 99% after heating, that is actually correlation. So I would distinguish it from learning.
In this example there's not much difference. But when you apply it to more complex examples, the difference between causation and correlation becomes clearer.
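To make the distinction concrete, a toy sketch with made-up observational data: the "model" below just estimates P(boiled | heated) by counting, so it captures the correlation in the records and encodes nothing about the causal mechanism.

    # Made-up observational records: (was_heated, did_boil)
    observations = ([(True, True)] * 990 + [(True, False)] * 10
                    + [(False, False)] * 1000)

    boiled_when_heated = [boiled for heated, boiled in observations if heated]
    p_boil_given_heat = sum(boiled_when_heated) / len(boiled_when_heated)

    print("P(boil | heated) =", round(p_boil_given_heat, 2))  # 0.99
    # Pure correlation: nothing here says *why* heating leads to boiling, and
    # the same counting would happily "learn" a spurious association too.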
For some things, correlation can be sufficient, so I can see where you are coming from.
As a better counterpoint: if we could not distinguish between causation and correlation, why is general AI so hard?
Yes, pretty much. Newton's laws are a perfect example of something that establishes a correlation but is useful absent accurate causation.
They did just fine until we needed something better. Einstein provided that, but can we be so sure gravity as a result of curved space is "true" and not just a really good model? Perhaps the hunt for quantum gravity will one day tell.
> As a better counterpoint: if we could not distinguish between causation and correlation, why is general AI so hard?
I think general AI is hard because, for one, it might be a fiction. For general AI to be real, we'd have to assume people possess "general intelligence". Or at the very least, were it beyond us, we'd have to assume we could tell whether we had created it.
What we are really trying to do with "general intelligence" is reproduce something that took 3.5 billion years of natural selection to design. We have only recently become capable of even creating machines with the kinds of part counts that biology comes with. Were we to produce "general intelligence" in the next thousand years, I'd be surprised it came so easily.
What does “learning” mean?
The answer is it's neither. But we've taken them to be useful analogies and loosely designated them to mean different things. Neither one really applies because, in truth, the machine isn't learning. We've deemed some types of algorithms optimizers, others dynamic programming, and others learning. It's all arbitrary.
There are many "learning" algorithms beyond what is popularly called ML today but they have in common that they use information about their own success or failure to inform future results.
First off, AI and MI are used almost synonymously, and MI very rarely:
- "Artificial Intelligence" 192M hits
- "Machine Intelligence" 4.8M hits
- "Machine Learning" 153M hits
These are not that different; they are the same thing on different time scales. The first usage names the current (moving-definition) state, which progresses toward the second, a singular (theoretical) point in development.
Finally, is this just a PSA? We already know this. If it doesn't say ML, deep learning, CNN, or another specific term, it's a fluff piece, or just trying to get eyeballs on research (which is rational).
But the goal is specifically to conflate these two, isn't it? If I create a machine that beats a chess grandmaster, does it matter whether or not the machine "knows" how to play chess? If I create an app that takes you to a great restaurant for the mood you're in, does it matter whether or not it understands "mood" or "restaurant"? At some point, the stringing together of specific bundles of technologies becomes indistinguishable to the average person from real intelligence. (This probably happens far, far sooner for the average person than it does tech folks and researchers)
At that point, we've created something that for a specific domain _is_ intelligent: it reasons about things in an internal way that is opaque to us and provides what we perceive to be interactive value. Then that thing just becomes another bundle to be assembled into an even larger chain. True AGI would be able to create and assemble the bundles.
But even then, the same end-state occurs: at some point whatever we're calling AGI will be able to do things with bundles of technologies that we perceive to provide interactive value. Will it matter at that point whether or not the machine is "truly" intelligent? (Apologies for the scare quotes, but many of these terms are quite suspect in this conversation and are used in all sorts of ways by authors)
To put it bluntly, the goal of AI work is to do cool stuff for reasons we're not exactly sure about. Otherwise it'd just be programming. We are using a lot of programming tools, like NNs, to do this. At some point, various groups will fake themselves out and stop working in that area. Whether or not that's intelligence, and whether or not anything new will ever happen in that field, is beside the point and not in the (commercial) scope of work. Aside from all the formal work and really cool stuff happening, in the end, when it gets used somewhere, this is a "looks good enough to me" situation. We're not looking to create intelligence; we're looking to create an uber-duber supreme version of Eliza for some given problem. Then we find other, more complex and interesting problems. Then, if we need to, we'll join them together. (There is a lot of detail I have ignored, including how GANs play into my argument.)
It’s perhaps best encapsulated by Douglas Hofstadter’s quip: “AI is whatever hasn't been done yet”.
The point being, AI is a moving target, and the threshold for reaching it migrates further with each new accomplishment in the field. If you showed the ML of today to someone in the sixties, you'd have a hard time convincing them it wasn't AI.
But I don’t believe this is an issue reserved only for the press. At least anecdotally, practitioners don’t think of it this way either.
If it is written in Python, it's probably machine learning
If it is written in PowerPoint, it's probably AI