It is like stating: "The function that gives software value is the ability to create if-then statements." That is both remotely true and meaningless.
The piece conflates analysis with predictive modeling, pretends self-driving cars are a thing of the last decade (rather than something that has been fully operational since the 80s), and then there's this:
> “So what’s going to happen is that these prediction machines are going to make predictions better and faster and cheaper, and when you do that, two things happen. The first is that we will do a lot more predicting. And the second is that we will think of new ways of doing things for problems where the missing bit was prediction.”
If using ML or DL qualifies as a subset of AI, then AI qualifies as a subset of software and IT. Turning the above statement into:
> “So what’s going to happen is that these computers are going to run code better and faster and cheaper, and when you do that, two things happen. The first is that we will do a lot more coding. And the second is that we will think of new ways of doing things for problems where the missing bit was software.”
Then you are still correct, and it is a safe bet, but you are correct about a very insignificant thing.
It's a cheap dopamine hit of false comfort for someone distressed about not understanding AI. And maybe also a book ad.
Casinos allow you to bet on games like roulette or blackjack because they have a mathematical expectation - a prediction - that they will come out ahead overall. Would you say that this prediction is wrong because individual players sometimes win a lot of money?
You can’t look to a single event that differs from a prediction and use that as a basis for an argument that the entire model is flawed.
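To put a number on that "mathematical expectation": here is a minimal sketch, in plain Python with the standard American double-zero roulette odds, of the house edge on an even-money bet. The point is that the casino's prediction is about the long-run average, not about any single spin.

```python
# Expected value of an even-money bet (e.g. red) on American roulette:
# 38 pockets (1-36, 0, 00), 18 of which win for the player.
stake = 1.0
p_win = 18 / 38           # player wins the even-money bet
p_lose = 20 / 38          # player loses the stake

player_ev = p_win * stake - p_lose * stake
house_ev = -player_ev

print(f"Player EV per $1 bet: {player_ev:.4f}")   # about -0.0526
print(f"House edge:           {house_ev:.2%}")    # about 5.26%

# A single spin can still pay the player 1:1; the casino's "prediction"
# is only about the average over many spins.
```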
There was a lot of hype about how best to use ML/neural nets in time-series forecasting. Well, it was well-founded hype. It was hype from people who know there is potential there, but who also consider a 20% accuracy improvement over naive methods developed in the 1980s to be a great success.
And even then, it doesn't always work. NN/ML has had true breakthrough success in classification-type problems and in fitting nonlinear, high-fidelity functions; self-driving cars, for example.
But for economics/demand/weather dynamics, which involve much, much less data and often deal with more chaotic macro-patterns (i.e. your data is a time series of a few megabytes over years, rather than gigabytes over minutes from cameras), it offers much less.
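To make "improvement over naive methods" concrete, here is a small sketch on synthetic data (NumPy only; the series, the lag-regression stand-in for an "ML model", and all numbers are made up for illustration) comparing a seasonal-naive baseline against a simple learned forecaster:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly series: trend + yearly seasonality + noise (a stand-in
# for the "few megabytes over years" kind of data, not a real demand series).
n = 120
t = np.arange(n)
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, n)

train, test = y[:-12], y[-12:]

# Baseline: seasonal naive -- "next year looks like last year".
naive_forecast = train[-12:]

# "ML" stand-in: linear regression on the previous 12 monthly values.
lags = 12
X = np.column_stack([y[i:i + len(y) - lags] for i in range(lags)])
target = y[lags:]
X_train, target_train = X[:-12], target[:-12]
coef, *_ = np.linalg.lstsq(
    np.column_stack([X_train, np.ones(len(X_train))]), target_train, rcond=None
)

# Recursive forecasts over the held-out year (predictions fed back as inputs).
history = list(y[:-12])
ml_forecast = []
for _ in range(12):
    x = np.array(history[-lags:] + [1.0])
    pred = x @ coef
    ml_forecast.append(pred)
    history.append(pred)

mae = lambda f: np.mean(np.abs(test - np.asarray(f)))
print(f"Seasonal naive MAE: {mae(naive_forecast):.2f}")
print(f"Lag-regression MAE: {mae(ml_forecast):.2f}")
# The interesting question is how large the gap is, not whether one exists.
```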
If a tree falls in the forest and no one is around to hear it, does it make a sound? Outside of Occam's razor and (measurable) knock-on effects, it's impossible to judge in any meaningful way.
* Predict which strategy will arise given a metric to optimize
* Predict the next action a human operator would perform here
* Predict which action yields the most likes / smiles / upvotes
* Predict which output will have the most citations if it was an article in a scientific journal
Your remark isn’t a rebuttal but a reaffirmation. You have fallen prey to the bias of not thinking in sufficiently high generality.
* Photoshop predicts the output of a given set of buttons, filters, UI states, etc.
* Your car's steering system predicts the wheel outputs given steering wheel inputs.
* The abs() operator predicts the absolute value of a number.
If we want to make a funny analogy with ML using my first example: this model -- i.e. the entire Photoshop software -- is trained in a slow, manual (not automated) iterative process against the cost function "whether Photoshop engineers and managers will ship it as the next version of Photoshop". It's repeatedly tested against that cost function, through whatever training algorithm its designers want, including waterfall. The training algorithm doesn't have to be good; under this stretched analogy, whatever they use to design the software is, by definition, the training algorithm for the model.
I just mention this to show the absurdity of this way of thinking about it -- as if the entire Adobe campus is just one giant training algorithm for the "next version of Photoshop" model, which predicts "what the output of pressing these buttons will be".
If you'd like a second example: any simple pocket calculator going back to 1970 is a system that "predicts the result of its operations and operands". The cost function is the happiness of the engineers who designed it, and a human is in the loop of the iterative method that trains the model. Kind of an absurd way of thinking about the system.
So while these (and anything else with an output) can be formulated as "predictions", my examples going back to 1970 aren't machine learning, since a human is involved in this training loop. So this sense of "prediction" is kind of specious.
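A tongue-in-cheek sketch of the same point: any deterministic function can be dressed up in a "model that predicts" interface, which is exactly why the framing adds nothing. The AbsPredictor class below is made up for illustration, not anyone's real API.

```python
class AbsPredictor:
    """A 'model' that predicts the absolute value of its input.

    Training algorithm: a human wrote abs() and was happy with it.
    Cost function: that human's happiness. No gradients were involved.
    """

    def predict(self, x: float) -> float:
        return abs(x)


model = AbsPredictor()
print(model.predict(-3.7))  # 3.7 -- technically a "prediction", usefully nothing of the sort
```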
Sure, you can call them all predictions but you don't gain any insight by doing so. And you lose OP's point, which I thought was insightful.
For example, my AI says stock X will go up 10%. My act of buying stock X drives the price up, therefore reinforcing the prediction regardless of its original accuracy. Or, if I predict stock X will go up 10% before the earnings call, don't tell anybody until after the event is over, and don't act on the information, what's the point?
This includes, but is not limited to:
* mental models: if your mental model doesn't predict reality, then it probably needs to change
* financial models: if it doesn't forecast well, it is wrong
* even AI models :-)
I would add though:
"All generalizations are dangerous, even this one."
― Alexandre Dumas
Humans seldom care about actually improving forecasts beyond the level given by very simple models. There is a very, very tiny set of domains where people might care, like epidemiology or meteorology, while in the vast majority (investing, customer analytics, national security, energy, climate, politics) we totally don't. Advanced algorithms only help to the extent they offer marketing hype or drive recruiting.
Robin Hanson already wrote a great summary about this:
< https://www.cato-unbound.org/2011/07/13/robin-hanson/who-car... >
Another way to put it: a person would look at an animal and notice fur, eyes, paws, ears, and the specific shapes and colors of these things, and conclude it's a cat. Take away a leg, or an ear, or have it half out of frame, and most likely a person would still recognize it as a cat. The idea of "cat" exists in the mind of the agent in this case, but a computer may predict "cat" only if the animal is fully in frame and not missing any parts. The machine is entirely reliant on features, whereas a person relies on a mental model that has more elasticity in what it defines.
Now, I am sure you know this, so I don't know why you chose to use an example that's inaccurate in practice.
The fact that a computer can label an object missing many features does not imply that it cannot also make a mistake doing so. Like the Tesla that couldn't recognize a truck right in front of it.
Then there's Google's Deep Dream, which did silly things like think that all hammers had arms attached to them.
There are many other examples like it. I chose a simple example that would be maximally relatable and still accurate even with respect to state-of-the-art algorithms and datasets with billions of samples.
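If you want to poke at the occlusion claim yourself, here is a rough sketch of the kind of test involved. It assumes torch and torchvision are installed and uses a hypothetical local file "cat.jpg"; which labels come back depends entirely on the photo and the model, so treat it as a probe, not a benchmark.

```python
import torch
from torchvision import models
from PIL import Image

# Hypothetical local file; swap in any cat photo you have.
img = Image.open("cat.jpg").convert("RGB")

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def top1(image):
    """Return the model's top-1 ImageNet label for a PIL image."""
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    return weights.meta["categories"][logits.argmax(1).item()]

print("full image:", top1(img))

# Crudely "remove a leg / an ear": paste a grey square over part of the frame
# and see whether the predicted label survives.
occluded = img.copy()
w, h = occluded.size
occluded.paste((128, 128, 128), (0, h // 2, w // 2, h))  # cover lower-left quarter
print("occluded:  ", top1(occluded))
```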
Does a tree that falls alone in the forest make a sound? Potato, potahto. shrug
Say you have a photo, and your image tagger says it's a photo of an apple. What that really means is "if I were to go to the place and time where this photo was taken, there would be an apple there in front of the camera".
So all predictions are conclusions, but not all conclusions are predictions. That is, a deductive argument comes to a conclusion, but, as we know, inferential methods aren't deductive.
All of our sensory systems work this way - we use an enormous amount of context to bubble up predictive categories and then collect marginal amounts of information until one prediction dominates all others.
When one doesn't, we get gestalt-like illusions: cubes that can be seen projecting either forward or backward, dresses that are both blue-and-black and white-and-gold, Laurel and Yanny in perfect harmony, etc.
This is a pop-sci article, so take it with a grain of salt, but there is a growing body of research that attempts to frame many cognitive processes as inherently predictive (inferential?). I think this view arises pretty naturally when you start thinking about us having a 'model' of reality that is updated based on sensory information, but I digress.
I think I see where you're coming from.
A human might not be 'predicting', since the term has some kind of temporal connotation. But we could say that it's 'inferring' what the photo represents?
I was commenting from the perspective of an analogy drawn from personal experience of my own consciousness.
One of the points made by Mr Gans is: When I’m going to catch a ball, I predict the physics of where it’s going to end up. I have to do a lot of other things to catch the ball, but one of the things I do is make that prediction.
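As a rough sketch of what that prediction amounts to (drag ignored, made-up numbers, and nothing like what the brain actually computes), here is the textbook kinematics version of "where will the ball land":

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def landing_point(x0, y0, speed, angle_deg):
    """Predict the horizontal landing distance of a thrown ball (no air resistance)."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    # Solve y0 + vy*t - g*t^2/2 = 0 for the positive root (time of landing).
    t = (vy + math.sqrt(vy ** 2 + 2 * g * y0)) / g
    return x0 + vx * t

# Ball released 1.8 m up at 15 m/s and 40 degrees: where do I need to stand?
print(f"lands about {landing_point(0.0, 1.8, 15.0, 40.0):.1f} m away")
```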
You do yourself a disservice sir - there is no AI on earth that could possibly match your ability to determine where a ball will land and work out how to catch it (one hand or two, over or under), whilst teetering on a pair of legs or perhaps diving. You may also be about to land in water and be working out how to deal with that as well at the same time. If you are diving you will also be making some horrifically complicated calculations that will ensure that you don't snap your neck or ribs and land in one piece. You may do that whilst daydreaming about something else.
"AI" is making some wonderful advances. Doing most of the stuff that we do routinely is not one of them. For starters an "AI" doesn't have a body!