
The function that gives AI value is the ability to make predictions - benryon
https://www.forbes.com/sites/bernardmarr/2018/07/10/the-economics-of-artificial-intelligence-how-cheaper-predictions-will-change-the-world/
======
taurine
This article is weird. I work with ML (AI is overloaded term) and don't
recognize myself in this article at all. It seems to be written for managers,
politicians, or economists or something.

It is like stating: "The function that gives software value is the ability to
create if-then statements." Both remotely true and meaningless.

Conflating analysis with predictive modeling, pretending self-driving cars are
a thing of the last decade (and not in development since the 80s), and this:

> “So what’s going to happen is that these prediction machines are going to
> make predictions better and faster and cheaper, and when you do that, two
> things happen. The first is that we will do a lot more predicting. And the
> second is that we will think of new ways of doing things for problems where
> the missing bit was prediction.”

If using ML or DL qualifies as a subset of AI, then AI qualifies as a subset
of software and IT. Turning above statement into:

> “So what’s going to happen is that these computers are going to run code
> better and faster and cheaper, and when you do that, two things happen. The
> first is that we will do a lot more coding. And the second is that we will
> think of new ways of doing things for problems where the missing bit was
> software.”

Then you are still correct (it is a safe bet), but you are correct about a very
insignificant thing.

~~~
evrydayhustling
Yeah. Article summary: AI is about prediction. So if you want to use AI,
reformulate your problem as predicting the answer.

Cheap dopamine hit of false comfort for someone distressed about not
understanding AI. And also a book ad maybe.

------
taytus
Like the World Cup final predictions that were shared some weeks ago? Man, I've
been coding since I was a kid. I have a deep love for technology, but the hype
age we are living in is just too much.

~~~
natalyarostova
I was just at the International Institute of Forecasters conference in
Colorado as an applied practitioner.

There was a lot of hype about how best to use ML/neural nets in time-series
forecasting. Well, it was well-founded hype: hype by people who know there is
potential there, but who also consider a 20% accuracy improvement over naive
methods developed in the 1980s to be a great success.

And even then, it doesn't always work. NN/ML has had true breakthrough success
in classification-type problems and in fitting highly nonlinear, high-fidelity
data. For example, self-driving cars.

But for economics/demand/weather dynamics, which involve much, much less data
and often deal with more chaotic macro-patterns (i.e. your data is a
time-series of a few megabytes over years, rather than gigabytes over minutes
from cameras), it offers much less.
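A toy sketch of what "a 20% improvement over a naive method" means in this setting. The seasonal-naive baseline below is a standard 1980s-era method; the data and the candidate model's error are made up purely for illustration, not taken from the conference:

```python
# Seasonal-naive baseline: forecast next month as the value 12 months ago.
def seasonal_naive(history, season=12):
    return history[-season]

def mae(errors):
    """Mean absolute error."""
    return sum(abs(e) for e in errors) / len(errors)

# Synthetic monthly series: linear trend plus a 12-month seasonal swing.
series = [100 + 2 * t + (10 if t % 12 < 6 else -10) for t in range(48)]

# Walk forward over the last two years, forecasting one step at a time.
baseline_errors = [series[t] - seasonal_naive(series[:t]) for t in range(24, 48)]
baseline_mae = mae(baseline_errors)  # the trend makes this exactly 24.0 here

# A hypothetical model that cuts the baseline's error by 20% -- the kind of
# result described above as a great success.
candidate_mae = 0.8 * baseline_mae
improvement = 1 - candidate_mae / baseline_mae  # ≈ 0.2
```

The point of the walk-forward loop is that each forecast only sees data up to time t, which is how forecasting methods are compared in practice.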

~~~
johntiger1
20% is pretty significant, actually; I've read papers where they report
baseline beats in the 0.1% to 1% range.

~~~
p1esk
An improvement from 99.0% to 99.2% is a 20% improvement, measured as a reduction in error rate.

~~~
Ntrails
A 20% reduction in misclassification, or a 0.2% absolute improvement in
success, or a roughly 0.2% relative improvement in success.
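The arithmetic behind these three readings of the same result, spelled out with the thread's numbers:

```python
# One 0.2-point accuracy gain, three ways to describe it.
old_acc, new_acc = 0.990, 0.992

absolute_gain = new_acc - old_acc        # 0.002, i.e. 0.2 percentage points
relative_gain = absolute_gain / old_acc  # ~0.002, i.e. ~0.2% relative to old accuracy

old_err, new_err = 1 - old_acc, 1 - new_acc      # error rates: 0.010 and 0.008
error_reduction = (old_err - new_err) / old_err  # 0.2, i.e. a 20% error reduction
```

Same numbers, wildly different headlines, which is exactly the point being made above.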

------
thaumasiotes
That's not just AI; the function that gives science value is the ability to
make predictions. This is where the replication crisis came from -- when
nobody's judging you on the ability to make predictions, your predictions wind
up being worthless.

~~~
gowld
The function of _every decision_ and every input into a decision is to make
predictions. The value of everything anything does is judged by how well it
serves a purpose of some kind to someone.

If a tree falls in the forest and no one is around to hear it, does it make a
sound? Outside of Occam's razor and (measurable) knock-on effects, it's
impossible to judge in any meaningful way.

------
JoeAltmaier
Or to optimize, or to replace human operators, or to entertain, or to
create....

~~~
robertk
All of which can be reformulated as predictions:

* Predict which strategy will arise given a metric to optimize

* Predict the next action a human operator would perform here

* Predict which action yields the most likes / smiles / upvotes

* Predict which output would have the most citations if it were an article in a scientific journal

Your remark isn’t a rebuttal but a reaffirmation. You have fallen prey to the
bias of not thinking in sufficiently high generality.

~~~
logicallee
True but vacuously so. Any system that has output can be rephrased the way you
just phrased it:

* Photoshop predicts the output given a set of buttons, filters, UI states, etc.

* Your car's steering system predicts the wheel outputs given steering wheel inputs.

* The abs( ) operator predicts the absolute value of a number.

--

If we want to make a funny analogy with ML using my first example: this model
-- i.e. the entire Photoshop software -- is trained in a slow and manual (not
automated) iterative process against the cost function "whether Photoshop
engineers and managers will ship it as the next version of Photoshop". It's
repeatedly tested against it (or through whatever training algorithm its
designers want, including waterfall; the training algorithm doesn't have to be
_good_, but anything they use to design the software is, by definition, the
training algorithm for the model under this stretched analogy / way of
thinking about it).

I just mention this to show the absurdity of this way of thinking about it -
like the entire Adobe campus is just one giant training algorithm for the
"next version of Photoshop" model which predicts "what will the output of
pressing these buttons be".

If you'd like a second example: any simple pocket calculator going back to
1970 is a system that "predicts the result of its operations and operands".
The cost function is the happiness of the engineers who designed it, and a
human is involved in the iterative method of training the model. Kind of an
absurd way of thinking about the system.

So while these (and anything else with an output) can be formulated as
"predictions", my examples going back to 1970 aren't _machine_ learning, since
a human is involved in this training loop. So this sense of "prediction" is
kind of specious.

Sure, you can call them all predictions but you don't gain any insight by
doing so. And you lose OP's point, which I thought was insightful.

~~~
robertk
Ok that is clever. I take your point!

------
everdev
Actionable predictions are tricky though because acting on the prediction
before the event happens influences the system.

For example, my AI says stock X will go up 10%. My act of buying stock X
drives the price up, thereby reinforcing the prediction regardless of its
original accuracy. Or, if I predict stock X will go up 10% before the earnings
call, don't tell anybody until after the event is over, and don't act on the
information, what's the point?

~~~
robertk
So? Just reformulate it in a way that allows you to apply the Kakutani fixed
point theorem and call it a day. That’s what John Nash did.
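For the curious, the reference is to Nash's 1950 existence proof, which runs roughly like this (standard textbook sketch, not from the comment):

```latex
% Let $\Sigma = \prod_i \Sigma_i$ be the product of the players' mixed-strategy
% simplices, and define the best-response correspondence
%   $B(\sigma) = \prod_i \arg\max_{\sigma_i' \in \Sigma_i} u_i(\sigma_i', \sigma_{-i})$.
% $\Sigma$ is nonempty, compact, and convex, and $B$ is upper hemicontinuous
% with nonempty convex values, so Kakutani's fixed-point theorem yields
%   $\sigma^* \in B(\sigma^*)$,
% i.e. a Nash equilibrium.
```

Existence is the easy part; the theorem says nothing about how to find the fixed point.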

~~~
grafs50
Good luck finding a Nash Equilibrium in the stock market.

------
a_d
I could probably generalize: the value of _every_ model is in its ability to
make predictions.

This includes, but is not limited to:

* mental models: if your mental model doesn't predict reality, then it probably needs to change

* financial models: if it doesn't forecast well, it is wrong

* even AI models :-)

I would add though: "All generalizations are dangerous, even this one." ―
Alexandre Dumas

------
mlthoughts2018
This is why, as a machine learning engineer, my skepticism about most AI / ML
work is rooted in the sociology of predictions / forecasting.

Humans seldom care about actually improving forecasts beyond the level given
by very simple models. We are drawn to a very, very tiny set of domains where
people might care, like epidemiology or meteorology, while in the vast
majority (investing, customer analytics, national security, energy, climate,
politics) we totally don’t care. Advanced algorithms only help to the extent
they offer marketing hype or drive recruiting.

Robin Hanson already wrote a great summary about this:

<https://www.cato-unbound.org/2011/07/13/robin-hanson/who-cares-about-forecast-accuracy>

------
pnloyd
I think things like computer vision would be more accurately labeled as
"drawing conclusions" rather than "making predictions". And if you can get
your prediction success rate close to 100% that seems more like drawing a
conclusion as well.

~~~
denimalpaca
I would say it's making a prediction because to me the phrase "drawing
conclusions" implies a conscious mental model of information from which a
result is established. Making a prediction is not necessarily reliant on these
mental models - especially conscious ones - it's about making an inference
based on available information.

Another way to put it: a person would look at an animal and notice fur, eyes,
paws, ears, and the specific shapes and colors of these things and conclude
it's a cat. Take away a leg, or an ear, or have it half out of frame, and most
likely a person would still recognize it as a cat. The idea of "cat" exists in
the mind of the agent in this case, but a computer may predict cat only if the
animal is fully in frame and not missing any parts. The machine is entirely
reliant on features whereas a person is reliant on a mental model that has
more elasticity in what it defines.

~~~
dismantlethesun
Computers can and do label objects missing most of their features. This is
especially important in computer vision work for cars, as inaccurately
labeling an arm and a head as "not human" could lead to tragedy.

Now, I am sure you know this, so I don't know why you chose to use an example
that's inaccurate in practice.

~~~
denimalpaca
I chose this example because, in practice, sometimes changing a single feature
_does_ ruin the prediction, especially in computer vision. Often the systems
are somewhat resilient to these kinds of errors, but often they are not.

The fact that a computer can label an object missing many features does not
imply that it cannot also make a mistake doing so. Like the Tesla that
couldn't recognize a truck right in front of it.

Then there's Google's Deep Dream, which did silly things like think that all
hammers had arms attached to them.

Then there's also this:
[http://www.evolvingai.org/fooling](http://www.evolvingai.org/fooling)

and many other examples like it. I chose a simple example that would be
maximally relatable and still accurate even with respect to state of the art
algorithms and datasets with billions of samples.

------
gerdesj
At the root of this article is a complete lack of insight into what it means
to compare apples with apples and oranges with oranges.

One of the points made by Mr Gans is: _When I’m going to catch a ball, I
predict the physics of where it’s going to end up. I have to do a lot of other
things to catch the ball, but one of the things I do is make that prediction._

You do yourself a disservice sir - there is no AI on earth that could possibly
match your ability to determine where a ball will land and work out how to
catch it (one hand or two, over or under), whilst teetering on a pair of legs
or perhaps diving. You may also be about to land in water and be working out
how to deal with that as well at the same time. If you are diving you will
also be making some horrifically complicated calculations that will ensure
that you don't snap your neck or ribs and land in one piece. You may do that
whilst daydreaming about something else.

"AI" is making some wonderful advances. Doing most of the stuff that we do
routinely is not one of them. For starters an "AI" doesn't have a body!

------
sgt101
What about optimisation: everything is known; the challenge is the optimal
distribution...

------
8bitsrule
I remember when fusion was only 25 years away. I predict the same for AI.

------
madenine
ctrl-f "softmax"; 0 results

