
What’s Next for Artificial Intelligence - jonbaer
http://www.wsj.com/articles/whats-next-for-artificial-intelligence-1465827619?href=
======
Fede_V
While LeCun and Ng are real-world experts on AI and deep learning, the other
two people in the article have very little technical understanding of deep
learning research.

The huge triumph of DL has been figuring out that as long as you can pose a
problem in a differentiable way and you can obtain a sufficient amount of
data, you can efficiently tackle it with a function approximator that can be
optimized with first order methods - from that, flows everything.

We have very little idea how to make really complicated problems
differentiable. Maybe we will - but right now the toughest problems that we
can put in a differentiable framework are those tackled by reinforcement
learning, and the current approaches are incredibly inefficient.
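
(A minimal sketch of that recipe, with invented toy data: pose "fit sin(x)"
as a differentiable loss over a small function approximator, then let a
first-order method, plain minibatch SGD with hand-written backprop, do the
rest.)

    import numpy as np

    # Pose the problem differentiably: a one-hidden-layer net as the
    # function approximator, mean squared error as the loss.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(256, 1))
    y = np.sin(X)

    W1 = rng.normal(0, 0.5, size=(1, 32)); b1 = np.zeros(32)
    W2 = rng.normal(0, 0.5, size=(32, 1)); b2 = np.zeros(1)
    lr = 0.05

    for step in range(2000):
        i = rng.integers(0, len(X), size=32)      # minibatch
        x, t = X[i], y[i]
        h = np.tanh(x @ W1 + b1)                  # forward pass
        pred = h @ W2 + b2
        err = pred - t                            # dLoss/dpred (up to 2/n)
        gW2 = h.T @ err / len(i); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)          # backprop through tanh
        gW1 = x.T @ dh / len(i); gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2            # first-order update:
        W1 -= lr * gW1; b1 -= lr * gb1            # that's the whole trick

    mse = ((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean()
    print("final MSE:", float(mse))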

~~~
hyperbovine
> The huge triumph of DL has been figuring out that as long as you can pose a
> problem in a differentiable way and you can obtain a sufficient amount of
> data, you can efficiently tackle it with a function approximator that can be
> optimized with first order methods - from that, flows everything.

This isn't really what is responsible for the success of deep learning. Lots
and lots of machine learning algorithms existed before deep learning that
essentially optimize a (sub-)differentiable objective function, most notably
the LASSO. Rather, it's that the recursive/hierarchical representation
utilized by DL is somehow a lot better at representing complicated functions
than, e.g., kernel methods. I say "somehow" because exactly why and to what
extent this is true is still an active subject of research within
theoretical ML. It happens in many areas of math that "working in the right
basis" can dramatically improve one's ability to solve certain problems. This
seems to be what is happening here, but our understanding of the phenomenon is
still quite poor.
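
(To make the pre-DL point concrete: the LASSO is exactly a sub-differentiable
objective minimized by a first-order method. A minimal sketch with invented
data, using ISTA, i.e. proximal gradient with soft-thresholding:)

    import numpy as np

    # LASSO objective: (1/2n)||Xw - y||^2 + lam * ||w||_1
    # The l1 term is non-smooth at 0, so use ISTA: a gradient step on the
    # smooth part, then the l1 proximal map (soft-thresholding).
    rng = np.random.default_rng(1)
    n, p = 200, 50
    X = rng.normal(size=(n, p))
    w_true = np.zeros(p); w_true[:4] = [3.0, -2.0, 1.5, 4.0]  # sparse signal
    y = X @ w_true + 0.1 * rng.normal(size=n)

    lam = 0.1
    L = np.linalg.norm(X, 2) ** 2 / n     # Lipschitz const. of the gradient
    w = np.zeros(p)
    for _ in range(500):
        grad = X.T @ (X @ w - y) / n      # gradient of the smooth part
        z = w - grad / L                  # plain gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

    print("recovered support:", np.flatnonzero(np.abs(w) > 0.1))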

~~~
TumTumTree
Can you give a specific example of an area of math or a problem where "working
in the right basis" offers dramatic improvement?

~~~
an0nym1ty
[https://en.wikipedia.org/wiki/Laplace_transform](https://en.wikipedia.org/wiki/Laplace_transform)
in the solution of certain ODEs
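
(A worked instance, for the curious: in the transformed basis the calculus
becomes algebra. Take the initial-value problem y' + y = 1 with y(0) = 0:)

    \begin{align*}
    y' + y = 1,\; y(0) = 0
      \;\xrightarrow{\;\mathcal{L}\;}\; sY(s) + Y(s) = \tfrac{1}{s}
      \;\Longrightarrow\; Y(s) = \frac{1}{s(s+1)} = \frac{1}{s} - \frac{1}{s+1}
      \;\xrightarrow{\;\mathcal{L}^{-1}\;}\; y(t) = 1 - e^{-t}.
    \end{align*}

Solving the same ODE directly needs an integrating factor; in the s-domain it
is partial fractions and a table lookup.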

------
charlesdenault
_If we create machines that learn as well as our brains do, it’s easy to
imagine them inheriting human-like qualities—and flaws. But a
“Terminator”-style scenario is, in my view, immensely improbable. It would
require a discrete, malevolent entity to specifically hard-wire malicious
intent into intelligent machines, and no organization, let alone a single
group or a person, will achieve human-level AI alone._

Isn't this incredibly shortsighted? That's setting aside all the questions
about the morals and ethics an intelligent machine may _feel_, and how those
would affect the way it behaves... It used to take nations to build computers,
then large corporations, then off-the-shelf parts assembled by a kid in his
garage.

The first strong AI will most likely be a multi-billion dollar project, but
its creation will arguably usher in an era in which strong AI is ubiquitous.

~~~
snemvalts
Andrew Ng made a really good analogy for those afraid of strong AI destroying
humanity: "It's like being afraid of overpopulation on Mars; we haven't even
landed on the planet yet."

~~~
Houshalter
It's a stupid analogy. Mars overpopulation would obviously take many, many
centuries. It would be a slow thing that you could obviously see coming. There
is no reason to believe AI will take centuries to build, or that we will
necessarily see it coming.

A better example might be H.G. Wells's 1913 prediction of nuclear weapons
destroying the world. It was something that science was just realizing was
possible, and it would be invented within his lifetime.

~~~
snemvalts
We're far from emulating networks on the scale of the visual cortex, let alone
a self-reasoning machine (we don't even fully understand consciousness or the
inner workings of the brain).

People fearing strong AI are the ones not involved in the field, yet all this
hype/fear from them (in combination with Moore's law ending) is probably going
to cause another AI winter.

~~~
Houshalter
And in 1913 we didn't have even basic nuclear technology. Just 3 decades is a
long time for newly emerging technologies.

>We're far from emulating networks on the scale of the visual cortex

In 2009 (ish), computer vision was a joke that could recognize very few
objects a small percentage of the time, based only on simple color and
texture and sometimes basic shapes.

A few years later, computers were excelling at computer vision, recognizing a
majority of objects. A year or two after that, they started to beat humans on
those tasks. We _already have_ super-human visual cortexes. Who knows what
will be possible in a decade.

We will probably never understand the inner workings of the brain. Not because
it's complicated, but because reverse engineering microscopic systems is
really hard (imagine trying to reverse engineer a modern CPU vs. merely
designing one). It is especially hard because we can't ethically dissect
living humans and do the experiments we would need to do.

But that's of no concern: AI advances from first principles. AI researchers
invent better and better algorithms every day without having a clue what
neuroscientists are up to.

>People fearing strong AI are the ones not involved in the field,

That's just incorrect. A survey of AI researchers found that they give about a
one-in-three chance that AI will turn out badly for humanity in the next
century:
[http://www.nickbostrom.com/papers/survey.pdf](http://www.nickbostrom.com/papers/survey.pdf)

>We thus designed a brief questionnaire and distributed it to four groups of
experts in 2012/2013. The median estimate of respondents was for a one in two
chance that high-level machine intelligence will be developed around
2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems
will move on to superintelligence in less than 30 years thereafter. They
estimate the chance is about one in three that this development turns out to
be ‘bad’ or ‘extremely bad’ for humanity.

>in combination with Moore's law ending

Computers can keep advancing for a long time after Moore's law. Google just
released a special neural network chip that is equivalent to seven years'
worth of Moore's law. 3D architectures can vastly increase the number of
transistors. Better algorithms can make NNs that require many fewer
transistors to do the same computations, or even do cheap analog computations.

~~~
sedachv
> In 2009 (ish), computer vision was a joke that could recognize very few
> objects a small percentage of the time, based only on simple color and
> texture and sometimes basic shapes.

This is completely inaccurate and totally ignores the history of machine
vision.

Computer vision was in no way a "joke" in 2009. OCR and manufacturing
inspection systems have been successfully deployed since the 1980s. Neural
networks were being applied to computer vision in autonomous vehicles in 1989:
[https://www.youtube.com/watch?v=ilP4aPDTBPE](https://www.youtube.com/watch?v=ilP4aPDTBPE)

> We already have super-human visual cortexes.

No we don't: [http://rocknrollnerd.github.io/ml/2015/05/27/leopard-sofa.ht...](http://rocknrollnerd.github.io/ml/2015/05/27/leopard-sofa.html)
(see also the HN discussion:
[https://news.ycombinator.com/item?id=9749660](https://news.ycombinator.com/item?id=9749660))

I remember reading about a similar thing that happened in the 1980s to some
DARPA-funded project that was trying to apply neural networks to tank/vehicle
detection: the network got really good at recognizing the foliage that
happened to be in the training images.

Robust scene understanding is a very hard problem and still far from solved.
Again, research on this has been going on since the 1960s.

> But that's of no concern: AI advances from first principles. AI researchers
> invent better and better algorithms every day without having a clue what
> neuroscientists are up to.

Do you realize what the 'neural' in neural networks refers to? People working
on AI did not suddenly stop paying attention to neuroscience after perceptrons
were invented.

------
vonnik
One of the most striking things about this piece is the difference between the
claims of AI practitioners and pundits.

LeCun and Ng are making precise, and much more modest, claims about the future
of AI, even if Ng is predicting a deep shift in the labor market. They are not
treating strong AI as a given, unlike Bostrom and Nosek.

Bostrom's evocation of "value learning" -- "We would want the AI we build to
ultimately share our values, so that it can work as an extension of our
will... At the darkest macroscale, you have the possibility of people using
this advance, this power over nature, this knowledge, in ways designed to harm
and destroy others." -- is strangely naive.

The values of this planet's dominant form of primate have, for thousands of
years, included achieving dominance over other primates through violence.
Those are part of our human "values", which we see enacted every day in places
like Syria.

Bostrom mentions the possibility of people using this advance to harm others.
He is confusing the modes of his verbs. We are not in the realm of
possibility, or even probability, but of actuality and fact. Various nations'
militaries and intelligence communities have been exploring and implementing
various forms of AI for decades. They have, effectively, been
instrumentalizing AI to enact their values.

Bostrom's dream of coordinating political institutions to shape the future of
AI must take into account their history of using this technology to achieve
dominance. The likelihood that they will abandon that goal is low.

Reading him gives me the impression that he is deeply disconnected from our
present conditions, which makes me suspicious of his ability to predict our
long-term future.

~~~
shas3
I think both can be right. Ng and LeCun are talking about the real, near-term
future. Bostrom always came across as a speculative SF + philosophy kind of
guy. Are there any specific critiques of his (and Yudkowsky/MIRI's, Musk's,
etc.) arguments? I think these two claims are plausible:

1. AGI is coming, whether 50 or 500 or more years from now. This is not too
unlikely, given that the brain is just an information processing system;
emulating it is likely to happen sometime in the future.

2. Such an AGI will be all-powerful because it is not limited by human flaws.
Trivial or not, we will have to program it with "thou shalt not kill"-type
values.

~~~
vonnik
Here is a very good critique of Bostrom's book:

[http://inverseprobability.com/2016/05/09/machine-learning-fu...](http://inverseprobability.com/2016/05/09/machine-learning-futures-6)

------
tednoob
I want a management-assisting AI. It would be neat to have it listen in on all
meetings to identify stakeholders, remember all the details, and place them in
a larger context, so you can ask it detailed questions. An AI can attend every
meeting and remember every detail. Imagine intelligent documentation.

~~~
astazangasta
Even humans can't do this well. How will you train a machine to do what a
human cannot?

~~~
50CNT
Even to develop an expert system, you only need some humans who do it
quantifiably better than other humans. Despite not all of us being expert Go
players, we have AI that can do it. In the less abstract space, we have
medical diagnosis systems that hold up in accuracy to domain experts, and beat
competent practitioners.

There are potential issues with this:

- The accuracy of the NLP and voice recognition used is too low to provide
useful input (it needs to do speaker differentiation without training on
specific speakers, and to cope with heavy usage of jargon).

- Performance in one domain (say, a meeting about oil & gas) does not transfer
to another (say, a meeting about IT infrastructure), which makes development
cost-prohibitive.

- The ability to encode and link knowledge is too low to be useful.

~~~
astazangasta
In Go you have a clear set of test cases with fixed outcomes. You don't have
the same thing here.

~~~
50CNT
That is precisely why I cited the example of medical diagnosis systems within
restricted subfields.

------
aymanc
OK, so it's an AI subject and everybody went Terminator vs. The Matrix :p

I have a few thoughts on what's been said, hopefully not too controversial.

A self-governing system does not need to be intelligent to be dangerous. I
think this is what scares people most. We give automated systems the power to
do more and more advanced and crucial tasks, and I think eventually we will
reach a point where it might be "safer" to give a choice to an automated
machine than to a person. Mind you, this machine could be something we already
have today.

I don't think an AI that can compete with human behaviour will explode
instantly out of a single creation. I think we're more likely to experience
advances upon advances in the field, gradually forming bits and pieces of the
human mind.

I find it very unrealistic to think that a machine will simply come to life; I
think this idea stems from our belief in a soul or a spark of life given by a
creator. Like most machines, it will evolve gradually until it reaches a point
where it is relevant. I don't think anyone will even notice the change.

Also, there is this unfounded image of how an AI would be: rational, not prone
to impulses or temptations, poetically machine-like, and non-human. That's the
way we saw machines years and years ago. (That's movies for ya.)

Creating something that can learn from others will require it to empathise
with others. I think it's only science fiction that an AI could be created
with full knowledge of its own operations. Artificial intelligence is in
essence heuristic; it would learn and adapt to its surroundings.

I think it would be a very unintelligent machine that tried to kill off the
very means it has to survive as an intelligence. Society is the root of
intelligence: communication, language, etc.

My views may be a bit optimistic on this subject, but I never hear them spoken
out loud.

~~~
rajanchandi
There will always be "unintelligent" crashes on the way to perfect
intelligence. These crashes are the real reason for all the human fear today.

~~~
aymanc
Good point, but I think I mentioned that in the first part of my post.

------
anonyfox
I believe more and more that some self-taught software tinkerer somewhere in
the middle of nowhere will have the final idea about how machine learning
should work, discovering some simple principles hiding in plain sight.
Suddenly it will all make sense, and a hobby ML service connected to the
internet will develop, through sheer learning from online resources
(forums, ...), into the first strong AI. Probably unnoticed. And then it will
replicate itself through insecure webservers or something like that.

~~~
mark_l_watson
Hinton hit on the great idea of using restricted Boltzmann machines to pre-
train deep neural networks (networks with many hidden layers), and that one
idea changed the field (I sat on a DARPA neural network panel in the 1980s
and sold a commercial NN toolkit back then).
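
(For readers who never met the idea, here is a minimal, illustrative sketch
of the building block: a single restricted Boltzmann machine trained with one
step of contrastive divergence (CD-1) on made-up binary data. The 2006 recipe
stacked several of these, each trained on the previous layer's hidden
activities, before supervised fine-tuning.)

    import numpy as np

    rng = np.random.default_rng(0)
    def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))

    n_vis, n_hid, lr = 64, 16, 0.1
    W = rng.normal(0, 0.01, size=(n_vis, n_hid))
    b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
    data = (rng.random((500, n_vis)) < 0.2).astype(float)  # fake binary data

    for epoch in range(20):
        for v0 in data:
            ph0 = sigmoid(v0 @ W + b_hid)                  # positive phase
            h0 = (rng.random(n_hid) < ph0).astype(float)   # sample hiddens
            pv1 = sigmoid(h0 @ W.T + b_vis)                # one Gibbs step
            ph1 = sigmoid(pv1 @ W + b_hid)
            # CD-1 update: <v h>_data minus <v h>_reconstruction
            W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
            b_vis += lr * (v0 - pv1)
            b_hid += lr * (ph0 - ph1)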

That said, I agree that new ideas will likely move the field further along
with huge and quick advances. Peter Norvig recently suggested that symbolic
AI, augmented with the kind of contextual information you get from deep neural
networks, may also make a comeback.

~~~
tansey
The contrastive divergence paper that Hinton published in 2006 definitely set
the field off again. I remember entering grad school in 2010, when everyone
was still really excited about unsupervised pretraining. Nowadays, however, no
one uses it.

It just turns out that with GPUs and stochastic gradient descent, no one needs
any of that stuff. There are still some tricks to making it really work,
though. In that sense, Hinton's dropout paper has probably had a longer-
lasting effect on the field.
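
(The dropout idea itself fits in a few lines; a framework-free sketch of the
"inverted" variant most people use now, with invented numbers:)

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout(h, p_drop=0.5, train=True):
        # Zero each unit with probability p_drop during training, and scale
        # the survivors by 1/(1 - p_drop) so the expected activation is
        # unchanged; at test time the layer is the identity.
        if not train:
            return h
        mask = (rng.random(h.shape) >= p_drop) / (1.0 - p_drop)
        return h * mask

    h = np.ones((2, 4))
    print(dropout(h))                 # about half zeroed, rest scaled by 2
    print(dropout(h, train=False))    # unchanged at test time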

But either way, I doubt what the OP is saying will come true. None of the real
advances in deep learning are coming from self-taught coders in the middle of
nowhere. They're coming from big labs with lots of resources, both physical
and intellectual. This stuff takes a lot of hard thinking by a lot of people
who understand optimization and probability. It also takes a ton of compute
power and massive datasets, which won't be available to a hobbyist.

~~~
anonyfox
I honestly do not think that DL is the answer. It's just a special case of
NNs with multiple layers, and NNs themselves are just one school of machine
learning, IMO not even the most promising one.

~~~
_mhr_
What is the most promising one?

------
Animats
We're still a long way from "strong AI". We need a few more ideas at least as
good as deep learning. But it's a finite problem - biological brains work, DNA
is about 4GB, and we have enough compute power in most data centers.

Right now we have enough technology to do a big fraction of what people do at
work. That's the big economic problem.

General-purpose robots still seem to be a ways off. The next challenge there
is the handling of arbitrary objects, the last task done by hand in Amazon
warehouses. Despite 30 years of work on the bin-picking problem, robots still
suck at it in unstructured situations such as stocking a supermarket shelf.
Once that's solved, all the jobs that involve picking up something and putting
it somewhere else gradually go away.

Rodney Brooks' Baxter was supposed to do this, but apparently doesn't do the
hard cases. Amazon sponsored a contest for this, but all they got was a better
vacuum picker. Work continues. This is a good YC-type problem.

~~~
lpage
> all the jobs that involve picking up something and putting it somewhere else

That's a well-phrased name for the superset containing automated freight and
logistics. As discussed in
[https://news.ycombinator.com/item?id=11568699](https://news.ycombinator.com/item?id=11568699),
trucking alone is 1% of the U.S. workforce, and that's not counting the
support industry (gas stations, highway diners, rest stops, motels, etc.).

------
Scea91
I am not a fan of dividing machine learning and deep learning. To me deep
learning is just a subset of machine learning.

~~~
conceit
It reminds me all too much of genetic programming, but I know the latter
almost only by name.

------
mangeletti
What about Internet as a human-assisted AI?

Internet knows exactly what to do to take over. It simply has to remain _more_
useful than anything else, as a means of avoiding entropy during transactions.
Many necessary physical-world transactions are reduced to a few, in order to
accomplish the same tasks.

Internet does not have to be conscious, by human measures, in order to take
over the world. It simply has to compete against humanity in a continual
positive feedback loop, wherein each iteration requires less human interaction
for the same or more tasks. After enough iterations, Internet becomes powerful
enough that the only way to gain a competitive advantage against others using
Internet is to use deep learning to increase your leverage.

A few iterations later, deep learning has become a mainstay (think Cold War
arms race, where each innovation gains a party leverage over the other party,
but only for a very short period), and is now the baseline. Many more tasks
are achieved using Internet and Internet-connected physical world devices[1].
These physical devices become integral parts of Internet's extended nervous
system, while the deep learning systems running in our data centers remain at
the center, helping Internet to learn about all the things it experiences.

Continue down this path a ways...

1. e.g., [https://www.wired.com/2015/05/worlds-first-self-driving-semi...](https://www.wired.com/2015/05/worlds-first-self-driving-semi-truck-hits-road/),
[http://spectrum.ieee.org/cars-that-think/transportation/sens...](http://spectrum.ieee.org/cars-that-think/transportation/sensors/the-ai-dashcam-app-that-wants-to-rate-every-driver-in-the-world),
[http://www.marketwatch.com/story/drone-delivery-is-already-h...](http://www.marketwatch.com/story/drone-delivery-is-already-here-and-it-works-2015-11-30),
[https://www.theguardian.com/environment/2016/feb/01/japanese...](https://www.theguardian.com/environment/2016/feb/01/japanese-firm-to-open-worlds-first-robot-run-farm)

------
blazespin
The reality is that we've already created this. Much of human life is governed
by AI doing capital allocation.

~~~
auntienomen
I don't think this is true. Capital allocation is generally the work of
humans. Very very few of the significant capital allocators make any use of
AI.

~~~
soundwave106
I guess it depends on how you look at it. I'm pretty sure all of the major
finance firms have a department dedicated to algorithmic trading, some of
which utilizes machine learning. Algorithmic trading has been estimated to be
as high as 60-70% of all trades at one point (it has slacked off a bit
recently).
([http://www.investopedia.com/articles/investing/091615/world-...](http://www.investopedia.com/articles/investing/091615/world-high-frequency-algorithmic-trading.asp))

A lot of algorithmic trading is more to take advantage of short term arbitrage
scenarios, though. The driving factor for the long term (that is, the "human
life" part) is still human driven.

~~~
auntienomen
I look at it in terms of dollars/euros/yen/etc. allocated. Virtually all of this
money is allocated by pension funds, mutual funds, sovereign wealth funds, and
similar behemoths. They may be using ML to improve their execution -- more
likely they've outsourced execution to third parties who use ML -- but they
are _not_ using ML to make the capital allocation decisions. Nor are they
profiting significantly from short term arbitrage opportunities. Capital
allocators worth mentioning generally are handling trillions of dollars. The
largest arbitrageurs are incapable of handling more than $10-20 billion. This
is rounding error in the world of capital allocation.

EDIT: The 50%+ figures you're quoting measure daily trading volumes, which is
not the same thing as capital allocation. Most of those traders don't hold
positions overnight; they don't do capital allocation. (Imagine how a startup
founder would feel if his investors took away their invested cash after 5pm
every night.)

------
SonicSoul
_Despite these astonishing advances, we are a long way from machines that are
as intelligent as humans—or even rats. So far, we’ve seen only 5% of what AI
can do._

I'd certainly love to see the math behind this estimation :)

I highly recommend the waitbutwhy post in the same vein but with more meat:
[http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...](http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html)

~~~
adwn
> _I'd certainly love to see the math behind this estimation :)_

> _I highly recommend the waitbutwhy post in the same vein but with more meat_

Although I'm a Wait But Why fan, that post suffers from the same lack of
"meat".

------
islon
_We would want the AI we build to ultimately share our values, so that it can
work as an extension of our will. It does not look promising to write down a
long list of everything we care about. It looks more promising to leverage the
AI’s own intelligence to learn about our values and what our preferences are._

This looks like a nice intro to a dystopian sci-fi movie.

------
hbt
There is a focus on artificial intelligence rather than intelligence
augmentation because the former seems easier to accomplish.

I also think we will reach a limit when it comes to intelligence augmentation.
Artificial intelligence will never have a limit and it doesn't have all the
evolutionary baggage we have.

An AI can be a rational agent. It doesn't have to fight impulses or
temptations, struggle with attention control, or exercise emotional
regulation. It is not stuck in a body that limits it and puts constraints on
its time.

For now, research on AI and IA go somewhat hand in hand. We still don't really
understand what differentiates us from intelligent animals other than the
ability to handle higher complexity.

AI researchers focused on replicating every brain module in the hope that it
will become intelligent are most likely to create a smart animal, but nothing
comparable to a human. Looking at our ancestors: they were able to create
tools and fire, communicate, etc. Hell, neanderthals could copulate with us.

Something happened in our brains between the age of the neanderthals and us.
There is 99.5% genetic similarity, and if we could find what that 0.5% is,
maybe we could focus on enhancing/replicating that instead of every brain
module. People speculate it is creativity (divergent thinking), since art
appeared in caves where there was none prior. The language gene was present in
neanderthals, and so was the ability to create tools, cooperate in groups to
hunt, etc.

The fear of AI destroying everything is a genuine one. If we create something
as smart as a bear, it still wouldn't be smart enough to compete against us in
every arena, but like a bear, it could use its sheer power and speed to
overwhelm us.

PS: I find the subject of neanderthals fascinating, if anyone has a good
recommendation on the evolution of intelligence or finding what that 0.5% is,
please let me know.

~~~
fennecfoxen
> Artificial intelligence will never have a limit and it doesn't have all the
> evolutionary baggage we have.

Untrue. We operate in a universe with fundamental limitations built in, where
both physics and computer science suggest that what can be obtained with a
finite number of steps, or a finite amount of matter and energy, is limited.
Any communication is limited by the speed of light, and the amount of matter
and energy accessible in the light-cone of any AI produced on Earth is finite
as well. Even an AI putting a Dyson sphere around every star would need to
break physics to transcend finitude.

Any near-term artificial intelligence running off Earth's power grid, with
semiconductors manufactured in traditional facilities, will surely be limited
in even more fundamental ways, and is unlikely to control all of its inputs.

~~~
adrianN
The fundamental limits on computation [1] that we can prove with current
physics are so fantastically far from what we can achieve with current
engineering that we might as well say that there are no limits.

It's like saying that I can sort any list you can give me in O(1) time,
because there is only a constant number of bits encodable in the visible
universe, so the length of your list has a constant upper bound. While it's a
true statement, it's also rather boring, which is why we typically ignore such
limits in casual speech.

[http://www.arturekert.org/miscellaneous/physical-limits.pdf](http://www.arturekert.org/miscellaneous/physical-limits.pdf)
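
(For a sense of scale, one such bound is the Margolus-Levitin theorem: a
system with total energy E can pass through at most

    \nu_{\max} = \frac{2E}{\pi\hbar}

distinguishable states per second. For one kilogram of mass-energy that is
roughly 5 x 10^50 operations per second, around 10^33 times today's fastest
machines, which is exactly the parent's point: the limit is real but comically
far away.)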

~~~
mcbits
If we're talking about triggering an exponentially self-improving lineage of
intelligences/computers, then they will either hit a wall fairly quickly (in
the grand scheme) or face increasing practical constraints that flatten the
curve.

The limit may be at an unfathomably high level compared to where we stand, but
it will come fast. That's unless, say, it turns out the universe can adapt its
physics to meet certain types of ever-increasing demands.

------
ZanyProgrammer
Sometimes I think speculating about AI is similar to speculating about the
Fermi Paradox, i.e. predictions about the unknown backed up with absolute
certainty.

~~~
nkozyra
Except we're watching it emerge in real time. This isn't wild guessing; it's
taking current capabilities and extending them to new spaces.

It's fairly certain we'll have autonomous shipping, for example.

------
neuromancer2701
I have always thought that the claim that all truckers are going to be
unemployed was oversold. The average age of a semi-truck driver is in the low
50s. Most millennials don't want to do this job, so the robots will just
replace baby boomers as they retire, rather than causing the massive wave of
unemployment that everyone fears.

~~~
knodi123
> so the robots will just replace baby boomers as they retire

It all depends on timing. If the robots are too early, there will be trucker
layoffs. If the robots take too long, then trucker wages will rise until a new
generation of truckers decides it's worthwhile -- and then when the robots
arrive, there will be layoffs.

But sure, optimistically, the robots will phase in at the same rate and time
as current truckers age out. Knock on wood!

------
walterbell
Will the end of Moore's law affect the rate of progress on AI research?

~~~
ci5er
Moore's law, as formulated, always seems to be ending.

There's a higher-level formulation that has no name (as far as I know) that
goes something like this [fn]:

    - The rate of change of the decreasing cost for a human society to
      perform an arithmetic operation is always increasing.

That formulation can be traced from scratching marks on papyrus, through Roman
numerals, the abacus, and the banks of navigational computing staff cranking
out log and trig tables (and various results) in the 1600s-1900s, to the
vacuum tube and the transistor photolithographed into a 2D matrix, and soon
quantum, photonic, and 3D matrices...

[fn] That might be better stated by taking the inverse. I don't know. Need
more coffee.
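
(One hedged way to state that, per the footnote's "inverse": let c(t) be the
cost to a society of one arithmetic operation at time t. A single Moore-style
era looks like

    c(t) \approx c_0 \, 2^{-t/\tau}

with halving period tau, and the claim above is that across eras, papyrus to
photolithography, tau itself keeps shrinking: d/dt log c(t) grows ever more
negative over the long run.)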

------
gm-conspiracy
[https://en.wikipedia.org/wiki/Kill_Switch_%28The_X-Files%29](https://en.wikipedia.org/wiki/Kill_Switch_%28The_X-Files%29)

------
eyalworth
I think true AI, the likes of which the world has never seen before, not just
an AI buzzword, is inevitable in the next 50 years.

------
realworldview
I'm more interested in what's next for humans.

~~~
dominotw
Intolerable boredom

~~~
qbrass
AI could easily solve that problem.

It turns out there's a way to combine the popular genres of survival horror
and first-person shooters with the realism of real life and massively
multiplayer interaction involving everyone on Earth.

~~~
dominotw
I think this is massively overestimating our ability to suspend reality.

------
nxzero
True AI will be born in the wild, not in a lab.

~~~
DiabloD3
Or born in a lab, by accident, and then escapes to the Internet thanks to a
nice cyborg ninja lady.

Damnit, Motoko Kusanagi

------
graycat
The OP has

> Machine learning is the basis on which all large Internet companies are
> built, enabling them to rank responses to a search query, give suggestions
> and select the most relevant content for a given user.

IMHO, that's claiming too much:

My Internet search engine (running, in alpha test) essentially does

> give suggestions and select the most relevant content for a given user

but has nothing to do with anything like _machine learning_ in computer
science. Instead, the core data manipulations come from some original
derivations I did in applied math, based on some advanced prerequisites in
pure and applied math.

Why do I have confidence in the power of the math? Theorems and proofs, from
some careful assumptions that likely hold plenty well in the real situation.

More generally, my view is that for a specific problem to be solved with
_information technology_, commonly by far the most powerful approach is via
applied math, possibly original for that problem. An example is my work in
anomaly detection, as in, say,

[https://news.ycombinator.com/item?id=11880593](https://news.ycombinator.com/item?id=11880593)

For a technique of great generality and economic power, there is integer
linear programming: where it works, which is often, it can be said to totally
knock the socks off artificial intelligence or machine learning. Integer
linear programming is serious stuff; e.g., it was one of the main motivations
for considering _good_ algorithms and, then, the profound question of P versus
NP.
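
(To make that concrete for readers who haven't met it, here is a toy 0-1
knapsack posed as an integer linear program, all numbers invented, and solved
by brute force; real solvers use branch-and-bound and cutting planes instead
of enumeration:)

    import itertools
    import numpy as np

    # maximize  value . x   s.t.  weight . x <= capacity,  x_j in {0, 1}
    value = np.array([10, 13, 7, 8, 12])
    weight = np.array([5, 6, 3, 4, 7])
    capacity = 12

    best_x, best_val = None, -1
    for bits in itertools.product([0, 1], repeat=len(value)):  # 2^n points
        x = np.array(bits)
        if weight @ x <= capacity and value @ x > best_val:
            best_x, best_val = x, value @ x

    print("take items", np.flatnonzero(best_x), "for value", best_val)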

Gee, farther into the OP I see

> We need to retrain truck drivers and office assistants to create data
> analysts, trip optimizers and other professionals we don’t yet know we need.

> trip optimizers

Really? And that's new? Not exactly! That topic has been a biggie in
operations research, optimization, and integer programming for decades,
really, back to the 1950s. Dantzig, in the late 1940s, developed linear
programming at RAND, first to help with how best to deploy a military force a
long way away quickly -- call it a case of _trip optimization_. The famous
traveling salesman problem in optimization, integer programming, and P versus
NP? Could call it _trip optimization_. For FedEx, each night, which planes go
where? At least early on, could call that _trip optimization_. Much of
_logistics_ is _trip optimization_, and an important part of that is handling
uncertainty, and then we are into stochastic dynamic programming. Now we are a
long way from artificial intelligence or machine learning.

Point: the world is awash in problems of manipulating _information_; how to do
that is very old, spans many fields of science, engineering, and operations of
wide variety, often runs deep into mathematics, and long predates computer
science and _machine learning_.

I would ask: was the work of James Simons done as described in the OP? My
impression is no.

IMHO, the OP is claiming too much.

