
What AlphaGo Zero teaches us about what’s going wrong with innovation - bowyakka
http://timharford.com/2017/11/what-alphago-zero-teaches-us-about-whats-going-wrong-with-innovation/
======
makomk
The big question this fails to ask is: full employment of what kind? Recent
research has found that nearly all of the net job growth is in "alternative
work", meaning temporary jobs, contract workers, freelancers, etc:
[https://qz.com/851066/almost-all-the-10-million-jobs-
created...](https://qz.com/851066/almost-all-the-10-million-jobs-created-
since-2005-are-temporary/)

Now think about the kind of innovation that companies like Uber and Deliveroo
represent. While ostensibly they might be tech companies, in reality their
business models revolve around a workforce of contractors who lack employment
benefits and job stability. Their innovation is primarily in making more
people work for less and take on all the risk.

~~~
jjaredsimpson
Criticism of the -- what I believe to be -- inevitable post-work post-scarcity
economy is rooted in the false idea that employment provides meaning to those
employed.

But proclaiming the "Value of Work" is just arguing for the "Merits of
Drudgery."

I can't wait to not work ever again. The weak reply that, "doctors have
valuable employment that gives them meaning," is completely beside the point.
Doctors like helping people or the challenge of solving an ailment or they
like the high status of being a doctor in society or the high pay.

But they don't like the paperwork, or interacting with insurance companies.
Most work is like that: low status, repetitive, boring, meaningless. Trading
away the best hours of the day in the best years of your youth is a terrible
bargain, but it persists because it is connected to survival and status.

Break the connection and humanity prospers.

~~~
topmonk
When I think about UBI, I keep picturing a group of pigeons fighting over
pieces of bread.

I don't see UBI as freedom. It shares a lot with slavery, in that someone else
is feeding you, and therefore has control over you.

What I hope for in the future is a fully independent machine that each person
owns, one capable of caring for them by providing food, shelter, etc., and
capable of building a clone of itself.

In this way you are truly free, having a machine which you own and can shut
down and leave at any time if you wish.

~~~
ThomPete
You are looking at this from the wrong perspective.

You are assuming there is someone (other humans) who are feeding you.

The point of UBI is that it's built on the realization that technology itself
is feeding you. In the end there aren't going to be any single owners, because
everything is better solved by technology; we are all going to own the means
of production, so to speak.

The post-scarcity society is where all basic needs are met.

If each person has their own machine, you are kind of back to the same
problem you have now: who gets to use what resources?

~~~
bluesnowmonkey
Why should technology continue to feed you though? Maybe it's more efficient
to let you starve. This isn't a theoretical, what-would-a-currently-
uninvented-AI-decide-to-do kind of question. Just look at what corporations do
now. It's all about dollars and cents. Non-human entities don't care about
what's right for humans. Even ones that are mostly composed of humans.

~~~
cinquemb
> _Why should technology continue to feed you though?_

I think this comes from current thinking based on past history where we were
more resource constrained, embodied by this:

"In the rest of society, however, we often both try to hire people who seem to
show off the highest related abilities, and we let those most prestigious
people have a lot of discretion in how the job is structured. For example, we
let the most prestigious doctors tell us how medicine should be run, the most
prestigious lawyers tell us how law should be run, the most prestigious
finance professionals tell us how the financial system should work, and the
most prestigious academics tell us how to run schools and research."[0]

Whereas a more technological perspective might recognize the limits of
thinking purely along the current "dollars and cents" prestige lines, and
might come to realize that sustaining every human to some increasing degree
"frees" the marginal human to maximize along dimensions that aren't
necessarily the "dollars and cents" direction (think of every high
school/college/grad school dropout now making ~6 figures writing software,
who, if afforded a similar standard of living and degree of autonomy as they
have today, might choose to pursue something more likely to enhance
technological development [well, who knows, maybe I am just speaking for
myself], or of those born into a situation where every day was an arduous
struggle to feed themselves, who would then be "free" to spend their time on
anything but foraging for sustenance). This can perhaps be embodied as a
solution by recognizing this:

"This can go very wrong! Imagine that we wanted research progress, and that we
let the most prestigious researchers pick research topics and methods. To show
off their abilities, they may pick topics and methods that most reduce the
noise in estimating abilities. For example, they may pick mathematical
methods, and topics that are well suited to such methods. And many of them may
crowd around the same few topics, like runners at a race. These choices would
succeed in helping the most able researchers to show that they are in fact the
most able. But the actual research that results might not be very useful at
producing research progress."[0]

[0] [http://www.overcomingbias.com/2016/06/beware-prestige-
based-...](http://www.overcomingbias.com/2016/06/beware-prestige-based-
discretion.html)

~~~
fragmede
Lacking UBI, scientists are forced to pick research topics only barely
related to their actual interests, because that's where the money is. (E.g.
grant proposals for million-nanometer-sized machines when nano-machines were
a hot research subject.) How is letting scientists research subjects they're
_actually_ interested in, instead of seeking funding from industry, any worse
for the progress of science? Science research frequently has _zero_
commercial application. (E.g. chemical compounds that could help humanity go
unresearched because they can no longer be patented.)

~~~
cinquemb
> _How is letting scientists research subjects they're actually interested
> in, instead of seeking funding from industry, any worse for the progress of
> science? Science research frequently has zero commercial application._

Let's ignore any influence industry already may exert on any grant funding.

Because in some fields, at best, industry is behind what's being worked on in
academia. In the mid-60s, when ARPANET was being developed, I wonder what the
industry/market would have been asking for instead? Probably some minor
extension of something they had already seen before…

Being beholden to industry can be a blessing for some (just like the grant
hamster wheel is for others), but for others it could just be another set of
relatively arbitrary constraints in the scheme of figuring out the
theoretical.

Here's a list of research in mathematics that, in some shape or form, was
considered to have zero commercial application[0]. I'm sure you can find
analogues in many sectors.

[0] [https://mathoverflow.net/questions/116627/useless-math-
that-...](https://mathoverflow.net/questions/116627/useless-math-that-became-
useful)

------
GuB-42
Projects like Deep Blue and AlphaGo are neither fundamental innovation nor
research; they are just PR stunts that show off the expertise of the company
making them.

TBH, winning a game of chess or go has little value in itself, except for the
limited market of selling chess or go software. The reason they are doing that
is mostly for publicity. IBM makes computers, and they show how good they are
at it by having one beat top players at chess, and Google makes machine
learning based products and they use AlphaGo to show how good they are at it.

Chess and go don't drive innovation, they are just a side effect of real
innovation.

~~~
mda
Funny that up until 2016 Go was regarded as one of the most difficult games
for computers to master, and now that it has been cracked it becomes a PR
stunt? Would you have claimed the same in 2015?

~~~
GuB-42
It was a hard problem, and that's what makes it an effective PR stunt.

Google was working on machine learning for some practical application like
image classification, better targeted ads or whatever thing Google does. A
bunch of people then came up with the idea: "hey, we have all that AI stuff,
we may be able to use it for computer go". And Google replied with "OK, sounds
like good publicity, here is a budget, we also have a bunch of servers and if
you need help, feel free to ask our machine learning department".

It is like making an industrial robot that can crush concrete blocks, or some
other difficult but not especially useful task. Maybe it is a huge deal
because all previous attempts failed, but the point here is not that years of
research into concrete-crushing robots have paid off, but rather that recent
advances in practical engineering made it possible, and maybe even easy.

~~~
fjsolwmv
That's not true though. Google bought a company whose main product was a Go
machine, on the idea that those smart people could potentially also do useful
work. Or just as PR cover for their unrelated AI work.

------
d--b
Mmmh, I am not a specialist and I don't know the numbers, but it seems to me
that fundamental research is no less active than it used to be.

Physics has made a lot of progress in materials (nanotech, weird polymers, and
so on), in building batteries, in finding the Higgs boson and gravitational
waves, and I'm sure in plenty of other fields.

Medical research has advanced a lot with the invention of CRISPR.

CS has grown a lot in AI and quantum computing...

~~~
wallflower
If you read the science fiction series "The Three-Body Problem" (highly
recommended), it makes a very compelling argument that fundamental research is
the most important investment in the future.

For example, fusion drives, not traditional stored rocket propellant engines,
will be necessary to navigate between planets and the outer solar system.
Also, existing known behaviors/laws of physics aside, the book posits that
colonization of other planets/stars in the universe requires achieving light
speed travel (along with hibernation technology).

However, the other argument the book series makes is that there needs to be a
strong motivator to get all the countries and economies of the world to focus
on fundamental research and applying it.

~~~
nine_k
The mother planet reaps no material benefits from colonizing another star.
The groups who colonize it get a world of their own.

So the interests of Earth governments are not aligned with star travel, and
only marginally aligned with colonizing e.g. Mars. The groups who want to go
there will have to do it themselves. (See Elon Musk.)

~~~
gech
Hmm. What about an escape mechanism for its people at least? Or scarce mineral
resources that could be brought back?

~~~
nine_k
Scarce minerals from the asteroid belt? Already in R&D (google "Planetary
Resources"). Shipping anything in bulk from another _star system?_ Unlikely
even with the fabled "teleportation" tech.

------
payne92
This article is off in so many dimensions.

Fundamental research is not less active, but it's happening in different
places (e.g. the Google Brain team). Find the most profitable companies and
you'll find the research.

And to suggest a computer Go player, taught in a few days, is a "marginal
improvement" over decades of "AI research": as my kids say, "wut?"

If anything, today's deep-learning-driven AI is a prime example of how
fundamental research can work (neural networks were considered "fringy"
research by many until about 10 years ago).

~~~
nkurz
_And to suggest a computer Go player, taught in a few days, is a "marginal
improvement" over decades of "AI research": as my kids say, "wut?"_

Maybe I'm the one that has it backward, but I'm pretty sure that Harford
would not agree with the statement that AlphaGo Zero is only a "marginal
improvement".

To the contrary, he says that AlphaGo is an "outlier", and uses it as an
example of the sort of "speculative research" we should be doing more of:
"Productivity and technological progress are lacklustre because the research
behind AlphaGo Zero is not typical of the way we try to produce new ideas."

Apparently he should have been clearer, but I took the article as a call for
more real research of the type that produced AlphaGo, and fewer of the
"pragmatic shortcuts" and "brute-force approaches that taught us little but
played strong chess".

~~~
mannykannot
There seems to be a mismatch between the headline and the article itself. I
see this quite often, and I think it is often due to headlines being written
by editors, or even editorial assistants.

The author's choice of examples, featuring a counter-example prominently,
seems odd - perhaps it is to capitalize on the interest in AlphaGo Zero. The
article is something of an anachronism, in that it would have worked better
immediately after Deep Blue (or even after Watson/Jeopardy).

------
cdancette
A simple solution would be for the government to fund more fundamental
research.

Research results should be public goods.

It seems hard to encourage companies to do public research, as they have no
short- or medium-term interest in doing so.

~~~
eru
Companies have at least as much interest to do research as they have to do any
other charitable activity. The private sector does plenty of charity.

Not sure the government should be involved. Not because basic research ain't
great---in an ideal world we'd all get ponies from the government---but
because budgets are finite and there are other opportunities, some of them
with more definite benefits.

(Like, e.g., funding education, especially early education. Or perhaps just
taxing less, etc.)

One interesting thing to note is that in our world the American and British
militaries funded some of the first computers. A clear example of
government-funded research. But---if the government hadn't paid for inventing
computers for the militaries, IBM would have come up with computers only a
few years later. (And in the counterfactual with lower government
expenditures, the private sector might have had more funds left over to build
computers earlier?)

~~~
cdancette
I think charity improves their public image more than fundamental research,
which will be known only to specialists. Moreover, they have large tax
incentives for charitable activities (not sure if they have them for research
too).

As for the military computers, they had a clear interest in doing so, and the
military keeps most of its research hidden, so I think it's not really
comparable to public research.

~~~
pjc50
> large tax incentives for charitable activities

People misunderstand how these work. You don't get money by giving away money.
What happens is that the charity gets the money as if it were pre-tax, that's
all.

(Trying to get the money back into the company from the charity after you've
got the tax break is fraud.)

~~~
pault
I'm under the impression that it works like this: I make $100,000 this year,
donate $20,000 to charity, pay taxes on $80,000 in income. Is this not
accurate? My understanding is that it can save you money if you're just over
the bottom end of a tax bracket. Not sure if that applies to corporations too.

~~~
pjc50
> I make $100,000 this year, donate $20,000 to charity, pay taxes on $80,000
> in income. Is this not accurate?

That sounds accurate.

> it can save you money if you're just over the bottom end of a tax bracket.

That sounds like a misunderstanding of tax brackets - if the brackets are
(e.g.) 20% up to $80k and 40% above that, with no deductions, what do you pay
if you earn $80,001?
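
A minimal sketch of the arithmetic, using the hypothetical 20%/40% schedule
above (not any real tax code): each rate only applies to the income falling
inside its bracket, so crossing a threshold never reduces your take-home pay.

    # Marginal tax: each rate applies only to income inside its bracket.
    # Hypothetical schedule from the example above: 20% up to $80k, 40% above.
    BRACKETS = [(80_000, 0.20), (float("inf"), 0.40)]

    def tax(income):
        owed, lower = 0.0, 0.0
        for upper, rate in BRACKETS:
            if income <= lower:
                break
            owed += (min(income, upper) - lower) * rate
            lower = upper
        return owed

    print(tax(80_000))                 # 16000.0
    print(tax(80_001))                 # 16000.4 -- only the extra $1 is taxed at 40%
    print(tax(100_000) - tax(80_000))  # 8000.0  -- what a $20k deduction saves at 40%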

~~~
pault
You're right. Obviously I've never given this subject much thought. :)

------
l3robot
"It was a time when companies weren’t afraid to invest in basic science." No
they were probably afraid, but they were forced to invest in science by
states. AT&T did not decide to invest massively in science and risky projects
like Unix, they were forced to. Please stop thinking companies are behind
innovation. A great piece of article that demestify this myth:
[https://www.theguardian.com/technology/2017/may/11/tech-
inno...](https://www.theguardian.com/technology/2017/may/11/tech-innovation-
silicon-valley-juicero)

~~~
choonway
The push most likely came from the pressures of the Cold War. Now that it is
long over, threats like Russia/Syria/ISIS/North Korea/China don't provide the
same level of urgency to compete as before.

~~~
fjsolwmv
We are also past the era where someone like Shannon or Turing noodling on a
sheet of paper could invent a war-changing idea.

------
cosmic_ape
Not only do corporations abandon real fundamental research, but people like
the author of that blog start referencing things like AlphaGo as "fundamental
research" comparable to Shannon's or Turing's work.

AlphaGo is good and necessary engineering, but the ideas are pretty old, and
not especially illuminating. Confuse it with fundamental research often
enough, and people will start to believe it. And then corporate managers,
academic research grants, and academic publishing venues, like conferences,
will start expecting "fundamental research" to be like AlphaGo instead of
actual fundamental research.

------
dingo_bat
I don't think there is a very sharp distinction between results-oriented R&D
and "basic research". In the article, IBM's Deep Blue is dismissed as a
dead-end victory but apparently AlphaGo is not? Why? They both seem identical
to me in goals and research methodology.

On a side note, I cannot wait for general super intelligence. It cannot come
soon enough. I'm tired of being poor and stuck in a fucking rut, and
contemplating my death in a few short decades.

~~~
dagw
_Why? They both seem identical to me in goals and research methodology._

In theory, taking the work done on AlphaGo (and more importantly AlphaGo
Zero) and generalizing it to non-Go-related problems should be a lot easier
than taking the work done on Deep Blue and generalizing it to
non-chess-related problems.

~~~
alenmilk
Yes, AlpaGo Zero is mainly self taught. It means it learned to play through
the game mechanics. There are no databases of moves or smart optimizations
that are based on our understanding of go.
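
To make the "nothing but the rules" idea concrete, here is a toy sketch of
self-play learning for tic-tac-toe: a tabular value function updated only
from game outcomes, with no human games and no hand-coded strategy. It only
illustrates the principle; AlphaGo Zero's actual setup (a deep network guided
by Monte Carlo tree search) is far more sophisticated.

    import random

    # All winning lines on a 3x3 board, cells indexed 0..8.
    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    V = {}  # board string -> estimated value from X's point of view

    def winner(b):
        for i, j, k in LINES:
            if b[i] != "." and b[i] == b[j] == b[k]:
                return b[i]
        return "draw" if "." not in b else None

    def value(b):
        w = winner(b)
        if w == "X": return 1.0
        if w == "O": return 0.0
        if w == "draw": return 0.5
        return V.setdefault(b, 0.5)

    def self_play_game(eps=0.1, alpha=0.2):
        board, player, states = "." * 9, "X", ["." * 9]
        while winner(board) is None:
            moves = [i for i, c in enumerate(board) if c == "."]
            afterstates = [board[:i] + player + board[i+1:] for i in moves]
            if random.random() < eps:              # explore a random move
                board = random.choice(afterstates)
            else:                                  # exploit the learned values
                pick = max if player == "X" else min
                board = pick(afterstates, key=value)
            states.append(board)
            player = "O" if player == "X" else "X"
        # Back up the result: pull each state's value toward its successor's.
        for s, s_next in zip(states, states[1:]):
            V[s] = value(s) + alpha * (value(s_next) - value(s))
        return winner(board)

    random.seed(0)
    for _ in range(20000):
        self_play_game()
    results = [self_play_game(eps=0) for _ in range(1000)]
    print({r: results.count(r) for r in ("X", "O", "draw")})

With enough training games, greedy self-play tends toward draws (the optimal
result in tic-tac-toe); the point is that all of that knowledge comes from the
program playing itself.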

------
teabee89
I highly recommend the talk "Greatness Cannot Be Planned: the Myth of the
Objective" by Kenneth Stanley. He created picbreeder.org (an evolutionary art
platform) and realized that if an interesting state is set as an objective,
it is extremely hard to reach from the initial state with AI algorithms,
because you sometimes need to move far away from local optima.

~~~
fjsolwmv
The notion of local optima is centuries older than the invention of the first
.org

------
forgot-my-pw
Join Leela Zero in trying to replicate AlphaGo Zero:
[https://github.com/gcp/leela-zero](https://github.com/gcp/leela-zero)

It's estimated that AlphaGo Zero took about 1700 GPU-years to train. We can
only reach that number through a distributed effort.

400k games have currently been submitted to the Leela Zero project:
[http://zero.sjeng.org/](http://zero.sjeng.org/) . It's still playing at an
amateur level. (AGZ trained on about 30 million self-play games.)
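
Rough arithmetic on what that implies; the volunteer GPU counts below are made
up for illustration, only the 1700 GPU-years, ~30M AGZ games, and ~400k Leela
Zero games figures come from the estimates above.

    # Back-of-the-envelope numbers based on the figures quoted above.
    GPU_YEARS = 1700
    AGZ_GAMES = 30_000_000      # ~30M AlphaGo Zero self-play games
    LEELA_GAMES = 400_000       # games submitted to Leela Zero so far

    for gpus in (100, 1_000, 10_000):
        print(f"{gpus:>6} volunteer GPUs running 24/7 -> ~{GPU_YEARS / gpus:.2f} years")

    print(f"Leela Zero is at ~{100 * LEELA_GAMES / AGZ_GAMES:.1f}% of AGZ's game count")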

------
dandare
This article implies a positive correlation between technology, research and
employment, which is unsubstantiated. The employment rate is a function of
politics.

~~~
d--b
While I agree that the article is not very convincing, it's not far-fetched
that automating technologies may have an impact on the employment rate.

Self-driving trucks and cars are likely to completely change the job
landscape, and truck drivers are not going to switch jobs overnight.

But yes, you're right, politics plays a big part, and so do monetary policies
and culture (as in: it's more acceptable to be jobless in some cultures than
in others).

------
nottorp
I'm not current, so please enlighten me:

Is "deep learning" just a new buzzword for neural networks, or is there
something extra?

~~~
lustig
Technically yes, most often it's about stacking more layers in neural
networks, making them "deep". However, there is some merit to the new hype,
since stacking more layers worked far better than anyone previously working
with neural networks and ML thought it would. But in theory you could
generalize deep learning to methods other than neural networks; it's
basically about creating far more complex models than those used in previous
research and feeding them lots of data, thereby assuming less about the
problem and letting the model figure it out.
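
As a toy illustration of "deep just means stacked layers" (a sketch, nothing
like a production framework), here's a tiny fully connected net in numpy that
learns XOR by stacking two hidden layers and running plain backpropagation:

    import numpy as np

    rng = np.random.default_rng(0)

    # XOR: a problem one linear layer cannot solve, but stacked layers can.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    sizes = [2, 8, 8, 1]   # input -> two hidden layers -> output
    Ws = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes, sizes[1:])]
    bs = [np.zeros((1, b)) for b in sizes[1:]]

    for _ in range(10000):
        # Forward pass: each layer is a linear map followed by a tanh.
        acts = [X]
        for W, b in zip(Ws, bs):
            acts.append(np.tanh(acts[-1] @ W + b))

        # Backward pass (squared error); d/dz tanh(z) = 1 - tanh(z)^2.
        grad = 2 * (acts[-1] - y) * (1 - acts[-1] ** 2)
        for i in reversed(range(len(Ws))):
            dW = acts[i].T @ grad
            db = grad.sum(axis=0, keepdims=True)
            if i > 0:
                grad = (grad @ Ws[i].T) * (1 - acts[i] ** 2)
            Ws[i] -= 0.1 * dW
            bs[i] -= 0.1 * db

    a = X
    for W, b in zip(Ws, bs):
        a = np.tanh(a @ W + b)
    print(np.round(a, 2))  # typically close to [[0], [1], [1], [0]]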

~~~
marcosdumay
> it's basically about creating way more complex models than those used in
> previous research and feeding them lots of data

Those are instructions for overfitting. Deep learning neural networks somehow
escape this problem, but it's not a given that other models would escape it
too.

~~~
lustig
This is true! Overfitting is definitely one of the biggest problems with deep
learning. Some techniques to avoid it have been developed, such as dropout
(introducing noise) and early stopping. But in general this is why deep
learning requires huge amounts of data: a deep learning model will overfit if
not given enough data. That is also why (at this time) it only performs well
on problems where the ratio between available data and problem complexity is
high enough.
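
A toy sketch of those two regularizers (inverted dropout plus a patience-based
early-stopping loop); the validation losses below are made up purely to show
the control flow:

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout(activations, p=0.5, training=True):
        # Inverted dropout: zero a random fraction p of units during training
        # and rescale the survivors, so no rescaling is needed at test time.
        if not training or p == 0.0:
            return activations
        mask = rng.random(activations.shape) >= p
        return activations * mask / (1.0 - p)

    a = np.ones((2, 8))
    print(dropout(a, p=0.5))            # roughly half the units zeroed, rest doubled
    print(dropout(a, training=False))   # unchanged at inference time

    # Early stopping: stop once the validation loss stops improving.
    val_losses = [0.90, 0.70, 0.55, 0.50, 0.49, 0.50, 0.52, 0.55]  # made-up numbers
    best, patience, waited = float("inf"), 2, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, waited = loss, 0
        else:
            waited += 1
            if waited > patience:
                print(f"stopping at epoch {epoch}, best validation loss {best}")
                break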

------
timthelion
It is strange to read this, remembering how Microsoft has been lambasted time
and again for doing so many interesting things with Microsoft Research and
hardly ever taking any of them to the product stage. Is Microsoft unique in
its too-much-R-and-not-enough-D approach?

~~~
munin
MSR has been heavily reprioritized toward products and commercialization,
driven by a re-org. In my research area there have been a number (more than
ten) of high-profile departures of researchers from MSR, frequently going
_back to academia_, which is madness.

------
purrcat259
Currently giving 'Error establishing a database connection'

~~~
tobyhinloopen
An article about the AI revolution, and we still have problems keeping a
database alive. Isn't that a bit ironic?

~~~
vog
Even more ironic is using a database for static content at all.

Right now, we (as a society/industry) are unable to deploy well-known best
practices of 5 years ago:

[https://www.martinfowler.com/bliki/EditingPublishingSeparati...](https://www.martinfowler.com/bliki/EditingPublishingSeparation.html)

(In software development, it is even worse. I can't find the source to cite,
but the saying is along the lines of: The mainstream programming languages and
paradigms are mostly at the state of research of the 1970s, and right now we
are evolving towards the state of research of the 1980s.)

~~~
pault
Isn't that the case for all industries? How many of the battery technologies
currently being researched will be commercially available in the next 10
years?

