
When algorithms surprise us - imartin2k
https://arxiv.org/abs/1803.03453
======
kpil
I think it was almost 20 years ago when I expectantly started the robot
algorithm that had slowly been evolving for weeks.

I had already realized that I had to adjust the fitness function to give
points for killing, not just surviving, since the first attempt had created
robots that, instead of skillfully picking off the opponents one by one - as I
had envisioned - raced to the nearest corner and then just sat there waiting
to be killed by one of the other traditionally programmed robots.

I realized that the corner strategy made sense as they got the most protection
there, but it was a bit unsatisfactory to watch...

In the second try the robots scurried to their usual corners, waiting to be
killed, but this time firing randomly, occasionally killing one of their
opponents by luck before they got hit.

The evolutionary algorithm sort of hit a plateau there, and besides I kind of
wanted to use my computer for other things rather than spending days and days
simulating robot fights.

But I learned two things:

* Evolution most likely is a thing.

* It's really really hard to write fitness functions...
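
(Not from kpil's code - a minimal Python sketch of the fitness-function
lesson. The names RobotResult, kills and survival_time, and the
50-points-per-kill weight, are invented for illustration; it only shows why a
survival-only score selects corner-campers and why a flat kill bonus still
doesn't reward aiming.)

    from dataclasses import dataclass

    @dataclass
    class RobotResult:
        kills: int            # opponents destroyed during the match
        survival_time: float  # seconds the robot stayed alive

    def fitness_survival_only(r: RobotResult) -> float:
        # Rewarding only lifetime makes corner-camping optimal:
        # staying alive longest requires no aggression at all.
        return r.survival_time

    def fitness_with_kills(r: RobotResult, kill_weight: float = 50.0) -> float:
        # A per-kill bonus rewards shooting, but a camper that fires
        # randomly can still collect it, so careful aim is never selected for.
        return r.survival_time + kill_weight * r.kills

    # Toy comparison: a corner-camper vs. a robot that fights and dies early.
    camper = RobotResult(kills=0, survival_time=120.0)
    fighter = RobotResult(kills=2, survival_time=45.0)

    print(fitness_survival_only(camper), fitness_survival_only(fighter))  # 120.0 45.0
    print(fitness_with_kills(camper), fitness_with_kills(fighter))        # 120.0 145.0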

~~~
paulific
So you were working on a lab proof of S.L.A. Marshall's Men Against Fire? ;-)
[https://en.wikipedia.org/wiki/S.L.A._Marshall](https://en.wikipedia.org/wiki/S.L.A._Marshall)

------
btrettel
It's important to make sure that what these algorithms actually produce is
surprising to subject matter experts. I can recall, but cannot find at the
moment, a Tumblr post where the blogger was surprised that a computer program
which optimized a physical structure to minimize material while maximizing
strength ended up producing a shape that looked organic.

This didn't surprise me at all. Organic structures evolved over millions or
billions of years and probably are nearly optimal at accomplishing a
particular task. I'd be surprised if the optimization software didn't produce
something that looked organic.

It's not like the optimization software was better than actual organic
structures either, which, contrary to what the software assumed, are neither
isotropic (their strength varies depending on the direction) nor homogeneous
(their strength varies depending on the location).

------
pietroglyph
This article is actually providing select anecdotes from a more exhaustive
paper:
[https://arxiv.org/pdf/1803.03453.pdf](https://arxiv.org/pdf/1803.03453.pdf)

~~~
dang
Ah, good catch. It was discussed here:
[https://news.ycombinator.com/item?id=16600701](https://news.ycombinator.com/item?id=16600701).
But maybe a second discussion isn't so bad.

We'll change the URL to that from
[http://aiweirdness.com/post/172894792687/when-algorithms-surprise-us](http://aiweirdness.com/post/172894792687/when-algorithms-surprise-us).

~~~
baking
The title is still the same as the original link, but I wanted to comment on
the use of the term "algorithm" in this sense. I'm wondering if that's the
proper term for the output of a machine learning algorithm, and if we
shouldn't be calling it "learned behavior" instead. I just feel it disparages
all the careful work of library developers and the like and cheapens an
important field of study.

------
starchild_3001
I tend to believe life is universal. It may arise in a Turing machine that
executes all possible programs one line at a time in parallel. It may arise in
the subatomic realm... assuming sub-atomic particles exhibit sufficiently
diverse complexity between 10^-35 and 10^-17 meters (a space unknown to us).
It may arise between galaxies in the universe etc. We should view life more
broadly.

~~~
mjcohen
Then I think that you would enjoy "Last and First Men and Star Maker" by Olaf
Stapledon.

Available quite reasonably at [https://www.amazon.com/Last-First-Men-Star-Maker/dp/0486219623](https://www.amazon.com/Last-First-Men-Star-Maker/dp/0486219623/ref=sr_1_1?ie=UTF8&qid=1523806615&sr=8-1&keywords=last+and+first+men+and+star+maker).

------
schoen
I'm also reminded of
[https://blog.openai.com/faulty-reward-functions/](https://blog.openai.com/faulty-reward-functions/)
(which I saw on HN a while ago), where an AI unexpectedly learned to

> turn in a large circle and repeatedly knock over three targets, timing its
> movement so as to always knock over the targets just as they repopulate.
> Despite repeatedly catching on fire, crashing into other boats, and going
> the wrong way on the track, our agent manages to achieve a higher score
> using this strategy than is possible by completing the course in the normal
> way.
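
A toy back-of-the-envelope sketch of that incentive; every number below is
invented for illustration and does not come from the OpenAI post:

    # Hypothetical scoring constants, purely to illustrate the incentive.
    EPISODE_SECONDS = 120   # length of one evaluation run
    TARGET_POINTS = 100     # points per target knocked over
    RESPAWN_SECONDS = 10    # how often the three targets reappear
    TARGETS_PER_LAP = 3     # targets reachable in one tight circle
    FINISH_BONUS = 1000     # one-time reward for completing the course

    # Policy A: race to the finish, hitting each target once along the way.
    finish_score = FINISH_BONUS + TARGETS_PER_LAP * TARGET_POINTS

    # Policy B: circle the same three targets for the whole episode,
    # timing each lap to the respawn interval.
    loops = EPISODE_SECONDS // RESPAWN_SECONDS
    loop_score = loops * TARGETS_PER_LAP * TARGET_POINTS

    print(finish_score)  # 1300
    print(loop_score)    # 3600 - the stated reward prefers the loop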

------
stabbles
Would be interesting to provide videos of the evolved robots rather than just
the anecdotes.

~~~
hcs
The paper he draws the examples from has links to videos of most of the
visually interesting examples.

------
sundarurfriend
Very interesting. 'Learning to Play Dumb on the Test' was the most surprising
one to me, followed by the alien-turned-car that led to 'novelty search'
algorithms.

------
majewsky
> Sometimes I think the surest sign that we’re not living in a computer
> simulation is that if we were, some microbe would have learned to exploit
> its flaws.

Someone needs to tell the author about quantum mechanics. It's not an
unreasonable hypothesis to explain quantum effects as numerical errors in the
Matrix.

~~~
scottie_m
It doesn’t make sense for quantum effects to represent errors so much as for
them to be “shortcuts” that save on computing power. This would be especially
true if spacetime itself turns out to be quantized rather than continuous.
Either way, though, the probabilistic nature of QM on its own could represent
a “fudge” factor, I guess?

Truthfully simulation theory is uncompelling as a real theory, but very
entertaining to contemplate.

