
Letting neural networks be weird - gpresot
http://aiweirdness.com/post/172894792687/when-algorithms-surprise-us
======
AnIdiotOnTheNet
> Sometimes I think the surest sign that we’re not living in a computer
> simulation is that if we were, some microbe would have learned to exploit
> its flaws.

What makes the author so sure that it hasn't? We can only take the workings of
the universe for what they are. We have no context with which to determine
what is a flaw and what is just physics. One could say that quantum tunneling
looks a hell of a lot like a common collision detection bug, but that could
just be because how reality actually works is unintuitive to minds evolved for
a more applicable set of rules. We'd never know the difference. In fact, it
probably isn't meaningful to say there even is a difference.

~~~
amelius
Also: if the effects of the bug were obvious, the creator would have fixed it.

~~~
adrianN
Unless we're running on some forgotten machine in a closet somewhere.

~~~
Viliam1234
Any simulation in a sufficiently forgotten machine is indistinguishable from
reality.

------
colordrops
These anecdotes remind me of stories where someone asks a genie for a wish,
and the genie technically grants it but in a way that is not what the wisher
intended. For example, (cribbed from another thread), "I want to be rich". So
the genie renames you to Richard. Ironically a very religious buddy of mine
claimed that I as a software engineer am in part responsible for releasing a
"Jinn", which could have unintended consequences for humanity.

~~~
Sharlin
Computers are the genies and golems of the fairytales. Stories that
demonstrate the folly of confusing what you _ask_ with what you _mean_ are as
old as humanity itself, but the stories we now tell are not fictional anymore:
we went and _built_ those golems and genies and now share the planet with
billions of them!

~~~
booleandilemma
Your comment reminds me of Neuromancer:

 _For thousands of years men dreamed of pacts with demons. Only now are such
things possible._

------
clickok
It's interesting that so much neural net weirdness emerges from exploiting
errors in physics simulators or floating point math. I am now expecting the
next generation of perpetual motion machines to include AI to try to take
advantage of physics bugs in our own universe.

On a related note, does anyone know how you might go about fixing a simulator
that allows collisions to generate more energy/momentum than was initially
supplied? Or otherwise violates known invariants?

~~~
comicjk
You want a symplectic integrator
([https://en.m.wikipedia.org/wiki/Symplectic_integrator](https://en.m.wikipedia.org/wiki/Symplectic_integrator))
such as Verlet Integration
([https://en.m.wikipedia.org/wiki/Verlet_integration](https://en.m.wikipedia.org/wiki/Verlet_integration)).
Such integrators naturally conserve energy and momentum as long as your forces
and energies are self-consistent and you don't use gigantic timesteps.

A classic mistake is to use something like "velocity += acceleration*time"
(this is called Euler integration). It looks reasonable, and it's good enough
for a toy project, but it doesn't conserve energy unless the timesteps are
infinitely small.

A more sophisticated mistake is to use something like Runge-Kutta Integration:
highly accurate in terms of position, but it is not symplectic so the total
energy will drift over time. Think of your simulated world as a stack of graph
paper sheets, where each sheet represents a surface of constant energy. Runge-
Kutta will take you very close to the ideal (x,y) point - but not necessarily
on the same sheet. Verlet Integration may be a little further from the right
point each time, but by its mathematical form it will always stay on the same
sheet.
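A minimal sketch of the difference, using a hypothetical 1D harmonic oscillator (m = k = 1, so total energy is (v² + x²)/2): explicit Euler pumps energy into the system every step, while velocity Verlet (a symplectic method) keeps it bounded.

```python
# Toy 1D harmonic oscillator: F = -x, so exact total energy E = (v^2 + x^2) / 2.
# Compare explicit Euler against velocity Verlet over many steps.

def energy(x, v):
    return 0.5 * (v * v + x * x)

def euler_step(x, v, dt):
    # Explicit Euler: both updates use the state at the start of the step.
    a = -x
    return x + v * dt, v + a * dt

def verlet_step(x, v, dt):
    # Velocity Verlet: symplectic, so energy stays on (near) the same "sheet".
    a = -x
    x_new = x + v * dt + 0.5 * a * dt * dt
    a_new = -x_new
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new

def run(step, steps=10000, dt=0.05):
    x, v = 1.0, 0.0  # released from rest at x = 1, so E should stay 0.5
    for _ in range(steps):
        x, v = step(x, v, dt)
    return energy(x, v)

print("Euler :", run(euler_step))   # energy blows up: free momentum from nowhere
print("Verlet:", run(verlet_step))  # energy stays close to the initial 0.5
```

For explicit Euler, the energy of this oscillator is multiplied by (1 + dt²) every step, which is exactly the kind of "collisions generate energy" bug an evolved agent will happily exploit.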

------
jmmcd
Bad title (in the original), because the examples in the paper are mostly drawn
from evolutionary computation and artificial life, with only a few relating to
neural networks being used in those fields.

------
partycoder
In my opinion, it is very, very hard to fully sandbox a clever AI. It can find
side channels, or it can turn innocuous stuff into a computing device...

e.g.: "Accidentally Turing complete" shows how things that were never intended
to be computing devices are actually Turing complete, including Magic: The
Gathering, the card game.

[http://beza1e1.tuxen.de/articles/accidentally_turing_complet...](http://beza1e1.tuxen.de/articles/accidentally_turing_complete.html)

~~~
scaryspooky
I've often thought that JIRA workflows are Turing complete; I was hoping your
list would have evidence.

~~~
mlthoughts2018
It’s funny to think of JIRA as Turing complete because when using JIRA nobody
can complete _any_ programs.

~~~
k_sze
Well, everything halts then, so that just solved the Halting Problem.

~~~
anarazel
But you need to prove that it halted - to your manager, obviously - and that's
the impossible and stressful part.

~~~
tripzilch
But is it provable in nondeterministic polynomial stress?

------
matthberg
This is pretty much a direct summary of a paper [0] previously discussed
(twice) on Hacker News [1][2].

[0]:
[https://arxiv.org/pdf/1803.03453.pdf](https://arxiv.org/pdf/1803.03453.pdf)

[1]:
[https://news.ycombinator.com/item?id=16837030](https://news.ycombinator.com/item?id=16837030)

[2]:
[https://news.ycombinator.com/item?id=16600701](https://news.ycombinator.com/item?id=16600701)

------
John_KZ
Those labeling nets keep resurfacing in the news. They're more of a hack than
a proper implementation. They use a focus-based algorithm to detect prominent
features and just glue them together with NLP, without using any context. If
such a net detects scissors and paper, it might say "Man uses scissors to cut
painting" despite no man or painting being present. That's because there's a
high statistical correlation between the two in its dataset. That's it.

------
chimprich
Most of the examples actually come from the unexpected behaviour of
evolutionary algorithms.

My favourite example of such (and it doesn't look like it made it into the
paper the article is based on) was an attempt to create a random number
generator via a genetic algorithm that controlled the development of an
electronic circuit. The unexpected way the algorithm solved the problem was to
produce a radio.

Edit: I may have been thinking of the result reported in this paper:
[http://people.duke.edu/~ng46/topics/evolved-radio.pdf](http://people.duke.edu/~ng46/topics/evolved-radio.pdf).
If so, the goal was to build an oscillator rather than a random number
generator.

------
FrozenVoid
Why is the author so sure we'd need microbes to discover exploits in a
simulation? A "floating point error" problem would be far more obvious, like
being unable to measure sub-Planck lengths or to model singularities.

------
flavius37
Here is a video explaining a paper that uses AI to find bugs in games:
[https://youtu.be/wm8tK91k37U](https://youtu.be/wm8tK91k37U)

------
xaedes
In the first example there really _could_ be sheep, or at least goats. They
could be hiding there quite well.

Like in this video:
[https://www.youtube.com/watch?v=0izxifsh0MU](https://www.youtube.com/watch?v=0izxifsh0MU)

