

Machine Learning #2 - Hill Climbing (Meaning of Life?) - linux_devil
https://www.iamtrask.squarespace.com/blog/2013/12/18/machine-learning-2-hill-climbing-search-lifes-journey

======
nmc
Another way to look at an AI climbing hills is "causal entropic forces" [1],
which can make the AI spontaneously want to climb: climbing maximizes entropy,
because the top offers more future opportunities than going down.
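As a toy sketch of the idea (my own illustration, not the paper's algorithm): put an agent on a half-line with a wall at 0, and have it greedily pick whichever move leaves the most distinct states reachable within a short horizon. With no explicit goal, it drifts away from the wall, because open space means more possible futures:

```python
MOVES = (-1, 0, 1)

def step(state, move, wall=0):
    # The wall at 0 clips any leftward move.
    return max(wall, state + move)

def reachable(state, horizon):
    """Set of distinct states reachable within `horizon` steps."""
    frontier = {state}
    seen = set(frontier)
    for _ in range(horizon):
        frontier = {step(s, m) for s in frontier for m in MOVES}
        seen |= frontier
    return seen

def entropic_move(state, horizon=4):
    """Greedy 'entropic' choice: keep the most futures open."""
    return max(MOVES, key=lambda m: len(reachable(step(state, m), horizon)))
```

From state 1, moving right leaves 7 reachable states versus 5 against the wall, so `entropic_move(1)` returns `+1`: the agent "wants" to move into open space without ever being told to.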

On an unrelated note, your certificate is for _.squarespace.com which does not
include www.iamtrask.squarespace.com and triggers an SSL exception.

[1] Wissner-Gross and Freer. "Causal Entropic Forces", Phys. Rev. Lett. 110,
168702 (2013)
http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf

~~~
eli_gottlieb
The problem is that "more possible futures" doesn't actually learn any
specific function or direct actions towards a specific goal.

~~~
nmc
You are right, it does better: it discovers the goal on its own.

~~~
eli_gottlieb
That makes no freaking sense. If I want to teach a program to play Pac-Man, it
will have more _possible_ futures in situations where it runs away from ghosts
and _avoids_ eating fruit. But I want it to eat the fruit!

