
Physicist proposes new way to think about intelligence - alexwg
http://www.insidescience.org/content/physicist-proposes-new-way-think-about-intelligence/987
======
calhoun137
This sounds like nonsense to me. First of all, what exactly do they mean by
"intelligent behavior"? I looked in the paper but I couldn't find where they
define what they are trying to model. They do, however, use the vague
expression "remarkably sophisticated behaviors associated with the human
'cognitive niche,' including tool use and social cooperation."

Tool use and social cooperation, sounds interesting. Let's see, that seems to
be figures 3 and 4 from page 4. Once you get over the idea of modeling an
animal with a disc, it seems a little bizarre that figures 3 or 4 have
anything to do with intelligent behavior. Figure 3 is about one disc bouncing
off another disc so that a third disc goes into a cylinder. I would say that
doesn't really capture the essence of how "non-human animals" use tools. Then
the social cooperation example is about a big disc attached to a string, that
moves differently when 2 smaller discs touch it at the same time, and then
under some specific configuration the discs have "social cooperation" to move
the big disc. As far as I can tell, whether moving the big disc is an
intelligent decision or not doesn't seem to matter in this example.

We then get very bold claims, such as "physical agents driven by causal
entropic forces might be viewed from a Darwinian perspective as competing to
consume future histories."

~~~
BenoitEssiambre
The article is kind of written in gibberish. It also implies that the idea of
linking thermodynamics to intelligence is novel, when a lot of experts
actually see it as fundamental to thermodynamics and the universe. It's even
possible that this physicist is naive about the whole debate and thinks that
he discovered this.

However, while the idea is not new, it does seem like he made a software
implementation of some aspects of it, and that could be interesting.

I find the link between thermodynamics and intelligence very interesting.
Here is how I see it. First, the best definition of intelligence IMO is the
ability to predict the unknown (whether because it's hidden or because it's
in the future) from current knowledge.

In order to have 'intelligence' you have to have some information in your
head. That is to say, a part of you has to be physically correlated with a
part of the world, some particles in your head have to have patterns that
approximately and functionally 'mirror' part of the world.

Thermodynamics is about order, and this too is about particles having
properties that correlate with each other. It means that in a low-entropy
situation, knowing something about a particle tells you something about some
other particles. Intelligence would be useless if the universe had too high
entropy. You can't get much insight from white noise even if you are very
smart.

There is a saying that "knowledge is power". This is truer than you might
think. It is true in a very physical sense.

For example, take the textbook thermodynamics example of a container with a
gas on one side and a void on the other. If there is no barrier between the
two sides, the gas will move to fill the void and settle into the
higher-entropy state of evenly filling the space. Thermodynamics says that
you would need to spend energy to push the gas back to one side.

However, if you were to put a wall in the middle of the container with a
little door large enough to let a molecule through, and you knew exactly the
position and velocity of each molecule in the container, you could open the
door at exactly the right time when a molecule is about to go from, say, left
to right, and close it when one is about to go right to left. Using very
little energy you could get all the molecules to end up on one side of the
container.

This looks like it should violate the second law of thermodynamics, but it
does not! Why is that, you ask? (This thought experiment is known as
Maxwell's demon.) It's because the knowledge you have of the position and
velocity of all these molecules is a source of low entropy. Knowledge is low
entropy, and the correlation between the particles in your head and the real
world is what allows you to make predictions and extract useful energy from
things.
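The door trick can even be sketched in code. This is a made-up 1-D toy (a box of point molecules with a wall at the midpoint), not a real thermodynamic simulation; all parameters are invented for illustration:

```python
import random

def simulate_demon(n_molecules=50, steps=20000, dt=0.01):
    """1-D toy Maxwell's demon: a box [0, 2) with a wall at x = 1.
    The demon's door lets molecules cross left -> right but bounces
    them right -> left, so they all pile up on the right side."""
    random.seed(0)
    xs = [random.uniform(0.0, 2.0) for _ in range(n_molecules)]
    vs = [random.choice([-1.0, 1.0]) * random.uniform(0.5, 1.5)
          for _ in range(n_molecules)]
    for _ in range(steps):
        for i in range(n_molecules):
            x_new = xs[i] + vs[i] * dt
            # Bounce off the outer walls of the box.
            if x_new <= 0.0 or x_new >= 2.0:
                vs[i] = -vs[i]
                continue
            # The demon's door at x = 1: open left->right, closed right->left.
            crossed = (xs[i] - 1.0) * (x_new - 1.0) < 0
            if crossed and vs[i] < 0:  # right -> left: door closed, bounce
                vs[i] = -vs[i]
                continue
            xs[i] = x_new
    return sum(1 for x in xs if x > 1.0)  # molecules on the right side

print(simulate_demon())  # all 50 end up on the right half
```

The demon never pushes a molecule; it only uses its knowledge of positions and velocities to decide when the door is open, which is exactly the point of the thought experiment.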

Note that an interesting comp-sci consequence of this is that low-entropy
information is easier to compress, and the better a system is at predicting
data from other data, the better it is at compressing it, so there is also a
tight link between compression algorithms and artificial intelligence. But
that's another story.

~~~
calhoun137
Just so you know, before I started spending all of my time programming I was a
physicist, and statistical physics was my best subject.

Anyway, I kind of object to your definition of intelligence as "the ability to
predict the unknown". Of course it comes down to personal preference, but I
feel that when it comes to making AI, a better definition is something along
the lines of "intelligence is the ability to make good choices when faced with
a decision". In other words, I like to come at AI using the definition of
rational agent[1].

Machine learning is the subject that involves predicting the unknown, and
there is a ton of linear algebra involved. I am not an expert at ML, so it's
possible that the second law in some form plays a role, but if so I have not
come across that yet in my studies.

Now, inasmuch as the second law applies to AI, I am skeptical, but your
comment has convinced me to keep an open mind and look into it more carefully.
Entropy is merely the expected negative log of the probability distribution,
so when we talk about maximizing entropy for decision making, what probability
distribution should we use? Should we use different distributions for
different situations, and if so, how do we decide which one to use?
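For concreteness, here is Shannon entropy in bits over a couple of made-up distributions; the uniform one (the least predictable) maximizes it:

```python
from math import log2

def entropy(p):
    """Shannon entropy H(p) = -sum(p_i * log2(p_i)), in bits."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

# The uniform distribution over 4 outcomes maximizes entropy...
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
# ...while a peaked (more predictable) distribution has much less.
print(entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits
```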

A much more relevant subject for AI in my opinion is game theory, which is
really a theory of decision making. Finding the optimal strategy for
navigating a decision tree can be a hard problem, and just because maximizing
entropy works to find solutions for some types of decision problems, doesn't
mean that it's a magic bullet that will always, or even frequently, work out.
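The textbook algorithm for navigating an adversarial decision tree is minimax; a minimal sketch, with a toy two-ply game tree invented for illustration:

```python
def minimax(node, maximizing=True):
    """Optimal value of a decision tree: leaves are integer payoffs,
    internal nodes are lists of child subtrees. Players alternate turns."""
    if isinstance(node, int):  # leaf: terminal payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A tiny two-ply game: we pick a branch, then the adversary replies.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # 3: pick the branch whose worst-case reply is best
```

Even this toy shows why the problem is hard: the tree grows exponentially with depth, which is why real game-playing AIs need pruning and heuristics.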

[1] <http://en.wikipedia.org/wiki/Rational_agent>

~~~
ac500
_> "I feel that when it comes to making AI, a better definition is something
along the lines of "intelligence is the ability to make good choices when
faced with a decision"."_

What defines a good choice? What defines a bad choice? Saying "A true machine
intelligence is one that makes good decisions" is a tautology -- it doesn't
define anything. Rather, it just reframes one vague question into another.

Intelligence is notoriously difficult to describe, so don't feel bad about it
if you keep thinking of circular definitions. It's a tricky topic.

Also, IMO there's a big problem with rational agents: In order to be actually
useful/implementable, they must be defined in a specific logical framework.
You can define a rational agent in a vague sense without any specific domain,
but doing so does not solve any problems -- it again just results in
tautology. But the problem with choosing a logical framework is shown by
Godel's famous theorem. Simply put, any such agent in a well-defined domain
will be confined to that domain of reasoning. To me, a machine that excels in
some area of planning but can never think outside the box is not intelligent
on the same level as humans.

Anyone advocating rational agent AI as true machine intelligence IMO will need
to either find a flaw and disprove Godel's incompleteness theorem, or provide
some system of logic that encapsulates all human reasoning (absurdly
impossible IMO).

On the other hand, I personally think true machine intelligence will be solved
much more "organically", far separated from formal logic and more related to
fuzzy pattern matching than rigid formal optimization. I really like this
definition of intelligence:

[http://lesswrong.com/lw/vb/efficient_crossdomain_optimizatio...](http://lesswrong.com/lw/vb/efficient_crossdomain_optimization/)

Though I wouldn't claim it's the ultimate one, as it's still a bit vague. I
think the key lies in what he describes as "cross domain" -- what is
colloquially called "thinking outside the box", because this is the only thing
humans seem extremely good at that every computer AI to date has failed at.

Humans are capable of transcending formal logical systems and finding new
truths that are _unprovable_ from the original formal system. How is this
possible? Godel proved that a formal system cannot ever determine this by
itself, from axioms, without being inconsistent.

IMO a true machine intelligence will also need to be capable of this formal-
system-transcending property (which you can also call "cross domain thinking",
or "thinking outside the box", or "creativity", or whatever).

~~~
calhoun137
For a rational agent, a decision is good or bad based on its preferences,
which are described mathematically as part of the definition of the rational
agent. So I don't believe it's tautological.

I do agree with your statement that intelligence is difficult to describe. And
for the record I never claimed the rational agent approach will achieve "true
machine intelligence". Frankly, I don't even know what "true" machine
intelligence is or would be. What I am very interested in however is how to
create an AI that is capable of fooling a person into thinking it's
intelligent, in other words an AI that can pass the Turing test. For that
purpose I find the concept of a rational agent to be very helpful, in that it
provides a solid foundation to work from.

I'm also not worried about Godel's theorem at all. The statements that are
undecidable are very obscure and basically never come into play in human
decision making. I think Godel's result has very little practical application,
if it has any at all; I certainly don't think it's relevant for AI. I agree
that choosing a logical framework to work in presents many problems, but I
don't agree that you have to disprove Godel's theorem to create an AI that can
think outside the box.

~~~
ac500
I also don't think you have to disprove Godel's theorem to create an AI that
thinks outside the box, however, I believe you would have to for it to be a
rational agent. A rational agent by definition operates strictly within the
domain of a formal system. Thus by definition, a rational agent is in direct
conflict with the notion of "thinking outside the box."

Of course any simulation is implemented in some formal system, but that's not
what I'm saying. I'm saying the inherent problem with a rational agent is that
they maximize utility within a formal system -- that is what they do, that is
their entire purpose. Once we start talking about rational agents that can
"think outside the box", or think beyond the formal system in which they're
defined, _we're no longer talking about rational agents._

I agree with your belief that we may some day build AIs that think outside the
box, I just don't see how it can be a rational agent. I think it will be much
closer to a neural network, or probabilistic reasoning model, or some
extremely "organic" or "fluid" device from which intelligent behavior
organically emerges. Because if intelligence emerges from a strictly
formalized framework, then Godel's theorems prove some very crippling
limitations on what's possible within that fixed framework.

FYI there is definitely a connection between Godel's theorem and intelligence,
though it may not be immediately obvious. I would highly recommend this as fun
learning material: <http://ocw.mit.edu/high-school/courses/godel-escher-bach/>

~~~
solistice
Wait, wasn't a rational agent simply an agent that optimizes a certain result
given a set of knowledge about itself and its environment? I mean, in
engineering and physics you do try to build the perfect something, but you
also realize you will never ever build it. You'll never build the perfect
computer, the perfect op amp, or the perfect rational agent, just as you'll
never find the biggest number, because there's no upper bound on these kinds
of things.

~~~
ac500
Try to implement a rational agent (the AI or mathematically formalized
variety) in software, and you'll see what I mean. You have to define some
utility function precisely, and some algorithm to maximize it.

The problem with this is it won't really produce a "general intelligence" that
can think outside the box, because it will always be maximizing some utility
function defined in some rigid formal system. In other words, it will be
completely unable to "understand" things outside of the formal system in which
you define it.
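A minimal sketch of what "define some utility function precisely, and some algorithm to maximize it" means in practice; the domain, actions, and numbers here are all made up, and the point is exactly the objection above: the agent can only ever reason about the outcomes enumerated in its formal system:

```python
# A textbook rational agent: a fixed utility function over outcomes,
# and an action rule that maximizes expected utility. Everything here
# (actions, outcomes, probabilities) is an invented toy domain.

utility = {"cake": 10, "bread": 4, "nothing": 0}

# Each action induces a probability distribution over outcomes.
actions = {
    "bake": {"cake": 0.6, "nothing": 0.4},
    "buy":  {"bread": 1.0},
}

def expected_utility(action):
    return sum(p * utility[outcome] for outcome, p in actions[action].items())

def rational_choice():
    """Pick the action with the highest expected utility."""
    return max(actions, key=expected_utility)

print(rational_choice())  # "bake": EU of 6.0 beats "buy" at 4.0
```

Anything not listed in `utility` and `actions` simply does not exist for this agent, which is the "confined to its formal system" problem in miniature.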

------
Steuard
I haven't read the original paper, but (speaking as a physicist) I'm always
skeptical when physicists leap into extremely different disciplines and claim
to have a grand insight that its specialists have never considered before.

It could always be true: physics really does have a lot to contribute in many
areas! But the burden of proof is very, very much on the physicists to justify
the relevance of their approach.

In this case, based only on this little summary, I have to wonder whether they
are really explaining _intelligence_ or whether they are identifying some
broader natural principle for which intelligent behavior is just one
manifestation. There may be something quite deep here, but I'm holding off on
deciding what exactly it is.

~~~
cscurmudgeon
Welcome to modern physics, where physicists do unfalsifiable and imprecise
things based on bad philosophy. Anything goes as long as it gets you attention
anywhere.

<https://news.ycombinator.com/item?id=5562156>

HNers need to be more critical of Science articles. This is where we need
good criticism, not on some poor hacker's pet project.

Edit 1

"Additionally, a company he founded is exploring commercial applications of
the research in areas such as robotics, economics and defense."

That is a major red flag: when people posit a universal AI theory and try to
sell the hapless government tech based on it.

<http://en.wikipedia.org/wiki/Thinking_Machines_Corporation>

~~~
Steuard
Your comments seem strangely insensitive, given that I already said that I'm a
physicist (and presumably a modern one).

As I've already indicated, there's reason to be skeptical when physicists
stretch past the usual boundaries of the discipline. (Though it still _could_
make important contributions.) But modern physics _as_ modern physics is, by
and large, solid science.

~~~
cscurmudgeon
I am also a physicist doing AI. The comment was not aimed at you! I did not
mean modern physics (QM, relativity, etc.)! That is rock solid!

What I meant was physics which is closer to the Time Cube theory than rock
solid mathematics on a spectrum of math and science.

"As I've already indicated, there's reason to be skeptical when physicists
stretch past the usual boundaries of the discipline."

There is reason to be skeptical when anybody proposes a grand unifying theory
for something as grand and difficult as intelligence!! Extraordinary claims
require extraordinary evidence!

This has been going on since 1957, with so many young lives lost (sometimes
literally) to techniques bordering on charlatanism. Sorry to be harsh; I love
AI and want it to succeed. It won't succeed if people mix pursuing knowledge
with pursuing fame and $.

[http://www.wired.com/techbiz/people/magazine/16-02/ff_aimyst...](http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all)

------
saulrh
This is a lot like the "efficient cross-domain optimization" definition from
Eliezer Yudkowsky. It looks like they even use the same information-theoretic
measure of optimization power across the distribution of possible futures.
[http://lesswrong.com/lw/vb/efficient_crossdomain_optimizatio...](http://lesswrong.com/lw/vb/efficient_crossdomain_optimization/)

------
technotony
This is fascinating. It proposes that intelligence is about maximizing the
control we have over future events, i.e. maximizing the entropy of the
system. Intuitively this aligns well with how I think about many strategic
options or inflection points in life/business: you try to take the path which
maximizes future options. Lots of interesting applications; I will be
interested to see how well they manage to communicate this work across
different disciplines.
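A toy sketch of that "maximize future options" intuition (not the paper's actual method, which uses causal path entropy over physical dynamics): a greedy agent on a small grid scores each move by how many distinct states remain reachable afterwards, and takes the move that keeps the most futures open. Grid size and walls are invented for illustration:

```python
# Hypothetical 4x4 grid world with a row of walls blocking part of it.
WALLS = {(1, 1), (1, 2), (1, 3)}
SIZE = 4
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def neighbors(pos):
    """Legal adjacent cells: inside the grid and not a wall."""
    for dx, dy in MOVES:
        nxt = (pos[0] + dx, pos[1] + dy)
        if 0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE and nxt not in WALLS:
            yield nxt

def reachable(pos, depth):
    """All states reachable within `depth` moves (breadth-first)."""
    frontier, seen = {pos}, {pos}
    for _ in range(depth):
        frontier = {n for p in frontier for n in neighbors(p)} - seen
        seen |= frontier
    return seen

def best_move(pos, horizon=3):
    """Greedy 'options-maximizing' move: most reachable states afterwards."""
    return max(neighbors(pos), key=lambda n: len(reachable(n, horizon)))

print(best_move((0, 0)))  # (1, 0): stepping past the wall keeps more futures open
```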

~~~
patcon
I think it's about trying to minimize entropy (disorder) of the system.
Maximum entropy is maximum disorder (which the universe tends toward
naturally), such as when the pendulum hangs below the cart without any
intervention.

~~~
dsowers
Yeah, I had to parse the article a couple of times because the wording is
wrong. It seems to me that Entropica is trying to minimize the entropy of the
system. Life is all about low entropy.

~~~
thibauts
Indeed. I'd say intelligence tries to keep its environment in a state it can
predict.

~~~
SittingDuck
Good one: Ralf Der and colleagues proposed homeokinesis as such a principle in
1999. That turned out not to be enough, because the agent would then have an
incentive to move into a corner and just hide. That renders the world very
predictable, but it's clearly not a good strategy in the long run.

So, they generalized the method to find a predictable future, but in very
unstable states. Later, Der and Ay generalized this towards maximization of
predictive information: for that, it is not sufficient to predict the
environment; there must also be something nontrivial to predict. This gives
rise to a number of highly interesting behaviours in many of their scenarios.

So, yes, your idea is a good one...

------
scythe
I'm surprised they got through the whole article without mentioning
Nietzsche's _der Wille zur Macht_ (will to power), which seems to be
qualitatively identical (or at least very similar) to this proposal. Not bad
for a philosopher from the 1800s.

~~~
guylhem
It seems to me that Nietzsche's work is frequently discussed yet rarely read.

I read Zarathustra with a goal of non-casual reading; it took me 6 months,
due to what is IMHO the complexity of the issues discussed. One has to
frequently stop at every paragraph to ponder whether one's understanding is
correct, and how it articulates with the other paragraphs.

If, on average, it indeed takes more time to read Nietzsche than other
authors, there might be an adverse selection against reading Nietzsche's work
for anyone maximising utility, if one is to judge utility by the number of
books read, as I frequently see in goal lists (1 book a week, etc.).

A consequence could be that among those who discuss his work, especially
outside philosophy (CS, math, etc.), very few people have actually read any
of it, let alone a whole book.

~~~
dmix
> One has to frequently stop at every paragraph to ponder whether one's
> understanding is correct, and how it articulates with the other paragraphs.

Which is interesting, because Nietzsche is generally seen as one of the more
accessible of the philosophers from around that period.

I highly recommend reading Arthur Schopenhauer's work as well, which heavily
influenced Nietzsche. It predates Nietzsche's work by about 30 years but is
very similar (although much more nihilistic).

<http://en.wikipedia.org/wiki/Arthur_Schopenhauer>

------
gbog
The top comment currently says "This sounds like nonsense to me", and its
first child agrees, adding something like "everybody has known this for ages".

Well, for one I like this article. I wonder if we developers could use entropy
instead of debt to explain the necessity of refactorings and abstractions.

When you say that there is a lot of technical debt, the boss would assume that
it is like financial debt: as long as you are still running forward, you can
go to the banks and fix the issue with them. It is reversible.

But technical "debt" is not reversible. One wrong lazy "if customer = John
Doe" (instead of a proper configurable flag) paves the way to other wrong ifs,
and it very quickly becomes an irrecoverable mess. This is more like growing
the system's entropy: a broken mirror won't reassemble, and spilled water
won't come back to the bowl, as the Chinese say (of a lost hymen).
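That "wrong lazy if" in hypothetical code, just to make the contrast concrete: a hardcoded special case versus the special case living in configuration. All names below are invented for illustration:

```python
from collections import namedtuple

# Toy customer record; the discount_rate field stands in for configuration.
Customer = namedtuple("Customer", ["name", "discount_rate"])

# The lazy version: each special customer adds another hardcoded branch.
def discount_lazy(customer):
    if customer.name == "John Doe":   # one-off hack...
        return 0.20
    if customer.name == "Jane Roe":   # ...that invites the next one
        return 0.15
    return 0.0

# The configurable version: the special case lives in data, not in code.
def discount(customer):
    return customer.discount_rate

print(discount(Customer("John Doe", 0.20)))  # 0.2
```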

Using entropy instead of debt, provided people understand the concept#, we
could make the case by showing that increasing the complexity of a system the
wrong way (adding ifs because "no time") drastically and irreversibly reduces
the number of possible futures.

# Obviously the weakness of my point...

~~~
marcosdumay
Going off topic...

The "technical debt" term was created to explain the situation in a way that
your boss (that knows a bit about money) could understand and make usefull
analogies. Framing it in terms of entropy, altough more precise won't achieve
any of those goals.

------
zenogais
Catching up to early 20th century western philosophy. Relevant links:

\- Heidegger/World-disclosure (<http://en.wikipedia.org/wiki/Being_and_Time>)
\- Nietzsche's will to power (<http://en.wikipedia.org/wiki/Will_to_power>)
\- Process and Reality (<http://en.wikipedia.org/wiki/Process_and_Reality>)

------
espeed
"Trying to capture as many future histories as possible" (keeping your options
open) harmonizes well with Paul Graham's view on procrastination
(<http://paulgraham.com/procrastination.html>) and the tenet of "put off
decisions as long as possible".

UPDATE: Evidently several smart people have had this intuition. I just
remembered who said, "Never make a decision until you have to" -- it was Randy
Pausch in the "The Last Lecture"
(<http://www.youtube.com/watch?v=ji5_MqicxSo>).

And "delay commitment until the last responsible moment" is also an agile
principle ([http://www.codinghorror.com/blog/2006/10/the-last-
responsibl...](http://www.codinghorror.com/blog/2006/10/the-last-responsible-
moment.html)).

~~~
melipone
Yes, there is also something similar in game theory: you're losing if your
adversary reduces your branching factor.

------
mahrz
After a quick glance, this sounds a lot like information theoretic
empowerment, which has been around for more than five years:

[http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjourna...](http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0004018)

[https://uhra.herts.ac.uk/dspace/bitstream/2299/6712/1/901919...](https://uhra.herts.ac.uk/dspace/bitstream/2299/6712/1/901919.pdf)

[https://uhra.herts.ac.uk/dspace/bitstream/2299/9659/1/ContiP...](https://uhra.herts.ac.uk/dspace/bitstream/2299/9659/1/ContiPaper.pdf)

Also the concept of homeokinesis is quite similar and also not quoted in this
work:

[http://www.informatik.uni-leipzig.de/~der/Forschung/homeoki....](http://www.informatik.uni-leipzig.de/~der/Forschung/homeoki.html)

~~~
irickt
Thanks for posting these. The third link in particular applies a very similar
approach to the inverted pendulum example, in depth and with clarity. I also
note the modest concluding paragraph for comparison to the grand claims of the
OP.

"Finally, we conclude that it would be desirable to transfer some of the
generality of the underlying empowerment concept into methods for empowerment
calculation in systems combining continuous and discrete dynamics, especially
systems of real-world relevance."

------
protez
Cognitive science states that intelligence is all about finding patterns, and
patterns reduce randomness, which can be expressed in terms of entropy. What's
new here? Capturing future-histories? As long as the patterns can describe the
time evolution of the steps ahead, they reduce the complexity of what needs to
be described, or what Wissner-Gross might call "capturing as many
future-histories as possible." In my humble opinion, Wissner-Gross just
rephrased/repackaged the classical sense of intelligence already embraced by
many scientists.

~~~
huahaiy
Agree.

This work claims that there is no goal in their implementation of
intelligence, but forgets the goal of "capturing as many future-histories as
possible". With the right amount of tolerance for vagueness, it is not that
hard to convince oneself that this goal can explain a lot of behaviors. But
saying the same thing with new words is not very useful in cognitive science.

A more interesting question to me is this: could Dr. Wissner-Gross use his own
theory to explain physicists' (including his own) obsession with cognitive
science, a field they know next to nothing about?

------
charlieflowers
If this is a new piece of scientific research, why is there already product
marketing around it? The website has named the product "Entropica", and has a
demo video that is (while interesting) pure marketing.

New research ideas do not typically come packaged as a product for sale.

------
colanderman
This isn't just about keeping options open. Entropy can be seen as an
(inverse) measure of available useful energy. It seems natural for human
intelligence to have evolved to maintain as much useful energy as possible,
since that would no doubt aid survival.

If that's true, I don't think it's so much that the developed process mimics
intelligence as that both (human) intelligence and this process act to meet
the same ends.

~~~
scotty79
I think this research is the missing piece of the puzzle of constructing
truly intelligent (even sentient) artificial life.

We can't do it the way evolution did, because evolution does not work on
individuals, and we don't have enough computing power to simulate a whole
population of artificial human-sized brains and let them figure out on their
own that they should maximize their available useful resources while still
reproducing and evolving towards better architectures. That would lead to
intelligence the same way real evolution reached it, but we have neither the
power nor the time for that. Besides, even if we succeeded... do we really
want to share the world with thousands of AIs all trying to maximize the
useful resources available to them? Competition breeds innovation, but it
also breeds war.

If you design an AI and give it a specific goal, you never get intelligence;
you just get a useful machine that achieves the goal brilliantly. With this
discovery we finally know how to set the goal to be "act intelligent".

If you artificially set the goal to be to maximize the possible futures that
the AI can make real from the present point, you get an entity that behaves
in a way we recognize as intelligent.

We just have to remember to be really sure that this AI models the world
correctly enough to know that more potential futures could be realized by
keeping animals (including humans) around and... you know... not making them
suffer.

------
budd1726
I will just leave this here: <http://xkcd.com/793>

------
conroe64
<http://www.entropica.com/> has a very nice, short video demonstrating the
AI.

~~~
pitchups
Completely blown away after watching that video by the sheer range of tasks
this AI can accomplish on its own. Learning to walk, balance a stick, using
tools, co-operating, playing pong, and...even buying stocks low and selling
high! And all of this without being programmed or given a goal to do so. If
true this could be a huge breakthrough indeed. Would really like to learn more
about the core technology and will be following this more closely.

~~~
jk4930
I'm far from being blown away. Researchers in AI have demonstrated AI systems
with a "blank mind" that learn by interacting with their environment through
reinforcement learning. They often operate in simple toy examples, but once
confronted with real, complex, messy reality, they fail, mostly for one of two
reasons: either the mechanism is too simple and can't solve anything but
simple problems, or it's (computationally) too complex and doesn't scale.

So I'm not saying this approach is bad, but we have yet to see whether it does
more than simple examples.

~~~
pitchups
From what I can gather the difference here seems to be the claim that a
simpler, more fundamental principle may explain the motivation behind all
intelligent behavior. A tall claim no doubt - so I agree that it needs to be
validated with more complex examples.

------
nilkn
The connections are slim, but it sounds a bit like the NES AI, which played
Mario pretty successfully just by "making bits go up" (more or less).

~~~
Houshalter
No you are right. You can compare this to almost any AI that uses a similar
strategy, to predict the future and maximize some goal. The cool thing about
this is that it defines it's goal as something general purpose enough to
create interesting behaviors in a lot of different situations.

But the hard part of AI is actually predicting the future in the first place:
exploring a search tree with billions and billions of possibilities, or
worse, trying to figure out how the world works in the first place from a
limited set of observations, and then doing that. Figuring out the goal of
the AI was never really the hard part.

~~~
nilkn
> The cool thing about this is that it defines its goal as something
> general-purpose enough to create interesting behaviors in a lot of
> different situations.

Yep, that is the first thing that stood out to me about both the NES AI and
this new paper. Both are defined in ways that are so far removed from the
tasks they've been put to that they seem to seek out their own goals relevant
to the context.

------
pdog
This seems remarkably similar to Jeff Hawkins' memory-prediction framework
theory of the brain in his seminal work, _On Intelligence_ [1].

[1]: <http://www.amazon.com/gp/aw/d/B000GQLCVE/>

------
snowwrestler
This is interesting because it connects intelligence directly to the concept
of life as a locally anti-entropic system (gravity can break an egg, but only
a chicken can make a new egg). It implies that intelligence is an aspect of
all life, which fits the latest research into animal behavior and
intelligence.

------
Houshalter
>The proposal requires that a system be able to process information and
predict future histories very quickly in order for it to exhibit intelligent
behavior.

Well, isn't that basically the problem with all attempts at artificial
intelligence? Processing information quickly and predicting the future is far
from simple.

------
tokenadult
It will be interesting to draw psychologists who research human intelligence,
especially those who do so from an evolutionary psychology perspective, into
this discussion. I aspired to be a physicist back in my high school days, yet
today I spend most of my time among academic psychologists. It's good to see
physics prompt some thinking about the nature of human intelligence.

A bibliography I like about current research on human intelligence from the
classical and avant garde points of view in psychology:

[http://en.wikipedia.org/wiki/User:WeijiBaikeBianji/Intellige...](http://en.wikipedia.org/wiki/User:WeijiBaikeBianji/IntelligenceCitations)

------
leot
I came here hoping to find intelligent commentary on the article. Instead,
it's mostly uninformed criticism.

The most promoted comments are basically "I didn't carefully read or try hard
to understand this paper, but everything I think I know up to now says that it
can't be very interesting."

I don't really understand the paper either. But given that it was reviewed and
published in a good journal, and given that it's by a guy who's evidently
quite smart, and given that it seems carefully written, it surely deserves
better than cursory dismissal.

If you want to go off half-cocked, try Reddit.

~~~
cscurmudgeon
Science requires constant criticism especially when people claim grand things.
Don't assume people here are idiots and have not read the paper.

Every year there are dozens of papers claiming a grand theory of AI.

~~~
leot
I'm not assuming that. In fact, I assumed the opposite and it was after I read
most of the comments that it became evident that most people here weren't
engaging with the article in any kind of substantive or even remotely
sympathetic way (and, believe it or not, good academic reading requires that
one adopt the most charitable interpretation of the author's claims -- at
least to begin with).

And please don't assume that I don't understand how science or AI theories
work, either. If you email me, I'd be happy to explain my bona fides.

~~~
cscurmudgeon
You should apply your criticism to yourself. The top-ranked commenter now has
direct comments about the paper. You are just saying that we should take the
paper positively because of the author and venue, and you are talking about
credentials rather than the content of the paper :)

~~~
leot
Hardly. The words "nonsense" and "gibberish" are being bandied about -- anyone
who uses those words for an article that's been reviewed and published in a
good journal needs to have strong justification for such derision. No such
justification was found. The criticisms sound not that different from
politicians who condemn scientists for studying fruit flies and worms.

I never said to take the paper positively. Giving something a charitable
reading is different from agreeing with it. If I stop on the road and say to
you that I think there's gas nearby and ask you where it is, and you say
"we're surrounded by nitrogen", then, yes, you have produced a correct parsing
of the sentence. But you have not performed a good faith effort to understand
what I'm trying to say on my terms.

The difference between a superficial reading of a text and a deep dive is
obvious, at least to most academics. I had assumed it would also be obvious to
the people who frequent this forum.

~~~
cscurmudgeon
AI has been and is always full of such papers. That explains the attitude.
People are just tired of snake oil. Note: I am not saying that this is snake
oil.

If a new theory comes up and it is being reported in the press before anything
convincing has been built, it is more efficient to be negative and suspicious.

But wait a minute, people here have actually read the paper. Look at the top
comments.

------
IsaacL
Very interesting. The big idea, as I see it, is that intelligent systems try
to maintain as many possible futures as they can - to keep their options open.

People often think that all living beings are driven by the need to reproduce,
but the need to survive - to maintain homeostasis, to keep one's shape in a
complex environment - is even more fundamental, I think.

If you extend the idea of living beings attempting to keep their own
metabolism and structure fixed, you realise that, given the capability,
they'll also try to keep their environment fixed.
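
To make the "keep your options open" idea concrete, here is a minimal toy
sketch of my own (not the paper's Monte Carlo path-sampling method): an agent
on a small grid that always moves to the neighboring cell from which the most
distinct future positions remain reachable. The grid, horizon, and scoring
rule are all illustrative choices.

```python
# Toy sketch: an agent that "keeps its options open" by choosing the move
# that maximizes the number of distinct grid cells reachable within a fixed
# horizon. Walls are marked '#'.
from collections import deque

GRID = [
    "######",
    "#.#..#",
    "#.#..#",
    "#....#",
    "######",
]

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def reachable(start, horizon):
    """Count distinct cells reachable from `start` in <= horizon steps (BFS)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        (r, c), d = frontier.popleft()
        if d == horizon:
            continue
        for dr, dc in MOVES:
            nr, nc = r + dr, c + dc
            if GRID[nr][nc] != '#' and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append(((nr, nc), d + 1))
    return len(seen)

def best_move(pos, horizon=3):
    """Pick the legal move whose successor keeps the most future states open."""
    candidates = [(pos[0] + dr, pos[1] + dc) for dr, dc in MOVES
                  if GRID[pos[0] + dr][pos[1] + dc] != '#']
    return max(candidates, key=lambda p: reachable(p, horizon))

# From (3, 1), the agent heads into the open room rather than up the dead-end
# corridor, because more distinct futures stay reachable from there.
print(best_move((3, 1)))  # → (3, 2)
```

The agent never explicitly "wants" the open room; preferring states with more
reachable futures is enough to steer it away from the dead end.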

~~~
pjscott
The need to reproduce is more fundamental, and there are plenty of examples in
nature of animals sacrificing homeostasis in order to reproduce -- male black
widow spiders, for example. It's obvious why this would be the case; that's
how evolution _works._ Keeping your future options open is a nice heuristic,
but it's not necessarily an end goal.

~~~
snowwrestler
Reproducing captures more future states than not reproducing, since it allows
some of your collected information to extend past your death.

~~~
Houshalter
You can explain almost any process with this theory, and that's why it's
useless. Trying to "capture more future states" is not the reason certain
traits evolve; they evolve simply because they make survival and reproduction
more likely.

~~~
snowwrestler
Why any living thing reproduces at all is a much more fundamental question
than e.g. why men have nipples or snakes don't have legs.

~~~
Houshalter
Because things that do reproduce quickly out-reproduce and out-compete things
that don't.

------
espeed
I emailed this article to Marko (<http://markorodriguez.com/>), and he
summarized it nicely as, "Always ensure many degrees of freedom."

------
mikhailfranco
As referenced in the paper, Verlinde proposed that gravity is an entropic
force:

<http://en.wikipedia.org/wiki/Entropic_gravity>

I highly recommend Verlinde's brilliant paper, it's an easy read:

<http://arxiv.org/abs/1001.0785>

The theory remains controversial, but it's a valuable attempt to put
holographic principles center stage.

------
rajeevk
When I was working on a shape-recognition algorithm for my iPad app
(Lekh Diagram), I thought in terms of entropy too, but my understanding is
just the opposite. In my understanding, an intelligent system mostly tries to
minimize entropy: it tries to find patterns and perform tasks according to
some pattern, which, IMO, minimizes disorder.

------
fiatmoney
Does anyone have an ungated copy of the paper?

~~~
alexwg
Yes:
[http://www.alexwg.org/publications/PhysRevLett_110-168702.pd...](http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf)

------
0xdeadc0de
<http://www.youtube.com/watch?v=rZB8TNaG-ik>

------
redwood
"If it has legs, it has legs!"

Whether or not you find this science rigorous, it is the kind of thing that
leads to a "wow! my mind is blown" moment, à la so many smoky college dorm-room
nights.

Similar to the anthropic principle, these concepts are cool but hard to use.

Nevertheless, I do like the way this one gives us a new way of justifying the
value of 'interesting' stuff that we might otherwise be inclined to discard.

For example: the impulse on HN to discard this research... whether or not it's
valid, it is certainly an interesting example of the point of the research
itself. If this stuff has legs, it'll have legs. Mind blown? Bleh maybe for a
second right?

------
Steko
Just think, these guys could do for human behavior what Newton did for
chemistry, Kelvin for geology, Seitz for oncology, Fomenko for history and
what a number of other physicists have attempted to do for climate science.

------
6ren

      Keep your options open
    

I don't think it models intelligence; but maybe it models wisdom. The
prediction part requires intelligence.

I can understand physicists getting excited over a simple mathematical model
that explains a lot. Like Einstein thinking relativity was so beautiful, it
must be true. But the difference there was unexplained empirical data (the
speed of light being constant in all directions), which we don't have for
intelligence.

That is, it's not that we haven't solved the problem of intelligence; it's
that we don't understand the question. (Sometimes DNA seems terribly wise.)

~~~
lurker14
"Keep your options open" is also called "analysis paralysis" when it is
unwise.

------
altrego99
> Allowing the rod to fall will _lower_ the entropy of the system??

~~~
ozankabak
Yes. I don't know the exact dynamics of their toy problem, but thinking about
it this way can help: when the rod falls over, it will stay there from then
on. In terms of the number of possible future states, there are fewer
possibilities for the system.
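
A tiny toy calculation of my own (not the paper's model) shows the collapse
in future-state count: model the rod's angle as a bounded random walk in which
the fully fallen positions are absorbing, and count the distinct states
reachable within a fixed horizon. The angle range and horizon are arbitrary
illustrative choices.

```python
# Toy illustration: once the rod falls it is stuck, so the set of possible
# futures collapses. Angle is an integer in [-2, 2]; +/-2 means fallen over
# and is absorbing; each step applies a random push of -1, 0, or +1.
import itertools

def step(angle, push):
    if abs(angle) == 2:        # fallen: absorbing, no further motion
        return angle
    return max(-2, min(2, angle + push))

def future_states(angle, horizon):
    """Distinct states reachable after `horizon` pushes, over all push sequences."""
    ends = set()
    for pushes in itertools.product((-1, 0, 1), repeat=horizon):
        a = angle
        for p in pushes:
            a = step(a, p)
        ends.add(a)
    return ends

print(len(future_states(0, 3)))   # upright: all 5 states remain reachable
print(len(future_states(2, 3)))   # fallen: exactly 1 possible future
```

An upright rod keeps every state in play; a fallen one has a single future,
which is the sense in which letting it fall "lowers" the system's entropy over
future histories.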

------
drucken
This is a stunningly elegant proposal. It seems to apply to any field as well,
which inspires confidence.

The idea of all Life as simply the Evolution fitness function over the
maximization of Entropy...!

------
npsimons
Unfortunately, I haven't had the time to delve into AI, but in an interesting
conversation the other day we discussed one difference between (current)
software and people: a code-duplication checker would miss a number of things
a human could look at and recognize as obviously similar, if not identical.
Applying things like Bayes can help (to an extent), but the question is,
where is the line drawn? If we get a piece of software that can automatically
refactor, does that count as AI?

------
giantsteps
As an AI researcher, I rate this paper as CRANKY
<http://www.crank.net/about.html>

------
auctiontheory
I first read the sentence "it's not science as usual," as "it's not science,
as usual."

------
Tycho
I suppose they're right in a sense. Intelligence and entropy are like opposing
forces.

------
pilooch
How frustrating it is these days to hit a science paywall. The original
article would be worth reading.

------
hcarvalhoalves
I believe that with enough creativity, everything can be modeled under
thermodynamics.

------
dlitz
In the diagram, I like how everything is approximated as a homogeneous sphere.

------
Poyeyo
I think Rodolfo Llinás would somehow agree with this.

------
mrslx
The ads on this site are of bikini girls... I don't really trust it.

------
arthurrr
Everybody has a different definition of intelligence.

Personally, I don't consider the vast majority of humans to be intelligent.

If we are trying to create artificial intelligence, it's probably a bad idea
to use humans as a model.

------
alexvr
This could be _big_.

~~~
lurker14
It would be interesting if someone other than the author submitter had that
opinion. Nice astroturf attempt, though, alexwg/alexvr.

~~~
alexvr
Note to self: don't post an enthusiastic comment on someone's thread if they
have a similar name. I was really puzzled as to why I got so many downvotes on
that comment.

You're wrong, though. I'm not the "author submitter." Believe it or not, Alex
is a common name.

And what is an "astroturf attempt"?

