
The article is kind of written in gibberish. It also implies that the idea of linking thermodynamics to intelligence is novel, when many experts actually see it as fundamental to thermodynamics and to the universe. It's even possible that this physicist is naive about the whole debate and thinks he discovered it.

However, while the idea is not new, it does seem like he made a software implementation of some aspects of it, and that could be interesting.

I find the link between thermodynamics and intelligence very interesting. Here is how I understand it. First, the best definition of intelligence, IMO, is the ability to predict the unknown (whether because it's hidden or because it's in the future) from current knowledge.

In order to have 'intelligence' you have to have some information in your head. That is to say, a part of you has to be physically correlated with a part of the world: some particles in your head have to have patterns that approximately and functionally 'mirror' part of the world.

Thermodynamics is about order, and order is about particles having properties that correlate with each other. This means that in a low-entropy situation, knowing something about one particle tells you something about other particles. Intelligence would be useless if the universe had too high an entropy: you can't get much insight from white noise, no matter how smart you are.

There is a saying that "knowledge is power". This is truer than you might think. It is true in a very physical sense.

For example, take the textbook thermodynamics example of a container with gas on one side and vacuum on the other. If there is no barrier between the two sides, the gas moves to fill the vacuum and settles into the higher-entropy state of evenly filling the space. Thermodynamics says you would need to spend energy to push the gas back to one side.

However, if you were to put a wall in the middle of the container with a little door just large enough to let a molecule through, and you knew exactly the position and velocity of every molecule in the container, you could open the door exactly when a molecule is about to pass through it from, say, left to right, and close it when one is about to pass from right to left. Using very little energy, you could get all the molecules to end up on one side of the container.

It looks like this should violate the second law of thermodynamics, but it does not! Why is that, you ask? Because the knowledge you have of the positions and velocities of all those molecules is itself a source of low entropy. Knowledge is low entropy, and the correlation between the particles in your head and the real world is what allows you to make predictions and extract useful energy from things.
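
If you want to play with the idea, here is a rough toy sketch of that door-keeping demon (my own illustration, nothing to do with the article's software): the box is the interval [-1, 1), the 'door' sits at x = 0, and the demon's 'knowledge' is simply that the program can read every position and velocity.

    # Toy Maxwell's demon in 1-D. The demon only lets particles cross the
    # middle door from left to right; it never pushes on them.
    import random

    random.seed(0)

    N = 500
    # particles start spread over the whole box, with random speeds
    pos = [random.uniform(-1.0, 1.0) for _ in range(N)]
    vel = [random.choice([-1.0, 1.0]) * random.uniform(0.5, 1.5) for _ in range(N)]

    dt, steps = 0.01, 4000
    for _ in range(steps):
        for i in range(N):
            new_x = pos[i] + vel[i] * dt
            if new_x <= -1.0 or new_x >= 1.0:        # walls: elastic bounce
                vel[i] = -vel[i]
                continue
            crossing = (pos[i] < 0.0) != (new_x < 0.0)
            if crossing and vel[i] < 0:              # right -> left: door stays shut
                vel[i] = -vel[i]
                continue
            pos[i] = new_x

    right = sum(1 for x in pos if x >= 0.0)
    print(f"particles on the right side: {right} / {N}")   # ends up near N

In this toy model the demon pays nothing for its knowledge; in reality, acquiring and erasing that information is exactly where the second law gets its due.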

Note that an interesting comp-sci consequence of this is that low-entropy information is easier to compress, and the better a system is at predicting data from other data, the better it is at compressing it, so there is also a tight link between compression algorithms and artificial intelligence. But that's another story.
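
A quick way to see the compression side of this, assuming you have Python handy (my own toy, not from the article): feed zlib a predictable (low-entropy) stream and a random (high-entropy) one and compare.

    # The predictable stream shrinks dramatically; the random one barely shrinks.
    import os
    import zlib

    n = 100_000
    predictable = b"abcd" * (n // 4)      # highly ordered, easy to predict
    random_bytes = os.urandom(n)          # close to maximum entropy

    for name, data in [("predictable", predictable), ("random", random_bytes)]:
        compressed = zlib.compress(data, level=9)
        print(f"{name:12s} {len(data):7d} -> {len(compressed):7d} bytes")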




I might add that what made me understand these concepts is the writings of physicist and probability theorist E.T. Jaynes, especially his unpublished manuscripts: http://bayes.wustl.edu/etj/node2.html

I think that had he been alive at the right time, these would have been blog posts. Before reading them, I had taken an intro class in thermodynamics which had left me completely confused.

Read THE EVOLUTION OF CARNOT'S PRINCIPLE ( http://bayes.wustl.edu/etj/articles/ccarnot.pdf ) for incredible insights on how Carnot pioneered thermodynamics by trying to optimize steam engines.

Also, if you think you dislike statistics and probability but you like math in general, his book might change your mind: Probability Theory: The Logic of Science. Free draft: http://omega.albany.edu:8008/JaynesBook.html

Amazon: http://www.amazon.com/Probability-Theory-Science-T-Jaynes/dp...

In fact, understanding his stance on probabilities, and the mind projection fallacy in particular, might be a prerequisite for understanding thermodynamics. The fundamental point is that entropy is not really a direct property of matter but more of a meta-property about knowledge or information, which here means correlations across aggregate matter.
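
To make that last point concrete with a toy example of my own (not Jaynes's notation): the entropy you assign to a single die roll depends entirely on what you know about it, not on the die itself.

    # Shannon entropy of a probability assignment. The physical system (one
    # die roll) never changes; only our state of knowledge about it does.
    from math import log2

    def entropy(p):
        return -sum(q * log2(q) for q in p if q > 0)

    no_information = [1/6] * 6                  # all faces equally plausible
    told_it_is_even = [0, 1/3, 0, 1/3, 0, 1/3]  # same die, more knowledge

    print(entropy(no_information))    # ~2.58 bits
    print(entropy(told_it_is_even))   # ~1.58 bits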


Just so you know, before I started spending all of my time programming I was a physicist, and statistical physics was my best subject.

Anyway, I kind of object to your definition of intelligence as "the ability to predict the unknown". Of course it comes down to personal preference, but I feel that when it comes to making AI, a better definition is something along the lines of "intelligence is the ability to make good choices when faced with a decision". In other words, I like to come at AI using the definition of rational agent[1].

Machine learning is the subject that involves predicting the unknown, and there is a ton of linear algebra involved. I am not an expert in ML, so it's possible that the second law in some form plays a role, but if so I have not come across it yet in my studies.

Now, as to whether the second law applies to AI, I am skeptical, but your comment has convinced me to keep an open mind and look into it more carefully. Entropy is just the expected negative log of the probability distribution, so when we talk about maximizing entropy for decision making: what probability distribution should we use? Should we use different distributions for different situations, and if so, how do we decide which one to use?
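
To be concrete about what "maximizing entropy" even computes, here is the kind of toy calculation I have in mind (a made-up biased-die example, nothing from the article): fix the average face value, ask which distribution over the faces has maximum entropy, and the answer comes out exponential (Boltzmann-like), with one parameter you can find by bisection.

    # Jaynes-style maximum entropy: among all distributions over faces 1..6
    # with a fixed mean, the entropy maximizer is p_i proportional to
    # exp(-lam * i). Bisect on lam until the mean matches.
    from math import exp

    faces = [1, 2, 3, 4, 5, 6]
    target_mean = 4.5                  # observed average, above the fair 3.5

    def mean_for(lam):
        w = [exp(-lam * x) for x in faces]
        z = sum(w)
        return sum(x * wi for x, wi in zip(faces, w)) / z

    lo, hi = -50.0, 50.0               # mean_for is decreasing in lam
    for _ in range(100):
        mid = (lo + hi) / 2
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid

    lam = (lo + hi) / 2
    w = [exp(-lam * x) for x in faces]
    p = [wi / sum(w) for wi in w]
    print([round(q, 3) for q in p])
    print("mean:", round(sum(x * q for x, q in zip(faces, p)), 3))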

A much more relevant subject for AI, in my opinion, is game theory, which is really a theory of decision making. Finding the optimal strategy for navigating a decision tree can be a hard problem, and just because maximizing entropy works for some types of decision problems doesn't mean it's a magic bullet that will always, or even frequently, work out.
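
To illustrate what I mean by navigating a decision tree, here is a bare-bones sketch with a made-up payoff table: plain minimax, and note that no entropy shows up anywhere.

    # Minimax over a tiny hand-made game tree. Leaves hold payoffs for the
    # maximizing player; inner nodes are just lists of children.
    def minimax(node, maximizing):
        if isinstance(node, (int, float)):   # leaf: payoff
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    # depth-2 tree: we move first (max), the opponent replies (min)
    tree = [
        [3, 12, 8],    # payoffs if we pick move 0
        [2, 4, 6],     # payoffs if we pick move 1
        [14, 5, 2],    # payoffs if we pick move 2
    ]
    best = max(range(3), key=lambda i: minimax(tree[i], maximizing=False))
    print("best opening move:", best)   # move 0: the opponent can only hold us to 3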

[1] http://en.wikipedia.org/wiki/Rational_agent


"I feel that when it comes to making AI, a better definition is something along the lines of "intelligence is the ability to make good choices when faced with a decision"

I love it. Mine is even more constructivist (and Wittgensteinian). Something like "intelligence is the ability to detect patterns and act accordingly."

The problem, as many philosophers have noted, is that such definitions -- in fact, most definitions -- make it hard to argue that the thermostat is not intelligent (or conscious, or sentient). And that, of course, is at the heart of Turing's most famous argument.

I, myself, find it hard to argue my way around what is essentially a "walks like a duck, talks like a duck" argument when it comes to these matters -- which is to say, defining something in terms of its activities, behaviors, likenesses. That perhaps tilts practical AI toward the statistical, though, and it's hard for me to believe that that's the whole story.

I have never presented the arguments back-and-forth on this issue without watching my students collapse into confusion and desperation.


The great personal insight is when one finally discards prejudice and bigotry and realizes that a thermostat is intelligent, and that a neuron is a "molecule" of intelligence.


I think your definition of intelligence is good and is simply broader than mine.

I split your definition into two parts and only labelled the first part 'intelligence'. Part 1: predicting the unknown, which can include the probabilities of the outcomes of your actions and such. Part 2: evaluating those outcomes and their likelihoods to make a decision based on some utility or preference function.
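
In code the split looks something like this (an umbrella toy example I just made up, nothing rigorous):

    # Part 1 predicts outcome probabilities; part 2 scores them with a
    # utility function and picks the action with the best expected utility.
    def predict(action):
        # part 1: "intelligence" in the narrow sense -- outcome probabilities
        return {"take umbrella":  {"dry": 0.95, "wet": 0.05},
                "leave umbrella": {"dry": 0.60, "wet": 0.40}}[action]

    utility = {"dry": 1.0, "wet": -5.0}      # part 2: preferences over outcomes

    def expected_utility(action):
        return sum(p * utility[o] for o, p in predict(action).items())

    actions = ["take umbrella", "leave umbrella"]
    print(max(actions, key=expected_utility))   # -> take umbrella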

I can see contexts where the word 'intelligence' would encompass both parts. In the context of thermodynamics or information theory, I think intelligence is more about the prediction part, because pure matter, or even computers, don't really have preferences over outcomes, so the second part is mostly irrelevant to their 'intelligence'.

The original article seems to take low entropy as a stated goal or preference of the system. That is not a bad sub-goal, in that it is thermodynamically efficient and keeps usable energy around longer, but I don't think being thermodynamically efficient should be the only goal of an intelligent system.


> "I feel that when it comes to making AI, a better definition is something along the lines of "intelligence is the ability to make good choices when faced with a decision"."

What defines a good choice? What defines a bad choice? Saying "A true machine intelligence is one that makes good decisions" is a tautology -- it doesn't define anything. Rather, it just reframes one vague question into another.

Intelligence is notoriously difficult to describe, so don't feel bad about it if you keep thinking of circular definitions. It's a tricky topic.

Also, IMO there's a big problem with rational agents: in order to be actually useful/implementable, they must be defined in a specific logical framework. You can define a rational agent in a vague sense without any specific domain, but doing so does not solve any problems -- it again just results in tautology. The problem with choosing a logical framework is shown by Godel's famous theorem: simply put, any such agent in a well-defined domain will be confined to that domain of reasoning. To me, a machine that excels in some area of planning but can never think outside the box is not intelligent on the same level as humans.

Anyone advocating rational-agent AI as true machine intelligence will, IMO, need to either find a flaw in and disprove Godel's incompleteness theorems, or provide some system of logic that encapsulates all human reasoning (absurdly impossible IMO).

On the other hand, I personally think true machine intelligence will be solved much more "organically", far separated from formal logic and more related to fuzzy pattern matching than rigid formal optimization. I really like this definition of intelligence:

http://lesswrong.com/lw/vb/efficient_crossdomain_optimizatio...

Though I wouldn't claim it's the ultimate one, as it's still a bit vague. I think the key lies in what he describes as "cross domain" -- what is colloquially called "thinking outside the box", because this is the only thing humans seem extremely good at that every computer AI to date has failed at.

Humans are capable of transcending formal logical systems and finding new truths that are unprovable within the original formal system. How is this possible? Godel proved that a consistent formal system can never establish all such truths by itself, from its own axioms.

IMO a true machine intelligence will also need to be capable of this formal-system-transcending property (which you can also call "cross domain thinking", or "thinking outside the box", or "creativity", or whatever).


For a rational agent, a decision is good or bad based on its preferences, which are described mathematically as part of the definition of the rational agent. So I don't believe it's tautological.

I do agree with your statement that intelligence is difficult to describe. And for the record I never claimed the rational agent approach will achieve "true machine intelligence". Frankly, I don't even know what "true" machine intelligence is or would be. What I am very interested in however is how to create an AI that is capable of fooling a person into thinking it's intelligent, in other words an AI that can pass the Turing test. For that purpose I find the concept of a rational agent to be very helpful, in that it provides a solid foundation to work from.

I'm also not worried about Godel's theorem at all. The statements that are undecidable are very obscure and basically never come into play in human decision making. I think Godel's result has very little practical application, if it has any at all; I certainly don't think it's relevant for AI. I agree that choosing a logical framework to work in presents many problems, but I don't agree that you have to disprove Godel's theorem to create an AI that can think outside the box.


I also don't think you have to disprove Godel's theorem to create an AI that thinks outside the box, however, I believe you would have to for it to be a rational agent. A rational agent by definition operates strictly within the domain of a formal system. Thus by definition, a rational agent is in direct conflict with the notion of "thinking outside the box."

Of course any simulation is implemented in some formal system, but that's not what I'm saying. I'm saying the inherent problem with a rational agent is that they maximize utility within a formal system -- that is what they do, that is their entire purpose. Once we start talking about rational agents that can "think outside the box", or think beyond the formal system in which they're defined, we're no longer talking about rational agents.

I agree with your belief that we may some day build AIs that think outside the box, I just don't see how it can be a rational agent. I think it will be much closer to a neural network, or probabilistic reasoning model, or some extremely "organic" or "fluid" device from which intelligent behavior organically emerges. Because if intelligence emerges from a strictly formalized framework, then Godel's theorems prove some very crippling limitations on what's possible within that fixed framework.

FYI there is definitely a connection between Godel's theorem and intelligence, though it may not be immediately obvious. I would highly recommend this as fun learning material: http://ocw.mit.edu/high-school/courses/godel-escher-bach/


Wait, wasn't a rational agent simply an agent that optimizes a certain result given a set of knowledge about itself and its environment? I mean, in engineering and physics you do try to build the perfect something, but you also realize you will never actually build the perfect something. You'll never build the perfect computer, the perfect op amp, or the perfect rational agent, just as you'll never find the biggest number, because there's no upper bound on these kinds of things.


Try to implement a rational agent (the AI or mathematically formalized variety) in software, and you'll see what I mean. You have to define some utility function precisely, and some algorithm to maximize it.

The problem with this is it won't really produce a "general intelligence" that can think outside the box, because it will always be maximizing some utility function defined in some rigid formal system. In other words, it will be completely unable to "understand" things outside of the formal system in which you define it.
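
Here is roughly what I mean, as a deliberately tiny sketch (a thermostat-like agent with a made-up utility function): everything it can ever want or consider has to be enumerated up front.

    # A bare-bones "rational agent": a fixed action set, a fixed utility
    # function, and argmax. It cannot invent an action or a goal that is
    # not already spelled out in this little formal system.
    ACTIONS = ["heat", "cool", "do nothing"]

    def utility(action, temperature):
        target = 21.0
        effect = {"heat": +1.0, "cool": -1.0, "do nothing": 0.0}[action]
        return -abs((temperature + effect) - target)   # closer to target is better

    def act(temperature):
        return max(ACTIONS, key=lambda a: utility(a, temperature))

    print(act(18.5))   # heat
    print(act(23.0))   # cool
    print(act(21.2))   # do nothing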


Thank you for knocking down the Gödel canard. People who say humans are not similar able have a vague faith that humans are far smarter than they really are.

The existence of "unprovably true statements" is less a weakness in logic's ability to organize the universe, and more a sign that there are always less meaningful yet constructible statements, like "This statement is not provable."


> "People who say humans are not similar able have a vague faith that humans are far smarter than they really are."

Prenote: I apologize in advance if my entire response is off-topic, because my English parser was completely unable to take anything meaningful from this sentence.

When you have a system of axioms, both consistent and complete, which proves all useful properties of arithmetic and algebra, let us know.

Whether or not you are able to understand the depth of the connection between Godel's theorems, formal systems, self-similarity, and intelligence, you can't deny that the more statistical / 'organic' AIs are overwhelmingly more successful than rigidly logical ones. For example, try implementing voice recognition analytically, and let us know your results (have fun with that).

The point is that although some people would like to believe otherwise, human brains (and nonhuman brains) are NOT logical reasoning machines. Humans are not rational agents. It still surprises me how many people are desperately clinging to rational agents and similar frameworks to try to build general intelligences entirely from logic. I point out Godel's theorems as one type of insight into why this road will lead you nowhere. You're free to pursue it as much as you like, just as animals are free to run in circles chasing their tails.

You can't build an AI capable of thinking outside the box by specifically building that AI inside the box of a particular logical framework. Logic itself has limitations. Logic is powerful, but not entirely in and of itself -- logic is powerful as a tool extended by the mind of its creator(s). The more meaningful question in AI is how we create logical concepts to begin with.



