Physicist proposes new way to think about intelligence (insidescience.org)
241 points by alexwg on April 19, 2013 | 137 comments



This sounds like nonsense to me. First of all, what exactly do they mean by "intelligent behavior"? I looked in the paper but I couldn't find where they define what they are trying to model. They do however use the vague expression "remarkably sophisticated behaviors associated with the human 'cognitive niche,' including tool use and social cooperation."

Tool use and social cooperation, sounds interesting. Let's see, that seems to be figures 3 and 4 from page 4. Once you get over the idea of modeling an animal with a disc, it seems a little bizarre that figures 3 or 4 have anything to do with intelligent behavior. Figure 3 is about one disc bouncing off another disc so that a third disc goes into a cylinder. I would say that doesn't really capture the essence of how "non-human animals" use tools. Then the social cooperation example is about a big disc attached to a string, that moves differently when 2 smaller discs touch it at the same time, and then under some specific configuration the discs have "social cooperation" to move the big disc. As far as I can tell, whether moving the big disc is an intelligent decision or not doesn't seem to matter in this example.

We then get very bold claims, such as "physical agents driven by causal entropic forces might be viewed from a Darwinian perspective as competing to consume future histories."


Here's some intelligent behaviors that can be simply modelled as entropy: http://scalablegamedesign.cs.colorado.edu/wiki/Collaborative...

That's a very simple model that can be used, in that example, to make a number of ghost agents work together to converge on a pac-man, splitting themselves up to cover every exit. The code to do it is very small (smaller than A*), could also be used for pathfinding, and is based essentially on the idea of entropy.
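
Here's a rough sketch of that collaborative-diffusion idea in Python (my own reconstruction, not the code from the link; the maze and numbers are made up). The target emits a "scent" that diffuses through open tiles; each ghost just climbs the scent gradient. Because a ghost's own tile blocks the scent, the ghosts spread out over the different approach routes instead of piling onto the same path.

    # Collaborative diffusion ("antiobjects") sketch: target diffuses a scent,
    # ghosts hill-climb it, and ghosts block the scent so they cover different routes.
    MAZE = ["#########",
            "#.......#",
            "#.#.#.#.#",
            "#.......#",
            "#########"]

    def diffuse(target, blockers, iterations=60):
        rows, cols = len(MAZE), len(MAZE[0])
        scent = [[0.0] * cols for _ in range(rows)]
        for _ in range(iterations):
            new = [row[:] for row in scent]
            for r in range(rows):
                for c in range(cols):
                    if MAZE[r][c] == "#" or (r, c) in blockers:
                        new[r][c] = 0.0          # walls and other ghosts block the scent
                    elif (r, c) == target:
                        new[r][c] = 1000.0       # the target is a fixed scent source
                    else:
                        nbrs = [(r-1, c), (r+1, c), (r, c-1), (r, c+1)]
                        new[r][c] = 0.25 * sum(scent[i][j] for i, j in nbrs)
            scent = new
        return scent

    def ghost_step(ghost, other_ghosts, target):
        scent = diffuse(target, other_ghosts)
        r, c = ghost
        options = [(i, j) for i, j in [(r-1, c), (r+1, c), (r, c-1), (r, c+1)]
                   if MAZE[i][j] != "#"]
        return max(options, key=lambda p: scent[p[0]][p[1]])  # move uphill on the scent

    # one step for two ghosts chasing a target at (1, 1)
    ghosts, target = [(1, 7), (3, 7)], (1, 1)
    for g in ghosts:
        print(g, "->", ghost_step(g, set(ghosts) - {g}, target))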


The idea is that these models could be extended to really big complex problems with the same basic principle. Therefore, it seems that intelligence of arbitrary complexity can be achieved with these algorithms.

I don't think that claim is very bold at all; it is merely an observation. The idea that the algorithm tries to maintain entropy is very much like life as we know it. Life as we know it consists of mechanisms that, despite the continuous dissipation of entropy in the universe around them, try to achieve local points of high entropy and sustain them.

In less intelligent animals, the mechanism to resist entropy loss is defined genetically. They cannot adapt to new environments because their decision making is rigidly defined in their genes (i.e. they are like simple computer programs).

In more intelligent animals, like ourselves, instead of deriving our decisions (mainly) from our genes, we derive them from an organ (whose design is derived from genes) we call the brain (I've heard there are more decision-making organs). Separating the decision making from the DNA is what makes us so highly adaptable to new environments.

Therein lies our likeness to this algorithm: like us, it can sustain its entropy no matter what the environment is (as long as it's got the tools to do so).


It's pretty clear that we are not the only animals with brains, so there are other animals who use their brains to make decisions, such as how to hunt and when to sleep and so on. We make what are, in our opinion, more sophisticated decisions, but that is arguably a human-centric conceit. There are ant colonies that span continents, and they aren't building any nuclear weapons. Are we really sure we are more intelligent than them?

I don't think it is fair to say that our brain makes us more adaptable to new environments than other species. Compared to the many single-celled species that have survived hundreds of millions of years, the bacteria that can survive hibernation for millions of years, and our current inability to agree on the existence of global warming, is it realistic to project the survival of our species beyond even the next thousand years?


The brain is an instrument that allows a species to be more adaptable to new environments. Of course there are other ways to achieve adaptability; being simple and generic is one of them. Being a symbiote/parasite of an adaptable species (like bacteria are of us) is another.

I think the battle between complex organisms and simple organisms is an ongoing one. At the moment we rely on the simple organisms on our insides to protect us against the simple organisms on the outside. If there is any threat to the survival of our species it lies there. A nuclear holocaust or a few degrees of global warming isn't going to do it, unless you define 'our species' as 'inhabitants of new york'.

Whether we are more intelligent than some other intelligent animals depends on how you measure intelligence. If you define it as having many offspring on many continents, I'm not sure ants would win, but I'm not sure humans would either.


What I feel makes us different from animals is not the ability to adapt, as you pointed out (many animals exhibit learned behavior and conscious decision making), and it's not even being self-conscious (it's been proven animals can be too); it is, in my opinion, the ability to observe and act on your own thought-streams.

And this is deeply linked to language, as language is a way to receive information you don't already have and coerce your thought streams (or not) along it, to make predictions that are not based on experience. In my opinion it is very important to make this distinction between predictions learned by experience and predictions learned from books. I think we can measure this difference very well here.

To me it looks like this added layer of brain cells we have acts as an observer of life happening on the floor below, a kind of god of your own brain, acting not on experience-learned behaviour but solely on a combination of representations learned from others (connections created willfully from others' discourse) and abstractions made from adaptive learning on signals coming from the layers below (I envision an isolated system whose only object of observation is regular thoughts and whose only sense is signals coming through connections to the layers producing those thoughts).

It feels like what sets us apart is first this simple fact that we learn from our own thought patterns. This enables us to make what (at least) feel like free decisions based on abstracted representations and not mere immediate experience. This enables us to think outside the box of our own experiences and form models of others'. This enables us to deliberately picture in our minds things that we have never experienced, or things that cannot be experienced (think abstract models in the sciences). This finally enables us to accept information from our peers as valid signals to model our thought-paths on. If this layer were anything real, to me it would mainly be an experience- or information-sharing device: the thing that makes possible, and makes useful, the kind of social structure we know and live in.

I may be rehashing common knowledge, I don't know.


The article is kind of written in gibberish. It also implies that the idea of linking thermodynamics to intelligence is novel, when a lot of experts actually see it as fundamental to thermodynamics and the universe. It's even possible that this physicist is naive about the whole debate and thinks that he discovered this.

However, while the idea is not new, it does seem like he made a software implementation of some aspects of it, and this could be interesting.

I find the link between thermodynamics and intelligence very interesting. Here is how I see it. First, the best definition of intelligence IMO is the ability to predict the unknown (whether because it's hidden or because it's in the future) from current knowledge.

In order to have 'intelligence' you have to have some information in your head. That is to say, a part of you has to be physically correlated with a part of the world, some particles in your head have to have patterns that approximately and functionally 'mirror' part of the world.

Thermodynamics is about order, and this is also about particles having properties that correlate with each other. This means that in a low-entropy situation, knowing something about a particle tells you something about some other particles. Intelligence would be useless if the universe had too high an entropy. You can't get much insight from white noise even if you are very smart.

There is a saying that "knowledge is power". This is truer than you might think. It is true in a very physical sense.

For example, take the thermodynamics-textbook example of a container with a gas on one side and a void on the other. If there is no barrier between the two sides, the gas will move to fill the void and settle in the higher-entropy state of evenly filling the space. Thermodynamics says that you would need to spend energy to push the gas back to one side.

However, if you were to put a wall in the middle of the container with a little door large enough to let a molecule go through, and you knew exactly the position and velocity of each molecule in the container, you could open the door at exactly the right time when a molecule is about to go from, say, left to right through it, and close it when one is about to go right to left. Using very little energy you could get all the molecules to end up on one side of the container.

This should violate the second law of thermodynamics but it does not! Why is that you ask? It's because the knowledge you have of the position and velocity of all these molecules is a source of low entropy. Knowledge is low entropy and the correlation between the particles in your head and the real world is what allows you to make predictions and extract useful energy from things.
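
If it helps, here is a toy simulation of the demon described above (box size, particle count and speeds are all made up, and it deliberately ignores the thermodynamic cost of acquiring and erasing the demon's knowledge, which is the usual resolution of the apparent paradox):

    import random

    # Toy Maxwell's demon: particles bounce in a 1-D box [0, 1] with a partition
    # at x = 0.5. The "demon" opens the door only for particles about to cross
    # from left to right, so the right half fills up without any pushing.
    N, DT, STEPS, DOOR = 100, 0.001, 20000, 0.5

    pos = [random.random() for _ in range(N)]
    vel = [random.choice([-1, 1]) * random.uniform(0.5, 1.5) for _ in range(N)]

    for _ in range(STEPS):
        for i in range(N):
            x_new = pos[i] + vel[i] * DT
            if x_new < 0.0 or x_new > 1.0:
                vel[i] = -vel[i]                      # reflect off the outer walls
            elif (pos[i] - DOOR) * (x_new - DOOR) < 0 and vel[i] < 0:
                vel[i] = -vel[i]                      # door shut for right-to-left crossings
            else:
                pos[i] = x_new                        # free flight (door open left-to-right)

    print(sum(x > DOOR for x in pos), "of", N, "particles ended up on the right")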

Note that an interesting comp-sci corollary of this is that low-entropy information is easier to compress, and the better a system is at predicting data from other data, the better it is at compressing it, so there is also a tight link between the concepts of compression algorithms and artificial intelligence. But that's another story.
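
A crude way to see the compression side of that, using nothing but the standard library (this only measures zeroth-order per-byte entropy; real compressors exploit much more structure, but the direction of the effect is the point):

    import math, random, zlib
    from collections import Counter

    # Predictable (low-entropy) data compresses far better than noise.
    def entropy_bits_per_byte(data):
        counts = Counter(data)
        return -sum(c / len(data) * math.log2(c / len(data)) for c in counts.values())

    predictable = b"abcd" * 25000                                   # very regular
    noise = bytes(random.getrandbits(8) for _ in range(100000))     # white noise

    for name, data in (("predictable", predictable), ("noise", noise)):
        print(name, round(entropy_bits_per_byte(data), 2), "bits/byte,",
              "compressed to", len(zlib.compress(data)), "of", len(data), "bytes")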


I might add that what made me understand these concepts was reading the writings of physicist and probability theorist E.T. Jaynes, especially his unpublished manuscripts: http://bayes.wustl.edu/etj/node2.html

I think if he had been alive at the right time these would have been blog posts. Before reading them, I had taken an intro class in thermodynamics which had left me completely confused.

Read THE EVOLUTION OF CARNOT'S PRINCIPLE ( http://bayes.wustl.edu/etj/articles/ccarnot.pdf ) for incredible insights on how Carnot pioneered thermodynamics by trying to optimize steam engines.

Also if you think you dislike statistics and probabilities but you like math in general his book might change your mind: Probability Theory: The Logic of Science. Free draft: http://omega.albany.edu:8008/JaynesBook.html

Amazon: http://www.amazon.com/Probability-Theory-Science-T-Jaynes/dp...

In fact, understanding his stance on probabilities, the mind projection fallacy in particular, might be a prerequisite to understanding thermodynamics, the fundamental point being that entropy is not really a direct property of matter but more of a meta-property about knowledge or information, which here means correlations across aggregate matter.


Just so you know, before I started spending all of my time programming I was a physicist, and statistical physics was my best subject.

Anyway, I kind of object to your definition of intelligence as "the ability to predict the unknown". Of course it comes down to personal preference, but I feel that when it comes to making AI, a better definition is something along the lines of "intelligence is the ability to make good choices when faced with a decision". In other words, I like to come at AI using the definition of rational agent[1].

Machine learning is the subject that involves predicting the unknown, and there is a ton of linear algebra involved. I am not an expert at ML, so it's possible that the second law in some form plays a role, but if so I have not come across that yet in my studies.

Now, insofar as the second law applies to AI, I am skeptical, but your comment has convinced me to keep an open mind and look into it more carefully. Entropy is merely the expected negative log of a probability distribution, so when we talk about maximizing entropy for decision making, which probability distribution should we use? Should we use different distributions for different situations, and if so, how do we decide which one to use?
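
For reference (this is just the textbook definition, not something specific to the paper), the entropy of a discrete distribution is H(p) = -sum_i p_i log p_i; over a fixed set of outcomes the uniform distribution maximizes it. As far as I can tell, the paper applies a quantity of this kind to the distribution over possible future trajectories rather than over instantaneous states, which is where the "causal" in "causal entropic forces" comes from.

    import math

    # Standard Shannon entropy of a discrete distribution:
    # H(p) = -sum_i p_i * log2(p_i).
    def entropy(p):
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)

    print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits: maximal for 4 outcomes
    print(entropy([0.97, 0.01, 0.01, 0.01]))   # ~0.24 bits: nearly deterministic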

A much more relevant subject for AI in my opinion is game theory, which is really a theory of decision making. Finding the optimal strategy for navigating a decision tree can be a hard problem, and just because maximizing entropy works to find solutions for some types of decision problems, doesn't mean that it's a magic bullet that will always, or even frequently, work out.

[1] http://en.wikipedia.org/wiki/Rational_agent


"I feel that when it comes to making AI, a better definition is something along the lines of "intelligence is the ability to make good choices when faced with a decision"

I love it. Mine is even more constructivist (and Wittgensteinian). Something like "intelligence is the ability to detect patterns and act accordingly."

The problem, as many philosophers have noted, is that such definitions -- in fact, most definitions -- make it hard to argue that the thermostat is not intelligent (or conscious, or sentient). And that, of course, is at the heart of Turing's most famous argument.

I, myself, find it hard to argue my way around what is essentially a "walks like a duck, talks like a duck" argument when it comes to these matters -- which is to say, defining something in terms of its activities, behaviors, likenesses. That perhaps tilts practical AI toward the statistical, though, and it's hard for me to believe that that's the whole story.

I have never presented the arguments back-and-forth on this issue without watching my students collapse into confusion and desperation.


The great personal insight is when one finally discards prejudice and bigotry and realizes that a thermostat is intelligent, and that a neuron is a "molecule" of intelligence.


I think your definition of intelligence is good and is simply broader than mine.

I split your definition into two parts and only labelled the first part 'intelligence'. Part 1: predicting the unknown, which can be the probabilities of the outcomes of your actions and such. Part 2: evaluating these outcomes and their likelihoods to make a decision based on some utility or preference function.

I can see contexts where the word 'intelligence' would encompass both parts. In the context of thermodynamics or information theory I think intelligence is more about the prediction part, because pure matter or even computers don't really have a preference over outcomes, so the second part is mostly irrelevant to their 'intelligence'.

The original article seems to take low entropy as a stated goal or preference of the system, which is not a bad sub-goal in that it is thermodynamically efficient and preserves usable energy longer, but I don't think being thermodynamically efficient should be the only goal of an intelligent system.


> "I feel that when it comes to making AI, a better definition is something along the lines of "intelligence is the ability to make good choices when faced with a decision"."

What defines a good choice? What defines a bad choice? Saying "A true machine intelligence is one that makes good decisions" is a tautology -- it doesn't define anything. Rather, it just reframes one vague question into another.

Intelligence is notoriously difficult to describe, so don't feel bad about it if you keep thinking of circular definitions. It's a tricky topic.

Also, IMO there's a big problem with rational agents: In order to be actually useful/implementable, they must be defined in a specific logical framework. You can define a rational agent in a vague sense without any specific domain, but doing so does not solve any problems -- it again just results in tautology. But the problem with choosing a logical framework is shown by Godel's famous theorem. Simply put, any such agent in a well-defined domain will be confined to that domain of reasoning. To me, a machine that excels in some area of planning but can never think outside the box, is not intelligent on the same level as humans.

Anyone advocating rational agent AI as true machine intelligence IMO will need to either find a flaw and disprove Godel's incompleteness theorem, or provide some system of logic that encapsulates all human reasoning (absurdly impossible IMO).

On the other hand, I personally think true machine intelligence will be solved much more "organically", far separated from formal logic and more related to fuzzy pattern matching than rigid formal optimization. I really like this definition of intelligence:

http://lesswrong.com/lw/vb/efficient_crossdomain_optimizatio...

Though I wouldn't claim it's the ultimate one, as it's still a bit vague. I think the key lies in what he describes as "cross domain" -- what is colloquially called "thinking outside the box", because this is the only thing humans seem extremely good at that every computer AI to date has failed at.

Humans are capable of transcending formal logical systems and finding new truths that are unprovable from the original formal system. How is this possible? Godel proved that a formal system cannot ever determine this by itself, from axioms, without being inconsistent.

IMO a true machine intelligence will also need to be capable of this formal-system-transcending property (which you can also call "cross domain thinking", or "thinking outside the box", or "creativity", or whatever).


For a rational agent, a decision is good or bad based on its preferences, which are described mathematically as part of the definition of the rational agent. So I don't believe it's tautological.

I do agree with your statement that intelligence is difficult to describe. And for the record I never claimed the rational agent approach will achieve "true machine intelligence". Frankly, I don't even know what "true" machine intelligence is or would be. What I am very interested in however is how to create an AI that is capable of fooling a person into thinking it's intelligent, in other words an AI that can pass the Turing test. For that purpose I find the concept of a rational agent to be very helpful, in that it provides a solid foundation to work from.

I'm also not worried about Godel's theorem at all. The statements that are undecidable are very obscure and basically never come into play in human decision making. I think Godel's result has very little practical application, if it has any at all; I certainly don't think it's relevant for AI. I agree that choosing a logical framework to work in presents many problems, but I don't agree that you have to disprove Godel's theorem to create an AI that can think outside the box.


I also don't think you have to disprove Godel's theorem to create an AI that thinks outside the box, however, I believe you would have to for it to be a rational agent. A rational agent by definition operates strictly within the domain of a formal system. Thus by definition, a rational agent is in direct conflict with the notion of "thinking outside the box."

Of course any simulation is implemented in some formal system, but that's not what I'm saying. I'm saying the inherent problem with a rational agent is that they maximize utility within a formal system -- that is what they do, that is their entire purpose. Once we start talking about rational agents that can "think outside the box", or think beyond the formal system in which they're defined, we're no longer talking about rational agents.

I agree with your belief that we may some day build AIs that think outside the box, I just don't see how it can be a rational agent. I think it will be much closer to a neural network, or probabilistic reasoning model, or some extremely "organic" or "fluid" device from which intelligent behavior organically emerges. Because if intelligence emerges from a strictly formalized framework, then Godel's theorems prove some very crippling limitations on what's possible within that fixed framework.

FYI there is definitely a connection between Godel's theorem and intelligence, though it may not be immediately obvious. I would highly recommend this as fun learning material: http://ocw.mit.edu/high-school/courses/godel-escher-bach/


Wait, wasn't a rational agent simply an agent that optimized a certain result given a set of knowledge about itself and its environment? I mean in engineering and physics, you do try to build the perfect something, but you also realize you will never ever build the perfect something. You'll never build the perfect computer, the perfect op amp, nor the perfect rational agent, just as you'll never find the biggest number, because there are no upper bounds on these kinds of things.


Try to implement a rational agent (the AI or mathematically formalized variety) in software, and you'll see what I mean. You have to define some utility function precisely, and some algorithm to maximize it.

The problem with this is it won't really produce a "general intelligence" that can think outside the box, because it will always be maximizing some utility function defined in some rigid formal system. In other words, it will be completely unable to "understand" things outside of the formal system in which you define it.
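
To make that concrete, here is about the smallest "rational agent" I can write down (everything in it is a made-up toy). The outcome model and the utility function are handed to it up front, and all it can ever do is pick the argmax within that frame -- which is exactly the "box" being discussed.

    from typing import Callable, Dict, List

    def rational_choice(actions: List[str],
                        outcomes: Dict[str, Dict[str, float]],   # action -> {outcome: probability}
                        utility: Callable[[str], float]) -> str:
        """Return the action with the highest expected utility."""
        def expected_utility(a: str) -> float:
            return sum(p * utility(o) for o, p in outcomes[a].items())
        return max(actions, key=expected_utility)

    # toy decision: take an umbrella or not
    actions = ["take umbrella", "leave umbrella"]
    outcomes = {"take umbrella": {"dry": 1.0},
                "leave umbrella": {"dry": 0.6, "wet": 0.4}}
    payoff = {"dry": 1.0, "wet": -5.0}
    print(rational_choice(actions, outcomes, payoff.__getitem__))   # -> "take umbrella"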


Thank you for knocking down the Gödel canard. People who say humans are not similar able have a vague faith that humans are far smarter than they really are.

The existence of "Uhprovably true statements" is less of a weakness in logic's ability to organize the universe, and more that there are always less meaningful yet constructible statements, like "This statement is not provable."


> "People who say humans are not similar able have a vague faith that humans are far smarter than they really are."

Prenote: I apologize in advance if my entire response is off-topic, because my English parser was completely unable to take anything meaningful from this sentence.

When you have a system of axioms, both consistent and complete, which proves all useful properties of arithmetic and algebra, let us know.

Whether or not you are able to understand the depth of the connection between Godel's theorems, formal systems, self-similarity, and intelligence, you can't deny that the more statistical / 'organic' AIs are overwhelmingly more successful than rigidly logical ones. For example, try implementing voice recognition analytically, and let us know your results (have fun with that).

The point is that although some people would like to believe otherwise, human brains (and nonhuman brains) are NOT logical reasoning machines. Humans are not rational agents. It still surprises me how many people are desperately grasping to rational agents and similar frameworks to try to build general intelligences entirely from logic. I point out Godel's theorems as one type of insight into why this road will lead you nowhere. You're free to pursue it as much as you like, just as animals are free to run in circles chasing their tail.

You can't build an AI capable of thinking outside the box by specifically building that AI in the box of a particular logical framework. Logic itself has limitations. Logic is powerful, but not entirely in and of itself -- logic is powerful as a tool extended by the mind of its creator(s). The more meaningful question in AI is: how do we create logical concepts to begin with?


I haven't read the original paper, but (speaking as a physicist) I'm always skeptical when physicists leap into extremely different disciplines and claim to have a grand insight that its specialists have never considered before.

It could always be true: physics really does have a lot to contribute in many areas! But the burden of proof is very, very much on the physicists to justify the relevance of their approach.

In this case, based only on this little summary, I have to wonder whether they are really explaining intelligence or whether they are identifying some broader natural principle for which intelligent behavior is just one manifestation. There may be something quite deep here, but I'm holding off on deciding what exactly it is.


Welcome to modern physics, where physicists do unfalsifiable and imprecise things based on bad philosophy. Anything goes as long as it gets you attention anywhere.

https://news.ycombinator.com/item?id=5562156

HNers need to be more critical of science articles. This is where we need good criticism, not on some poor hacker's pet project.

Edit 1

"Additionally, a company he founded is exploring commercial applications of the research in areas such as robotics, economics and defense."

That is a major red flag when people posit a universal AI theory and try to sell to the hapless government tech based on it.

http://en.wikipedia.org/wiki/Thinking_Machines_Corporation


> Welcome to modern physics, where physicists do unfalsifiable and imprecise things based on bad philosophy. Anything goes as long as it gets you attention anywhere.

Thank you! I've been saying that for a long time, but much less eloquently.


Your comments seem strangely insensitive, given that I already said that I'm a physicist (and presumably a modern one).

As I've already indicated, there's reason to be skeptical when physicists stretch past the usual boundaries of the discipline. (Though it still could make important contributions.) But modern physics as modern physics is, by and large, solid science.


I am also a physicist doing AI. The comment was not aimed at you! I did not mean modern physics (QM,Relativity etc)! That is rock solid!

What I meant was physics which is closer to the Time Cube theory than rock solid mathematics on a spectrum of math and science.

"As I've already indicated, there's reason to be skeptical when physicists stretch past the usual boundaries of the discipline."

There is reason to be skeptical when anybody proposes a grand unifying theory for something as grand and difficult as intelligence!! Extraordinary claims require extraordinary evidence!

This has been going on since 1957, with so many young lives lost (sometimes literally) to techniques bordering on charlatanism. Sorry to be harsh, I love AI and want it to succeed. It won't succeed if people mix pursuing knowledge with pursuing fame and $.

http://www.wired.com/techbiz/people/magazine/16-02/ff_aimyst...


It's turtles all the way down, until you reach Philosophy


... At which point it's still turtles, but nobody can really agree on what a turtle is, or what it means for something to be turtles, or what it means for something to be anything, including (but not limited to) turtles, or what it means for something to mean something. By the time the subject is settled -- if it ever is settled -- the turtle is dead. And so is God.

(For some definitions of "turtle", "dead", "God", and most especially "is".)


Wasn't arguing about the definition of "is" a topic when the Linguists got bored and tried to invade the Philosophy Departments?


>I haven't read the original paper, but (speaking as a physicist) I'm always skeptical when physicists leap into extremely different disciplines and claim to have a grand insight that its specialists have never considered before.

That's bad, because the most interesting breakthroughs came from cross-disciplinary work.


I'm curious as to the basis of your statement. I believe the opposite to be true. Gene theory, the general theory, antibiotics, the structure of DNA, the function of neurons, and the transistor all came from specialists. Is it not the rare breakthrough that comes from cross-disciplinary work?

The cross-disciplinary breakthrough is often characterized as unifying, and hence rightfully celebrated. But they are neither the norm nor the most influential.


"the structure of DNA, the function of neurons"

Crick, Hodgkin and Huxley were all physicists poaching in other fields. In fact, relatively few of the people who laid the foundations in molecular biology were trained as biologists.


> Crick, Hodgkin and Huxley were all physicists

This recollection is incorrect across the board.

Crick had only a bachelors in physics (which does not a physicist make), and his postgraduate training was in medical research. Huxley, an autodidact, had his background in medicine. I'm not sure about Hodgkin but I suspect he was also not a physicist.


Crick was perhaps halfway into a Ph.D. in physics when the war broke out, during which he basically did very applied physics for the British. After the war, he did a Ph.D. under Max Perutz on X-ray diffraction. Crystallography is very useful in biology, but it is itself very far from a biological topic. Crick wrote of experiencing extreme impedance mismatch when he switched from physics to biology and, perhaps most significantly, Watson often referred to Crick as 'the physicist' or to 'having him do the physics' (sorry, I don't have the full text in front of me) in The Double Helix, suggesting that his approach to biological problems was primarily a physical one. It is true that Crick did work under the Medical Research Council, but it is somewhat misleading to say that he was doing medical research in the conventional sense. He was studying the biophysics of cytoplasm, which is not something that people trained in medicine do.

I admit that Hodgkin and Huxley are harder cases: both had early inclinations to physics and then got sidetracked into physiology. Both of them spent the war doing things you'd have a physicist doing during a war, and afterwards did their famous work on the action potential. I read the Hodgkin and Huxley papers, but I read them in a course in the math department -- they contain some of history's finest examples of mathematical modeling, relying heavily on dynamical systems and circuit theory. It is a regrettably rare biologist who wants to go near those papers. It is manifestly not the work of traditionally trained biologists.

Disciplinary boundaries in mid-century Britain were somewhat different then, and I realize the danger of playing No True Biologist. But the mainstream of biology is only very recently acknowledging the importance of the physical approach: Hodgkin and Huxley could have been Hodgkin and Huxley without a lick of genetics or ethology, but they couldn't have been Hodgkin and Huxley without the cable equation.


I would add the fun fact that the applied physics work Crick was doing was experimental work in fluid mechanics when a bomb fell through the roof of his laboratory and destroyed his experiment! This was certainly a factor in prompting him to switch fields.

I would also add Max Delbruck to your list of turn of the last century physicists cum molecular biologists and the physicist members of the RNA tie club: http://en.wikipedia.org/wiki/RNA_Tie_Club.

I also have a question for the GP. What exactly makes a physicist so? I have "merely" a masters degree in physics, and although I consider myself to be a theoretical biologist, I am often called a physicist by my collaborators (some of whom are physicists themselves). What about a degree actually matters in this descriptive title?


> Crick, Hodgkin and Huxley were all physicists poaching in other fields.

And Fisher was a statistician.


And Mendel was a monk.


Most likely, a unifying framework of nature would come from cross-disciplinary work. That's almost a tautology. Unfortunately we're stuck in a world where isolated people, playing with narrow albeit very detailed views of nature, try to capture the big picture all on their own.

I'm glad physicists can try their luck at AI. I'd be glad other disciplines, even remote, do too. Foreign concepts often bring with them new outlooks on problems that are very fertile and rejuvenating. A little bit more humility wouldn't hurt though.


> That's bad, because the most interesting breakthroughs came from cross-disciplinary work.

True, but I think the signal-to-noise ratio is also exceptionally low. The most amazing breakthroughs come from insights that span disciplines, but it's also easy to think you have an insight when what you really have is a mistake. I'm not weighing in on this one in particular, but as a general principle I think caution is appropriate in cases like this.


Physicists are doing quite well in economics, according to economist Steve Keen in his book Debunking Economics.


Agreed

"My proposal is that intelligence is a process that attempts to capture future histories"

'future histories' is just ... the future! .... so his great proposal is 'intelligence predicts the future'? Well duh!


Errr no. His proposal is that intelligence is a process that maximizes the number of available future histories. Just capturing future histories is meaningless. The example he gives of an upright rod clearly illustrates the difference. Really what he's saying is that intelligence seeks to maintain maximum possibilities for the future.
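
A toy version of "maintain maximum possibilities for the future" that I find helpful (this is my own discrete sketch, not the paper's method, which maximizes an entropy over continuous future paths): at each step the agent moves to whichever neighbouring cell has the most distinct cells reachable within a short horizon, and without being told anything else it heads for open space and avoids dead ends.

    # Greedy "keep your options open" agent on a made-up grid.
    MAZE = ["##########",
            "#.....####",
            "#.....####",
            "#.....####",
            "#.....#..#",
            "#........#",
            "##########"]

    def neighbours(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if MAZE[r + dr][c + dc] != "#":
                yield (r + dr, c + dc)

    def reachable(cell, horizon):
        """Count distinct cells reachable from `cell` in at most `horizon` moves."""
        frontier, seen = {cell}, {cell}
        for _ in range(horizon):
            frontier = {n for f in frontier for n in neighbours(f)} - seen
            seen |= frontier
        return len(seen)

    pos = (5, 6)                 # start in the corridor between the room and the pocket
    for _ in range(6):
        pos = max(neighbours(pos), key=lambda n: reachable(n, 4))
        print(pos)               # wanders into the open room, not the dead-end pocket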


He just set a goal of keeping the potential energy maximized for the rod, just like Big Blue set a goal of winning at chess 30 years ago, both by 'predicting the future'.

I don't see anything new here except the phrase 'future histories' which is not even defined because it just means 'the future'.

As soon as I saw it claiming it can win at the stock market I knew it was BS.


It's not "predicting" the future, it's "maximizing" the number of potential futures. In chess, the more moves you have available the better you are doing, generally. But another measure (probably better although I'm not a chess expert) is how few moves your opponent has available. Strictly speaking it sounds like this strategy would emphasize keeping future moves available for both you and your opponent. This is a more idealistic future for many real-world situations but I'm not sure I'd call it the definition of "intelligence".


Yeah, seems to me their idea can be summarized as "in the absence of a specific goal, keep options open".

Which is generally good advice, but only "in the absence of a specific goal".


I think that's what is different between machines and life. A machine always has a specific goal. Life never does.


Well, most life has a tendency to reproduce (inherited from ancestors who all reproduced). Aka evolution.


@klipt: Reproduction is not a goal. It's a requirement of the physical system. There would be no evolution without reproduction.


It sounds like crazy nonsense but there's just enough in it to sound crazy awesome.

I've got like, an undergraduate understanding of how this works. Does anyone have a link to an ungated copy?



Whelp, that's completely unreadable to me. Undergraduate computer science fail.

I got as far as "the model of this physical process of entropy can act as a search function for lower entropic states, which given some poking, can resemble problem solution states".

It's the poking part that is problematic.

>To better understand this classical-thermal form (11) of causal entropic forcing, we simulated its effect [12,13] on the evolution of the causal macrostates of a variety of simple mechanical systems: (i) a particle in a box, (ii) a cart and pole system, (iii) a tool use puzzle, and (iv) a social cooperation puzzle

I am wildly unqualified to comment on this paper, but it's suspicious that you would jump to a "tool use puzzle" and a "social cooperation puzzle". It yields a really fun result and a fun way of talking about cognition, but it seems unsurprising to say that once you describe a problem in the physical terms of this framework, your framework can then solve the problem, as in,

>We modeled an animal as a disk (disk I) undergoing causal entropic forcing, a potential tool as a smaller disk (disk II) outside of a tube too narrow for disk I to enter, and an object of interest as a second smaller disk (disk III) resting inside the tube

The notion that currently understood physical models can act as a learning system, given the right parameters, is really cool and exciting, and there's a great simplicity and metaphoric appeal to it.


phill, this is hacker news, and if someone finds you're wildly unqualified to comment on something, don't worry, he'll be vocal about it and explain exactly why he thinks you're unqualified to comment about it. That's why I love this site. On any similar site aggregator, I'd get cat pictures and people telling me my opinions are stupid because my face is stupid.

More towards the topic, I'd say these kinds of articles have to be taken with a grain of something, preferably salt or cynicism, depending on taste. I'm usually more of a material science/electronics technology buff, and news articles in that area will not hesitate to present a relatively minor paper as a breakthrough in battery technology that will solve all our energy problems in the next 5 years. But then you read on, compare it against the existing technologies, and you realize that they just did something funny to the electrodes that increases power density at the cost of usage time. Somewhat handy in some applications, but not a game changer.

PS: You're certainly not the youngest here.


Then there's lesswrong.com, where I've never felt qualified to even comment at all.


This is a lot like the "efficient cross-domain optimization" definition from Eliezer Yudkowsky. It looks like they even use the same information-theoretic measure of optimization power across the distribution of possible futures. http://lesswrong.com/lw/vb/efficient_crossdomain_optimizatio...


This is fascinating. It proposes that intelligence is about maximizing the control we have over future events, i.e. maximizing the entropy of the system. Intuitively this aligns well with how I think about many strategic options or inflections in life/business - you try to take the path which maximizes future options. Lots of interesting applications; will be interested to see how well they manage to communicate this work across different disciplines.


I think it's about trying to minimize entropy (disorder) of the system. Maximum entropy is maximum disorder (which the universe tends toward naturally), such as when the pendulum hangs below the cart without any intervention.


Why is this down voted? It is correct. The article worded it wrong. A system with lower entropy has more future states than a system with high entropy.


Yeah, I had to parse the article a couple times because the wording is wrong. It seems to me that Entropica is trying to minimize the entropy of the system. Life is all about low entropy.


Indeed. I'd say intelligence tries to keep its environment in a state it can predict.


Good one: Ralf Der and colleagues proposed homeokinesis as such a principle in 1999. That turned out not to be enough, because the agent would then have an incentive to move into a corner and just hide. That renders the world very predictable, but it is clearly not a good strategy in the long run.

So, they generalized the method to find a predictable future, but in very unstable states. Later, Der and Ay generalized this towards maximization of predictive information - for that, it is not sufficient to predict the environment; there must also be something nontrivial to predict. This gives rise to a number of highly interesting behaviours in many of their scenarios.

So, yes, your idea is a good one...


I think the trick to understanding this is that, in order for an intelligent being to maintain its internal homeostasis, it must (in an entropic sense) really screw up its surrounding environment. Thus, if you look at the whole system, the intelligent being is maximizing entropy. Phrased another way: because it wants to maintain its own homeostasis, it has to work extra hard to achieve this. That work creates entropy outside the being, so when you look at the whole system - bam - you have entropy maximization.

I'm not sure I'm right, but this logic works for me...


The research just observes that if you try to keep your environment in the state in which you have the maximum number of possible futures to choose from by acting in a specific way, then your behavior leading to that state seems intelligent.


Well, the abstract of the actual paper (linked up-thread) uses the phrase "entropy maximization". I still don't understand entropy after taking a physics class on it, but I did gather it's more complicated than "order" vs "disorder" in any intuitive sense.


It also seems compatible with some parts of experimental economics, such as the study of soldiers from different countries trading MREs, which suggests there is value in diversity, not just taste or quantity.

If there wasn't, why would, say, French soldiers be exchanging their wine and pate de foie gras MREs - especially after having satisfied their curiosity about what other MREs taste like, i.e. after the first trade?


Your analysis is not fully accurate, if I understand the paper correctly; I believe you do not maximize the *actual* entropy of the system. You maximize the *potential* future entropy of the system. That's something entirely different.

There is actually a line of research where channel capacity of the actuation/perception channel is being used to model that (it's also named "empowerment", in analogy to the social science term, but it is measured in terms of information theoretical quantities). Here, one does not only model the entropy of the future, but in fact that amount of entropy about the future that the agent can systematically control. Or, more precisely, how well the agent is able to systematically control its future (which allows it maximal future entropy - but only entropy it can actually itself generate!)

See e.g. Klyubin et al. 2005, 2008, Capdepuy et al. 2007 and later, Jung et al. 2011, Salge et al. 2012. Indeed, maximizing the control over potential futures (empowerment) seems to work well for a range of scenarios, including the discovery of points of high centrality in mazes and graphs, as well as pole-balancing, acrobot balancing, bicycle balancing examples, in short, survival scenarios; also for discovering new object manipulation modes, driving sensor evolution models, self-organizing multiagent collective behaviour and a few other cases. It does not work well for systems with prespecified goals, systems with funnels/bottleneck transitions, in short, for cases where you have to relinquish potential futures to commit to a decision. This needs to be treated with a different principle.

The information-theoretic treatment takes into account action routes with controllable dynamics, it will avoid state space regions with more noise and thus less controllability. Importantly: it is the "potential" to do something, it does not enforce to actually carry out the option. Having the option is all that counts. Like in chess, the "threat" is more effective than the "carrying out".

Disclaimer: I am one of the authors of the empowerment work.
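
For readers who want to see what that looks like concretely, here is a tiny deterministic toy (my own sketch, not code from any of the papers above): for noiseless dynamics the channel capacity reduces to the log of the number of distinct states the agent can reach in n steps, and scanning a small maze with that measure already picks out the central junction as the most "empowered" spot, i.e. a point of high centrality.

    import math

    # Deterministic n-step "empowerment" over a made-up cross-shaped maze.
    MAZE = ["#######",
            "#..#..#",
            "#..#..#",
            "#.....#",
            "#..#..#",
            "#..#..#",
            "#######"]

    def moves(cell):
        r, c = cell
        yield cell                                 # the agent may also stay put
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if MAZE[r + dr][c + dc] != "#":
                yield (r + dr, c + dc)

    def empowerment(cell, n=3):
        states = {cell}
        for _ in range(n):
            states = {m for s in states for m in moves(s)}
        return math.log2(len(states))              # bits of control over the near future

    cells = [(r, c) for r, row in enumerate(MAZE)
             for c, ch in enumerate(row) if ch == "."]
    best = max(cells, key=empowerment)
    print(best, round(empowerment(best), 2), "bits")   # the junction in the middle wins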


I'm surprised they got through the whole article without mentioning Nietzsche's der Wille zur Macht (will to power), which seems to be qualitatively identical (or at least very similar) to this proposal. Not bad for a philosopher from the 1800s.


It seems to me that Nietzsche's work is frequently discussed yet rarely read.

I read Zarathustra with a goal of non-casual reading - it took me 6 months, due to what is IMHO the complexity of the issues discussed. One has to stop frequently, at every paragraph, to ponder whether one's understanding is correct and how it articulates with the other paragraphs.

If, on average, it indeed takes more time to read Nietzsche than any other author, there might be an adverse selection against reading Nietzsche's work - to maximise utility, if one is to judge utility by the number of books read, as I frequently see in goal lists (1 book a week, etc.).

A consequence could be that among those who are discussing his work, especially outside philosophy (CS, math, etc.) very few people have actually read any of it - or a whole book.


> One has to frequently stop at every paragraph to ponder whether one's understanding is correct, and how it articulates with the other paragraphs.

Which is interesting, because Nietzsche is generally seen as one of the more accessible of the philosophers from around that period.

I highly recommend reading Arthur Schopenhauer's work as well, which heavily influenced Nietzsche. It predates Nietzsche's work by about 30 years but is very similar (although much more nihilistic).

http://en.wikipedia.org/wiki/Arthur_Schopenhauer


These guys have math and experimental results; Nietzsche was, at best, far more vague.


That was the first thing that occurred to me, too.

As the earlier discussion about chess suggests, interactions with other intelligent agents complicate matters. An intelligent agent shouldn't just attempt to maximise the number of possible states -- such an agent may well choose to be passive, and allow the other agent to run the show, which isn't a very useful outcome. Rather, an intelligent agent should attempt to maximise the number of outcomes WHICH IT CAN DETERMINE. And that is the will to power.


The top comment currently says "This sounds like nonsense to me", and its first child agrees and adds something like "everybody has known this for ages".

Well, for one I like this article. I wonder if we developers could use entropy instead of debt to explain the necessity of refactorings and abstractions.

When you say that there is a lot of technical debt, the boss would assume that it is like financial debt: as long as you are still running forward, you can go to the banks and fix the issue with them. It is reversible.

But technical "debt" is not reversible. One wrong lazy "if customer = John Doe" (instead of a proper configurable flag) paves the way to other wrong ifs, and it becomes very fast an irrecuperable mess. This is more like growing system's entropy: a broken mirror won't reassemble, or spilled water won't come back to the bowl, as the Chinese say (for a lost hymen).

Using entropy instead of debt, provided people understand the concept#, we could make the case by showing that increasing the complexity of a system the wrong way (adding ifs because "no time") drastically and irreversibly reduces the number of possible futures.

# Obviously the weakness of my point...


Going off topic...

The "technical debt" term was created to explain the situation in a way that your boss (that knows a bit about money) could understand and make usefull analogies. Framing it in terms of entropy, altough more precise won't achieve any of those goals.


Catching up to early 20th century western philosophy. Relevant links:

- Heidegger/World-disclosure (http://en.wikipedia.org/wiki/Being_and_Time)
- Nietzsche's will to power (http://en.wikipedia.org/wiki/Will_to_power)
- Process and Reality (http://en.wikipedia.org/wiki/Process_and_Reality)


"Trying to capture as many future histories as possible" (keeping your options open) harmonizes well with Paul Graham's view on procrastination (http://paulgraham.com/procrastination.html) and the tenet of "put off decisions as long as possible".

UPDATE: Evidently several smart people have had this intuition. I just remembered who said, "Never make a decision until you have to" -- it was Randy Pausch in the "The Last Lecture" (http://www.youtube.com/watch?v=ji5_MqicxSo).

And "delay commitment until the last responsible moment" is also an agile principle (http://www.codinghorror.com/blog/2006/10/the-last-responsibl...).


Yes, there is also something similar in game theory. You're losing if your adversary reduces your branching factor.


After a quick glance, this sounds a lot like information theoretic empowerment, which has been around for more than five years:

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjourna...

https://uhra.herts.ac.uk/dspace/bitstream/2299/6712/1/901919...

https://uhra.herts.ac.uk/dspace/bitstream/2299/9659/1/ContiP...

Also the concept of homeokinesis is quite similar and also not quoted in this work:

http://www.informatik.uni-leipzig.de/~der/Forschung/homeoki....


Thanks for posting these. The third link in particular applies a very similar approach to the inverted pendulum example, in depth and with clarity. I also note the modest concluding paragraph for comparison to the grand claims of the OP.

"Finally, we conclude that it would be desirable to transfer some of the generality of the underlying empowerment concept into methods for empowerment calculation in systems combining continuous and discrete dynamics, especially systems of real-world relevance."


Cognitive science states that intelligence is all about finding patterns, and patterns reduce randomness, which can be expressed in terms of entropy. What's new here? Capturing future histories? As long as the patterns can describe the time evolution of the steps ahead, the patterns reduce the complexity of what needs to be described, or what Wissner-Gross might call "capturing as many future-histories as possible." In my humble opinion, Wissner-Gross just rephrased/repackaged the classical sense of intelligence already embraced by many scientists.


Agree.

This work claims that there is no goal in their implementation of intelligence, but forgets the goal of "capturing as many future-histories as possible". With the right amount of tolerance for vagueness, it is not that hard to convince oneself that this goal can explain a lot of behaviors. But saying the same thing with new words is not very useful in cognitive science.

A more interesting question to me is this: could Dr. Wissner-Gross use his own theory to explain physicists' (including his own) obsession with cognitive science, a field they know next to nothing about?


I'm no expert in either physics or cog neuroscience, but it sounds similar by name to the Free Energy Principle of brain function.

http://neuroticthought.tumblr.com/post/9044776197/is-this-a-...


If this is a new piece of scientific research, why is there already product marketing around it? The website has named the product "Entropica", and has a demo video that is (while interesting) pure marketing.

New research ideas do not typically come packaged as a product for sale.


This isn't just about keeping options open. Entropy can be seen as a measure of available useful energy. It seems that it would be natural for human intelligence to have evolved to maintain as much useful energy as possible, since that would no doubt aid survival.

If that's true, I don't think it's so much that the developed process mimics intelligence as that both (human) intelligence and this process act to meet the same ends.


I think this research is the missing part of the puzzle of constructing truly intelligent (even sentient) artificial life.

We can't do it the way evolution did it, because evolution does not work on individuals and we don't have enough computing power to simulate a whole population of artificial human-sized brains and let them figure out on their own that they should maximize their available useful resources while still reproducing and evolving towards better architectures. That would lead to intelligence the same way real evolution reached it, but we have neither the power nor the time for that. Besides, even if we succeeded ... do we really want to share the world with thousands of AIs all trying to maximize the useful resources available to them? Competition breeds innovation, but it also breeds war.

If you design an AI and give it a specific goal you never get intelligence; you just get a useful machine that achieves the goal brilliantly. With this discovery we finally know how to set the goal to be "act intelligent".

If you artificially set the goal to be maximizing the possible futures that the AI can make real from the present point, we can get an entity that behaves in a way we recognize as intelligent.

We just have to remember to be really sure that this AI models the world correctly enough to know that more potential futures could be realized by keeping animals (including humans) around and ... you know ... not making them suffer.


I will just leave this here: http://xkcd.com/793


http://www.entropica.com/ has a very nice, short, video demonstrating the AI.


Completely blown away after watching that video by the sheer range of tasks this AI can accomplish on its own. Learning to walk, balance a stick, using tools, co-operating, playing pong, and...even buying stocks low and selling high! And all of this without being programmed or given a goal to do so. If true this could be a huge breakthrough indeed. Would really like to learn more about the core technology and will be following this more closely.


I'm far from being blown away. Researchers in AI have demonstrated AI systems with a "blank mind" that can learn by interacting with their environment given some reinforcement learning. They often work in simple toy examples, but once confronted with the real, complex, messy world, they fail. I found mostly two reasons: either their mechanism is too simple and can't solve anything but simple problems, or it's (computationally) too complex and doesn't scale.

So I'm not saying this approach is bad, but we have yet to see whether it does more than simple examples.


From what I can gather the difference here seems to be the claim that a simpler, more fundamental principle may explain the motivation behind all intelligent behavior. A tall claim no doubt - so I agree that it needs to be validated with more complex examples.


Where do you see it learning to walk? You can't equate the inverted pendulum problem with upright walking. It is neat if it can choose its own reference level ("goal") for a classical control problem -- but that's a long way from "learning to walk".


Granted that their model for upright walking is very basic. But if their basic premise is sound and valid and taking their claims at face value, it could potentially explain how more complex behaviors like walking may emerge from the simpler examples in their demo.


Thanks!


The connections are slim, but it sounds a bit like the NES AI, which played Mario pretty successfully just by "making bits go up" (more or less).


No, you are right. You can compare this to almost any AI that uses a similar strategy: predict the future and maximize some goal. The cool thing about this is that it defines its goal as something general-purpose enough to create interesting behaviors in a lot of different situations.

But the hard part of AI is actually predicting the future in the first place. Exploring a search tree with billions and billions of possibilities. Or worse, trying to figure out how the world works in the first place from a limited set of observations, then doing that. Figuring out the goal of the AI was never really the hard part.


> The cool thing about this is that it defines its goal as something general-purpose enough to create interesting behaviors in a lot of different situations.

Yep, that is the first thing that stood out to me about both the NES AI and this new paper. Both are defined in ways that are so far removed from the tasks they've been put to that they seem to seek out their own goals relevant to the context.


This seems remarkably similar to Jeff Hawkins' memory-prediction framework theory of the brain in his seminal work, On Intelligence[1].

[1]: http://www.amazon.com/gp/aw/d/B000GQLCVE/


This is interesting because it connects intelligence directly to the concept of life as a locally anti-entropic system (gravity can break an egg, but only a chicken can make a new egg). It implies that intelligence is an aspect of all life, which fits the latest research into animal behavior and intelligence.


>The proposal requires that a system be able to process information and predict future histories very quickly in order for it to exhibit intelligent behavior.

Well isn't that basically the problem with all attempts at artificial intelligence? Processing information quickly and predicting the future is far from simple.


It will be interesting to draw psychologists who research human intelligence, especially those who do so from an evolutionary psychology perspective, into this discussion. I aspired to be a physicist back in my high school days, and yet today spend most of my academic time among psychologists. It's good to see physics prompt some thinking about the nature of human intelligence.

A bibliography I like about current research on human intelligence from the classical and avant garde points of view in psychology:

http://en.wikipedia.org/wiki/User:WeijiBaikeBianji/Intellige...


I came here hoping to find intelligent commentary on the article. Instead, it's mostly uninformed criticism.

The most promoted comments are basically "I didn't carefully read or try hard to understand this paper, but everything I think I know up to now says that it can't be very interesting."

I don't really understand the paper either. But given that it was reviewed and published in a good journal, and given that it's by a guy who's evidently quite smart, and given that it seems carefully written, it surely deserves better than cursory dismissal.

If you want to go off half-cocked, try Reddit.


Science requires constant criticism especially when people claim grand things. Don't assume people here are idiots and have not read the paper.

Every year there are dozens of papers claiming a grand theory of AI.


I'm not assuming that. In fact, I assumed the opposite and it was after I read most of the comments that it became evident that most people here weren't engaging with the article in any kind of substantive or even remotely sympathetic way (and, believe it or not, good academic reading requires that one adopt the most charitable interpretation of the author's claims -- at least to begin with).

And please don't assume that I don't understand how science or AI theories work, either. If you email me, I'd be happy to explain my bona fides.


You should apply your criticism to yourself. The top-ranked commenter now has direct comments about the paper. You are just saying we should take the paper positively because of the author and venue, and you are talking about credentials rather than the content of the paper :)


Hardly. The words "nonsense" and "gibberish" are being bandied about -- anyone who uses those words for an article that's been reviewed and published in a good journal needs to have strong justification for such derision. No such justification was found. The criticisms sound not that different from politicians who condemn scientists for studying fruit flies and worms.

I never said to take the paper positively. Giving something a charitable reading is different from agreeing with it. If I stop on the road and say to you that I think there's gas nearby and ask you where it is, and you say "we're surrounded by nitrogen", then, yes, you have produced a correct parsing of the sentence. But you have not performed a good faith effort to understand what I'm trying to say on my terms.

The difference between a superficial reading of a text and a deep dive is obvious, at least to most academics. I had assumed it would also be obvious to the people who frequent this forum.


AI has always been full of such papers. That explains the attitude. People are just tired of snake oil. Note: I am not saying that this is snake oil.

If a new theory comes up and it is being reported in the press before anything convincing has been built, it is more efficient to be negative and suspicious.

But wait a minute, people here have actually read the paper. Look at the top comments.


Very interesting. The big idea, as I see it, is that intelligent systems try to maintain as many possible futures as they can - to keep their options open.

People often think that all living beings are driven by the need to reproduce, but the need to survive - to maintain homeostasis, to keep one's shape in a complex environment - is even more fundamental, I think.

If you extend the idea of living beings attempting to keep their own metabolism and structure fixed, you realise that if they have the capability, they'll also try and keep their environment fixed.


The need to reproduce is more fundamental, and there are plenty of examples in nature of animals sacrificing homeostasis in order to reproduce -- male black widow spiders, for example. It's obvious why this would be the case; that's how evolution works. Keeping your future options open is a nice heuristic, but it's not necessarily an end goal.


Reproducing captures more future states than not reproducing, since it allows some of your collected information to extend past your death.


You can explain almost any process with this theory, and that's why it's useless. Trying to "capture more future states" is not the reason certain traits evolve, they evolve simply because they are more likely to survive and reproduce.


Not exactly: there are things that work quite well with these ideas, and others don't. Check out my comments about "empowerment" further above for examples.

Thus, the authors' proposals are emphatically not vacuous - but they are probably not that new, either, if I have properly understood what they are doing.

Disclaimer: I am one of the authors of the empowerment papers.


Why any living thing reproduces at all is a much more fundamental question than e.g. why men have nipples or snakes don't have legs.


Because things that do reproduce quickly out-reproduce and out-compete things that don't.


>>>The big idea, as I see it, is that intelligent systems try to maintain as many possible futures as they can - to keep their options open.

Sounds both like the U.S. Constitution (which I've never considered an algorithm until now) and Nassim Taleb's idea of anti-fragility.


I emailed this article to Marko (http://markorodriguez.com/), and he summarized it nicely as, "Always ensure many degrees of freedom."


As referenced in the paper, Verlinde proposed that gravity is an entropic force:

http://en.wikipedia.org/wiki/Entropic_gravity

I highly recommend Verlinde's brilliant paper; it's an easy read:

http://arxiv.org/abs/1001.0785

The theory remains controversial, but it's a valuable attempt to put holographic principles center stage.


When I was developing a shape-recognition algorithm for my iPad app (Lekh Diagram), I thought in terms of entropy too. But my understanding is just the opposite: an intelligent system mostly tries to minimize entropy. It tries to find patterns and perform tasks using those patterns, which, IMO, minimizes disorder.
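A crude way to see what I mean, using compression as a stand-in for finding the pattern (this isn't my shape-recognition code, just an illustration):

  import zlib

  patterned = b"abcabcabcabcabcabcabcabcabcabc"   # regular, low-disorder data
  scrambled = b"qzjxkwmvbnlprtysghfdcaeuioqzjx"   # same length, no obvious pattern

  # The more structure the compressor finds, the shorter its description.
  print(len(zlib.compress(patterned)))   # well under the 30-byte original
  print(len(zlib.compress(scrambled)))   # about as long as the original, or longer

Recognizing the repeating pattern is what lets you describe the first string cheaply -- that's the "minimizing disorder" I have in mind.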


Does anyone have an ungated copy of the paper?





"If it has legs, it has legs!"

Whether or not you find this science rigorous, it is the kind of thing that leads to a "wow! my mind is blown" moment, à la so many smoky college dorm room nights.

Similar to the anthropic principle, these concepts are cool but hard to use.

Nevertheless, I do like the way this one gives us a new way of justifying the value of 'interesting' stuff that we might otherwise be inclined to discard.

For example: the impulse on HN to discard this research... whether or not it's valid, it is certainly an interesting example of the research's own point. If this stuff has legs, it'll have legs. Mind blown? Bleh, maybe for a second, right?


Just think, these guys could do for human behavior what Newton did for chemistry, Kelvin for geology, Seitz for oncology, Fomenko for history and what a number of other physicists have attempted to do for climate science.


  Keep your options open
I don't think it models intelligence, but maybe it models wisdom. The prediction part requires intelligence.

I can understand physicists getting excited over a simple mathematical model that explains a lot -- like Einstein thinking relativity was so beautiful it must be true. But the difference there was unexplained empirical data (the speed of light being constant in all directions), which we don't have for intelligence.

That is, it's not that we haven't solved the problem of intelligence; it's that we don't understand the question. (Sometimes DNA seems terribly wise.)


"Keep your options open" is also called "analysis paralysis" when it is unwise.


> Allowing the rod to fall will _lower_ the entropy of the system??


Yes. I don't know the exact dynamics of their toy problem, but thinking about it this way can help: when the rod falls over, it will stay there from then on. In terms of the number of possible future states, there are fewer possibilities for the system.
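A toy way to see it -- my own discretization, nothing to do with the actual dynamics in the paper: make the fallen positions absorbing and count how many distinct states are still reachable a few steps out.

  def reachable(state, steps):
      # States are angle buckets -3..3; +/-3 means "fallen", which is absorbing.
      if steps == 0 or abs(state) >= 3:
          return {state}
      out = set()
      for nudge in (-1, 0, 1):          # possible pushes per time step
          out |= reachable(max(-3, min(3, state + nudge)), steps - 1)
      return out

  print(len(reachable(0, 4)))   # balanced start: 7 reachable buckets
  print(len(reachable(3, 4)))   # fallen start: only 1

Fewer reachable states means lower entropy over futures, which is why the entropic force pushes the system away from letting the rod fall.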


This is a stunningly elegant proposal. It seems to apply to any field as well, which inspires confidence.

The idea of all Life as simply the Evolution fitness function over the maximization of Entropy...!


Unfortunately, I haven't had the time to delve into AI, but in an interesting conversation the other day we discussed one difference between (current) software and people: a code duplication checker would miss a number of things a human could look at and recognize as obviously similar, if not identical. Applying things like Bayes can help (to an extent), but the question is, where is the line drawn? If we get a piece of software that can automatically refactor, does that count as AI?


As an AI researcher, I rate this paper as CRANKY http://www.crank.net/about.html


I first read the sentence "it's not science as usual," as "it's not science, as usual."


I suppose they're right in a sense. Intelligence and entropy are like opposing forces.


How frustrating these days to hit a science paywall. The original article would be worth reading.


I believe that with enough creativity, everything can be modeled under thermodynamics.


In the diagram, I like how everything is approximated as a homogeneous sphere.


I think Rodolfo Llinás would somehow agree with this.


Ads on this site are of bikini girls... I don't really trust it.


everybody has a different definition of intelligence.

personally, i don't consider the vast majority of humans to be intelligent.

if we are trying to create artificial intelligence, it's probably a bad idea to use humans as a model.


This could be big.


It would be interesting if someone other than the author submitter had that opinion. Nice astroturf attempt, though, alexwg/alexvr.


Note to self: don't post an enthusiastic comment on someone's thread if they have a similar name. I was really puzzled as to why I got so many downvotes on that comment.

You're wrong, though. I'm not the "author submitter." Believe it or not, Alex is a common name.

And what is an "astroturf attempt"?



