Tool use and social cooperation, sounds interesting. Let's see, that seems to be figures 3 and 4 from page 4. Once you get over the idea of modeling an animal with a disc, it seems a little bizarre that figures 3 and 4 have anything to do with intelligent behavior. Figure 3 is about one disc bouncing off another disc so that a third disc goes into a cylinder. I would say that doesn't really capture the essence of how "non-human animals" use tools. Then the social cooperation example is about a big disc attached to a string that moves differently when 2 smaller discs touch it at the same time, and under some specific configuration the discs exhibit "social cooperation" to move the big disc. As far as I can tell, whether moving the big disc is an intelligent decision seems not to matter in this example.
We then get very bold claims, such as "physical agents driven by causal entropic forces might be viewed from a Darwinian perspective as competing to consume future histories."
That's a very simple model that can be used, in that example, to make a number of ghost agents work together to converge on a pac-man, splitting themselves up to cover every exit. The code to do it is very small, smaller than A*, could also be used to do pathfinding, and is based essentially on the ideas of entropy.
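For the curious, the flavor of that ghost algorithm can be sketched in a few lines. This is my own toy reconstruction, not the actual code: each agent greedily steps toward the cell from which the most distinct future positions remain reachable, which serves as a crude stand-in for maximizing path entropy.

```python
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def reachable(grid, pos, steps):
    """Set of cells reachable from pos within `steps` moves.
    Assumes the grid is bordered by '#' walls."""
    frontier = {pos}
    seen = {pos}
    for _ in range(steps):
        nxt = set()
        for (r, c) in frontier:
            for dr, dc in MOVES:
                nr, nc = r + dr, c + dc
                if grid[nr][nc] != '#' and (nr, nc) not in seen:
                    nxt.add((nr, nc))
        seen |= nxt
        frontier = nxt
    return seen

def entropic_move(grid, pos, horizon=5):
    """Greedy step: pick the neighbor that keeps the most distinct
    future cells reachable (a crude proxy for path entropy)."""
    best, best_score = pos, -1
    for dr, dc in MOVES:
        nr, nc = pos[0] + dr, pos[1] + dc
        if grid[nr][nc] == '#':
            continue
        score = len(reachable(grid, (nr, nc), horizon))
        if score > best_score:
            best, best_score = (nr, nc), score
    return best
```

Agents driven this way drift toward junctions and open areas (where many futures stay open), which is what produces the exit-covering behavior without any explicit coordination.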
I don't think that claim is very bold at all; it is merely an observation. The idea that the algorithm tries to maintain entropy is very much like life as we know it. Life as we know it consists of mechanisms that, despite the continuous increase of entropy in the universe around them, try to achieve and sustain local pockets of low entropy.
In less intelligent animals, the mechanism to resist the increase of entropy is defined genetically. They cannot adapt to new environments because their decision making is rigidly defined in their genes (i.e., they are like simple computer programs).
In more intelligent animals, like ourselves, instead of deriving our decisions (mainly) from our genes, we derive them from an organ (whose design is derived from genes) that we call the brain (I've heard there are more decision-making organs). Separating the decision making from the DNA is what makes us so highly adaptable to new environments.
Therein lies our likeness to this algorithm: like us, it can sustain its entropy no matter what the environment is (as long as it's got the tools to do so).
I don't think it is fair to say that our brain makes us more adaptable to new environments than other species. Compared to the many single-celled species that have survived hundreds of millions of years, the bacteria that can survive hibernation for millions of years, and our current inability to agree on the existence of global warming, is it realistic to project the survival of our species beyond even the next thousand years?
I think the battle between complex organisms and simple organisms is an ongoing one. At the moment we rely on the simple organisms on our insides to protect us against the simple organisms on the outside. If there is any threat to the survival of our species it lies there. A nuclear holocaust or a few degrees of global warming isn't going to do it, unless you define 'our species' as 'inhabitants of new york'.
Whether we are more intelligent than some other intelligent animals depends on how you measure intelligence. If you define it as having many offspring on many continents, I'm not sure ants would win, but I'm not sure humans would either.
And this is deeply linked to language, as language is a way to receive information you don't already have and to steer your thought streams (or not) along it, making predictions that are not based on experience. In my opinion it is very important to make this distinction between predictions learned by experience and predictions learned from books. I think we here can measure this difference very well.
To me it looks like this added layer of brain cells we have acts as an observer of life happening on the floor below, kind of a god of your own brain, acting not on experience-learned behaviour but solely on a combination of representations learned from others (connections created willfully from others' discourse) and abstractions made from adaptive learning done on signals coming from the layers below (I envision an isolated system whose only object of observation is regular thoughts and whose only sense is signals coming through connections to the layers producing those thoughts).
It feels like what sets us apart is first this simple fact that we learn from our own thought patterns. This enables us to make what (at least) feels like free decisions based on abstracted representations and not mere immediate experience. This enables us to think outside the box of our own experiences and form models of others'. This enables us to deliberately picture in our minds things that we have never experienced, or things that cannot be experienced (think abstract models in the sciences). This finally enables us to accept information from our peers as valid signals to model our thought-paths on. If this layer were anything real, to me it would mainly be an experience- or information-sharing device: the thing that makes possible, and makes useful, the kind of social structure we know and live in.
I may be rehashing common knowledge, I don't know.
However, while the idea is not new, it does seem like he made a software implementation of some aspects of it, and that could be interesting.
I find the link between thermodynamics and intelligence very interesting. Here is how I see it. First, the best definition of intelligence IMO is the ability to predict the unknown (whether because it's hidden or because it's in the future) from current knowledge.
In order to have 'intelligence' you have to have some information in your head. That is to say, a part of you has to be physically correlated with a part of the world, some particles in your head have to have patterns that approximately and functionally 'mirror' part of the world.
Thermodynamics is about order, and order is also about particles having properties that correlate with each other. This means that in a low-entropy situation, knowing something about a particle tells you something about some other particles. Intelligence would be useless if the universe had too high entropy. You can't get much insight from white noise even if you are very smart.
There is a saying that "knowledge is power". This is truer than you might think. It is true in a very physical sense.
For example, take the thermodynamics textbook example of a container with a gas on one side and a void on the other. If there is no barrier between the two sides, the gas will move to fill the void and settle in the higher-entropy state of evenly filling the space. Thermodynamics says that you would need to spend energy to push the gas back to one side.
However, if you were to put a wall in the middle of the container with a little door large enough to let a molecule through, and you knew exactly the position and velocity of each molecule in the container, you could open the door exactly when a molecule is about to pass from, say, left to right, and close it when one is about to pass from right to left. Using very little energy, you could get all the molecules to end up on one side of the container.
This should violate the second law of thermodynamics, but it does not! Why is that, you ask? It's because the knowledge you have of the positions and velocities of all these molecules is a source of low entropy. Knowledge is low entropy, and the correlation between the particles in your head and the real world is what allows you to make predictions and extract useful energy from things.
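The demon's trick is easy to simulate. Here is a toy 1-D sketch of my own (assuming point molecules with fixed speeds in a box [-1, 1] and a frictionless door at x = 0; obviously not a real physics simulation):

```python
import random

def run_demon(n=100, steps=2000, seed=0):
    """Toy Maxwell's demon: the demon knows every position and velocity,
    and opens the door at x = 0 only for left-to-right crossings."""
    rng = random.Random(seed)
    pos = [rng.uniform(-1, 1) for _ in range(n)]
    vel = [rng.choice([-1, 1]) * rng.uniform(0.01, 0.05) for _ in range(n)]
    dt = 1.0
    for _ in range(steps):
        for i in range(n):
            new_x = pos[i] + vel[i] * dt
            # bounce off the outer walls
            if abs(new_x) > 1:
                vel[i] = -vel[i]
                new_x = pos[i] + vel[i] * dt
            # door logic: left-to-right crossings pass freely,
            # right-to-left crossings bounce off the closed door
            crossing = pos[i] < 0 <= new_x or new_x < 0 <= pos[i]
            if crossing and vel[i] < 0:
                vel[i] = -vel[i]
                new_x = pos[i] + vel[i] * dt
            pos[i] = new_x
    return pos
```

After enough steps every molecule ends up on the right side, with the demon never pushing anything: the sorting is paid for entirely by the demon's knowledge of the microstate.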
Note that an interesting comp sci consequence of this is that low-entropy information is easier to compress, and the better a system is at predicting data from other data, the better it is at compressing, so there is also a tight link between compression algorithms and artificial intelligence. But that's another story.
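You can see that link directly with any off-the-shelf compressor, e.g. in Python:

```python
import random
import zlib

random.seed(0)
n = 100_000
# fully predictable data: one repeated byte
low_entropy = bytes([65] * n)
# incompressible noise: uniformly random bytes
high_entropy = bytes(random.randrange(256) for _ in range(n))

print(len(zlib.compress(low_entropy)))   # a few hundred bytes at most
print(len(zlib.compress(high_entropy)))  # roughly n bytes: no pattern to exploit
```

The predictable stream collapses to a tiny fraction of its size, while the random stream barely shrinks at all; a perfect predictor and a perfect compressor are two views of the same thing.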
I think if he had been alive at the right time, these would have been blog posts. Before reading them, I had taken an intro class in thermodynamics which left me completely confused.
Read THE EVOLUTION OF CARNOT'S PRINCIPLE ( http://bayes.wustl.edu/etj/articles/ccarnot.pdf )
for incredible insights on how Carnot pioneered thermodynamics by trying to optimize steam engines.
Also if you think you dislike statistics and probabilities but you like math in general his book might change your mind: Probability Theory: The Logic of Science.
Free draft: http://omega.albany.edu:8008/JaynesBook.html
In fact, understanding his stance on probabilities, the mind projection fallacy in particular, might be a prerequisite to understanding thermodynamics; the fundamental point being that entropy is not really a direct property of matter but more of a meta-property about knowledge or information, which here means correlations across aggregate matter.
Anyway, I kind of object to your definition of intelligence as "the ability to predict the unknown". Of course it comes down to personal preference, but I feel that when it comes to making AI, a better definition is something along the lines of "intelligence is the ability to make good choices when faced with a decision". In other words, I like to come at AI using the definition of rational agent.
Machine learning is the subject that involves predicting the unknown, and there is a ton of linear algebra involved. I am not an expert at ML, so it's possible that the second law in some form plays a role, but if so I have not come across that yet in my studies.
Now, in as much as the second law applies to AI, I am skeptical, but your comment has convinced me to keep an open mind and look into it more carefully. Entropy is merely the expected negative log of the probability distribution, so when we talk about maximizing entropy for decision making, what probability distribution should we use? Should we use different distributions for different situations, and if so, how do we decide which one to use?
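For concreteness, here is the quantity in question, computed for a given discrete distribution (plain Shannon entropy, in bits):

```python
from math import log2

def shannon_entropy(p):
    """H(p) = -sum_i p_i * log2(p_i): the *expected* negative
    log-probability of a discrete distribution, in bits."""
    assert abs(sum(p) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(pi * log2(pi) for pi in p if pi > 0)

print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # uniform over 4 outcomes: 2 bits
print(shannon_entropy([1.0]))                     # a certain outcome: 0 bits
```

Which makes the question sharp: "maximize entropy" is only well defined once you say entropy *of what*, i.e. which distribution over which states.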
A much more relevant subject for AI in my opinion is game theory, which is really a theory of decision making. Finding the optimal strategy for navigating a decision tree can be a hard problem, and just because maximizing entropy works to find solutions for some types of decision problems, doesn't mean that it's a magic bullet that will always, or even frequently, work out.
I love it. Mine is even more constructivist (and Wittgensteinian). Something like "intelligence is the ability to detect patterns and act accordingly."
The problem, as many philosophers have noted, is that such definitions -- in fact, most definitions -- make it hard to argue that the thermostat is not intelligent (or conscious, or sentient). And that, of course, is at the heart of Turing's most famous argument.
I, myself, find it hard to argue my way around what is essentially a "walks like a duck, talks like a duck" argument when it comes to these matters -- which is to say, defining something in terms of its activities, behaviors, likenesses. That perhaps tilts practical AI toward the statistical, though, and it's hard for me to believe that that's the whole story.
I have never presented the arguments back-and-forth on this issue without watching my students collapse into confusion and desperation.
I split your definition in two parts and only labelled the first part 'intelligence'.
part 1: Predicting the unknown which can be the probabilities of outcome of your actions and such.
part 2: Evaluating these outcomes and their likelihoods to make a decision based on some utility or preference function.
I can see contexts where the word 'intelligence' would encompass both parts. In the context of thermodynamics or information theory, I think intelligence is more about the prediction part, because pure matter or even computers don't really have a preference over outcomes, so the second part is mostly irrelevant to their 'intelligence'.
The original article seems to take low entropy as a stated goal or preference of the system, which is not a bad subgoal in that it is thermodynamically efficient and saves usable energy longer, but I don't think being thermodynamically efficient should be the only goal of an intelligent system.
What defines a good choice? What defines a bad choice? Saying "A true machine intelligence is one that makes good decisions" is a tautology -- it doesn't define anything. Rather, it just reframes one vague question into another.
Intelligence is notoriously difficult to describe, so don't feel bad about it if you keep thinking of circular definitions. It's a tricky topic.
Also, IMO there's a big problem with rational agents: In order to be actually useful/implementable, they must be defined in a specific logical framework. You can define a rational agent in a vague sense without any specific domain, but doing so does not solve any problems -- it again just results in tautology. But the problem with choosing a logical framework is shown by Godel's famous theorem. Simply put, any such agent in a well-defined domain will be confined to that domain of reasoning. To me, a machine that excels in some area of planning but can never think outside the box, is not intelligent on the same level as humans.
Anyone advocating rational agent AI as true machine intelligence IMO will need to either find a flaw and disprove Godel's incompleteness theorem, or provide some system of logic that encapsulates all human reasoning (absurdly impossible IMO).
On the other hand, I personally think true machine intelligence will be solved much more "organically", far separated from formal logic and more related to fuzzy pattern matching than rigid formal optimization. I really like this definition of intelligence:
Though I wouldn't claim it's the ultimate one, as it's still a bit vague. I think the key lies in what he describes as "cross domain" -- what is colloquially called "thinking outside the box", because this is the only thing humans seem extremely good at that every computer AI to date has failed at.
Humans are capable of transcending formal logical systems and finding new truths that are unprovable from the original formal system. How is this possible? Godel proved that a formal system cannot ever determine this by itself, from axioms, without being inconsistent.
IMO a true machine intelligence will also need to be capable of this formal-system-transcending property (which you can also call "cross domain thinking", or "thinking outside the box", or "creativity", or whatever).
I do agree with your statement that intelligence is difficult to describe. And for the record I never claimed the rational agent approach will achieve "true machine intelligence". Frankly, I don't even know what "true" machine intelligence is or would be. What I am very interested in however is how to create an AI that is capable of fooling a person into thinking it's intelligent, in other words an AI that can pass the Turing test. For that purpose I find the concept of a rational agent to be very helpful, in that it provides a solid foundation to work from.
I'm also not worried about Godel's theorem at all. The statements that are undecidable are very obscure and basically never come into play in human decision making. I think Godel's result has very little practical application, if it has any at all; I certainly don't think it's relevant for AI. I agree that choosing a logical framework to work in presents many problems, but I don't agree that you have to disprove Godel's theorem to create an AI that can think outside the box.
Of course any simulation is implemented in some formal system, but that's not what I'm saying. I'm saying the inherent problem with a rational agent is that they maximize utility within a formal system -- that is what they do, that is their entire purpose. Once we start talking about rational agents that can "think outside the box", or think beyond the formal system in which they're defined, we're no longer talking about rational agents.
I agree with your belief that we may some day build AIs that think outside the box, I just don't see how it can be a rational agent. I think it will be much closer to a neural network, or probabilistic reasoning model, or some extremely "organic" or "fluid" device from which intelligent behavior organically emerges. Because if intelligence emerges from a strictly formalized framework, then Godel's theorems prove some very crippling limitations on what's possible within that fixed framework.
FYI there is definitely a connection between Godel's theorem and intelligence, though it may not be immediately obvious. I would highly recommend this as fun learning material: http://ocw.mit.edu/high-school/courses/godel-escher-bach/
The problem with this is it won't really produce a "general intelligence" that can think outside the box, because it will always be maximizing some utility function defined in some rigid formal system. In other words, it will be completely unable to "understand" things outside of the formal system in which you define it.
The existence of "unprovably true statements" is less a weakness in logic's ability to organize the universe, and more a sign that there are always less-meaningful yet constructible statements, like "This statement is not provable."
Prenote: I apologize in advance if my entire response is off-topic, because my English parser was completely unable to take anything meaningful from this sentence.
When you have a system of axioms, both consistent and complete, which proves all useful properties of arithmetic and algebra, let us know.
Whether or not you are able to understand the depth of the connection between Godel's theorems, formal systems, self-similarity, and intelligence, you can't deny that the more statistical / 'organic' AIs are overwhelmingly more successful than rigidly logical ones. For example, try implementing voice recognition analytically, and let us know your results (have fun with that).
The point is that although some people would like to believe otherwise, human brains (and nonhuman brains) are NOT logical reasoning machines. Humans are not rational agents. It still surprises me how many people are desperately grasping to rational agents and similar frameworks to try to build general intelligences entirely from logic. I point out Godel's theorems as one type of insight into why this road will lead you nowhere. You're free to pursue it as much as you like, just as animals are free to run in circles chasing their tail.
You can't build an AI capable of thinking outside the box, by specifically building that AI in the box of a particular logical framework. Logic itself has limitations. Logic is powerful, but not entirely in of itself -- logic is powerful as a tool extended by the mind of its creator(s). The more meaningful question in AI, is how do we create logical concepts to begin with?
It could always be true: physics really does have a lot to contribute in many areas! But the burden of proof is very, very much on the physicists to justify the relevance of their approach.
In this case, based only on this little summary, I have to wonder whether they are really explaining intelligence or whether they are identifying some broader natural principle for which intelligent behavior is just one manifestation. There may be something quite deep here, but I'm holding off on deciding what exactly it is.
HNers need to be more critical of Science articles. This is where we need good criticism, not on some poor hacker's pet project.
"Additionally, a company he founded is exploring commercial applications of the research in areas such as robotics, economics and defense."
That is a major red flag when people posit a universal AI theory and try to sell hapless governments tech based on it.
Thank you! I've been saying that for a long time, but much less eloquently.
As I've already indicated, there's reason to be skeptical when physicists stretch past the usual boundaries of the discipline. (Though it still could make important contributions.) But modern physics as modern physics is, by and large, solid science.
What I meant was physics which is closer to the Time Cube theory than rock solid mathematics on a spectrum of math and science.
"As I've already indicated, there's reason to be skeptical when physicists stretch past the usual boundaries of the discipline."
There is reason to be skeptical when anybody proposes a grand unifying theory for something as grand and difficult as intelligence!! Extraordinary claims require extraordinary evidence!
This has been going on since 1957, with so many young lives lost (sometimes literally) to techniques bordering on charlatanism. Sorry to be harsh; I love AI and want it to succeed. It won't succeed if people mix pursuing knowledge with pursuing fame and $.
(For some definitions of "turtle", "dead", "God", and most especially "is".)
That's bad, because the most interesting breakthroughs came from cross-disciplinary work.
The cross-disciplinary breakthrough is often characterized as unifying, and hence rightfully celebrated. But such breakthroughs are neither the norm nor the most influential.
Crick, Hodgkin and Huxley were all physicists poaching in other fields. In fact, relatively few of the people who laid the foundations in molecular biology were trained as biologists.
This recollection is incorrect across the board.
Crick had only a bachelor's in physics (which does not a physicist make), and his postgraduate training was in medical research. Huxley, an autodidact, had his background in medicine. I'm not sure about Hodgkin, but I suspect he was also not a physicist.
I admit that Hodgkin and Huxley are harder cases: both had early inclinations to physics and then got sidetracked into physiology. Both of them spent the war doing things you'd have a physicist doing during a war, and afterwards did their famous work on the action potential. I read the Hodgkin and Huxley papers, but I read them in a course in the math department; they contain some of history's finest examples of mathematical modeling, relying heavily on dynamical systems and circuit theory. It is a regrettably rare biologist who wants to go near those papers. It is manifestly not the work of traditionally trained biologists.
Disciplinary boundaries in mid-century Britain were somewhat different then, and I realize the danger of playing No True Biologist. But the mainstream of biology is only very recently acknowledging the importance of the physical approach: Hodgkin and Huxley could have been Hodgkin and Huxley without a lick of genetics or ethology, but they couldn't have been Hodgkin and Huxley without the cable equation.
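For anyone curious what those papers look like in modern terms, here is a minimal forward-Euler integration of the Hodgkin-Huxley membrane equation with the standard 1952 parameters. This is a toy sketch of the model, not a reproduction of their numerics:

```python
from math import exp

def hh_spike_peak(I_ext=10.0, dt=0.01, t_max=50.0):
    """Integrate the Hodgkin-Huxley equations (units: mV, ms, uA/cm^2)
    and return the peak membrane voltage reached."""
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.387
    # voltage-dependent rate functions for the gating variables m, h, n
    a_m = lambda V: 0.1 * (V + 40) / (1 - exp(-(V + 40) / 10))
    b_m = lambda V: 4.0 * exp(-(V + 65) / 18)
    a_h = lambda V: 0.07 * exp(-(V + 65) / 20)
    b_h = lambda V: 1 / (1 + exp(-(V + 35) / 10))
    a_n = lambda V: 0.01 * (V + 55) / (1 - exp(-(V + 55) / 10))
    b_n = lambda V: 0.125 * exp(-(V + 65) / 80)

    V, m, h, n = -65.0, 0.05, 0.6, 0.32  # approximate resting state
    peak = V
    for _ in range(int(t_max / dt)):
        # the "circuit" part: three ionic currents in parallel
        I_ion = (gNa * m**3 * h * (V - ENa)
                 + gK * n**4 * (V - EK)
                 + gL * (V - EL))
        V += dt * (I_ext - I_ion) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        peak = max(peak, V)
    return peak
```

With a sufficient injected current the voltage spikes well above 0 mV; with no input it sits near rest. The whole thing is a nonlinear circuit model, which is exactly why it reads like physics and dynamical systems rather than traditional biology.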
I would also add Max Delbruck to your list of turn of the last century physicists cum molecular biologists and the physicist members of the RNA tie club: http://en.wikipedia.org/wiki/RNA_Tie_Club.
I also have a question for the GP. What exactly makes a physicist so? I have "merely" a master's degree in physics, and although I consider myself to be a theoretical biologist, I am often called a physicist by my collaborators (some of whom are physicists themselves). What about a degree actually matters in this descriptive title?
And Fisher was a statistician.
I'm glad physicists can try their luck at AI. I'd be glad other disciplines, even remote, do too. Foreign concepts often bring with them new outlooks on problems that are very fertile and rejuvenating. A little bit more humility wouldn't hurt though.
True, but I think the signal-to-noise ratio is also exceptionally low. The most amazing breakthroughs come from insights that span disciplines, but it's also easy to think you have an insight when what you really have is a mistake. I'm not weighing in on this one in particular, but as a general principle I think caution is appropriate in cases like this.
"My proposal is that intelligence is a process that attempts to capture future histories"
'future histories' is just ... the future! .... so his great proposal is 'intelligence predicts the future'? Well duh!
I don't see anything new here except the phrase 'future histories' which is not even defined because it just means 'the future'.
As soon as I saw it claiming it can win at the stock market I knew it was BS.
Which is generally good advice, but only "in the absence of a specific goal".
I've got like, an undergraduate understanding of how this works. Does anyone have a link to an ungated copy?
I got as far as "the model of this physical process of entropy can act as a search function for lower entropic states, which given some poking, can resemble problem solution states".
It's the poking part that is problematic.
>To better understand this classical-thermal form (11) of causal entropic forcing, we simulated its effect [12,13] on the evolution of the causal macrostates of a variety of simple mechanical systems: (i) a particle in a box, (ii) a cart and pole system, (iii) a tool use puzzle, and (iv) a social cooperation puzzle
I am wildly unqualified to comment on this paper, but it's suspicious that you would jump to a "tool use puzzle" and a "social cooperation puzzle". It yields a really fun result and a fun way of talking about cognition, but it seems unsurprising to say that once you describe a problem in the physical terms of this framework, your framework can then solve the problem, as in,
>We modeled an animal as a disk (disk I) undergoing causal entropic forcing, a potential tool as a smaller disk (disk II) outside of a tube too narrow for disk I to enter, and an object of interest as a second smaller disk (disk III) resting inside the tube
The notion that currently understood physical models can act as a learning system, given the right parameters, is really cool and exciting, and there's a great simplicity and metaphoric appeal to it.
More towards the topic, I'd say these kinds of articles have to be taken with a grain of something, preferably salt or cynicism, depending on taste. I'm usually more of a materials science/electronics technology buff, and news articles in that area will not hesitate to present a relatively minor paper as a breakthrough in battery technology that will solve all our energy problems in the next 5 years. But then you read on, compare it against the existing technologies, and you realize that they just did something funny to the diodes that increases power density at the cost of usage time. Somewhat handy in some applications, but not a game changer.
PS: You're certainly not the youngest here.
So, they generalized the method to find a predictable future, but in very unstable states. Later, Der and Ay generalized this toward maximization of predictive information: for that, it is not sufficient to predict the environment; there must also be something nontrivial to predict. This gives rise to a number of highly interesting behaviours in many of their scenarios.
So, yes, your idea is a good one...
I'm not sure I'm right, but this logic works for me...
If there wasn't, why would, say, French soldiers be exchanging their wine and pate de foie gras MREs, especially after having satisfied their curiosity about what other MREs taste like, i.e. after the first trade?
There is actually a line of research where channel capacity of the actuation/perception channel is being used to model that (it's also named "empowerment", in analogy to the social science term, but it is measured in terms of information theoretical quantities). Here, one does not only model the entropy of the future, but in fact that amount of entropy about the future that the agent can systematically control. Or, more precisely, how well the agent is able to systematically control its future (which allows it maximal future entropy - but only entropy it can actually itself generate!)
See e.g. Klyubin et al. 2005, 2008, Capdepuy et al. 2007 and later, Jung et al. 2011, Salge et al. 2012. Indeed, maximizing the control over potential futures (empowerment) seems to work well for a range of scenarios, including the discovery of points of high centrality in mazes and graphs, as well as pole-balancing, acrobot balancing, bicycle balancing examples, in short, survival scenarios; also for discovering new object manipulation modes, driving sensor evolution models, self-organizing multiagent collective behaviour and a few other cases. It does not work well for systems with prespecified goals, systems with funnels/bottleneck transitions, in short, for cases where you have to relinquish potential futures to commit to a decision. This needs to be treated with a different principle.
The information-theoretic treatment takes into account action routes with controllable dynamics, it will avoid state space regions with more noise and thus less controllability. Importantly: it is the "potential" to do something, it does not enforce to actually carry out the option. Having the option is all that counts. Like in chess, the "threat" is more effective than the "carrying out".
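To make that concrete: in a fully deterministic world, the capacity of the channel from n-step action sequences to final states collapses to the log of the number of distinct reachable states. Here is a toy n-step empowerment in that spirit (my own illustration, not the code from those papers; the grid world and action set are made up for the example):

```python
from itertools import product
from math import log2

# five actions: four moves plus "stay"
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def step(grid, state, action):
    """Deterministic transition: move if the target cell is open."""
    r, c = state[0] + action[0], state[1] + action[1]
    return (r, c) if grid[r][c] != '#' else state

def empowerment(grid, state, horizon=3):
    """n-step empowerment in a deterministic world: channel capacity
    from action sequences to final states, which here reduces to
    log2 of the number of distinct reachable final states."""
    finals = set()
    for seq in product(ACTIONS, repeat=horizon):
        s = state
        for a in seq:
            s = step(grid, s, a)
        finals.add(s)
    return log2(len(finals))
```

In an open room the center scores higher than a corner, which matches the observation that empowerment maximization discovers points of high centrality. The noisy case needs the full channel-capacity computation (e.g. Blahut-Arimoto) instead of this counting shortcut.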
Disclaimer: I am one of the authors of the empowerment work.
I read Zarathustra with a goal of non-casual reading. It took me 6 months, due to what is IMHO the complexity of the issues discussed: one has to stop at every paragraph to ponder whether one's understanding is correct, and how it articulates with the other paragraphs.
If it indeed takes more time on average to read Nietzsche than any other author, there might be an adverse selection against reading Nietzsche's work in order to maximise utility, if one judges utility by the number of books read, as I frequently see in goal lists (1 book a week, etc.).
A consequence could be that among those who are discussing his work, especially outside philosophy (CS, math, etc.) very few people have actually read any of it - or a whole book.
Which is interesting, because Nietzsche is generally seen as one of the more accessible of the philosophers from around that period.
I highly recommend reading Arthur Schopenhauer's work as well, which heavily influenced Nietzsche. It predates Nietzsche's work by about 30 years, but it is very similar (although much more nihilistic).
As the earlier discussion about chess suggests, interactions with other intelligent agents complicate matters. An intelligent agent shouldn't just attempt to maximise the number of possible states -- such an agent may well choose to be passive, and allow the other agent to run the show, which isn't a very useful outcome. Rather, an intelligent agent should attempt to maximise the number of outcomes WHICH IT CAN DETERMINE. And that is the will to power.
Well, for one I like this article. I wonder if we developers could use entropy instead of debt to explain the necessity of refactorings and abstractions.
When you say that there is a lot of technical debt, the boss would assume that it is like financial debt: as long as you are still running forward, you can go to the banks and fix the issue with them. It is reversible.
But technical "debt" is not reversible. One wrong lazy "if customer = John Doe" (instead of a proper configurable flag) paves the way for other wrong ifs, and it very quickly becomes an irrecoverable mess. This is more like a system's growing entropy: a broken mirror won't reassemble, and spilled water won't come back to the bowl, as the Chinese say (for a lost hymen).
Using entropy instead of debt, provided people understand the concept#, we could make the case by showing that increasing the complexity of a system the wrong way (adding ifs because "no time") drastically and irreversibly reduces the number of possible futures.
# Obviously the weakness of my point...
The "technical debt" term was created to explain the situation in a way that your boss (who knows a bit about money) could understand and draw useful analogies from. Framing it in terms of entropy, although more precise, won't achieve any of those goals.
- Heidegger/World-disclosure (http://en.wikipedia.org/wiki/Being_and_Time)
- Nietzsche's will to power (http://en.wikipedia.org/wiki/Will_to_power)
- Process and Reality (http://en.wikipedia.org/wiki/Process_and_Reality)
UPDATE: Evidently several smart people have had this intuition. I just remembered who said, "Never make a decision until you have to" -- it was Randy Pausch in the "The Last Lecture" (http://www.youtube.com/watch?v=ji5_MqicxSo).
And "delay commitment until the last responsible moment" is also an agile principle (http://www.codinghorror.com/blog/2006/10/the-last-responsibl...).
The concept of homeokinesis is also quite similar, and is likewise not cited in this work:
"Finally, we conclude that it would be desirable to transfer some of the generality of the underlying empowerment concept into methods for empowerment calculation in systems combining continuous and discrete dynamics, especially systems of real-world relevance."
This work claims that there is no goal in their implementation of intelligence, but forgets the goal of "capturing as many future histories as possible". With the right amount of tolerance for vagueness, it is not that hard to convince oneself that this goal can explain a lot of behaviors. But saying the same thing with new words is not very useful in cognitive science.
A more interesting question to me is this: could Dr. Wissner-Gross use his own theory to explain physicists' (including his own) obsession with cognitive science, a field about which they know next to nothing?
New research ideas do not typically come packaged as a product for sale.
If that's true, I don't think it's so much that the developed process mimics intelligence as that both (human) intelligence and this process act to meet the same ends.
We can't do it the way evolution did, because evolution does not work on individuals, and we don't have enough computing power to simulate a whole population of artificial human-sized brains and let them figure out on their own that they should maximize their available useful resources while still reproducing and evolving towards better architectures. That would lead to intelligence the same way real evolution reached it, but we have neither the power nor the time for that. Besides, even if we succeeded... do we really want to share the world with thousands of AIs, all trying to maximize the useful resources available to them? Competition breeds innovation, but it also breeds war.
If you design an AI and give it a specific goal, you never get intelligence; you just get a useful machine that achieves that goal brilliantly. With this discovery we finally know how to set the goal to be "act intelligently".
If we artificially set the goal to be maximizing the possible futures the AI can make real from the present point, we can get an entity that behaves in a way we recognize as intelligent.
We just have to remember to be really sure that this AI models the world correctly enough to know that more potential futures could be realized by keeping animals (including humans) around and... you know... not making them suffer.
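As a toy sketch of that goal (my own illustration, not the paper's actual causal-entropic-force algorithm), an agent can count the distinct states reachable within a short horizon after each candidate move, and pick the move that keeps the most futures open:

```python
# Toy "keep your futures open" agent on a small grid (illustrative only).
from collections import deque

GRID = [
    "#######",
    "#.....#",   # a corridor: dead end to the left...
    "#####.#",
    "####..#",   # ...an open pocket down to the right
    "#######",
]
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def neighbours(pos):
    r, c = pos
    for dr, dc in MOVES.values():
        if GRID[r + dr][c + dc] == ".":
            yield (r + dr, c + dc)

def reachable_states(start, horizon):
    """Count distinct cells reachable from `start` within `horizon` steps (BFS)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        pos, depth = frontier.popleft()
        if depth == horizon:
            continue
        for nxt in neighbours(pos):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return len(seen)

def best_move(pos, horizon=3):
    """Pick the legal move whose successor state keeps the most futures open."""
    options = {name: reachable_states((pos[0] + dr, pos[1] + dc), horizon)
               for name, (dr, dc) in MOVES.items()
               if GRID[pos[0] + dr][pos[1] + dc] == "."}
    return max(options, key=options.get)

print(best_move((1, 3)))  # "right": the open pocket beats the short dead end
```

With no goal beyond counting futures, the agent already "prefers" open space over dead ends, which is the flavour of behaviour the comment above is describing.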
So I'm not saying this approach is bad, but we have yet to see whether it can do more than these simple examples.
But the hard part of AI is actually predicting the future in the first place. Exploring a search tree with billions and billions of possibilities. Or worse, trying to figure out how the world works in the first place from a limited set of observations, then doing that. Figuring out the goal of the AI was never really the hard part.
Yep, that is the first thing that stood out to me about both the NES AI and this new paper. Both are defined in ways that are so far removed from the tasks they've been put to that they seem to seek out their own goals relevant to the context.
Well isn't that basically the problem with all attempts at artificial intelligence? Processing information quickly and predicting the future is far from simple.
A bibliography I like about current research on human intelligence, from the classical and avant-garde points of view in psychology:
The most promoted comments are basically "I didn't carefully read or try hard to understand this paper, but everything I think I know up to now says that it can't be very interesting."
I don't really understand the paper either. But given that it was reviewed and published in a good journal, and given that it's by a guy who's evidently quite smart, and given that it seems carefully written, it surely deserves better than cursory dismissal.
If you want to go off half-cocked, try Reddit.
Every year there are dozens of papers claiming a grand theory of AI.
And please don't assume that I don't understand how science or AI theories work, either. If you email me I'd be happy to explain my bona fides.
I never said to take the paper positively. Giving something a charitable reading is different from agreeing with it. If I stop on the road and say to you that I think there's gas nearby and ask you where it is, and you say "we're surrounded by nitrogen", then, yes, you have produced a correct parsing of the sentence. But you have not performed a good faith effort to understand what I'm trying to say on my terms.
The difference between a superficial reading of a text and a deep dive is obvious, at least to most academics. I had assumed it would also be obvious to the people who frequent this forum.
If a new theory comes up and is being reported in the press before anything convincing has been built, it is more efficient to be negative and suspicious.
But wait a minute, people here have actually read the paper. Look at the top comments.
People often think that all living beings are driven by the need to reproduce, but the need to survive - to maintain homeostasis, to keep one's shape in a complex environment - is even more fundamental, I think.
If you extend the idea of living beings attempting to keep their own metabolism and structure fixed, you realise that if they have the capability, they'll also try and keep their environment fixed.
Thus, the authors' proposals are emphatically not vacuous - but they are probably not that new, either, if I have properly understood what they are doing.
Disclaimer: I am one of the authors of the empowerment papers.
Sounds both like the U.S. Constitution (which I've never considered an algorithm until now) and Nassim Taleb's idea of anti-fragility.
I highly recommend Verlinde's brilliant paper, it's an easy read:
The theory remains controversial, but it's a valuable attempt to put holographic principles center stage.
Whether or not you find this science rigorous, it is the kind of thing that leads to a "wow, my mind is blown" moment, à la so many smoky college dorm-room nights.
Similar to the anthropic principle, these concepts are cool but hard to use.
Nevertheless, I do like the way this one gives us a new way of justifying the value of 'interesting' stuff that we might otherwise be inclined to discard.
For example: the impulse on HN to discard this research... whether or not it's valid, it is certainly an interesting example of the point of the research itself. If this stuff has legs, it'll have legs. Mind blown? Bleh maybe for a second right?
Keep your options open
I can understand physicists getting excited over a simple mathematical model that explains a lot. Like Einstein thinking relativity was so beautiful, it must be true. But the difference there was unexplained empirical data (the speed of light being constant in all directions), which we don't have for intelligence.
That is, it's not that we haven't solved the problem of intelligence; it's that we don't understand the question. (Sometimes DNA seems terribly wise.)
The idea of all Life as simply the Evolution fitness function over the maximization of Entropy...!
Personally, I don't consider the vast majority of humans to be intelligent.
If we are trying to create artificial intelligence, it's probably a bad idea to use humans as a model.
You're wrong, though. I'm not the "author submitter." Believe it or not, Alex is a common name.
And what is an "astroturf attempt"?