
"We already know the equations that we'd need to use do general intelligence"

Not only am I pretty sure we don't know how to build a general intelligence, I'm also pretty sure that nobody really knows what kind of approach would be most likely to succeed.

Having said that, I would love to be proved wrong on this one - since you specifically say that the necessary techniques have already been published, perhaps you could give the relevant references?



There's an algorithm developed by Marcus Hutter called AIXI, which makes provably optimal decisions. Unfortunately(?) it's also uncomputable, but computable approximations exist, including a Monte Carlo variant: http://www.vetta.org/2009/09/monte-carlo-aixi/. As the paper notes, it scales extremely well: to get better results you just throw more computing power at it.
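
If it helps to see the shape of the decision rule, here is a toy sketch in Python of the underlying idea - a complexity-weighted expectimax over candidate environment models. The model class, the two hand-written models, and all the constants are made up for illustration; the real MC-AIXI-CTW agent uses context tree weighting and UCT search, not anything this crude.

    # Toy sketch of the AIXI-style decision rule, not MC-AIXI-CTW itself.
    # Each candidate environment model gets a prior weight of 2**-complexity,
    # standing in for the Solomonoff prior over programs.

    class CoinModel:
        # An assumed environment: a biased coin that pays 1 when our guess matches.
        def __init__(self, bias, complexity):
            self.bias = bias              # probability the coin comes up 1
            self.complexity = complexity  # stand-in for description length in bits
            self.actions = [0, 1]

        def expected_reward(self, action):
            return self.bias if action == 1 else 1.0 - self.bias

    def value(action, models, horizon):
        # Expectimax value of an action under the complexity-weighted mixture.
        if horizon == 0:
            return 0.0
        total = weight = 0.0
        for m in models:
            w = 2.0 ** -m.complexity
            future = max(value(a, models, horizon - 1) for a in m.actions)
            total += w * (m.expected_reward(action) + future)
            weight += w
        return total / weight

    models = [CoinModel(bias=0.9, complexity=2), CoinModel(bias=0.4, complexity=5)]
    best = max([0, 1], key=lambda a: value(a, models, horizon=2))
    print("action favoured by the mixture policy:", best)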


That's great, but making "optimal decisions" in some defined state space where the quality of various options is evaluable is a really different problem to general intelligence.


Nice to see they are using Pac-Man as an example domain. When I was doing AI research in this kind of field in an engineering department, I was rather unpopular for suggesting that we should forget about working on complex domains (nuclear power stations) and focus on something a bit more manageable - my, actually quite serious, suggestion was Tetris. :-)


Indeed, AIXI is the algorithm I was referring to, and Monte Carlo AIXI is the approximation.

As hugh3 mentioned in a sibling comment (http://news.ycombinator.com/item?id=2479211), 'making "optimal decisions" in some defined state space where the quality of various options is evaluable is a really different problem to general intelligence'. I definitely agree with this to some extent - a powerful MCAIXI setup is not necessarily going to display any intelligence that's remotely human, at least without a lot of other stuff going on in the system. The concerning thing, though, is that it should almost certainly be enough to get a system reasoning about its own design, since its code is a well-defined state space where quality is evaluable (depending on how the programmer decides to have it evaluate quality).

To end up with a dangerous runaway "AI" on our hands, we don't need AI that we'd consider intelligent or useful. All it takes for a runaway is an AI that is good at improving itself, working effectively at optimizing a metric that approximates "get better at improving yourself". AIXI approximations should be plenty powerful to do this with the amount of computing power we'll have in ~20 or 30 years (at the very least, there's a big enough chance that we really have to take it seriously).
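
To make that loop concrete, here's a schematic in Python with every function a placeholder assumption - the point is only that nothing in it requires human-like intelligence, just a searchable program space and an evaluable proxy for "gets better at improving itself":

    import random

    def improvement_score(program):
        # Placeholder: in the scenario above this would measure performance
        # on a benchmark of self-modification tasks, not a sum of integers.
        return sum(program)

    def mutate(program):
        # Placeholder search step over a toy program space.
        p = list(program)
        p[random.randrange(len(p))] += random.choice([-1, 1])
        return p

    current = [0] * 8
    for _ in range(1000):
        candidate = mutate(current)
        if improvement_score(candidate) > improvement_score(current):
            current = candidate   # the system adopts its own "improved" design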

This is one of the reasons Eliezer Yudkowsky is so keen on extending decision theory, so that we can get some idea of what we should actually be trying to approximate in order to have a decent shot at doing self-improvement safely.

The best way to sum up my concern is that (unboundedly) self-improving programs make up a tiny fraction of program-space that we can't quite hit with today's technology. Of that sliver of program space, there's a much smaller sliver that contains "programs that won't kill us", and another sliver that contains "programs that have useful side effects". We need to make sure that the first "AI" we create lies in the minuscule intersection, "self-improving programs that do something useful [1] and won't kill us". That's a terrifyingly small target to shoot at, so we had better work strenuously to make sure that once it's feasible to create any of these programs, our aim is good enough to hit the safe and useful ones.

[1] We need to find self improving programs that are useful early on because we'll need to use them as our "shield" against any malicious self-improvers that will inevitably be developed later. There's a significant first-mover advantage in AI, and even a small head start would probably make it difficult or impossible for a second AI to become a global threat if the first AI didn't want to allow it.


So AIXI is just old stochastic optimal control (decision theory, controlled Markov processes, R. Bellman's work, etc.) plus a way from Solomonoff to make extrapolations. Okay.

On the practical side, there have been related ideas from D. Bertsekas and R. Rockafellar.

For actual computing, the problem remains the curse of dimensionality. The curse is so bad that for a brute-force approach - which is what AIXI is, or really nearly anything general in stochastic optimal control on big problems - a few more decades of Moore's law still won't scratch the surface.
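
A quick illustration of the growth rate: discretize each state variable into just 10 levels and count the grid points one value-iteration sweep has to touch. The specific numbers are arbitrary assumptions; the exponential blow-up is the point.

    # Back-of-envelope: table size for brute-force dynamic programming on a
    # uniformly discretized state space (10 levels per variable, assumed).
    levels_per_dim = 10
    for dims in (3, 10, 50, 100):
        states = levels_per_dim ** dims
        print(f"{dims:>3} state variables -> {states:.1e} grid points per sweep")
    # Even at 10^18 operations per second, 10^50 grid points is hopeless.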


There are projects that simulate neurons at the biological level, not just the neural-network approximation, and they have demonstrated great results. These simulations work for simple organisms, and scaling things up is not really a CS problem.

PS: It's assumed that there are shortcuts to AI, but the absolute worst case is a QM simulation of each cell in a body and its environment. We do have the math for that, even if the computational power is hundreds or even thousands of years in the future at current computing growth rates.
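
Rough arithmetic behind "hundreds or even thousands of years at current computing growth rates": if a simulation needs a factor F more computing power than we have today and capacity doubles every couple of years, the wait is roughly 2*log2(F) years. The shortfall factors below are placeholders, not estimates of what a cell-level QM simulation would actually cost.

    from math import log2

    doubling_years = 2.0                   # assumed Moore's-law-style doubling period
    for shortfall in (1e12, 1e30, 1e60):   # assumed compute shortfall factors
        years = doubling_years * log2(shortfall)
        print(f"need {shortfall:.0e}x more compute -> wait ~{years:.0f} years")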


I do quantum mechanical simulations for a living, and I really don't see that kind of thing ever working. Luckily I don't think you'd need a full quantum mechanical simulation to get an artificial brain working anyway.

But this raises the next problem: even if I did build myself a copy of my brain (overlooking the ethical issues in doing so), it's still not any damn smarter than I am. And if I can't figure out how to build a smarter brain than mine then it can't figure it out either, so we're still stuck.

This is the big hole in the "singularity" scenario -- there's no reason to think that brains are capable of building ever-smarter brains.

We don't even know, really, what that would mean. What would a smarter version of my brain look like? Would it be like my brain except capable of juggling more symbols at once? Or would it be like my brain except less likely to jump to dumb conclusions?


"And if I can't figure out how to build a smarter brain than mine then it can't figure it out either, so we're still stuck."

For one thing, it's pretty likely that blindly expanding the size of your neocortex would increase your intelligence greatly without any major architectural changes - just pack more neurons in there in roughly the same pattern, or expand the depth of the cortical column (that might be more difficult architecturally, though - I don't think the structure is as homogeneous vertically as it is horizontally).

You're right, though: even if we can improve our own design, it doesn't necessarily follow that by continuing to improve it we can achieve never-ending gains. It is possible that at some point we'll reach a threshold and won't be quite smart enough to pass over it to the next level of intelligence.

But I think it's rather unlikely that we're close to that barrier - if we can beat our own intelligence at all, we should see a roughly exponential increase for at least a little while.


"This is the big hole in the "singularity" scenario -- there's no reason to think that brains are capable of building ever-smarter brains."

One thing that could be done with simulated brains is to save a copy, tinker with it, and see what happens. If it doesn't work out, just restart the simulation.

This is not so easy to do with biological human brains, but may be quite simple and straightforward with simulated human brains.

Of course, there are very serious ethical issues in doing that kind of tinkering and "rebooting" even of simulated brains. But the technological capability to do that will be there should someone decide to (and someone likely will).

"We don't even know, really, what that would mean. What would a smarter version of my brain look like?"

Does it have to be smarter? How about faster?

It may be possible to speed up a brain simulation to the point where it's thinking 10 times faster than humans. Or maybe 100 times faster. Who knows where the limits lie?

If you had 10 or 100 lifetimes to think about improving your brain, do you think you might come up with some promising ideas?


I would hope that, as part of the process of creating these simulations, we would gain some understanding of how our own general intelligence actually works, then allowing a degree of architectural upgrading to occur (i.e. not just faster but qualitatively better).

Once that happens, and assuming that there is a sequence of possible upgrade paths, then I would expect something like a Vingean singularity to occur.

Hopefully whatever entity does this will be an Iain M Banks fan and will appreciate that being nice to us slow dull bags of meat could be mutually entertaining.


>> ... if I can't figure out how to build a smarter brain than mine then it can't figure it out either

You mean, just like complex processors are impossible because you personally couldn't design one, since there are just too many work-years involved?

>>there's no reason to think that brains are capable of building ever-smarter brains.

What happens when you let groups of people build sub-systems? And then the groups iterate over those sub-systems?

Not to mention hardware speedups?

I know enough to be a bit in awe of quantum simulations, so I am a bit shocked to not see a better argument outside your own area of expertise. :-)


Link to an example of a simple organism simulation, based on that organism's nervous system. I have never seen one.


An example of a nematode simulation:

The purpose of this web site is to promote research and education in computational approaches to C. elegans behavior and neurobiology. This tiny animal has only 302 neurons and 95 muscle cells, making an anatomically detailed model of the entire body and nervous system an attainable goal. Physiological information is still incomplete, but computer simulations can help direct experiments to questions which are most relevant for understanding the neural control of behavior.

http://www.csi.uoregon.edu/projects/celegans/
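
For a sense of what a network-level simulation at that scale involves, here is a minimal leaky integrate-and-fire sketch with 302 neurons. The random connectivity and all constants are placeholder assumptions, not the actual C. elegans wiring diagram that the project above is working toward.

    import numpy as np

    n_neurons = 302
    rng = np.random.default_rng(0)
    # Sparse random connectivity standing in for the real connectome.
    connected = rng.random((n_neurons, n_neurons)) < 0.05
    weights = rng.normal(0.0, 0.5, (n_neurons, n_neurons)) * connected

    v = np.zeros(n_neurons)            # membrane potentials
    tau, threshold, dt = 10.0, 1.0, 0.1
    external = np.zeros(n_neurons)
    external[:10] = 0.2                # constant drive to a few "sensory" neurons

    for step in range(1000):
        spikes = v >= threshold
        v[spikes] = 0.0                              # reset neurons that fired
        synaptic = weights @ spikes.astype(float)    # input from firing neighbours
        v += dt * (-(v / tau) + synaptic + external) # leaky integration
    print("neurons firing on the final step:", int(spikes.sum()))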

As to scaling things up:

The IBM team's latest simulation results represent a model about 4.5% the scale of the human cerebral cortex, which was run at 1/83 of real time. The machine used provided 144TB of memory and 0.5 PFlop/s [petaflops].

Turning to the future, you can see that running human-scale cortical simulations will probably require 4 PB of memory, and running these simulations in real time will require over 1 EFlop/s [exaflops]. If the current trends in supercomputing continue, it seems that human-scale simulations will be possible in the not too distant future.

http://www.theregister.co.uk/2009/11/18/ibm_closer_to_thinki...
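
The quoted figures are roughly self-consistent if you assume linear scaling in both size and speed:

    # Sanity check using only the numbers quoted above:
    # 4.5% of cortex at 1/83 real time on 144 TB and 0.5 PFlop/s.
    scale, slowdown = 0.045, 83
    memory_tb, pflops = 144, 0.5

    full_memory_pb = memory_tb / scale / 1000   # ~3.2 PB, near the quoted ~4 PB
    full_pflops = pflops / scale * slowdown     # ~920 PFlop/s, i.e. roughly 1 EFlop/s
    print(f"full-scale, real-time estimate: {full_memory_pb:.1f} PB, {full_pflops:.0f} PFlop/s")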

PS: A counter argument on the IBM simulation: http://www.popsci.com/technology/article/2009-11/blue-brain-...



