Vicarious Raises $15M to Build Software That ‘Learns Like A Human’ (techcrunch.com)
45 points by thedoctor 1517 days ago | 33 comments



Could be the last startup ever, but I am not betting my money on it. Would be cool though if they re-named the company to Last Startup :)

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." - I. J. Good


We seriously considered Last Invention as an alternative name, and actually still own lastinvention.com. For a bunch of reasons, we decided on Vicarious instead. :)


IIRC, the reasons were:

- Depending on how you read it, Last Invention can sound really ominous.

- LI is a bit of a mouthful to say, whereas Vicarious is a single word

- People generally responded better to Vicarious as a name when we asked around

Never heard of The Last One...


I'll guess this is one of the reasons? http://en.wikipedia.org/wiki/The_Last_One_(software)


Let's not forget that if they actually pull that off without taking extreme precautions, we're all dead. http://singularity.org/files/IE-EI.pdf


A long, long time ago, in a country far, far away, there was a game company on the brink of bankruptcy, and with their last efforts they created a game they called Final Fantasy. The series now has 14 games.


Vicarious founder Dileep George was a founder/advisor at Numenta. I believe it was his paper on HTMs that led to its founding. He left in 2010, but I'm curious how Recursive Cortical Networks differ from Numenta's product, which is moving waaay more slowly than they would like.


Thanks for pointing out the connection to Numenta. I was looking for it in the article, but it mentioned nothing.


My immediate reaction is to be skeptical. AI has been tried before and has failed many, many, times.

Having said that, I think this is exactly the sort of innovation timeline real venture capitalists should be considering - funding real R&D that could have a revolutionary impact even if the odds are against it.

If I had a lot of money that I really didn't need I would invest it. That is meant to be a compliment.

Best of luck. This is what smart people should be doing.


There are quite a lot of people working on this 'problem' of AGI: amateurs, academics, and companies. The only way this seems possible currently is something like Watson - approximating intelligence and learning rather than emulating it directly with neurons. While AGI amateurs think we are actually close to that right now, there is not much else pointing in that direction.

So let's see what they come up with :) Does anyone have more information?

Edit: searching for "Recursive Cortical Network" is a good start.


Software that learns like a human is like a car that gallops like a horse. I don't see the point.


It means that it can learn on its own - so you don't have to program every behavior and hand-feed it all the data it has access to. Of course, it's way easier said than done.

BTW, horse-like robots do exist[1]. Four legs are very useful on rough terrain.

[1] http://www.youtube.com/watch?v=x3G-UE1HwGI


What good is learning software if you don't have the hardware to run it on at even mammalian levels of intelligence (and probably won't for a decade)?


Presumably you're referring to the neuron-level simulations that crop up in the news every time some university gets a new supercomputer? Maybe there are some shortcuts.

Just like how, 10 years ago, the only conceivable way of rendering diffuse light was with raytracing or radiosity simulations. Now we have modern techniques like ambient occlusion that make it reasonably cheap to do in a rasterizer in real time.


We can simulate approximations of intelligence now, and those will continue to get better all the time. However, like raytracing, we simply don't have the technology to do real AI at human scale right now.


My point is that should there exist some higher level structure in the brain that is more amenable to simulation than the equivalent quantity of low-level synapses and neurons, then all bets are off about exactly what sort of computer is required to simulate it. The brain has to be replicable by biological processes - who knows what tricks it has to pull to achieve its function? Maybe it's possible to come up with an 'optimizing compiler' that tosses the junk.


Makes me think back to UltraHLE, and how that was a quantum leap over existing N64 emulators when it came out. Similar approach - UltraHLE didn't emulate every detail of the system. It took a higher-level approach.

Obviously not the same thing as emulating the brain :-) but just what came to mind.


For anyone not familiar: http://en.wikipedia.org/wiki/UltraHLE#The_HLE_technique

Here's hoping the brain is implemented in C ;-)


We don't have the right software. But if we had the right software, maybe current hardware would be plenty. There's no way to tell until we see the software.


Maybe it's OK if we don't have the HW now. Sort of like in the 1980s, people were writing accounting or physics software that needed math co-processors n' stuff -- stuff that didn't quite exist in the early 80s on PCs, but people went ahead with the software anyway. Eventually the HW worked out.

I think it'll work out. And a decade's not soooo long.


Cool, that gives them 10 years to prepare.

Big advances take time. I'm thrilled that Dustin and Founders Fund are willing to take on investments that have a long time horizon. That's where big breakthroughs will come from.


That's a big assumption - perhaps they can solve this problem and actually make better algorithms.


What makes you say that the hardware doesn't exist?? I don't see that in the article.


http://theness.com/neurologicablog/index.php/artificial-brai...

That's just one example. We're currently using a machine with 10k cores to model just a piece of the brain of a rat.

Neurons fire at up to 1kHz, which means, theoretically, that a modern 4-core CPU could potentially model 5 million neurons, in some sort of crazy 'breaking the laws of physics and ignoring synaptic modeling' scenario.

So then we would only need roughly 80 thousand cores to simulate the human brain, ignoring RAM.

Keep in mind that's an extremely rough (and terribly bad) guess, which makes a disgustingly bad number of broad assumptions that have no basis in reality. Its only purpose is to show how much better evolution is than us at computing.


> Neurons fire at up to 1MHz

I think you mean 1 kHz.


Oh crap. Yes, indeed. That changes everything around by three orders of magnitude but it's still a daunting problem.


[deleted]


Three.


The company to get all of this right will be the first two-trillion-dollar company.


I'm going to be thrilled if it looks like they're even within shooting distance of solving this problem, but for the time being I'm skeptical. I hope they prove me wrong.


I was just speaking generally. I know nothing about this company apart from their self promotional material and ego shouting.


Has any consideration been given to making sure the AGI is designed and built to have goals that won't lead it to improve its intelligence vastly beyond human levels, and then do something very bad to humanity? Like, say, use the atoms that make up our bodies to make more processors to increase its computational capacity.


Any idea or paper describing this "recursive cortical network"?


$15M won't be enough.



