Using GPU accelerated neural networks for games AI (theengineer.co.uk)
9 points by jakozaur on June 21, 2009 | 4 comments



This makes me sad. AI doesn't need faster number crunching; it needs a more interconnected architecture.


What are you talking about? You pulled this out of nowhere.

Boltzmann machines are undirected architectures, and have been around since the eighties, courtesy of Geoff Hinton and collaborators. Here is some new work on the topic: http://www.cs.toronto.edu/~hinton/absps/dbm.pdf

The more interconnected architectures are much slower to train and to run inference with. Hence, we use restricted architectures to improve speed. (RBMs = restricted Boltzmann machines, which are a component of every current Netflix Prize top contender.)
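
For concreteness, here is a minimal sketch (mine, not from the linked paper) of one contrastive-divergence (CD-1) update for a binary RBM in NumPy. Layer sizes, learning rate, and names are illustrative; the point is that the bipartite "restricted" connectivity turns each Gibbs step into a single dense matrix multiply.

    # Sketch only: one CD-1 update for a binary RBM. Because an RBM has no
    # visible-visible or hidden-hidden connections, each Gibbs step is one
    # dense matrix multiply -- the restriction is what makes it fast.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(v0, W, b_v, b_h, lr=0.01):
        """One contrastive-divergence step on a batch of binary visible vectors."""
        p_h0 = sigmoid(v0 @ W + b_h)                        # positive phase
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hidden units
        p_v1 = sigmoid(h0 @ W.T + b_v)                      # reconstruct visibles
        p_h1 = sigmoid(p_v1 @ W + b_h)                      # hiddens of reconstruction
        n = len(v0)
        W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n         # data stats - model stats
        b_v += lr * (v0 - p_v1).mean(axis=0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)

    n_visible, n_hidden = 784, 256
    W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    batch = (rng.random((64, n_visible)) < 0.5).astype(float)  # fake binary data
    cd1_update(batch, W, b_v, b_h)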

AI does need faster number crunching. Matrix multiplies are really slow: I can't train more than 10K neurons on desktop hardware. Faster hardware has driven a lot of AI innovation.
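
To make the "faster number crunching" point concrete, here is a rough timing sketch (not from the article; it assumes a CUDA-capable GPU and the CuPy library, which is a modern stand-in for the GPU code the article describes) comparing the same dense-layer matrix multiply on CPU and GPU.

    # Sketch only: time one dense-layer forward pass on CPU (NumPy) vs GPU (CuPy).
    # The claim being illustrated: the matrix multiply dominates the cost, so
    # offloading it to the GPU is where the speedup comes from.
    import time
    import numpy as np
    import cupy as cp  # assumes a CUDA-capable GPU with CuPy installed

    batch, n_in, n_out = 1024, 4096, 4096
    x_cpu = np.random.rand(batch, n_in).astype(np.float32)
    W_cpu = np.random.rand(n_in, n_out).astype(np.float32)

    t0 = time.perf_counter()
    y_cpu = x_cpu @ W_cpu
    cpu_s = time.perf_counter() - t0

    x_gpu, W_gpu = cp.asarray(x_cpu), cp.asarray(W_cpu)
    cp.cuda.Stream.null.synchronize()   # finish host-to-device copies before timing

    t0 = time.perf_counter()
    y_gpu = x_gpu @ W_gpu
    cp.cuda.Stream.null.synchronize()   # GPU kernels launch asynchronously; wait
    gpu_s = time.perf_counter() - t0

    print(f"CPU {cpu_s:.3f}s  GPU {gpu_s:.3f}s  ~{cpu_s / gpu_s:.0f}x faster")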


I meant hardware architecture... as in a non-von Neumann architecture.

That's really neat though.


Why not simulate a more interconnected architecture by just running really, really fast?



