
There is a lot of room for improvement in the implementation. The way we are using deep neural networks at the moment is excellent for prototyping, but far from optimal. For instance, this paper http://arxiv.org/abs/1511.00363 shows that you can replace floating-point operations with simple bitwise operations in DNNs for image recognition without losing much accuracy (see the sketch below). Together with a better representation of the inference step (compiled rather than interpreted), I would expect an order-of-magnitude speedup at a small loss in accuracy. More software tuning, especially the kind of low-level optimization that most chess programs do, should yield another big improvement.
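To make the bitwise idea concrete, here is a minimal sketch of a binarized dot product, the core operation this line of work replaces floats with. The linked paper (BinaryConnect) binarizes the weights; this sketch assumes the fully binarized case (weights and activations both in {-1,+1}, packed 64 per machine word), as in follow-up work. Names like binary_dot are illustrative, not from the paper; __builtin_popcountll is a GCC/Clang builtin.

    #include <stdint.h>

    /* Encode +1 as bit 1 and -1 as bit 0. For two 64-element
       {-1,+1} vectors packed into uint64_t, each matching bit
       contributes +1 to the dot product and each mismatch -1, so:
       dot = matches - mismatches = 64 - 2 * popcount(a XOR b). */
    static inline int binary_dot64(uint64_t a, uint64_t b) {
        return 64 - 2 * __builtin_popcountll(a ^ b);
    }

    /* Dot product over n packed words: one XOR and one popcount
       stand in for 64 floating-point multiply-accumulates. */
    int binary_dot(const uint64_t *a, const uint64_t *b, int n) {
        int acc = 0;
        for (int i = 0; i < n; i++)
            acc += binary_dot64(a[i], b[i]);
        return acc;
    }

The win is that a 64-wide multiply-accumulate collapses into two or three cheap integer instructions, which is where the order-of-magnitude estimate comes from.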

Finally, the hardware we are using to run these programs is insane. Sure, the silicon is approaching some hard physical limits, but your processor spends most of its power budget on machinery (caches, branch prediction, out-of-order execution) whose main job is making old programs run fast...

My prediction is that with enough resources it is possible to write a Go AI that runs on general-purpose hardware, manufactured on current process nodes, that fits in your pocket.
