> Google's AlphaGo also learned from screen pixels.

It did not. It received the state of the board as one array, another board-state array for capture/komi information (since Go does have global state that is not visible purely from the board representation), and a few additional features to help it out with stuff like ladders. It was architected with convolutional layers, but over the 19x19 Go grid, not over pixels. See the AlphaGo paper, p. 11, for the exact structure of the input: http://www.postype.com/files/2016/04/08/16/05/03384c91046e8e...
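
To make the input format concrete, here is a minimal sketch (in numpy, not DeepMind's code) of encoding a Go position as stacked 19x19 feature planes, the kind of grid input described above. The plane layout and the `encode` helper are illustrative assumptions; the actual paper used 48 planes covering liberties, move history, ladder status, and so on.

    import numpy as np

    BOARD = 19
    EMPTY, BLACK, WHITE = 0, 1, 2

    def encode(board: np.ndarray, to_play: int) -> np.ndarray:
        """board: (19, 19) ints in {EMPTY, BLACK, WHITE}; returns (C, 19, 19)."""
        opponent = BLACK + WHITE - to_play
        planes = [
            (board == to_play).astype(np.float32),   # own stones
            (board == opponent).astype(np.float32),  # opponent stones
            (board == EMPTY).astype(np.float32),     # empty points
            np.ones((BOARD, BOARD), np.float32),     # constant-1 plane
            # ...AlphaGo added planes for liberties, turns since a move
            # was played, ladder capture/escape, legality, etc. --
            # exactly the global state a raw pixel image would lack.
        ]
        return np.stack(planes)

    position = np.zeros((BOARD, BOARD), dtype=np.int8)
    position[3, 3] = BLACK
    print(encode(position, to_play=BLACK).shape)  # (4, 19, 19)

The convolutions then run over this (C, 19, 19) tensor, so each "channel" is a semantic property of a board point rather than a color value of a screen pixel.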

Could it have learned from pixels (augmented with the necessary global state)? Sure. But that would have been a waste of computation, since the visual layout of a Go board is fixed and static, unlike Atari games.
