
What is state of the art in reinforcement learning right now?

https://arxiv.org/abs/1602.01783

Is there a way to deal with "sparse" training data, i.e. (state, action, reward) triples that are sparse in "state"?




Looks like "UNREAL" (https://arxiv.org/abs/1611.05397), "Learning to reinforcement learn" (https://arxiv.org/abs/1611.05763), and "RL^2" (https://arxiv.org/abs/1611.02779) are the state of the art in pure RL for now.

Finally, there is a trend of using a recurrent neural network as the top component of the Q-network. Perhaps we will see even more sophisticated RNNs such as the DNC and Recurrent Entity Networks applied here. Also, we'll see meta-reinforcement learning applied to a curriculum of environments.
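
For concreteness, here is a minimal sketch of the recurrent-Q idea in PyTorch. It is my own illustration, not code from any of the papers above: a convolutional encoder feeds an LSTM, and a linear head on the LSTM output produces Q-values at each timestep. The layer sizes and the 84x84 single-channel input are assumptions for the example.

    import torch
    import torch.nn as nn

    class RecurrentQNetwork(nn.Module):
        def __init__(self, n_actions, feat_dim=256, hidden_dim=256):
            super().__init__()
            # Convolutional encoder for 84x84 single-channel frames (assumed input size).
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(64 * 7 * 7, feat_dim), nn.ReLU(),
            )
            # The recurrent layer sits "on top" and carries state across timesteps.
            self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            self.q_head = nn.Linear(hidden_dim, n_actions)

        def forward(self, frames, hidden=None):
            # frames: (batch, time, 1, 84, 84)
            b, t = frames.shape[:2]
            feats = self.encoder(frames.reshape(b * t, *frames.shape[2:]))
            out, hidden = self.lstm(feats.reshape(b, t, -1), hidden)
            return self.q_head(out), hidden  # Q-values: (batch, time, n_actions)

    net = RecurrentQNetwork(n_actions=4)
    q_values, hidden = net(torch.zeros(1, 8, 1, 84, 84))  # 8-step history -> (1, 8, 4)

The LSTM hidden state is what lets the Q-values depend on the observation history rather than on the current frame alone.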


The crazy thing is that these stacked model architectures are starting to become another layer of "lego blocks," so to speak.


That paper came out 10 months ago. There have been many RL papers in the meantime, but from what I can see, sparsity is only a problem with respect to reward, not state or action.
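
To make that concrete, here is a toy sketch, entirely my own construction, of what "sparse in reward" looks like: the agent still observes a dense stream of (state, action, reward) triples, but nearly every reward is zero, so the hard part is credit assignment rather than missing states or actions. The chain environment and random policy are made up purely for illustration.

    import random

    GOAL = 20  # hypothetical chain length

    def step(state, action):
        """Move left (-1) or right (+1) on a chain; reward only at the goal."""
        next_state = max(0, min(GOAL, state + action))
        reward = 1.0 if next_state == GOAL else 0.0  # sparse reward
        return next_state, reward, next_state == GOAL

    state, done, trajectory = 0, False, []
    while not done:
        action = random.choice([-1, 1])             # random policy, for illustration only
        next_state, reward, done = step(state, action)
        trajectory.append((state, action, reward))  # almost every reward here is 0.0
        state = next_state

    nonzero = sum(r != 0 for _, _, r in trajectory)
    print(f"{len(trajectory)} transitions, {nonzero} with non-zero reward")

Every state along the chain gets visited plenty of times; the signal the learner is starved of is the reward.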


You didn't answer my question. :(



