
From ChatGPT, although personally I think this list is a bit old; it should cover at least 60% of the essentials.

Deep Learning:

AlexNet (2012)
VGGNet (2014)
ResNet (2015)
GoogLeNet (2015)
Transformer (2017)

Reinforcement Learning:

Q-Learning (Watkins & Dayan, 1992)
SARSA (Sutton & Barto, 1998)
DQN (Mnih et al., 2013)
A3C (Mnih et al., 2016)
PPO (Schulman et al., 2017)

Natural Language Processing:

Word2Vec (Mikolov et al., 2013)
GLUE (Wang et al., 2018)
ELMo (Peters et al., 2018)
GPT (Radford et al., 2018)
BERT (Devlin et al., 2019)
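To make the Q-Learning entry concrete, here is a minimal tabular sketch of the Watkins & Dayan update rule. The toy 5-state chain environment and all hyperparameters are illustrative assumptions, not from any of the cited papers:

```python
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # assumed values for illustration
rng = np.random.default_rng(0)

def step(s, a):
    # Toy chain: action 1 moves right, action 0 moves left; reward at the far end.
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
```

The key point, and the difference from SARSA, is that the bootstrap target uses max over next-state actions rather than the action actually taken, making Q-learning off-policy.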




You are getting downvoted because this list is from ChatGPT, but as a researcher in the field, I can say this list is actually really good, except perhaps for the SARSA and GLUE papers, which are less generally relevant. I would add WaveNet, the Seq2Seq paper, GANs, some optimizer papers (e.g. Adam), diffusion models, and some of the newer Transformer variants.

I'm very confident that this is pretty much what any researcher, including Ilya, would recommend. It really isn't hard to find these resources; they are simply the most cited papers. Of course, you can go deeper into any of the subfields if you desire.


As a hobbyist, I would add Tsetlin Machines, DreamerV3, DiffusER, and most of the RL work from DeepMind.



