I'll probably slow down at some point, but I think at the moment reading stuff gives me more ideas and a general understanding of what I want to work on and what not.
There are drawbacks as well, since some papers take a lot of time to read in depth, and a day is definitely not enough to get a proper understanding.
Re your advice, that's a great point! How do you select papers outside of your comfort zone?
Hm, that's difficult. Automatic speech recognition (ASR) is probably my comfort zone by now.
So most pure DL papers are already outside of this zone, but I read many of them anyway when I find them interesting. Although I tend to find it a bit boring when you just adopt the next great model (e.g. the Transformer, or whatever comes next) to ASR, even though most improvements in ASR are just due to that. You know, I'm also interested in all these things like neural Turing machines, although I never really got a chance to apply them to anything I work on. But maybe to language modeling. Language modeling is great anyway, as it is conceptually simple, you can directly apply most models to it, and (big) improvements would usually directly carry over to WER.
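By "carry over" I mean e.g. rescoring the ASR n-best list with an external LM. Roughly sth like this (just a toy sketch; the lm_logprob interface and the weight are made up):

    def rescore_nbest(nbest, lm_logprob, lm_weight=0.3):
        # nbest: list of (hypothesis_text, acoustic_logprob) pairs from the ASR decoder
        # lm_logprob: any function mapping text -> LM log-probability
        rescored = [(hyp, am_score + lm_weight * lm_logprob(hyp))
                    for hyp, am_score in nbest]
        return max(rescored, key=lambda x: x[1])[0]  # hypothesis with the best combined score

A better LM reranks the hypotheses toward lower WER, so LM gains show up in the final numbers quite directly.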
Attention-based encoder-decoder models started in machine translation (MT). And that was something part of our team worked on anyway (although the team was mostly divided into an ASR and an MT group). Once these models came up, it was clear that they should in principle also work for ASR. It was very helpful to get a good baseline from the MT team and then to reimplement it in my own framework (importing the model parameters in the end, and dumping the hidden states during beam search, to make sure it was 100% correct). Then I took the most recent techniques from MT and adapted them to ASR. Others did that as well, but I had the chance to use some more recent methods, and also things like subword units (BPE), which were not standard in ASR at the time. Just adopting this got me some very nice results (and a nice paper in the end). So I try to follow up on MT from time to time to see what I can use for ASR.
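The correctness check I mentioned was conceptually simple, sth like this (toy sketch, file names and shapes are made up):

    import numpy as np

    def check_reimplementation(ref_states_path, own_states_path, atol=1e-5):
        # Hidden states dumped per decoder step by both implementations,
        # e.g. arrays of shape (num_steps, beam_size, hidden_dim).
        ref = np.load(ref_states_path)
        own = np.load(own_states_path)
        assert ref.shape == own.shape, (ref.shape, own.shape)
        print("max abs difference:", np.abs(ref - own).max())
        return np.allclose(ref, own, atol=atol)

Once the imported model matches step by step, you can be fairly sure the reimplementation is correct and start changing things on top of it.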
Out of personal interest, I also follow RL. There are some ideas you can carry over to ASR (and some already have been), although this is somewhat limited. Minimum expected WER training (which is like policy gradient) was developed independently in the ASR field, but it's interesting to see the relations and to adopt RL ideas. E.g. actor-critic might be useful (it has already been done, but only in a limited way so far).
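The relation is quite direct. A toy sketch (the scores and WER values are made up, just to show where the policy gradient shows up):

    import torch

    def expected_wer_loss(log_scores, wers):
        # log_scores: model log-scores of the n-best hypotheses, shape (n,)
        # wers: WER of each hypothesis against the reference, shape (n,)
        p = torch.softmax(log_scores, dim=0)  # renormalized distribution over the n-best list
        return (p * wers).sum()               # E_p[WER]

    # The gradient w.r.t. log_scores works out to p_i * (WER_i - E_p[WER]),
    # i.e. a REINFORCE-style estimator with the expected WER as the baseline.
    log_scores = torch.randn(5, requires_grad=True)  # stand-in for real model scores
    wers = torch.tensor([0.0, 0.1, 0.3, 0.25, 0.5])  # made-up WER values
    expected_wer_loss(log_scores, wers).backward()
    print(log_scores.grad)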
Another field, even further away, is computational neuroscience. I have taken a Coursera course on this, and I regularly read papers, although I don't really understand them in depth. But this is something which really interests me. I'm closely following all the work by Randall O'Reilly (https://psychology.ucdavis.edu/people/oreilly). E.g. see his most recent lectures (https://compcogneuro.org/).
This already keeps me quite busy, although I think all of these areas can really help me advance things (well, mostly ASR, though in principle I would also like to work on more generic A(G)I stuff).
If I had infinite time, I would probably also study some more math, physics, and biology...
It's probably hard to estimate the impact of reading outside of your field, but this definitely sounds like a good idea. A nice bonus is that you get more exposure to how people write and talk about research in different areas, which I find super useful. I recently read about the Curry-Howard correspondence (https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...), and it was mind-blowing both in terms of what they talk about and how they talk about it.
On the negative side, it's often quite hard to understand things simply because the terminology is different.
Re Neural Turing Machines, there's been an interesting resurgence of work on algorithmic tasks (check out this amazing survey: https://arxiv.org/abs/2102.09544).