Hacker News

Another thing I'd like to ask of my colleagues: please detail in your papers, at least to some extent, the things you tried that didn't work. I see this in my field (computer vision / deep learning) from time to time, but only very rarely.

There are typically ablation studies that aim to quantify how much each of the _successful_ improvements contributes to the result, but there's almost never any mention of approaches that looked promising on paper yet didn't pan out in practice, nor any discussion of why, even though the authors often have a good idea after the fact.



