
Redditor learns hard truths about machine learning - whalabi
https://www.reddit.com/r/programmerhumor/comments/axi87h/_/ehtzp34
======
cheeko1234
Wow. Reminds me of this article from a year ago:

[https://petewarden.com/2018/03/19/the-machine-learning-reproducibility-crisis/](https://petewarden.com/2018/03/19/the-machine-learning-reproducibility-crisis/)

 _In many real-world cases, the researcher won’t have made notes or remember
exactly what she did, so even she won’t be able to reproduce the model. Even
if she can, the frameworks the model code depend on can change over time,
sometimes radically, so she’d need to also snapshot the whole system she was
using to ensure that things work. I’ve found ML researchers to be incredibly
generous with their time when I’ve contacted them for help reproducing model
results, but it’s often a months-long task even with assistance from the
original author._

 _Why does this all matter? I’ve had several friends contact me about their
struggles reproducing published models as baselines for their own papers. If
they can’t get the same accuracy that the original authors did, how can they
tell if their new approach is an improvement? It’s also clearly concerning to
rely on models in production systems if you don’t have a way of rebuilding
them to cope with changed requirements or platforms._
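The "snapshot the whole system" point is concrete enough to sketch. A minimal version of that habit, assuming nothing beyond NumPy and the standard library (the function name and manifest fields here are illustrative, not from the article), is to fix the seeds you control and write the exact versions to disk alongside the run:

```python
# Minimal reproducibility snapshot: fix the random seeds we control and
# record the exact interpreter/library versions, so a model run can be
# re-created (or at least diagnosed) later.
import json
import random
import sys

import numpy as np


def snapshot_environment(seed=42, path="run_manifest.json"):
    """Seed the RNGs and write a manifest of the versions in use."""
    random.seed(seed)
    np.random.seed(seed)
    manifest = {
        "python": sys.version,
        "numpy": np.__version__,
        "seed": seed,
    }
    with open(path, "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest


manifest = snapshot_environment()
print(manifest["seed"])
```

In a real project you'd also pin the framework versions (`pip freeze`, a Dockerfile, or a lockfile) and record the dataset hash, since seeds alone don't survive a framework upgrade.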

------
halfjoking
Here's my machine learning "hard truth" I posted on reddit:

[https://www.reddit.com/r/ProgrammerHumor/comments/92s0du/me_...](https://www.reddit.com/r/ProgrammerHumor/comments/92s0du/me_trying_to_learn_machine_learning_after_being_a/)

Normally I wouldn't post a meme here, but I think it goes well with this
person's experience.

