Towards Reproducible Research with PyTorch Hub (pytorch.org)
109 points by rstoj 4 months ago | 4 comments



I love that the tooling for ML experimentation is becoming more mature. Keeping track of hyperparameters, training/validation/test set manifests, code state, etc. is crucial, and easy to get wrong. I can't count how many times I've trained a great model only to lose the exact state and be unable to reproduce it. It's extremely frustrating. When I found sacred (https://github.com/IDSIA/sacred) it changed my team's workflow in a very positive way. We already had the habit of saving default experiment workbench images, but formalizing it is much nicer.
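For anyone who hasn't used sacred: the core idea is that you declare your hyperparameters in a config function and attach an observer that records the config, source state, and metrics for every run. A minimal sketch (the experiment name, config values, and `runs` directory below are just placeholders, and the exact observer constructor may differ slightly between sacred versions):

    from sacred import Experiment
    from sacred.observers import FileStorageObserver

    ex = Experiment('my_experiment')          # hypothetical experiment name
    ex.observers.append(FileStorageObserver('runs'))  # log each run to ./runs/

    @ex.config
    def config():
        # Everything defined here is captured as the run's hyperparameters.
        lr = 0.01
        epochs = 10

    @ex.automain
    def main(lr, epochs, _run):
        # Training loop would go here; metrics can be logged per step.
        for epoch in range(epochs):
            loss = 1.0 / (epoch + 1)          # stand-in for a real loss value
            _run.log_scalar('train.loss', loss, epoch)

Running the script then records the config, captured source, and logged metrics under `runs/<run_id>/`, which is what makes an old "great model" reproducible later.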


Sacred is interesting. How does it compare to MLflow?


This is like an app store for ML models, which is pretty cool. There is also a bunch of tooling around ML that complements the above rather than being redundant with it.

-- Open Source --

Mlflow : https://github.com/mlflow/mlflow

Polyaxon : https://github.com/polyaxon/polyaxon

Modelchimp(mine): https://github.com/ModelChimp/modelchimp

ModelDB : https://github.com/mitdbg/modeldb

Sacred : https://github.com/IDSIA/sacred

-- Non open source --

Cometml : https://www.comet.ml/

Weights and Biases : https://www.wandb.com/

MissinglinkAi : https://missinglink.ai/


This will also make things easier for people writing algorithms on top of one of the base models.

You start with something simple but fast like resnet18, and once the general approach works you replace it with something better but slower by changing a single line.
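That "single line" is the `torch.hub.load` call. A rough sketch of what the swap looks like (the `pytorch/vision` repo and the `resnet18`/`resnet50` entry points are real Hub entries; the dummy input is just for illustration, and newer torchvision versions use a `weights=` argument instead of `pretrained=`):

    import torch

    # Prototype with a small, fast backbone.
    model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)

    # Once the pipeline works, upgrading is a one-line change:
    # model = torch.hub.load('pytorch/vision', 'resnet50', pretrained=True)

    model.eval()
    with torch.no_grad():
        dummy = torch.randn(1, 3, 224, 224)   # one fake RGB image
        out = model(dummy)
    print(out.shape)  # torch.Size([1, 1000]) for ImageNet-pretrained ResNets

As long as the downstream code only assumes "a backbone that maps images to logits/features", everything else stays untouched.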




