
Towards Reproducible Research with PyTorch Hub - rstoj
https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/
======
kradroy
I love that the tooling for ML experimentation is becoming more mature.
Keeping track of hyperparameters, training/validation/test set manifests,
code state, etc. is crucial. I can't count how many times I've trained a
great model only to lose the exact state and be unable to reproduce it.
It's extremely frustrating. When I found sacred
([https://github.com/IDSIA/sacred](https://github.com/IDSIA/sacred)) it
changed my team's workflow in a very positive way. We already have this
approach of saving default experiment workbench images, but formalizing it is
much nicer.

~~~
swuecho
sacred is interesting. how does it compare to MLflow?

------
samzer
This is like an app store of ML models, which is pretty cool. There is some
tooling around ML that complements the above rather than duplicating it.

-- Open Source --

Mlflow : [https://github.com/mlflow/mlflow](https://github.com/mlflow/mlflow)

Polyaxon :
[https://github.com/polyaxon/polyaxon](https://github.com/polyaxon/polyaxon)

Modelchimp(mine):
[https://github.com/ModelChimp/modelchimp](https://github.com/ModelChimp/modelchimp)

ModelDB :
[https://github.com/mitdbg/modeldb](https://github.com/mitdbg/modeldb)

Sacred : [https://github.com/IDSIA/sacred](https://github.com/IDSIA/sacred)

-- Non open source --

Cometml : [https://www.comet.ml/](https://www.comet.ml/)

Weights and Biases : [https://www.wandb.com/](https://www.wandb.com/)

MissinglinkAi : [https://missinglink.ai/](https://missinglink.ai/)

------
panpanna
This will also make things easier for people writing algorithms on top of one
of the base models.

You start with something simple but fast like resnet18, and once the general
approach works you replace it with something better/slower by changing a
single line.
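That "single line" swap is roughly what `torch.hub.load` enables: the backbone name is just a string argument to a published entrypoint. A sketch, assuming `torch` is installed and using the entrypoint names from the `pytorch/vision` hubconf (`pretrained=True` would additionally download the trained weights):

```python
import torch

def build_model(backbone="resnet18", pretrained=False):
    # Fetches the model definition from the pytorch/vision GitHub repo.
    # Prototype with a small, fast backbone; switch to a deeper one
    # later by changing only the `backbone` string.
    return torch.hub.load("pytorch/vision", backbone, pretrained=pretrained)

model = build_model("resnet18")     # fast iteration
# model = build_model("resnet152")  # same call, better/slower backbone
```

Newer torchvision releases replace the `pretrained` flag with a `weights` argument, but the one-line-swap idea is the same.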

