
Doesn't that depend on the implementation? There's a trade-off between performance and determinism for sure, but if determinism is what you want then it should be possible.


If you fix random seeds, disable dropout, and configure deterministic kernels, you can get reproducible outputs locally. But you still have to control for GPU non-determinism, parallelism, and even library version differences. Some frameworks (like PyTorch) have flags (torch.use_deterministic_algorithms(True)) to enforce this.
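A minimal sketch of that kind of setup in PyTorch (the helper name, seed value, and toy model are illustrative, not from the comment):

  import os
  import random

  import numpy as np
  import torch

  def make_deterministic(seed: int = 0) -> None:
      """Best-effort single-machine reproducibility setup."""
      # Seed every RNG that might be consulted.
      random.seed(seed)
      np.random.seed(seed)
      torch.manual_seed(seed)  # seeds CPU and all CUDA devices

      # Required by some deterministic cuBLAS kernels.
      os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

      # Raise an error if an op has no deterministic implementation.
      torch.use_deterministic_algorithms(True)

      # cuDNN: pick deterministic kernels and skip autotuning.
      torch.backends.cudnn.deterministic = True
      torch.backends.cudnn.benchmark = False

  make_deterministic(seed=42)
  model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Dropout(0.1))
  model.eval()  # eval() disables dropout at inference time
  with torch.no_grad():
      out = model(torch.randn(1, 8))

Even with all of this, results are only guaranteed to repeat on the same hardware and library versions, which is the caveat the comment raises.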

