
Are people seeing it work well in GPU/pydata land and creating multiplatform docker images?

In the data science world, conda/mamba were needed because of this kind of thing, but they left a lot of room for improvement. We basically want lockfiles, incremental + fast builds, and multi-arch support for these tricky deps.
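For the multi-arch half of that wish list, one common approach (independent of which package manager produces the lockfile) is a `docker buildx` multi-platform build driven by a pinned lockfile. A minimal sketch; the base image and the `requirements.lock` filename are placeholders, not from the thread:

```dockerfile
# Sketch: multi-arch image whose dependency install is driven by a
# pinned, cross-platform lockfile, so amd64 and arm64 builds resolve
# to the same versions.
FROM python:3.12-slim

WORKDIR /app

# Install from the lockfile first so this layer caches well and
# rebuilds are incremental when only application code changes.
COPY requirements.lock .
RUN pip install --no-cache-dir -r requirements.lock

COPY . .
CMD ["python", "main.py"]

# Build both architectures in one invocation:
#   docker buildx build --platform linux/amd64,linux/arm64 -t myimage:latest --push .
```

Copying the lockfile before the rest of the source is what makes the install layer cacheable between builds.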

It works transparently. The lock file is cross-platform by default. When using PyTorch, it automatically installs with MPS support on macOS and CUDA on Linux; everything just works. I can't speak for Windows, though.
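Assuming the tool being discussed is uv, the per-platform behavior described above can be expressed in `pyproject.toml` along these lines (the index name, URL, and CUDA version are illustrative, so check the current docs for your setup):

```toml
[project]
name = "example"
version = "0.1.0"
dependencies = ["torch>=2.4"]

# On Linux, pull torch from the CUDA wheel index; on macOS, fall
# through to the default PyPI wheels, which ship with MPS support.
[tool.uv.sources]
torch = [
  { index = "pytorch-cu124", marker = "sys_platform == 'linux'" },
]

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```

The `explicit = true` flag keeps the CUDA index from being consulted for any package other than the ones explicitly pinned to it.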

It works better than poetry for CUDA-versioned PyTorch. Unfortunately I don't have overlap with your other domains (I do ML, not data science).
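For contrast, the poetry route I've seen for CUDA wheels is an explicit package source (sketch below; the source name, URL, and version are illustrative). It pins a single CUDA index rather than selecting a wheel source per-OS, so cross-platform setups tend to need extra per-platform constraint gymnastics:

```toml
# Sketch of poetry's explicit-source approach for CUDA wheels.
[[tool.poetry.source]]
name = "pytorch-cu121"
url = "https://download.pytorch.org/whl/cu121"
priority = "explicit"

[tool.poetry.dependencies]
torch = { version = "2.1.0", source = "pytorch-cu121" }
```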

Thanks!

I think the comparison for data work is more with conda, not poetry. AFAICT, poetry targets the "easier" case of pure-Python packages, not native areas like prebuilt platform-dependent binaries. Maybe poetry has gotten better, but I typically see it as a nice-to-have for local dev and for rounding out the build, not as the recommended install flow for natively-aligned builds.

So I'm still curious how folks are navigating the "harder" typical case of the pydata world; getting an improved option here is exciting!
