This looks really cool. Are there ways to orchestrate jobs? Like having one notebook's output trigger another based on some logic? I'm imagining running a bunch of different deep learning models in separate notebooks, or running the same model on different chunks of a dataset in parallel.

Yes. You should follow best practices and isolate each job to the smallest task possible (and then reuse components). We offer this functionality in two flavors: you can define hooks as part of your pipelines (https://docs.ploomber.io/en/latest/api/spec.html#id2), and you can also declare these dependencies as part of your job's DAG (https://docs.ploomber.io/en/latest/get-started/basic-concept...), e.g., get the data, clean it, train the model, and test it.
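To make that concrete, here is a minimal sketch of what a pipeline.yaml along those lines might look like (the task scripts, product paths, and the hooks.notify function are hypothetical, just for illustration; see the spec docs linked above for the exact options):

    # pipeline.yaml - each step is a small, isolated task
    tasks:
      - source: scripts/get_data.py
        product:
          nb: output/get_data.ipynb    # executed copy of the script/notebook
          data: output/raw.csv
      - source: scripts/clean.py       # its code declares upstream = ['get_data']
        product:
          nb: output/clean.ipynb
          data: output/clean.csv
      - source: scripts/train.py       # its code declares upstream = ['clean']
        product:
          nb: output/train.ipynb
          model: output/model.pickle
        on_finish: hooks.notify        # hypothetical hook run after the task succeeds

Running ploomber build would then execute the steps in dependency order, so the "output of one triggers the next" logic lives in the DAG rather than in ad hoc glue code.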
