A brief overview:
2. Workflow steps share a running context, with access to the data they require.
3. Steps in the workflow (builders) are chained together based on a topological sort built from their predefined inputs & outputs.
4. No servers to spin up (unlike Conductor/Cadence) - the orchestrator is low-level and meant to simplify business logic.
4. Before/After listeners for each step.
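To make the model above concrete, here is a minimal sketch of how such an orchestrator could work. Everything here is illustrative, not the framework's actual API: `Step`, `run_workflow`, and the listener signatures are my assumptions. It shows steps declaring inputs/outputs, a topological sort deriving execution order, a shared context, and before/after hooks.

```python
from graphlib import TopologicalSorter

# Hypothetical sketch of the described model; names are illustrative.
class Step:
    def __init__(self, name, inputs, outputs, fn):
        self.name, self.inputs, self.outputs, self.fn = name, inputs, outputs, fn

def run_workflow(steps, context, before=None, after=None):
    # A step depends on whichever step produces each of its declared inputs.
    producer = {out: s for s in steps for out in s.outputs}
    graph = {s.name: {producer[i].name for i in s.inputs if i in producer}
             for s in steps}
    order = TopologicalSorter(graph).static_order()  # dependencies first
    by_name = {s.name: s for s in steps}
    for name in order:
        step = by_name[name]
        if before:
            before(step)
        # Each step reads its inputs from, and writes its outputs to,
        # the shared running context.
        context.update(step.fn(**{k: context[k] for k in step.inputs}))
        if after:
            after(step)
    return context
```

Usage would then look something like: `run_workflow([Step("load", [], ["raw"], lambda: {"raw": [1, 2, 3]}), Step("sum", ["raw"], ["total"], lambda raw: {"total": sum(raw)})], {})`, which yields a context containing `total = 6` regardless of the order the steps were listed in.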
Would love to hear your thoughts and feedback!
- Why would someone use this instead of Airflow/Cadence/Temporal/Databuilderframework?
- What does this look like when it's used? Most frameworks provide some kind of example project, you should too.
- Related, but more specifically, what does the `IDataStore` interface contract mean? Beyond the two functions that I have to implement, are there any considerations related to the overall performance/scalability/durability of the system? Would it make sense to use a disk-backed store, or Redis, or Postgres?
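For concreteness, here is one guess at what a two-method store contract might look like. The method names and signatures (`get`/`put`) are my assumptions, not the framework's real `IDataStore`; the point is that the choice of backing implementation is where the durability/performance trade-off would live.

```python
from abc import ABC, abstractmethod

# Hypothetical contract: the two methods and their signatures are
# guesses, not the framework's actual IDataStore interface.
class IDataStore(ABC):
    @abstractmethod
    def get(self, key): ...

    @abstractmethod
    def put(self, key, value): ...

class InMemoryStore(IDataStore):
    # Fast but non-durable: any in-flight workflow state is lost on a crash.
    # A Redis- or Postgres-backed store would implement the same two
    # methods, trading latency for durability and shared access.
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value
```

If the contract really is this small, documenting the expected consistency and durability guarantees for implementers would answer most of the question above.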
- How do I observe the system? Which workflows are running, which have failed, what the current state is, etc. Are there metrics? Logs?
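One plausible answer is to build observability on the before/after listeners from the overview. The sketch below is illustrative only (the hook names and the idea of passing a step name are my assumptions): a listener pair that logs step boundaries and records per-step durations.

```python
import logging
import time

# Illustrative observability hooks built on before/after listeners;
# nothing here is the framework's actual API.
class StepMetrics:
    def __init__(self):
        self._started = {}   # step name -> start timestamp
        self.durations = {}  # step name -> elapsed seconds

    def before(self, step_name):
        self._started[step_name] = time.monotonic()
        logging.info("step %s started", step_name)

    def after(self, step_name):
        elapsed = time.monotonic() - self._started.pop(step_name)
        self.durations[step_name] = elapsed
        logging.info("step %s finished in %.3fs", step_name, elapsed)
```

Even if the framework itself stays minimal, shipping one such reference listener (or an integration with a metrics library) would go a long way toward answering "which workflows are running and which have failed".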
All of this is based on the assumption you want people to adopt this framework. If it's just a cool side project, that's fine too, but you should probably try to set that expectation in the README.