I used to be impressed with these corporate tech blogs and their internal proprietary systems, but not so much anymore. Because code is a liability.
I would rather use off-the-shelf open source stuff with a long history of maintenance and improvement than reinvent cron/celery/airflow/whatever, because code is a liability. Somebody needs to maintain it, fix bugs, add new features. Unless I get a +1 grade promotion and salary/RSU bump, ofc.
People need to realize that code is a liability; anything that is not the business-critical stuff that earns $$$ for the company is a distraction and a resource sink.
Isn't this exactly WHY this blog post exists? They are open sourcing this software so that they don't have to maintain it all internally anymore.
They had a need that an existing "off-the-shelf open source" project didn't solve, so they created this and are now turning it into an "off-the-shelf open source" project so they can keep using it without having to maintain it entirely themselves.
How are these open source tools supposed to be created in the first place? This is the process; someone has to do it.
Absolutely this. It literally happened to me with a Netflix OSS project we were using at work. I found a bug that was biting us, opened a ticket with a PR attached with a possible fix, and got an answer after a few months: "ah yeah, we fixed this in our internal version a while ago, thanks, will merge it now".
Indeed, this is not open-source: this is public-source. They don't really open the project to external contribution; they just publish their code and continue the project as their own tool. They will have no incentive to add features that are not useful to their business even if they are useful to the community (provided via a PR, for example), because all the developers of the project are employed by the same company, and the company doesn't have any reason to review and fix code that is not part of its business.
> Indeed, this is not open-source: this is public-source. They don't really open the project to external contribution
It's open source, and they don't have to accept external contributions. Terms have a well-defined meaning, please refrain from calling open source code not open source, and not open source code, open source.
I think the contention here is more about whether it's an open project— does it have an open bugtracker, an open project management structure, clear governance, etc.
It not having those things is fine, and eventually someone may still take the source and create an open project around it. But understanding that this is a Netflix project helps calibrate people's understanding of whether the model when you find a bug is going to be "fork, fix, and run the fork indefinitely" or "fork, fix, contribution accepted, drop fork and return to upstream."
So Netflix expects the open source community to pick up the maintenance tab?
I understand how open source projects are born, but I struggle to see the novelty of this project. It's just another Java CRUD app with some questionable design choices that are only applicable to Netflix:
1. They claim it is a distributed system, but it is just a regular Java CRUD app with a SQL backend
2. A Java-like DSL with a parser and classloader (why? Just why?)
> So Netflix expects the open source community to pick up the maintenance tab?
Isn’t this the deal with all open source? They are giving something (the code and access to the project) in return for help maintaining it?
No one is being forced to do anything. It is not like there is some open source contributor somewhere now saying, “oh damn, now I have to maintain this, too?”
If people like it and find value in it, they can help contribute to the project in ways they want. Netflix gets to use those contributions, in return for letting people use their contributions. That is just how open source works.
You are making great points. It is the power of Netflix's marketing and branding that they are considered a cutting-edge tech company. In reality most Netflix Java projects are pretty mediocre enterprise Java stuff. In the last year or so they mandated Spring Boot as the development platform for all their web services.
This is exactly the same stack I have to deal with daily, and management's reasoning is that it is the lowest common denominator that works well with a 3-month contract developer delivering the Nth microservice whose sole job is to call another service.
> So Netflix expects the open source community to pick up the maintenance tab?
I think the notion of open sourcing a project is that you are literally asking the community for help, and that the community will naturally help you with the maintenance.
An alternative for a code-based workflow like Temporal is Dapr Workflows (https://docs.dapr.io/developing-applications/building-blocks...), where you write a set of code-based activities into a graph, and these can have fan-in, fan-out, sequential patterns, etc. It's all code in several supported languages, and because Dapr also has building block APIs for pub/sub, service invocation, and secrets, as well as connecting to underlying infrastructure, you can combine workflows with these APIs to build an overall solution.
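For a feel of the authoring model, here is a rough Python sketch of the fan-out/fan-in pattern using the dapr-ext-workflow SDK; the decorator and helper names below come from my reading of the Dapr docs, so treat them as assumptions rather than gospel:

```python
import dapr.ext.workflow as wf

wf_runtime = wf.WorkflowRuntime()

@wf_runtime.workflow(name="fan_out_fan_in")
def fan_out_fan_in(ctx: wf.DaprWorkflowContext, items: list):
    # Fan out: schedule one activity per work item; they run concurrently.
    tasks = [ctx.call_activity(process_item, input=i) for i in items]
    # Fan in: suspend until every branch has completed.
    results = yield wf.when_all(tasks)
    return sum(results)

@wf_runtime.activity(name="process_item")
def process_item(ctx: wf.WorkflowActivityContext, item: int) -> int:
    return item * 2  # stand-in for real business logic
```

The workflow function is a generator: each `yield` is a point where the engine can durably checkpoint progress and later resume, which is the core trick behind this style of engine.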
> So Netflix expects the open source community to pick up the maintenance tab?
In fairness, the very nature of open source is that the community is only going to pick up the maintenance tab if the value they're getting out of it is worth it.
This is an extreme point of view, that is tightly connected to the MBA-driven min-maxing of everything under the sun.
I am glad that there are folks who aren't afraid to code new systems and champion new ideas. Even in the corporate sense, mediocre risk averse solutions will only take you so far. The most profitable companies tend to be quite daring in their tech.
Code is not a liability. Code is what makes a company move its gears.
Code being a liability is not a contradiction with code being what makes a company move its gears. The trucks of a delivery service are a liability (requiring maintenance, deprecation accounting, fuel), but are also the only thing that lets the company deliver. A delivery company should own as few trucks as necessary, and no fewer. Any company should publish/run/maintain as little code as necessary, and no less.
Trucks are literally an asset - you can't do depreciation on a liability.
The only way a 'truck' could be a liability is a lease for said truck.
There are plenty of economically rational reasons why a company may own more trucks than they strictly need to manage delivery: wanting to handle seasonal bursts, wanting to ensure reliability, preparing for an expansion, being able to lease capacity to other businesses.
Actually, you can replace "truck" with "server" and you've described what made AWS make sense initially.
Assets can also be liabilities. The mortgages in a mortgage-backed security are both an asset and a liability, as was only too well demonstrated in 2008... It's an asset in the security portfolio, but until you sell the security, it's a liability for whomever is securitizing it.
In the GFC the government literally created the Troubled _Asset_ Relief Program. Those MBSs were assets and didn't magically become liabilities.
The problem was that the market value of those assets plummeted because no one expected them to generate the agreed-upon cash flows, because the underlying loans were going into correlated defaults. Despite all this, the only party that saw the mortgage as a liability was the individual whose responsibility it was to make a monthly payment on said mortgage.
Outside of swaps and other derivatives, financial instruments and other properties don't magically switch from being an asset to being a liability based on random external factors.
This conversation is like accountants talking about processes, threads, fibers and context switching... very imprecisely.
> Outside of swaps and other derivatives financial instruments and other properties don't magically switch from being an asset to being a liability based on random external factors.
I wasn't saying they switch; I'm saying they can be both an asset and a liability. Liability isn't strictly an accounting term. It also can refer to something that acts as a disadvantage. Illiquid assets whose valuation can be volatile can be a liability.
I'm not using liability in the accounting context, but in the colloquial one.
> a person or thing whose presence or behavior is likely to cause embarrassment or put one at a disadvantage.
Code is absolutely a liability. Code deteriorates as conditions change, and unchanged code also becomes more vulnerable in a way that conventional objects can't.
Liabilities are obligations of a company to pay money owed to a lender as a result of a previous transaction.
You are describing an operating expense which has an entirely different nature than a liability.
'Comes with a maintenance liability' is a handwaving statement that means practically nothing without a ton of contextual information. A true liability has a contractual set of obligations to pay defined amounts on an agreed-upon schedule. No one is going to come after you for not changing the oil on your truck; try missing payments on a lease.
> No one is going to come after you for not changing the oil on your truck
Several parties will come after you for not changing the oil on your semi-truck that is being used professionally for freight,
starting with your driver, your insurance company, and the US Department of Transportation (DOT), specifically the Federal Motor Carrier Safety Administration (FMCSA), to whom you may have to provide maintenance records. Trucking is a highly regulated industry, and after CrowdStrike, software engineering is only going to get more regulated, not less.
For a trucking company, owning and developing trucks makes sense.
But does it make sense for a trucking (streaming) company to create its own plumbing equipment? I'd rather use the Plumbers Supply Inc. gear that every other company gets from Plumber Depot, or use open-source-plumbers.com, because I am not in the plumbing business.
The margin on trucking could be so much higher than plumbing that most plumbers could never afford the R&D necessary to advance flushing tech. Big truck operates at a scale where they materially benefit from better flushing, so they take their truck dollars and pour them into their own plumbing lab. Big truck sees this as a competitive advantage that no one else is positioned in the market to unlock. They may one day enter the general plumbing space and disrupt waste management, at their option not obligation of course.
This describes Google and Amazon perfectly - while you can armchair quarterback their biz decisions they are definitely doing well for themselves.
Amazon actually steals a lot of open source and repackages it as a “managed AWS service”; they literally deployed managed Airflow as soon as it became popular.
The whole AWS re:Invent is repackaging whatever open source project is trending, hiding the control plane from the user, exposing it via the AWS control plane instead, and charging people per usage instead of per server.
Except they also provide the security, billing/invoicing, IaC, support, provisioning, scaling; the list goes on.
As for pay-per-server vs. pay-per-usage: heck, Amazon actually bills the team who caused the cost, and gives finance a report on how much each team is spending and on what. Good luck doing that on prem.
The question is how much they give back to the open source community after making boatloads of $$$ off of open source contributions, and whether their model is sustainable and healthy for the FOSS movement.
But thinking of those trucks primarily as a liability is exactly the kind of mindset that leads to companies minimizing their liabilities instead of maximizing their potential.
Especially when the cost of minimizing (long hours, unsafe conditions) is not felt by decision makers, and may not materialize for a while, but the benefits of maximizing their potential is felt directly and immediately.
Incentives are everything. That's why managers are so careful when applying them to their own jobs.
Using open-source is a liability too, with added problems of code licensing conflicts, supply chain attacks, zero-day vulnerabilities, relying on maintainers that don’t work for you, etc.
Not open source is a liability too, with added problems of code licensing conflicts, supply chain attacks, zero-day vulnerabilities, relying on maintainers that don't work for you, etc... ;-)
Off-the-shelf open source stuff is often the product of big companies open sourcing internal tools though. Airflow, which you name check, is a great example of this. Temporal is another example in the space. Someone has to be dumb enough to build new stuff
Airflow and Temporal have teams dedicated to maintaining and extending their systems. And these systems are business critical for Astronomer/Temporal, respectively.
And they develop them in a way that works for many customers and use cases, not just Netflix.
But for Netflix this is just another auxiliary system, out of many others. It's basically just a nice GUI to schedule cron jobs; does it make sense to sink resources into a custom cron?
To be fair, I doubt Maestro will take off like Airflow did.
Airflow filled a void for an easier orchestrator for Big Data with a prettier UI than the competitors of the time (Oozie, Luigi), implementing some UX patterns that had been tested at scale at Facebook with Dataswarm.
Seems like you have some experience with the orchestrator offerings. Airflow still the way to go, or would you recommend something else for someone just starting down the path of selecting and implementing a data orchestrator?
I haven't used Airflow for years but it used to be quite clunky, not sure how much it's improved since. I'd look into Prefect and/or Dagster first, both are more modern alternatives built with Airflow's shortcomings in mind.
> with long history of maintenance and improvement,
That is a huge load bearing statement.
Do you plan on any contributions back to the community yourself?
Build vs. buy is always an important conversation, but claiming that the 'buy'-side path has zero maintenance and reliability costs reeks of naivety.
If I needed container orchestration I would use k8s. I can improve it, propose patches/bugs, or chip into an open source maintainers fund. I won't write my own orchestrator, especially being in a streaming business.
That's what I meant. It's not even necessarily build-vs-buy, but rather use-open-source-and-contribute vs. reinvent-the-wheel-for-the-L6-promo-and-then-open-source-it?
Would the world be better with 10 workflow orchestrator systems, or one mature one?
Netflix is building a workflow orchestrator not a container orchestrator. The viable alternative would be Airflow or maybe something like Temporal. K8s alone isn't going to meet the need in this case.
Does the world need another workflow orchestrator? Who knows - some folks at Netflix seem willing to pay a handful of engineers $ to do so. Good luck to them
> anything that is not the business critical stuff
That's an important qualifier. For skilled teams in performance-critical domains, the inflection point where any outside code becomes a low-quality/low-control liability is not that far.
100%. These systems are very rarely built as robustly as by external folks who earn a profit on building robustness, the best example of course being Stripe. But I see this in everything from visual snapshot testing tools to custom CI workflows. The good thing is you can always rely on competitive market dynamics to price the off-the-shelf solution down to a reasonable margin above maintenance costs.
> open source stuff with long history of maintenance and improvement
Improvement and maintenance are contingent on usage, and having been used at Netflix, this project is in a better position to have already faced whatever bug you are worried about (and let's be real, 99% of applications won't ever get the luck to exercise code paths sophisticated enough to find bugs Netflix has not found already).
You might be unnecessarily projecting here. You don't have evidence to support that open sourcing this might have been for any other reason than it is simply good for the community to have.
This is a naive view; other people's code is even more of a liability. Look at CrowdStrike and the open source infiltrations. Using open source software doesn't magically grant you security or stability.
Code that you own and intimately understand is less of a liability than some 3rd party dependency (paid or free). Stitching together a patchwork of dependencies is not likely the optimal result. The more aligned your codebase is with the problem you're trying to solve the better, and if functionality is core to your business better to own than borrow or rent.
I very much disagree with this take-- and the more I've experienced throughout my career the more I'm sure of it.
Companies spend an IMMENSE amount of time and effort adapting sometimes subpar off the shelf solutions to fit their infra and pay an ongoing tax w/ increasing tech debt trying to support them. Often something bespoke and smaller + more tailored would unlock significantly more productivity if the investment is made consciously.
Any code that is written has both assets and liabilities. But to claim it is a distraction and resource sink is a very, very bad take. Every decision to build something in-house needs to be done thoughtfully and deliberately.
3rd parties are also a liability. Pick your poison. Trust in unknown individuals, trust in megacorps, or trust your own people. Choosing wisely is why people get paid the big bucks.
I wonder how many iterations we will need before engineers are happy with a workflow solution. Netflix had multiple solutions before Maestro, such as Metaflow. Uber built multiple solutions too. Amazon had at least a dozen internal workflow engines. It's quite curious why engineers are so keen on building their own workflow engines.
Update: I just find it really interesting that many individuals in many companies like to build workflow engines. This is not a comment deriding anyone, or Netflix in particular. To me, such an observation is worth some friendly chitchat.
The issue is that "workflow orchestration" is a broad problem space. Companies need to address a lot of disparate issues and so any solution ends up being a giant product w/ a lot of associated functionality and heavily opinionated as it grows into a big monolith. This is why almost universally folks are never happy.
In reality there are five main concerns:
1. Resource scheduling-- "I have a job or collection of jobs to run... allocate them to the machines I have"
2. Dependency solving-- If my jobs have dependencies on each other, perform the topological sort so I can dispatch things to my resource scheduler
3. API/DSL for creating jobs and workflows. I want to define a DAG... sometimes static, sometimes on the fly.
4. Cron-like functionality. I want to be able to run things on a schedule or ad-hoc.
5. Domain awareness-- If doing ETL I want my DAGs to be data aware... if doing ML/AI workflows then I want to be able to surface info about what I'm actually doing with them
No one solution does all these things cleanly. So companies end up building or hacking around off the shelf stuff to deal with the downsides of existing solutions. Hence it's a perpetual cycle of everyone being unhappy.
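To make concern #2 concrete, here is a minimal dependency-solving sketch in Python using the standard library's graphlib; the task names and the dispatch step are hypothetical placeholders:

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical DAG: b depends on a; d depends on a, b, and c.
graph = {"b": {"a"}, "d": {"a", "b", "c"}}

ts = TopologicalSorter(graph)
ts.prepare()
while ts.is_active():
    for task in ts.get_ready():        # everything returned here can run in parallel
        print(f"dispatching {task}")   # hand off to the resource scheduler (concern #1)
        ts.done(task)                  # mark finished so dependents become ready
```

The hard part isn't the sort itself; it's wiring this loop to real resource scheduling, failure handling, and domain awareness, which is where every orchestrator grows its own opinions.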
I don't think that you can just spin up a startup to deliver this as a "solution". This needs to be solved with an open source ecosystem of good pluggable modular components.
The issue indeed is that "workflow orchestration" is a broad problem space. I would argue that the solution is not this:
> I don't think that you can just spin up a startup to deliver this as a "solution". This needs to be solved with an open source ecosystem of good pluggable modular components.
But rather more specialized tools that solve specific issues.
What you describe just sounds like a better implemented version of Airflow or the over 100 other systems that are actively trying to be this today (Flyte, Dagster, Prefect, Argo Workflows, Kubeflow, Nifi, Oozie, Conductor, Cadence, Temporal, Step Functions, Logic Apps, your CI system of choice has their own, need I continue, that is not even scratching the surface). Most of those have some sort of "plugin" ecosystem for custom code, in varying degrees of robustness.
For what it is worth, everyone and their mom thinks they can make and wants to be this orchestrator. It's a problem that is just so generic and such a wide net that you end up with annoying-to-use building blocks because everyone wants to architecture astronaut themselves into being the generic workflow orchestration engine. The ultimate system design trap: Something so fundamentally easy to grok and conceptualize that you can PoC one in hours or days, but near infinite possibilities of what you can do with it, resulting in near infinite edge cases.
Instead, I'd rather companies just focus on the problem space that it lends itself to. Instead of Dagster saying "Automate any workflow" and try to capture that space, just make building blocks for data engineering workflows and get really good at that. Instead of Github Actions being a generic "workflow engine" just have it really good at making CI workflow building blocks.
But we can't have it that way. Because then some architecture astronaut will come around and design a generic workflow engine for orchestrating your domain specific workflow engines and say that you no longer need those.
Actually I think I just convinced myself that what you are suggesting actually IS the right way. If companies just said "we will provide an Airflow plugin" instead of building their own damn Airflow this would be easy. But we won't ever have that either. What we really need is some standards around that. Like if CNCF got together and got tired of this and said "This is THE canonical and supported engine for Kube workflows, bring your plugins here if you want us to pump you up". That might work. They've usually had better luck with putting people in lockstep in the Kube ecosystem at least than Apache has historically for more general FOSS stuff. Probably because the problem space there is more limited.
We rolled our own workflow engine and it almost crashed one of our unrelated projects for having so many bugs and being so inflexible.
I’m starting to think workflow engines are somewhat of a design smell.
It’s enticing to think you can build this reusable thing once and use it for a ton of different workflows, but besides requiring more than one asynchronous step, these workflows have almost nothing in common.
Different data, different APIs, different feedback required from users or other systems to continue.
Probably so, but the real design smell seems to be thinking of a workflow engine as a panacea for sustainable business process automation.
You have to really understand the business flow before you automate it. You have to continuously update your understanding of it as it changes. You have to refactor it into sub-flows or bigger/smaller units of work. You have to have tests, tracer-bullets, and well-defined user-stories that the flows represent.
Else your business flow automation accumulates process debt. Just as much as a full-code-based solution accumulates technical debt.
And, just like technical debt, it's much easier (or at least more interesting) to propose a rewrite or framework change than it is to propose an investment in refactoring, testing, and gradual migrations.
It’s likely because we haven’t yet found a workflow engine/orchestrator that’s capable of handling diverse tasks while still being easy to understand and operate.
It’s really easy to build a custom workflow engine and optimize it for specific use cases. I think we haven’t yet seen a convergence simply because this tool hasn’t yet been built.
Consider the recent rise of tools that quickly dominated their fields: Terraform (IaC), Kubernetes (distributed compute). Both systems are hella complex, but they solve hard problems. Generic workflow engines are complex to understand, difficult to operate, and offer a middling experience, so many folks don’t even bother.
It inherently asks for a custom implementation because workflows are almost just how you'd have to code and run everything anyway. Conceptually: why wouldn't we want to reconnect to any work we are currently in the middle of, just like in a video game where, if we lose connection for a split second, we want to be able to keep going where we left off? So we must save the current step persistently and make sure that we can resume work and never lose it.

Workflow engines also do no magic: they still just run code, and if it fails in a place that we didn't manually checkpoint (i.e., by making it into a separate task/workflow/function/action/transaction that is persistable), then we still lose data. So at that point, why not just try doing it this way everywhere, whether it's running in a "workflow engine" or not? Before "workflow engines" we already had DB transactions, but those were mostly for our benefit, so we don't mess up the DB with partial inserts.

Although, so far, what I've seen in open source workflow engines is that they don't let you work with user input easily; it's sad how they all start a new thread and then just block it while waiting for the user to send something. This is obviously not how you'd code a CRUD operation. In my opinion this is a huge drawback of current workflow engines. If this were solved, we should literally do everything as a workflow, I think. Every form submission from the user could offer to let the user continue where they left off, since we saved all their data so "they can reconnect to their game" (to revive the video game metaphor I started with).
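A minimal sketch of that "save the current step persistently" idea: a hypothetical decorator that checkpoints each step's result to a JSON file, so a re-run resumes past completed steps instead of redoing them (assuming results are JSON-serializable; the file name and step names are made up):

```python
import json
import os

STATE_FILE = "workflow_state.json"  # hypothetical checkpoint store

def load_state():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {}

def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def step(name):
    """Run a function once; on re-runs, return the persisted result instead."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            state = load_state()
            if name in state:           # already completed: resume past it
                return state[name]
            result = fn(*args, **kwargs)
            state[name] = result        # checkpoint before moving on
            save_state(state)
            return result
        return wrapper
    return decorator

@step("fetch")
def fetch():
    return {"rows": [1, 2, 3]}

@step("transform")
def transform(data):
    return [r * 2 for r in data["rows"]]

if __name__ == "__main__":
    data = fetch()  # on a re-run, this returns the checkpoint, not fresh work
    print(transform(data))
```

Real engines add distributed state, retries, and versioning on top, but the resume-from-checkpoint core is this simple, which is partly why so many teams roll their own.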
I wrote my own because I wanted to learn about DAG and toposort and had some ideas about what nodes and edges in the workflow meant (IE, does data flow over edges? Or do the edges just represent the sequence in which things run? Is a node a bundle of code, does it run continuously, or run then exit?). I almost ended up with reflow, which is a functional-programming approach based on python, similar to nextflow, but I found that the whole functional approach to be extremely challenging to reason about and debug.
Oftentimes what happens is that the workflow engine is tailored to a specific problem, and then other teams discover the engine and want to use it for their projects, but they often need some additional feature, which sometimes completely up-ends the mental model of the engine itself.
We all have different use-cases. We also have a workflow engine at work but that's because we wanted immediate execution. From submit to execute time can be 100 ms on our system, which makes it also work well for short jobs. Usually, the task coordinator overhead is greater than that on these things.
These things tend to be fairly complex and require lots of integration with various services to get working. I think it's a little more organic to start building something simple and end up progressively adding more than implementing one from scratch (unless there are people around with experience)
Just one of the questions I have regarding this -- China has nearly 1.4 billion people, and barely any of them use any of the services here. Instead, they have their own video platforms. And you tell me that none of those platforms uses at least the same amount of traffic as Prime Video? I doubt it.
I found the report the statistic is from [0]. But note that it says "by app," so I don't think it's actually all traffic, just the top apps. Their reported source is data from 300m customers in different regions.
> When Pornhub and other porn sites can deliver orders of magnitude more data across the world with much simpler systems, you know it's all bullshit.
That's nothing. My dedicated server delivers two orders of magnitude greater traffic than Pornhub (and everything in the Mindgeek network really). And I don't even need the cloud. Just better engineering.
Founder of https://windmill.dev here, which shares many similarities with Maestro.
> Maestro is a general-purpose, horizontally scalable workflow orchestrator designed to manage large-scale workflows such as data pipelines and machine learning model training pipelines. It oversees the entire lifecycle of a workflow, from start to finish, including retries, queuing, task distribution to compute engines, etc.. Users can package their business logic in various formats such as Docker images, notebooks, bash script, SQL, Python, and more. Unlike traditional workflow orchestrators that only support Directed Acyclic Graphs (DAGs), Maestro supports both acyclic and cyclic workflows and also includes multiple reusable patterns, including foreach loops, subworkflow, and conditional branch, etc.
You could replace Maestro with Windmill here and it would be precisely correct. Their rollup is what we call the openflow state.
Main differences I see:
- Windmill is written in Rust instead of Java.
- Maestro relies on CockroachDB for state; we use PostgreSQL for everything (state but also the queue). I can see why they would use CockroachDB; we had to roll out our own sharding algorithms to make Windmill horizontally scale on our very large-scale customer instances.
- Maestro is Apache 2.0, vs. Windmill's AGPL, which is less friendly.
- It's backed by Netflix, so infinite money; we are profitable, but a much smaller company.
- Maestro doesn't have extensive docs about self-hosting on k8s or docker-compose, and either there is no UI to build stuff or the UI is not yet well surfaced in their documentation.
But overall, pretty cool stuff to open-source, will keep an eye on it and benchmark it asap
Why do I need to "sync" with windmill? Why is there an IDE built into windmill? Why is this so convoluted? It's like it's starting with the goal of lock-in before even developing a good product or finding market fit.
Thanks for the great comparison! While Maestro is Apache-licensed, if it depends on CockroachDB, Cockroach itself isn't even open source, so that isn't great. I would rather have an AGPL codebase than a non-open-source dependency. Of course, over time someone could add alternative DB support.
I'm a bit confused about what is going on here: This project appears to use Netflix/conductor [0]. But when you go to that repo, you see it has been archived, with a message saying it is replaced by Netflix's internal non-OSS version, and by unmentioned community forks – by which I assume they mean Orkes Conductor [1]. But this isn't using Orkes Conductor; it looks like it is using the discontinued Netflix version `com.netflix.conductor:conductor-core:2.31.5` [2] – and an outdated version of it too.
My impression of the code base is that it needed a lot of work to run in a non-Netflix environment, which is part of why the project I was working on ended up abandoning Conductor – we were going to embed Conductor in our product as a workflow engine, and we ended up building our own workflow engine from scratch instead. Another team did end up using it for some internal use cases, but scalability/reliability/etc. are less of a concern for internal use cases as opposed to customer-facing ones.
And then Netflix abandons it – and then they open source something else which depends on an old version of it – well, I'm happy they open source anything, but it fits with my earlier impression – throwing stuff over the fence which can be a struggle to adopt in an outside environment. Still, throwing it over the fence is better than not releasing it at all.
I can tell you, coming from Netflix, that we have not abandoned Conductor. In fact, we have more than 5x the usage of Conductor across the company in the last 6 months alone. The news link above is a bit misleading. We archived the project, and it's now managed at the conductor-oss/conductor project, given that there are a few thousand companies using it and a few companies now managing it, and it has graduated to a new foundation, just like many projects before it at Netflix (Iceberg, for example).
Anyone here use ActiveBatch? To me it is the best software of its kind, and I wish it had an equivalent for non-enterprise users. I have tried and tried to use other "competitors", but nothing matches ActiveBatch's simplicity: just attach a simple MS SQL DB, install the Windows GUI and execution agent, and click, click, click, and now you have a robust GUI-based automation environment where you don't have to use code... or, if you want, go ahead and use code in any language, but you don't have to.
Airflow may be robust but it is hidden behind a complexity fence that prevents most from seeing whatever its true capability may be. The same goes for other "open source" competitors.
Why can't someone just develop a robust DB backed GUI first system?
I have tried online services as well, they pale in comparison. I guess the cost of maintaining extensions is what kills simpler paid offerings?
It's a complete shame that ActiveBatch is walled off behind a stupid enterprise sales model. This has prevented this wonderful piece of software from being picked up by the wider community. It's like a hidden secret. :/
Advice: don’t rely on any tool open-sourced by Netflix. They have a long history of dropping support for things after they’ve announced them. Someone got a checkmark on their promotion packet by getting this blog post and code sharing out the door, but don’t build your business on a solution like this.
The issue we hit with Temporal - again and again - is that it's very under-documented, and it's something you install at the core of your business, yet it's really hard to understand what is going on, through all the layers and through the very obtuse documentation.
Maestro has... no documentation? OK Temporal wins by default.
Isn’t Maestro an alternative to Airflow, not Temporal? Temporal isn’t a workflow orchestrator. There’s some overlap in the internals, but they’re different designs for different use cases.
This is a really great-looking project. I know I've considered building (a probably worse) version of exactly this on almost every mixed ML + Data Engineering project I've ever worked on.
I'm building something in the space (Orchestra), so here's my take:
Folks making stuff open source and building in the open is obviously brilliant, but when it comes to "orchestrators" (as this is, and identifies as), so much has come before (Airflow and so on) that it's quite hard to see how this actually adds anything to the space other than another option nobody is ever going to use in a commercial setting.
Is this meaningfully different from Conductor (which they archived a while back)? Browsing through the code I see quite a few similarities. Plus the use of JSON as the workflow definition language.
Interesting. My team recently built a thing for managing long running, multi-machine, restartable, cascading batch jobs in an unrelated vehicle. Had no idea it was a category.
The name Maestro has already been used for a workflow orchestrator which I worked on back in 2016. That maestro is SQL-centric and infers dependencies automatically by simply examining the SQL. It's written in Go and is BigQuery-specific (but could be easily adjusted to use any SQL-based system).
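As a toy illustration of that automatic dependency inference (emphatically not that project's actual parser, and far too naive for real SQL), a regex pass over the statements is enough to sketch the idea:

```python
import re

# Toy heuristic: tables a statement reads are whatever follows FROM/JOIN;
# the table it writes is whatever follows CREATE TABLE / INSERT INTO.
READS = re.compile(r"\b(?:FROM|JOIN)\s+([\w.]+)", re.IGNORECASE)
WRITES = re.compile(r"\b(?:CREATE\s+TABLE|INSERT\s+INTO)\s+([\w.]+)", re.IGNORECASE)

def infer_edges(statements):
    """Map each statement's output table to the tables it depends on."""
    deps = {}
    for sql in statements:
        target = WRITES.search(sql)
        if target:
            deps[target.group(1)] = set(READS.findall(sql))
    return deps

pipeline = [
    "CREATE TABLE daily_agg AS SELECT * FROM raw_events",
    "INSERT INTO report SELECT * FROM daily_agg JOIN dims ON dims.id = daily_agg.id",
]
# -> {'daily_agg': {'raw_events'}, 'report': {'daily_agg', 'dims'}}
print(infer_edges(pipeline))
```

A production version would use a real SQL parser, but even this crude pass shows why SQL-centric orchestrators can skip hand-written dependency declarations entirely.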
Well, if you're so unimaginative as to call your cloud platform "<companyname> cloud platform", it's not the fault of the second company whose name also starts with a G.
Hello fellow ex-employee of that bank. I was in a segment governed by PCI, and they wouldn't even let us touch Gaia for fear of the whole thing being declared in scope.
They did use Temporal at Netflix; they gave a couple of presentations 2 years ago. I think this is very much not Temporal because it relies on a DSL instead of workflow-as-code.
I don't know if it's a scale thing; I'm not a workflow expert, but this seems more in line with the map-reduce of yore, as in you get some big fat steps and you coordinate them, although you could have coarse-grained activities in Temporal workflows.
I'd be curious to see what the tradeoffs are between the two and whether they still have uses for Temporal. Maybe Maestro is better for less technical people? Latency? Scale?
Again, another misleading comment here, given the freedom and responsibility culture at Netflix. There is one team/person who has been using and promoting Temporal usage at all the Temporal conferences. Conductor is the main orchestration product at Netflix and will continue to be the case. We have increased the usage 5x in the last 6 months alone.
I'm sure this is very nice, but the article reads as if written by AI. The first thing I'd want to see is an example workflow (both code and configuration) in a realistic use case. Instead, there's a lot of "powerful and flexible" language, but the example workflow doesn't come until halfway down, and then it's just foobar.
Slightly off topic, but there is a dire need for a scientific "workflow manager" built to FAANG engineering standards and attuned to the needs of academia (i.e., primarily designed to facilitate execution of DAGs on clusters). The Airflows of the world have complex, unnecessary features and require extensive kitbashing to plug into Slurm, and the academic side of things is a huge mess. Snakemake comes the closest but suffers from massive feature creep, a bizarre specification DSL (a superset of Python), and blurred resource-requirement abstraction boundaries.
Academia would be better off learning k8s and one of the k8s-native workflow orchestrators. That is as close to FAANG-grade and open source as they can get, and arguably a bit better than this repo.
I considered Nextflow before begrudgingly settling on snakemake for my current project. Didn't record why... possibly because snakemake was already a known quantity and I was under time pressure or because I felt the task DAG would be difficult to specify in WDL. It's certainly the most mature of the bunch.
Nobody wants to write or debug Groovy, especially scientists who are used to Python. It also causes havoc on a busy Slurm scheduler with its lack of array jobs (I heard this is being fixed soon).
Anyone have a recommendation for a workflow orchestrator for single server deployments? Looking at running a project at home and for certain pieces think it would be easiest to orchestrate with a tool like Maestro or Airflow but they’re basically set up to run in clusters with admins to manage them.
Windmill is pretty lightweight and easy to deploy. https://www.windmill.dev/ you can configure it to have a single worker on the same server as the ui and database.
It says one of the big differentiators from 'traditional workflow orchestrators' is that it supports cyclic graphs. But BPMN (and the orchestrators using it) also supports loops.
What's the difference between this and enqueuing work into a queue, then waiting for a job to pick it up at a scheduled time? I'm not saying build a Kafka cluster to serve this, but most cloud providers have queuing tools.
Putting work in a queue is only the start. Most organizations start there and gradually write ad hoc logic as they discover problems like dependencies, retries, & scheduling.
Dependencies: what can be done in parallel and what must be done in sequence? For example, three tasks get pushed in the queue and only after all three finish a fourth task must be run.
Retries: The concept is simple; the details are killer. For example, if a task fails, how long should the delay between retries be? Too short and you create a retry storm. Forget to add some jitter and you get thundering herds all retrying at the same time.
Scheduling: Because cron is good enough, until it isn't.
A good workflow solution provides battle tested versions of all of the above. Better yet, a great workflow solution makes it easier to keep business logic separate from plumbing so that it's easier to reason about and test.
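For a taste of why those retry details are killer, here is a minimal Python sketch of exponential backoff with full jitter; the function name, parameters, and defaults are illustrative, not from any particular library:

```python
import random
import time

def retry_with_backoff(task, max_attempts=5, base_delay=0.5, cap=30.0):
    """Run task(), retrying on failure with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return task()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            # Full jitter: sleep a random amount up to the exponential cap,
            # so a fleet of retrying workers doesn't stampede in lockstep.
            time.sleep(random.uniform(0, min(cap, base_delay * 2 ** attempt)))
```

And this is still the easy version: a real engine also has to persist the attempt count so retries survive a worker crash, which is exactly the ad hoc logic organizations end up writing around their queues.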
I had the same question and asked Claude Sonnet; see its answer below.
In the context of a workflow engine like Netflix Conductor, a workflow refers to a structured sequence of tasks or activities that need to be executed to complete a specific business process or achieve a particular goal. [...]
To give you a concrete example, imagine an e-commerce platform's order processing workflow:
1. Validate order
2. Check inventory
3. Process payment
4. If payment successful:
a. Reserve inventory
b. Initiate shipping
c. Send confirmation email
5. If payment fails:
a. Cancel order
b. Notify customer
In this workflow, each step could be a separate microservice or function. The workflow engine would orchestrate the execution of these steps, handling the flow of data between them, managing any errors or retries, and ensuring the entire process completes successfully.
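To ground that quoted example, here is a hypothetical in-process sketch of the same order flow in Python; every function here is a made-up stand-in for what would be a separate microservice call in a real deployment:

```python
# Hypothetical stubs; in a real system each would be a service call that
# the workflow engine checkpoints and retries independently.
def validate_order(order):    return bool(order.get("items"))
def check_inventory(order):   return True
def process_payment(order):   return order.get("card_ok", False)
def reserve_inventory(order): print("inventory reserved")
def initiate_shipping(order): print("shipping initiated")
def send_confirmation(order): print("confirmation email sent")
def cancel_order(order):      print("order cancelled")
def notify_customer(order):   print("customer notified of failure")

def order_workflow(order):
    """Orchestrate the steps; the engine's job is the glue, not the logic."""
    if not (validate_order(order) and check_inventory(order)):
        return cancel_order(order)
    if process_payment(order):
        for step in (reserve_inventory, initiate_shipping, send_confirmation):
            step(order)  # a real engine would persist progress between steps
    else:
        cancel_order(order)
        notify_customer(order)

order_workflow({"items": ["sku-1"], "card_ok": True})
```

The value an engine adds over this plain function is durability: if the process dies after payment but before shipping, the workflow resumes at the shipping step instead of charging the customer twice.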