It's as if a lot of ML framework authors believe that most users are researchers... in reality, data is rarely clean, rarely in the right format, and usually needs to be intermingled and transformed with other data before it can be useful.
To avoid pipeline jungles, teams need to agree on certain APIs that their data processing code will follow. For example, scikit-learn helped many people standardize around fit/predict/transform for their machine learning algorithms. In the future, I expect we'll see this expand to other parts of the process, such as feature engineering.
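To make the fit/transform convention concrete, here's a minimal sketch of a transformer that follows it. `StandardizingScaler` is a hypothetical class written from scratch to illustrate the contract (learn parameters in `fit`, apply them in `transform`, return `self` so calls chain); it is not scikit-learn's actual implementation.

```python
class StandardizingScaler:
    """Learns per-column means/stds in fit(), applies them in transform().

    Hypothetical example of the fit/transform convention, not scikit-learn code.
    """

    def fit(self, X):
        n = len(X)
        cols = list(zip(*X))  # column-major view of the data
        self.means_ = [sum(c) / n for c in cols]
        # Guard against zero std to avoid division by zero on constant columns.
        self.stds_ = [
            (sum((v - m) ** 2 for v in c) / n) ** 0.5 or 1.0
            for c, m in zip(cols, self.means_)
        ]
        return self  # returning self enables chaining, as in scikit-learn

    def transform(self, X):
        return [
            [(v - m) / s for v, m, s in zip(row, self.means_, self.stds_)]
            for row in X
        ]

    def fit_transform(self, X):
        return self.fit(X).transform(X)


scaler = StandardizingScaler()
out = scaler.fit_transform([[1.0, 10.0], [3.0, 10.0]])
# Each column is now centered and scaled: [[-1.0, 0.0], [1.0, 0.0]]
```

Because every transformer exposes the same three methods, any component built this way can be dropped into a shared pipeline without glue code, which is exactly what keeps the jungle from growing.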
Towards that goal, I work on Featuretools, an open source library trying to do this for feature engineering. You can check it out here: https://github.com/FeatureLabs/featuretools/
The thing about an ML system is that it is intended to turn big mounds of data into predictions or classifications without a human having to directly consider the multitude of questions otherwise addressed in large-scale software design. That is, a multitude of boundaries and criteria are replaced by a single criterion: "it works". The problem is that this set of boundaries and criteria still exists, even if the individual setting up the system considers the situation solved. This manifests both as the world changing over time and as other people not being as satisfied with the results of the system as those who created it, and those are just two potential gotchas.