Well, how short the feedback loop needs to be is something that can be controlled by the process, and the team sets the process.
I can have the normal flow, which takes minutes and runs all the tests and other stuff I might have on my pipeline (security, performance, etc.). I use this for typical day-to-day work on features. I don't care if the deploy takes 50ms or 5 minutes.
I can have a fast-track for critical production patches. I skip all the main CI steps and just get my code into production quickly. Done right, this takes seconds.
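Roughly, the dispatcher looks like this (a minimal sketch in Python; the step names and the hotfix flag are invented for illustration):

    # Toy pipeline dispatcher: picks CI steps per change type.
    # All step names here are hypothetical.
    FULL_PIPELINE = ["lint", "unit_tests", "integration_tests",
                     "security_scan", "perf_tests", "deploy"]
    FAST_TRACK = ["smoke_test", "deploy"]  # seconds, not minutes

    def select_steps(is_hotfix: bool) -> list:
        """Critical production patches skip the heavy CI stages."""
        return FAST_TRACK if is_hotfix else FULL_PIPELINE

    print(select_steps(is_hotfix=False))  # normal day-to-day flow
    print(select_steps(is_hotfix=True))   # emergency patch path

The point isn't the code, it's that the two tracks are an explicit, team-controlled policy rather than a property of the platform.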
I didn't know about Dark, but I've seen this type of promise too many times, so I'm pretty sure there are many tradeoffs hidden under the nice shiny exterior. I can't know until I try it, but that kind of complexity doesn't just disappear; it always has a cost, even if it's out of sight.
Here's the thing: we have hundreds of thousands of teams and organizations who currently each need to select and define those processes.
Perhaps some of those teams and individuals are experts at determining the correct, minimal continuous integration suite to run on a per-commit basis to minimize time and energy expenditure without compromising correctness.
But I can guarantee that not all (in fact, not many) are, and that they pay maintenance and mental overheads to adhere to those practices.
It feels to me like there is a potentially massive opportunity to design an integrated language-and-infrastructure environment that only re-runs the checks each code diff actually invalidates; a rough sketch follows the examples below.
- "Altered a single-line docstring in a Python file? OK, the data-flow from that infers a rebuild of one of our documentation pages is required, let's do that... done in 2ms"
- "Refactored the variable names and formatting within a C++ class without affecting any of the logic or ABI? OK, binary compatibility and test compatibility verified as unchanged, that's a no-op... done in 0ms"
- "Renamed the API spec so that the 'Referer' header is renamed 'Referrer'? OK, that's going to invalidate approximately a million downstream server and client implementations, would you like to proceed?"
(examples are arbitrary and should imply no specific limitations or characteristics of languages or protocols)
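To make the idea concrete, here's a toy sketch in Python. The dependency table and check names are entirely invented; a real system would derive them from language-level dataflow analysis rather than a hand-written map:

    import hashlib

    # artifact -> checks/builds that consume it (reverse dependencies)
    REVERSE_DEPS = {
        "docs/intro.py": ["build_docs"],
        "src/core.cc": ["abi_check", "unit_tests"],
        "api/spec.yaml": ["client_codegen", "server_codegen"],
    }

    def digest(content):
        return hashlib.sha256(content).hexdigest()

    def checks_to_rerun(old, new):
        """Collect only the checks whose inputs actually changed."""
        dirty = set()
        for path, content in new.items():
            if digest(content) != digest(old.get(path, b"")):
                dirty.update(REVERSE_DEPS.get(path, []))
        return sorted(dirty)

    # A docstring edit only invalidates the docs build:
    before = {"docs/intro.py": b'"""Old docstring."""'}
    after = {"docs/intro.py": b'"""New docstring."""'}
    print(checks_to_rerun(before, after))  # -> ['build_docs']

Note that hashing raw bytes wouldn't make the formatting-only C++ refactor above a no-op; for that you'd fingerprint a semantic form (a post-parse AST, or the ABI itself) instead of the file contents.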
Doing this effectively would require fairly tight coupling between the syntax of the language, the ability to analyze dataflows relating to variables and function calls, cross-package dependency management, and perhaps other factors.
Those properties can be achieved during design of a programming language, or they can iteratively be retrofitted into existing languages (with varying levels of difficulty).
Bazel[1] attempts to achieve much of this, although to my understanding it offloads a lot of the work of determining (re)build requirements onto the developer - perhaps a necessary migration phase until more languages and environments provide formats that have self-evident out-of-date status and dependency graphs.
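For example, in a hypothetical BUILD file (target and file names are arbitrary), the developer declares the dependency edges by hand, and Bazel uses them to decide what needs rebuilding:

    load("@rules_python//python:defs.bzl", "py_library", "py_test", "py_binary")

    py_library(
        name = "parser",
        srcs = ["parser.py"],
    )

    py_test(
        name = "parser_test",
        srcs = ["parser_test.py"],
        deps = [":parser"],  # an edit to parser.py re-runs this test...
    )

    py_binary(
        name = "docs_gen",
        srcs = ["docs_gen.py"],  # ...but leaves this target untouched
    )

In the ideal world sketched above, that graph would fall out of the language itself instead of being maintained by hand.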
We'll get there, and the (sometimes uneasy) jokes will be about how often the software industry used to re-run huge continuous integration pipelines.
Honestly, diversity is a good thing. It's good that different people are trying to solve problems at the heart of software in different ways, as it creates new perspectives and possibilities. Now, after reading these replies, I'm really curious about stuff like darklang and Bazel.
What happens is that people don't understand that most of these higher-level solutions are very leaky abstractions; they aren't the silver bullet marketed in Medium articles. Yes, they can save you a lot of time and headaches in specific scenarios, but when you encounter one of the leaks it can take you weeks to get to the bottom of it.
If a team doesn't understand what problems the tool is solving, and whether they actually have those problems, then they might just be cargo-culting. An example of this is Kubernetes. I know teams that used Kubernetes just because everybody else was using it; they didn't actually need it for their monolithic Java Spring app. They think they avoided accidental complexity by using Kubernetes, but in fact they just added accidental complexity to their project. And then they add a new tool that makes it easier to manage the complexity of Kubernetes, and so on.
Anyway, I'm probably just a rambling fool, and I should appreciate that all this generating and shifting around of accidental complexity will actually mean future job security for guys like us.