I don't quite understand why people don't keep it simple. Use short lived branches, and merge to master. Need to do a release? Master is your release. Tag the release when it's ready. Need to make a hot fix onto the current deployed version? Create a release branch from the tag, and then create a new tag and merge back to master.
This combined with semver gets you far. I've yet to find any downsides with this approach.
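In git terms, the whole thing is a handful of commands. A sketch in a throwaway repo (version numbers, branch names, and messages are made up for illustration):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git checkout -q -b master
git config user.email "dev@example.com"
git config user.name "Dev"

git commit -q --allow-empty -m "feature work merged from short-lived branches"
git tag -a v1.0.0 -m "release 1.0.0"      # master is the release; tag it

# Hotfix against the currently deployed version:
git checkout -q -b hotfix/1.0.1 v1.0.0    # release branch from the tag
git commit -q --allow-empty -m "critical fix"
git tag -a v1.0.1 -m "hotfix 1.0.1"       # new tag for the fixed release
git checkout -q master
git merge -q --no-ff hotfix/1.0.1 -m "merge hotfix back to master"
```

The hotfix branch only lives as long as the fix does; after the merge back, master is again the only long-lived branch.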
This works only if:

- There is only one released version in flight at any time.
- You don't need to do a lot of validation before release (unlike, say, releasing on physical media, needing certification, or shooting the program into space where you can't patch it).

Basically, it works for web software where you want (or can have) continuous deployment.
For any scenario that doesn't do "continuous deployment", I think having multiple release/maintenance branches is inevitable. In that case, master is the vNext development branch, and each major version will have a branch for stabilization/validation point releases.
Additional, practical issues that teams have to overcome, regardless of "continuous" delivery, are:
* what is being tested by QA/Product Owner/clients etc right now? What do we do when they actually have feedback that needs fixing?
* how to keep your modifications/patches to SomeUpstreamSoftware manageable?
* How to deal with parallel development by a team of four? Of twelve? Should all work halt until your PR is merged? What if there are conflicts? How to handle longer lived development?
All of these are hardly ever covered by "keep it simple". Handling them turns that "keep it simple" model into a giant mess of rules, guidelines, scripts and whatnot: basically a very poor version of Git flow.
Yes. Git flow is often overengineered for CD types of workflows. But it does have all the cases and situations covered that you will eventually run into, and that your homebuilt "simple" flow has not.
Edit: to be clear: there are solutions for all my points in your Simple Workflow, certainly. But here's how git-flow helps:
* Release branches are ideal for putting work in front of stakeholders before release, and they allow improvements to the release itself. Even in a CD workflow, having something that is separated from the mainline during the release window is a good idea; a release branch is just that. Having a separate mainline from which new releases can still be made during that time is smart too: you don't want to be forced to wait for feature X to land before you can roll out some important fix. I.e. release, master and hotfix branches.
* Support branches, in addition to feature branches, allow easy rebasing when upstream changes and you need time to re-apply your patches and customisations. You can e.g. branch a "support-3.1.4" off the 3.1.4 release from upstream and apply your feature/awesome-hack-x on top of that.
* Feature branches are ideal for parallel development. Being able to merge into a branch that does not immediately release (separated from the mainline) is smart, because it allows multiple team members to merge their work before a release is made. I.e. the "development" and "feature" branches.
I strongly agree with the parent comments in this thread. There are many situations where a very simple branch strategy isn't enough.
Another one that I haven't seen mentioned yet is when your software has to be deployed to multiple targets, and each needs the same foundation but also some target-specific adjustments. Maybe you're going to run on different OSes and need to use a different UI on each OS. Maybe you're writing firmware for a range of similar products and each product has its own quirks. Maybe you need to change some code on platform X because there's a known compiler bug where the normal code doesn't work. There are plenty of practical circumstances where this sort of thing can happen. You want to control and ideally automate as much of it as you can, for the same reasons as any other deployment process, and more complicated but systematic branching strategies are often useful for that.
Handling variations (e.g. multiple targets) surely isn't the responsibility of the version control system?
If you need to build for multiple targets or have different special versions, then write your build scripts to produce multiple versions. The Linux kernel builds for lots of targets, but there is only one master tree for it.
As ever, the devil is in the details. If you're just talking about using different build tools, sure, maybe you do that at the level of some separate scripts. If you have mostly common code but a few relatively small adjustments you need for each platform, where better to track that you have merged the correct adjustments into any given build than in your VCS? If you have larger differences between platforms, maybe you factor out parts of your code into separate components and then build only the combination of components you need for any given target. These strategies are not mutually exclusive.
Your Linux kernel example might be quite apt here. While the authoritative version may have a single master, how many distros maintain their own fork of the kernel and probably numerous other packages, merging in changes from upstream regularly, but only ever making releases of their distro from their own forks where perhaps they have their own bug fixes or other adjustments applied?
But how does ongoing development work? You have changes, now you need to merge them into every branch. How do you know which ones are going to break the build, how do you know which ones just aren't going to merge due to divergence?
I have a horror of this due to having experienced a strategy where the team took a new (subversion) branch every time they started a new project (i.e. were building a new device in the same family). You had multiple branches with divergent development but lots of common functionality. If you wanted to fix a bug common to all the systems you had to fix it 4 times.
I absolutely believe that there are better ways of implementing that approach than they used, but I am also convinced through experience that you should have one single line of development and everything else is just 'views' on that (e.g. a point release branch is just a view of master that is frozen, plus some filtered recent history).
To take your Linux kernel example - the various forks (I'm working with the Xilinx Petalinux Kernel now) don't really fit into the normal model. They're a combination of a release branch and a feature branch - they take the Linux Kernel at a certain time, then add some patches on top. Those patches then usually get submitted (indirectly) back to Linus so that they join the master path of development. That means that other people benefit from them, and refactor them when appropriate.
> But how does ongoing development work? You have changes, now you need to merge them into every branch.
Yes. That is exactly what you do.
> How do you know which ones are going to break the build, how do you know which ones just aren't going to merge due to divergence?
You use tools like unit testing and continuous integration running on all of your key branches, just like you would on a single-master project.
> I have a horror of this due to having experienced a strategy where the team took a new (subversion) branch every time they started a new project
That brings back painful memories. I think it's fair to say that in the area of branch management, modern VCSes are much easier to use than Subversion. Even if you're not using them as physically distributed, the same tools for readily transferring work across branches and tracking what has gone where are also useful in other scenarios such as the one we're talking about here.
> I also am convinced through experience that you should have one single line of development
I guess this is just a case of everyone's experience being different. My experience has been that projects that try to keep everything on a single branch when the reality is that you will be making several significantly different types of build over the long term can become prohibitively expensive to manage. You don't necessarily have to use a multiple branch strategy to deal with that, or that strategy alone; I mentioned a couple of other useful options earlier on. But trying to use a single master and then make multiple types of build regularly using just informal processes or ad-hoc changes on release branches is a recipe for human error IME. Something has to systematically track and control the divergence or things get really messy.
> They're a combination of a release branch and a feature branch - they take the Linux Kernel at a certain time, then add some patches on top. Those patches then usually get submitted (indirectly) back to Linus so that they join the master path of development.
This is exactly what typically happens in the situation I was describing as well.
You would presumably have some root with the common code, and then you maintain your per-target branches off that. Most development work still gets pushed to the common root as usual, and then all targets benefit unless they're specifically overriding the default code in that area (which is handled at the point of merging into the target-specific branch).
Any changes first made on target-specific branches can also be passed up and then back down onto other targets if it turns out that they are more widely useful, or transferred directly to one or more of the other target-specific branches but without applying the change to the common root. However, typically you would do anything like this via some sort of cherry-picking process; you would never normally transfer all of your changes back up by merging a target-specific branch back into the shared root.
It would be a rare project that needs to do more of that type of target-specific adjustments than the Linux kernel, and they don't maintain separate branches per architecture.
> How to deal with parallel development by a team of four? Of twelve? Should all work halt until your PR is merged? What if there are conflicts? How to handle longer lived development?
I'm following the process they described on a team of about 40 developers, and it actually greatly reduces those problems by forcing people simply not to have long-lived branches. Git flow just puts off the pain of working with that many people until later and creates a giant train wreck when you finally try to incorporate your feature branch.
If you have a large feature, then use feature flags so you can release it dark to production before it's finished. If you want to do a large refactor, either find a way to do it incrementally or keep both the old and new versions in place together until the new version is completely done.
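For illustration, a flag gate can be as small as this (the flag name and functions are hypothetical; real setups usually read flags from a config file or flag service rather than an environment variable):

```shell
# FEATURE_NEW_CHECKOUT is a made-up flag; the two functions stand in
# for the old code path and the unfinished new one.
new_checkout() { echo "new checkout flow"; }
old_checkout() { echo "old checkout flow"; }

checkout() {
  # Gate: the new path ships dark and only runs when the flag is on.
  if [ "${FEATURE_NEW_CHECKOUT:-off}" = "on" ]; then
    new_checkout
  else
    old_checkout
  fi
}

FEATURE_NEW_CHECKOUT=off checkout   # prints "old checkout flow"
FEATURE_NEW_CHECKOUT=on checkout    # prints "new checkout flow"
```

The unfinished path ships to production in every release, but users only hit it once the flag is flipped.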
> release-branches are ideal for putting work in front of stakeholders before release
Feature flags are better for this. If you don't want users to see an incomplete feature, that doesn't mean they have to be using a version of the software that doesn't support that feature. You can just explicitly hide the feature from certain classes of users.
I don't see the contradiction. The comment you are replying to included maintenance branches. The point is you only create those branches if and when you need them.
Yep. What you're describing was used at GitHub [1], at least for a while. I've had a lot of success pitching it to small dev shops that have struggled with more complicated branching models, including git flow. I'm pretty sure git flow became popular just because it was the top search result for "how to git".
Other criticisms of git flow and alternative branching procedures have come up over the years [2] and been discussed here [3][4][5].
A good, simple metric for whether your team is using git well or not is how often a developer is forced to play Guitar Hero [6] just to fix up a bad merge or rebase. The ideal answer is "never"; if it's happening more than a couple times a year, something's wrong.
Git flow is something I spent a lot of time theorycrafting about years ago. I'd say wasted time, but learning is never a waste, and one of the things I learned is the value of deferring complexity whenever possible, especially if there's no downside.
Using a trunk and tag (that’s what I call it) workflow today doesn’t prevent you from adopting something more complex if the need arises, so it makes a ton of sense for it to be the default on new projects, especially for new developers IMO.
Not the one you're replying to, but there are a few points:
- This flow applies more for a service or a website, where there is no problem of distribution: the distribution is done the moment your user reaches your website/service.
- In the case of a website there is one website. In the case of services there may be multiple versions, but they exist together in the same file tree. Fixing an issue on an old version means fixing the issue in specific files; when they're pushed to master, the users will get the fix.
- An alternative flow of having to manage multiple versions in parallel is to use feature-flags to compartmentalize sections of code that aren't tested yet or not even functional yet. That way you can have everything on a single branch and make all the code aware of each other. Feature flags can be activated gradually in qa environment, then in staging, then in some customers only, etc... to iron out the bugs as deployment goes. Note that this way of doing is also adapted for traditional installable software, see Firefox and Chrome and their myriads of activation flags.
> An alternative flow of having to manage multiple versions in parallel is to use feature-flags to compartmentalize sections of code that aren't tested yet or not even functional yet.
The advantages of feature flags over feature branches are mostly an illusion, though.
Sure, if you're updating your code in co-ordination with updating some external system, such as a database migration, then sometimes deploying in stages with multiple versions of some functionality running concurrently during some of the intermediate stages is more-or-less the only way you can do it without incurring downtime.
But otherwise, even if you have 10 teams and each hides their changes behind a flag and pushes to master instead of a feature branch, you still have an exponentially growing number of combinations and they're still not all being tested in arbitrary combinations. Turning the feature flag on is then little different to merging in a feature branch: it's still a time when things can go wrong. It's just that now they're going wrong on your master branch and screwing things up for everyone else, too.
You're missing the big advantage. Feature flags should be toggled without doing a deployment/release. That way you can turn one on, monitor, and turn it off if it doesn't work right. In addition, feature flags should only live until the feature is working. The flag should be removed once that has happened.
> You're missing the big advantage. Feature flags should be toggled without doing a deployment/release. That way you can turn one on, monitor, and turn it off if it doesn't work right.
So you're essentially advocating testing new code, or new combinations of code changes, in production?
I think what he's saying is that, from a production standpoint, the new feature is feature-flagged with the flag disabled. From a test environment standpoint, you can enable that feature flag (or combination of feature flags), test to your heart's content, and only then enable it in prod. And if things do go pear-shaped there, well... it can be as simple as turning that feature flag back to disabled, unless state changes prevent that.
It's obviously possible that there's an effective development process in here somewhere that I just haven't come across myself, but I still don't see it.
For a feature flag to be useful compared to the typical setup where you have a feature branch and then rebase that onto master from time to time until you're ready to do a fast-forward merge, you would need to be testing multiple features concurrently before any of them is merged back to master.
Testing all possible combinations of unmerged features in development is an exponentially difficult problem, so surely no-one is doing that. However, even testing selected combinations still requires someone to be aware of everything that is going on in enough technical detail to highlight potential trouble areas in time to take useful action that would become more difficult once some of those features start to get merged in.
Assuming you have a large enough project that co-ordinating different ongoing development work isn't just handled via informal discussions anyway, that would mean someone has to be identifying risky combinations that could benefit from testing in advance, and someone has to be responsible for doing that testing, and someone has to be responsible for addressing any issues that are discovered as a result. Does anyone here work in a development group that actually does this?
Meanwhile, the cost of using feature flags routinely is clear: every time you do any significant development work, you have to bracket all affected code with a new flag, and someone has to coordinate those flags and make sure they're all tidied up with code left in the "on" mode by release time.
Production is your single most important environment. If you aren't testing there, you aren't doing your due diligence to make sure that your software actually works for users.
If you merge into master then it can be unstable from bugs introduced in the merge.
If master can be unstable then you can't seamlessly release and must test.
If you need to keep master stable then it can't be easily merged into by developers (testing a tag is not enough, because iterations will have other changes in the meantime).
People attempt to skip these issues by embracing the chaos with CI/CD but that's outside the scope of the branching strategy. Maybe the simple flow is the best but it has clear stumbling blocks you will hit.
The question is where do you want to discover them? You either run tests against the feature branch before merging or you have to gate your master branch into a version on a test system before it is promoted to production.
CI/CD can be chaos but the Facebook philosophy is that we would rather you delivered 1000 features per day and be able to quickly fix anything you might break in the process rather than release 1 thing a day that is fully tested. Might not work for your system but it makes sense to them.
You can avoid “bugs introduced by the merge” by requiring a fast forward/semi-linear merge (ie, the source branch must be a superset of the destination branch). Any testing or validation of the source branch is therefore valid for the result branch, as the contents of the source and result are identical.
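A sketch of that policy in a scratch repo: rebase the source branch so it is a superset of master, then merge with `--ff-only`, which refuses anything that isn't a fast-forward (server-side settings such as GitLab's semi-linear merge method enforce the same idea):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git checkout -q -b master
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "base"

git checkout -q -b feature
echo "change" > feature.txt
git add feature.txt
git commit -q -m "feature work"

git checkout -q master
git commit -q --allow-empty -m "master moved on"

git checkout -q feature
git rebase -q master            # source branch becomes a superset of master
git checkout -q master
git merge --ff-only feature     # refuses anything but a fast-forward
```

Because the merge is a fast-forward, the tree you tested on the branch is byte-for-byte the tree master ends up with.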
To actually ensure a fast-forward merge, you need to get your testing done while no changes can be made to master. If you actually enforce this, it can really lock up development. If you're doing any manual testing, it's untenable. This is why release streams and hotfixes are used.
> Each issue in GitHub is 1:1 with a PR that resolves it.
> All normal work is branched directly off latest master.
> Every branch is rebased against master prior to merging.
> Feature flags are heavily used to gate the activation of functionality which is not applicable, not ready, or otherwise in need of being configured per environment+client.
> Checkbuilds complete with success before we merge to master. Nothing is allowed to merge which could impact production without a feature flag or other gatekeeping mechanism in place. We have a labeling system to track which issue+client pair needs a feature flag configured in a certain way so this can be managed at deployment time.
> Hotfixes are managed by branching off the commit deployed to the client+environment of concern. We have a system (in house developed) that tracks all of the current client+environment+hash tuples so it is very easy to know where to stick the shovel every time.
> We effectively consider the commit hash to be the version of our software. Our releases are extremely granular due to the feature set we support for our customers, so commit hash is the best way to manage this. That said, it is not unusual for multiple client+environment pairs to share a common commit hash depending on our release timing (this is purely asynchronous and managed by our project group).
That's about it. There is only 1 permanent master branch. No work branch ever sits longer than ~2 weeks in our development model. We merge code ASAP so that we can iterate on the results.
For reference, we merge to master ~10-30 times per day. I have some plans in the pipeline for implementing a technique that would allow for pushing hundreds or thousands of commits per day if we ever got to that point.
Most people raised the problem of maintaining old releases beyond just hotfixes, which is valid.
Another problem my organization has hit with this approach is that, unless you somehow maintain a zero-bug level, you get to a point where you need to do two separate things at once: fix bugs in features that are already "done" while still developing new features, and integrate both in rapid succession. This is mostly true when preparing for a release while still having to work on features for future releases.
This can lead to the need to have two separate "main" branches, one that is used for stabilization, and another that is used for active development.
To maintain old releases, I just cut a branch before I tag.
Merge to master, short-lived feature branches, and when it's release time, git checkout -b v1.5, git tag -a 1.5.0, and then cherry pick bugfixes onto that branch when it's time for 1.5.1. If you need more time to stabilize for 1.5.0, just add some time & development between cutting the branch and tagging it.
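Spelled out end to end in a scratch repo (the history and the fix are made up):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git checkout -q -b master
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "work on master"

git checkout -q -b v1.5                # cut the release branch
git tag -a 1.5.0 -m "release 1.5.0"    # tag the release

git checkout -q master                 # development continues on master
echo "fix" > bug.txt
git add bug.txt
git commit -q -m "bugfix needed on 1.5 too"
fix_sha=$(git rev-parse HEAD)

git checkout -q v1.5
git cherry-pick -x "$fix_sha"          # pick only the fix onto the release branch
git tag -a 1.5.1 -m "point release 1.5.1"
```

The `-x` flag records the original commit hash in the cherry-picked message, which makes it easy to audit later which fixes made it onto which release branch.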
This is close enough to what we do -- and it works -- but since we support multiple releases at a time, it often ends up feeling like merging is pointless. Merging a dev branch into trunk feels good, but honestly most git interactions involve cherry-picking sets of commits to old releases, or to create hotfixes (So much, in fact, that we built some tooling around it to make it easier to not screw up). So then, you start to wonder, why am I special casing this one situation and using merge to trunk, and using cherry pick everywhere else.
This is what we do as well and it works fine.
We do data processing and have a mix of processing which needs to be completed in near real time and other processing where we have a bit more leeway on the time.
The real time processing runs on stable releases which are tags created off of a stable branch (as you have suggested) and all the other processing runs off the tip of master.
Separate the deployment from the release. Use feature flags for feature development. Deploy unfinished features if you need to. Remove the feature flags once the feature has been released.
Overengineering addiction. And oftentimes it is hard to debate against someone "smart" wanting to introduce overengineered approaches. After all, we are all solving hard problems, and if the solution is super simple, that would mean the problems we solve aren't hard enough to require the more complex solution...
The difference is that "git flow" is there, documented, and often already familiar to engineers, as opposed to something, however simple, that you design yourself.
I don't think the term "overengineering" applies when you take an industry practice off the shelf. If anything is overengineered, it would be the simple solution that you "engineer" in house.
It's probably not just a "these days" problem though. Thinking through my own experience and various anecdotes it seems like fighting over-engineering and complexity are staples of software development.
Consider also that sometimes complexity/engineering really is warranted; either by the domain or by the tooling.
In other words, avoiding complexity-domain mismatch isn't just a "these days" problem, it's a constant consideration in software development.
There’s no downsides, except if you have multiple different releases going to multiple different environments from the same codebase.
Just having one release branch per version was still generating too many errors, so now we have one per version per environment.
This would work much better if the company would get its head around continuous deployment, but they just don't want to release anything without a full round of manual regression testing.
That's exactly what I do. When I joined my current company there was a mix of git flow going on and one project that had three eternal branches which were supposed to correspond to different environments (staging, prod etc.). Nobody really understood what was going on with these repos. The histories were all spaghetti. There were CI pipelines in place that automatically created tags from the source code. It was a complete mess. I spent quite a long time unwinding it all and forcing people to unlearn these silly habits. Each project now has one master branch and, possibly, one or more maintenance branch. Releases are triggered by the addition of a tag.
At first I think people doubted that it could be so simple. It's like they'd been forced to learn something incredibly complex and just assumed they weren't clever enough to know any better. A kind of Stockholm syndrome maybe? We've been running the simplified system for more than a year now and everyone is happy. Not once have we ever needed anything even remotely like git flow again.
What sucks about the tags on GitHub is that there is no way of controlling who can create them. Ideally I would love to see a flow similar to the one for PRs. You could then require approval of other team members and also run GitHub actions that could for example verify the tag format and ensure correct format of the tag message.
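Until then, a CI job can at least catch bad tags after the fact. A hypothetical format check (the vMAJOR.MINOR.PATCH scheme is an assumption; adjust the regex to your own convention):

```shell
# Hypothetical CI step: accept only tags shaped like vMAJOR.MINOR.PATCH.
check_tag_format() {
  echo "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'
}

check_tag_format "v1.5.0"  && echo "v1.5.0: ok"
check_tag_format "oops-rc" || echo "oops-rc: rejected"
```

This can't prevent the tag being pushed the way PR approval gates a merge, but it can fail the release pipeline before anything ships.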
There are several scenarios in enterprise companies where you have multiple releases of your product (and all of them are live at the same time) as well as the need to backport fixes to older versions.
Git-flow is the answer in most of those scenarios and the only way to keep your sanity.
I agree, and I strive to make this work in everything I do now.
However, in practice you'll have to deal with legacy software (that doesn't even have to be that legacy), with colleagues that aren't aligned with it, and with people being afraid.
Currently I'm working in a company that mainly does nightly builds and manual testing, but I'm working in a new project and trying to make it work like you say. I am constrained though, they have Jenkins as CI which is a bit painful to work with (as in it'll take me a lot of work to get things where I want them to be). Working with pre-commit, pre-push hooks and a solid IDE goes a long way though.
Risk mitigation and investment; my employer's software does critical mobile network infrastructure, if they have an outage it affects millions of people. So, if our customers upgrade the software, they want to do their due diligence in their particular use case (it's a flexible system).
That said though, software should be built to allow for fast and painless upgrades. Backwards compatibility and many use cases should be tested automatically and constantly. But, it's a big investment to have software like that, and you need to resist a lot of younger, eager developers that want to e.g. introduce a new language or make sweeping changes.
Our customers don't want to spend vast amounts of time qualifying an entire new release when they can accept an isolated fix instead and do more focused testing. They might need the fix for some critical part of their product. If I was them I'd do the same - if the rest ain't broke, why risk fixing it?
If the qualifying process is long and expensive, doing a one-off fix does seem more logical. This is assuming you can be sure the fix really is isolated.
I do feel like a lot of companies exaggerate the need for multiple supported releases.
We have a complex product with many aspects that our customers use, and almost all with some customized integrations.
A bugfix to the integration is typically low risk and high impact, and so the client will want to install it ASAP. For example, it could be that they suddenly changed the values in a single field in an XML file we read, due to upgrades elsewhere in the organization, halting the integration.
So we push a fix where we handle the new values, and they want that installed ASAP.
What they do not want pushed out into prod without lots of testing is all the new things we've added elsewhere in our product, or larger changes to existing features.
Releases are created off master. Only if something is supported long-term is a branch created, where only fixes are backported; no development happens there.
What the GP suggested I assume is one master-like branch for every deployed version. Otherwise it makes no sense.
This is exactly what I have instituted for my team. I think that actually is Gitlab Flow though.
If you don't use the optional parts of Gitlab Flow like a production branch, what you are left with is feature branches merged to master, and then release tags made from master. The only difference is that both of us don't make the release branch unless it's necessary, we just make the tag.
Unless there is some major database or library or language version change in a feature branch, once that branch is tested and QA’d, the M.O. becomes “merge any feature branches that we’re ready to put live into master and release”.
Long-lived branches are a mess, and rebase flow is crushed by them.
If you can fit this kind of simple Git workflow into your development workflow, I recommend it.
Trunk flow has a lot going for it. Here's a link in case others don't know of/haven't heard about it -- it starts off with a tl;dr summary: https://trunkbaseddevelopment.com/
One of the key elements of trunk flow that I value is that "the software you released" is not a living stream. Because of that, it should not be tracked by a long-lived branch with commits that flow in over time, except in rare fix situations. With trunk flow and similar styles, you can always merge to master, and you can always deploy master. You do so by cutting a new branch, and in my opinion, you then build a static artifact. Next time you deploy, you cut another branch, build your artifact, and put it somewhere.
Need to do a hotfix, but there's other work since last release that you just can't ship? Cherry-pick onto the currently-deployed release branch, not onto some long-lived production branch. There's no weird merge-backs, no strange no-op commits in a long-lived history.
trunk-flow is also very simple. For these and other reasons, it's great.
And, some key points about building static deployment artifacts: if you build an artifact to deploy at merge-time to master, you can avoid having your production servers download and execute code for you. You can test your exact artifact before you ever send it to production. You can reproduce your builds. Left-pad-like library unavailability issues can't hit between staging and production. Deployments to production are very fast. You can keep a few artifacts around and roll back quickly and reliably to working states (barring database stuff!). You can deploy two versions to different userbases at the same time. It's very useful. :)
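As a sketch of the cut-a-branch-then-build step (the tar command stands in for a real build; the branch name and paths are hypothetical):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git checkout -q -b master
git config user.email "dev@example.com"
git config user.name "Dev"
echo "app code" > app.txt
git add app.txt
git commit -q -m "release candidate"

git checkout -q -b release/2024-05-01        # cut a branch at deploy time
sha=$(git rev-parse --short HEAD)
mkdir -p artifacts
tar czf "artifacts/app-$sha.tar.gz" app.txt  # stand-in for the real build
echo "built artifacts/app-$sha.tar.gz"
```

Naming the artifact after the commit hash is what makes rollbacks and reproducible deploys cheap: you always know exactly which source produced which file.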
I still cannot get my head round the fact that people don't know about TBD, and that there are substantial numbers of people not doing it.
It's how all source control was done before DVCS. It's pretty much the default workflow with a DVCS - if you read the Git docs and do the simplest things, you're doing TBD! It works perfectly well for most people.
How do people get to the situation where they know about Git Flow, but don't know about TBD? This feels like a huge failure of education or documentation or something.
Trunk based development isn't how all source control was done before DVCS.[1] The Git documentation suggests using long running branches.[2] "A successful Git branching model" was very influential. Git Flow sounds official. GitHub Flow sounds like Git Flow.
Trunk flow often requires a feature-flagging system to control what's not ready yet, and feature flagging is a problem to solve in itself. At previous places where I've touched feature flagging, it has always been something along the lines of a config file. If the goal is just to filter out code that isn't ready, that's often fine; however, if you need to disable a flag you've just enabled as part of your deploy, it requires another deployment.
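A minimal sketch of the config-file style of flagging described above; `flags.conf` and the flag names are made up for illustration. The limitation is visible in the structure: flipping a value means shipping a new file, i.e. another deployment.

```shell
# A simple key=value flag file baked into the deployment artifact.
cat > flags.conf <<'EOF'
new_checkout=true
beta_search=false
EOF

# A flag is "on" only if its line reads name=true.
is_enabled() {
    grep -q "^$1=true$" flags.conf
}

if is_enabled new_checkout; then
    echo "new checkout path"
else
    echo "old checkout path"
fi
```

A dynamic flag service moves that lookup to a network call, so flipping a flag takes effect without redeploying.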
If you have a service or website you can (and should) do it dynamically with a service. I have seen Flagr [0] used to great effect here. As always you have to strip out flags when the features are permanent.
I work for a company called LaunchDarkly. We're aiming to resolve the issues that come with feature flagging. Worth a look if you're looking for a SaaS app.
yeah, this is the one I use. I think of it as the “default” now - it’s been a while since I worked on a team that used anything else, I guess that means it’s popular with small startups
Never seen the site before; I expected to find something I disagree with, but found that it matches the way I've always worked. Merging early and often and using CI is the way to go, avoiding long merge processes as much as possible. Going to share this with people to explain my workflow. Great resource.
I've got strong feelings about git-flow, but I think the main thing all these "generic flow" solutions are missing is: what are your project requirements?
Some projects need release branches, some projects use continuous deployment, some projects need manual QA, etc. There is no solution that will work for everyone, so unless your flow description starts with its assumptions and matching requirements, it's wrong (for some of the people reading about it).
I slightly disagree. It doesn't cost anything to prefix: "There are many ways to approach it, which will differ depending on your needs. Here's a solution which we believe will work for most, so we want to name it and make it simple for you to apply." It doesn't take anything away from the rest of the content.
What a clickbaity title; there's no substantial discussion of Git Flow at all, and the one paragraph they do spend discussing two minor issues sounds like it was written by someone who has never done serious work in Git Flow.
Should be titled “The sales pitch for GitLab Flow”.
> Git flow forces developers to use the develop branch rather than the master.
No it doesn’t.
> Another frustrating aspect is hotfix and release branches
I beg to differ.
I know what would be frustrating though:
> GitLab Flow is a way to make the relationship between the code and the issue tracker more transparent. Each change to the codebase starts with an issue in the issue tracking system.
... tying my repo to one particular git hosting platform.
I honestly thought they were finally renouncing GitLab flow as a massively misguided idea. They are both flawed. You shouldn't be using branches in your version control system to denote stages in your development process or deployment targets. It totally breaks down when you move to CI/CD, which should be your goal. Trunk based is the way to go. These git flow things persist because that is what the search engines puke up when people google for git branching.
CD doesn't work for many types of software; it's not uncommon for customers to want only bugfixes (or even security bugfixes only), and that means branches.
GitLab flow is much closer to trunk based development than git flow is. While no one "flow" works for everyone, I think that we are clearly in agreement that the simpler you can make your workflow, the better. The problem is some companies and organizations can't get all the way to trunk-based development based on their own regulatory needs or the type of software they are producing.
That's why we think GitLab flow is a great way to balance both - in fact in the other linked blog post [1] we mention that GitLab flow should allow for CD from either branches or tags. From the main branch would then basically be the same as trunk based development, yes?
There is a need for simplified workflows, but this sounds and looks like an exercise in brand marketing more than a good analysis of how to offer a better Git workflow.
> Git flow forces developers to use the develop branch rather than the master. Because most tools default to using the master, there’s a significant amount of branch switching involved.
I'd like to see this fleshed out, since whatever branch naming convention or roles you use, you will be switching the same amount.
> Another frustrating aspect is hotfix and release branches, which are overkill for most organizations and completely unnecessary in companies practicing continuous delivery.
Yes and no. If you don't separate master from develop, hotfix branches just work like feature branches. But if you need them separated, then...
> Bug fixes/hot fix patches are cherry-picked from master
Avoid doing this; use daggy fixes[1], and it will make it a breeze to check that a fix was actually merged, and where[2]. And if you do cherry-pick, at least use "-x".
I am a fairly committed practitioner of git flow on multiple projects.
The problem of not easily linking feature branch names to issues is real, though I have been solving this using:
`[issue id]/feature-or-issue-title-abbreviated`
And while you don’t get nice hyperlinking from the branch references on GitHub, and it’s possible for the issue title to be out of sync with the feature branch name at times, it isn’t too bad, and I don’t often need to go back and look at a feature branch.
Perhaps for projects with bigger teams this is more important, and the linking / strong issue attribution to the branches is more meaningful.
I’m less clear on the removal of `develop`. It makes more sense to me to use master as this serious “released” state, and for develop to be a bit more loose.
More importantly, I typically will run `develop` on a staging instance. So it’s pretty important to me.
I still do testing on ‘develop’ and feature branches before they are merged. I also don’t feel like the start / finish release process is all that onerous.
From a commercial standpoint, I’m not a huge fan of tossing GitLab into the concept name. It’s a little brand-y on something that just doesn’t need it.
> From a commercial standpoint, I’m not a huge fan of tossing GitLab into the concept name. It’s a little brand-y on something that just doesn’t need it.
I imagine it's meant to be a response to "GitHub Flow" [0].
I don't think this is a problem with git-flow, it's more a problem of using git-flow when you don't need it.
I use git-flow on projects that need to be installed by various different clients and there's no way to force all of them to migrate to the latest version, so you have to maintain support for older releases.
When the software needs to be installed in a single place and you can do it with CI/CD, there's no need for the git-flow complexity.
I'm not a fan of merging master -> production for deployments. This means that what you tested in master may not be the same artifact that is deployed to prod. You're relying on people to correctly handle merge issues in git. This can become an issue if you have some complex hotfixes that have to happen.
edit: I'd rather trunk based w/ tagging commits for releases.
I feel a whole lot of problems would just go away if merge were abandoned. Every 'merge' would then require a rebase (and subsequent deletion of the merged-in branch), and branches would stay branches. For example, it then becomes obvious and inevitable that hotfixes must live on an independent maintenance branch (cherry-picked into or out of master if appropriate). There is no longer a proliferation of merged-in feature branches; only features currently being worked on (and yet to be rebased) would have a branch. There is no concept of release vs master vs dev, as it is not possible to merge one into the other. Push/pull is fast-forward or force rebase.
Easy enough to write a wrapper around git to do this - I am tempted.
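The wrapper the comment is tempted to write can be sketched in a few lines (`ff_merge` is a made-up name, the repo setup is scaffolding for the demo, and real use would need conflict handling): "merging" becomes rebase, fast-forward, delete, so history stays strictly linear.

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"; git init -q -b master .
ci() { git add -A && git -c user.email=a@b -c user.name=a commit -q -m "$1"; }
echo base > base.txt; ci "base"
git checkout -q -b feature
echo feat > feat.txt; ci "feature work"
git checkout -q master
echo other > other.txt; ci "unrelated work"  # master diverges: a plain merge could not fast-forward

ff_merge() {
    git rebase -q master "$1" &&    # replay the branch on top of master
    git checkout -q master &&
    git merge -q --ff-only "$1" &&  # guaranteed fast-forward after the rebase
    git branch -q -d "$1"           # branches exist only while work is in flight
}
ff_merge feature
```

After `ff_merge feature`, master contains all the work, the history has no merge commits, and the `feature` branch is gone.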
On the topic of Git Flow and simpler & faster alternatives, last month I published a short blog post about my experience learning those alternatives exist at CircleCI[1].
The most important lesson for me was that things like Git Flow aren't free: they slow you down and make developers feel more detached and less responsible for the code that's running in prod.
> The production branch is essentially a monolith – a single long-running production release
It assumes that only one version is deployed at a time, so you can't really service version 1.0 after you've shipped version 2.0.
Having multiple independent maintenance branches for production is critical for any branch pattern that should apply to both web software (normally single deployed version) and e.g. Desktop (normally multiple deployed versions).
This is idiotic. Different situations call for different workflows. Working in a corporate environment with planned, infrequent releases? Use Git Flow. Doing continuous delivery instead? Use the feature branch workflow.
This smells like GitLab trying to increase its ownership of the ecosystem.
Am I missing something or is this pretty much Gitflow with develop being named master and master named production? Maybe I don't fully understand Gitflow or we are using it like Gitlab Flow already with different names.
My problem with this suggestion is that I tend to do history rewrites on the development branch if I realize I've fucked something up, but I don't do that on master. So on master, broken things get fixed with a new commit; on development, sometimes a new commit, sometimes a retroactive fix.
That means that you can safely "git pull" the master branch, always, into the future, but if you try to pull the development branch, you may well fail.
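The failure mode can be reproduced in a couple of commands (repo names are scaffolding for the demo): after upstream rewrites a commit, a plain fast-forward pull is impossible, and one blunt recovery, assuming you have no local work to keep, is fetch plus hard reset.

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q -b develop origin
( cd origin
  git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "wip" )
git clone -q origin clone

# Upstream rewrites the commit, e.g. to fix a bad message.
( cd origin
  git -c user.email=a@b -c user.name=a commit -q --amend --allow-empty -m "fixed" )

cd clone
git fetch -q origin
# The local branch and the rewritten upstream have diverged: no fast-forward.
git merge -q --ff-only origin/develop 2>/dev/null || echo "ff impossible"
# Recover by taking upstream as-is (discards any local changes!).
git reset -q --hard origin/develop
git log -1 --format=%s
```

If you do have local commits on top, `git pull --rebase` is the gentler option, since it replays them onto the rewritten upstream instead of throwing them away.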
I think this sort of thing might become much more tractable if it were possible to visualize a git repo as it actually is - i.e. a directed graph of commits (occasionally with two parents - a merge commit), with moving labels (branch names) and fixed labels (tag names). Like when you buy a small shrub from the garden centre and it has a label attached to it somewhere.
Pushes and pulls can then be visualized as two such shrubs reconciling themselves.
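Git can already render something close to this picture out of the box: `git log --graph` draws the commit DAG, and `--decorate` attaches the moving labels (branches) and fixed labels (tags) to their nodes. A self-contained demo (commit messages and names are invented):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"; git init -q -b master .
ci() { git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "$1"; }
ci "root"
git checkout -q -b topic; ci "topic work"
git checkout -q master;   ci "more master work"
# A two-parent node: the merge commit the comment mentions.
git -c user.email=a@b -c user.name=a merge -q --no-ff -m "merge topic" topic
git tag v1.0
# Render the whole shrub: commits as nodes, branch/tag labels hanging off them.
git log --graph --oneline --decorate --all
```

GUI tools like gitk or `git log --graph` in any repo show the same structure; what's still missing is the animated "two shrubs reconciling" view of a push or pull.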
Interesting analogy, but as a gardener, I've never encountered the concept of a shrub reconciling itself so I have no idea what this visualization reveals!
Imagine a shrub made of shrub A and shrub B. Parts of the shrub (from the root) are from both, parts are from A only, and parts are from B only. Imagine also the 2 sets of labels from A and B. It would become clear how the push/pull is updating its target shrub, and indeed if and what bits of the 2 shrubs (or their labels) conflict.
Interesting tidbit: according to the "SEO-friendly" URL, the title should actually be "what-is-gitlab-flow" - I wonder if that was also the original title of the article, before it was changed into something more "clickbaity"?
That gave me pause; I'm used to relatively small commits, for easier review and verification, so that seemed too much. Reading the link, I think they mean, run tests on every branch on every push.
We currently use a simplified version of gitflow; essentially we picked only the master, develop, and release branches from it. Of course we do have experimental branches, but it's all nowhere near as complicated as full gitflow.
git's got some great features for very basic development workflows and the fact that it's distributed is undeniably a significant advantage.
What amuses me, though, is all the ceremony and patterns in the use of git that restrict its use in ways that are sort of pale reflections of subversion, P4 and similar version control systems.
I like the distributed nature of git, the ability to make local commits, jump airgaps with bare repos, submodules, shallow clones, all kinds of silly things I suppose. I won't lie, I don't have a ton of experience with subversion, but whenever I've used it, it felt incredibly limiting.
Oh that's entirely fair. I was 100% thinking SaaS because of the website we're on, but if you do have to maintain multiple versions for whatever reason, you're going to need a complex flow with multiple trunks.
I don't see how SaaS is immune to this. You could easily find yourself with a big customer that effectively wants their own deployment and doesn't want updates (new bugs / big changes) but still demands occasional fixes/features that you often want to also merge into the main deployment.