> You just have to internalize that major version numbers are not sacred, you’re not going to run out of them
I know of at least one popular app whose version caused an integer overflow when it was parsed by a host system, so YMMV. Who would ever release a version greater than 65535, anyway?
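For anyone who hasn't seen that failure mode: here's a minimal sketch (a hypothetical host-side parser in Go, not the actual app or host system) of what happens when a version component gets stuffed into a 16-bit field:

```go
// Hypothetical sketch: a version component stored in a uint16 silently
// wraps once it passes 65535. No error is reported; the number is just wrong.
package main

import (
	"fmt"
	"strconv"
)

func main() {
	component, _ := strconv.Atoi("70000") // e.g. a build counter past 65535
	packed := uint16(component)           // Go's conversion truncates to the low 16 bits
	fmt.Println(packed)                   // prints 4464, not 70000
}
```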
The only reason fixed-width integer types exist is more or less as a hack/optimization for doing fast and space-efficient integer math. Since speed and space are total non-issues in the problem domain where version numbers are involved, there's no reason not to use a bignum instead, if available.
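To make that concrete, a sketch in Go using math/big (assuming each dotted segment is parsed separately; the numbers here are just examples):

```go
// Sketch: compare version components as arbitrary-precision integers,
// so a date-sized or counter-sized component can never overflow.
package main

import (
	"fmt"
	"math/big"
)

func main() {
	a, okA := new(big.Int).SetString("20240101", 10) // date-style major version
	b, okB := new(big.Int).SetString("65535", 10)    // largest 16-bit value
	if !okA || !okB {
		panic("not a number")
	}
	fmt.Println(a.Cmp(b)) // prints 1: a > b, with no width limit to worry about
}
```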
Similar test: which is a more appropriate data type for ZIP codes, int or string?
This is half right. String would be more appropriate than an integer type even if that weren't true. ZIP codes really don't have anything to do with the fundamental numerical operations that are defined for integer types.
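The usual demonstration, for what it's worth (02134 is just a real Boston-area ZIP code with a leading zero):

```go
// Sketch: parsing a ZIP code as an integer throws away the leading zero,
// and the only "fix" is to format it back into a string anyway.
package main

import (
	"fmt"
	"strconv"
)

func main() {
	zip := "02134"
	n, _ := strconv.Atoi(zip)
	fmt.Println(n)          // 2134: the leading zero is gone
	fmt.Printf("%05d\n", n) // 02134: re-padded as a string, which is what it always was
}
```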
I ran into that about 10 years ago with a certain infrastructure as code framework.
I versioned my code as YYYYMMDD.patch, then couldn't get subsequent versions to deploy. After enough debugging, I figured out it was due to my major version silently causing an integer overflow.
Maybe it's fixed now, but my recollection is that .msi installers for Windows were limited to 0-255 for the major and minor versions, which sucked as I was trying to transition software using the year number as the major version.
I was hoping that Tom was going to 'loosen' SemVer a bit, but instead he's doubling down, and that's to our detriment. Following SemVer as stated--any "breaking change" to the API must be released with a major version bump--communicates less than an ad hoc versioning system, because some "breaking changes" may be in areas of the API that are used by few people (and sometimes literally no one), yet a major version bump communicates "breaking change" to everyone. Or every release is a major version bump, and so contains no information at all. When this happens several times a year, users just don't have the bandwidth to sincerely investigate how breaking the changes will be for their own use-cases. So either they delay upgrading out of fear, or they YOLO blindly upgrade to the latest; either way, SemVer has contributed to a culture of "who the hell even knows what's going on in their computer (or with their project's dependencies) anymore".
I think a better solution would be a less-strict SemVer: the notion of "breaking changes" that require a major bump would be tempered by the number of users affected by the change. If it's a rarely used feature or an API edge case, then we can risk a minor bump. If we've changed a commonly used API that will break some large % of user experiences, or a large number of uncommonly-used APIs that might in aggregate affect a large % of users, then we do a major bump. Of course this requires some intuition about usage, and a very human judgment call, but that's all versions have ever been: a top-level way for developers to communicate to users the degree of change and the risk of upgrading.
I disagree for a few reasons. Is it possible to know, in all scenarios, the number of users affected by the change? Even if the breaking change is used by all users, it's still significant in how the software works. Reducing the strictness of SemVer just opens it up to being used incorrectly.
I've always wanted to ask a Chinese developer this: how common is it to avoid the number four in version numbers? I know, for example, that buildings in China avoid floor #4, like buildings in the US avoid #13.
Semver is best treated as a defensive strategy: I choose to follow semver principles for the packages that I upload, but I never trust other packages to reliably follow semver.
For this reason, I often pin not just the major, but also the minor and patch version of a dependency in my package configurations, and also cache the artifacts (go mod vendor, private package registry, etc.), so that any breaking changes are easy to identify and triage during development, before they surprise production.
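In Go, for example, most of this falls out of the tooling; a sketch with hypothetical module names:

```
// go.mod -- module and dependency names here are hypothetical placeholders.
// Go records exact major.minor.patch versions, which is the pinning I want.
module example.com/myservice

go 1.22

require (
	example.com/somedep v1.4.2
	example.com/otherdep/v2 v2.0.3
)
```

Running `go mod vendor` then copies those exact versions into the repository, so a build doesn't depend on the upstream registry being reachable, or unchanged.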
If you do pin only at the major level, then try to target a Long Term Support (LTS) series, because LTS releases are much more likely to receive active maintenance, security updates, and stable, non-breaking changes than non-LTS releases.
Regular, automated testing will also help to identify more breaking changes before anything is deployed to production.
In the worst case, a breaking dependency change lands at the same time as a first-party bug. You need to be able to quickly identify and isolate the problem, resetting the third-party dependency to a safe, compatible version in a small, fast hotfix, while also working on fixing the first-party bug.
Don't be like those lazy dev teams that don't pin even the major version of their components. Remember, operating systems and programming languages can introduce breaking changes. Your Docker base image should already offer at least major version tags, so make use of them.
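For example, a hypothetical Dockerfile that pins the Go base image to a major.minor tag instead of a floating one:

```
# Hypothetical Dockerfile: pin the base image to at least a major
# (better, major.minor) tag rather than a floating tag like "latest".
FROM golang:1.22

WORKDIR /src
COPY . .
RUN go build -o /usr/local/bin/app .
CMD ["app"]
```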
Some will prefer to strike a balance between specificity and flexibility, for example by omitting the build version, patch version, and/or minor version. That's a reasonable approach, too, but it has implications for production deployments, which you'd rather not update by accident. A breaking change can arrive in the scant seconds between pre-production testing and the production release. So if you do choose to pin at the major level, then make sure to deploy exactly the same pre-tested, whole-project artifact to production. Don't, for example, rebuild a Docker image for production from insufficiently granular component versions.
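Concretely, that can be as simple as retagging the already-tested image rather than rebuilding it (registry and tag names here are placeholders):

```
# Promote the exact artifact that passed testing; don't rebuild for production.
docker pull registry.example.com/myservice:1.8.3-rc1
docker tag  registry.example.com/myservice:1.8.3-rc1 registry.example.com/myservice:1.8.3
docker push registry.example.com/myservice:1.8.3
```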
Regardless of the versioning approach, I would not recommend doing it one way for pre-production environments and a different way for production environments.
Any divergence in packaging will make troubleshooting unnecessarily complicated, and you won't truly have tested the production code anyway. Don't pin fully in production while pinning at a different granularity in non-production environments. Do reuse the same package configuration throughout the pipeline, with changes following the normal forward path from local development to testing to production.
The problem is not so much that breaking changes are occurring all the time. There's the psychological aspect that we neglect to plan for long-term bitrot. For example, the app that began as a slapdash hackathon project is now in production, and several months (or years) have passed. In that timeframe, the probability of a breaking change has dramatically increased; you may not even be able to build the project again. Pinning in both documentation and package configuration is a way to future-proof your project, so that you will have the important details ready when you need them.
Treat your projects like a science experiment in a time capsule, so that you can reliably rebuild and redeploy later, when the landscape has suddenly and dramatically shifted.