I'm in favor of firming up the roles of `go get` and `go install`--it was damn weird that running `go get github.com/foo/bar` did one thing inside a Go project and another outside it. Growing pains of the introduction of Go modules and the deprecation of GOPATH and what-not.
My bigger complaint, though, is that the Go team have decided to ignore replace directives when you call `go install`. In a perfect world, I find a bug in an upstream library, submit a fix, and they accept it. In practice, lots of upstream maintainers check their issue/PR lists about once a month if you're lucky, so the best way to temporarily fix the problem is to put in a replace directive changing github.com/foo/bar -> github.com/me/bar until the PR goes in. Except `go install` will ignore that, so if you install a tool via `go install` you get the broken library, but if you git clone the repo and run `go build`, you get the fix.
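A minimal go.mod sketch of that workaround, with made-up module paths and versions:

    module github.com/me/mytool

    go 1.16

    require github.com/foo/bar v1.2.3

    // temporary: use my fork until the upstream fix is merged
    replace github.com/foo/bar => github.com/me/bar v1.2.4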
In proper Go team fashion, they have declared that because sometimes replace directives are difficult, they refuse to deal with them at all--if you intend to distribute something for use with go install, you better fork the dependency and update all your source files.
'go install pkg@version' will error on replace directives rather than ignore them.
I fought hard in the GitHub issues to make `go install` error on replace directives, as a compromise to ignoring them (which creates two subtly different build modes)! Also, by erroring we leave open the possibility of eventually changing it to build correctly with the replace directives, without breakage.
Add your voice that the @version commands should respect all the replace directives that they reasonably can!
Thank you for the correction, and thank you for fighting for it! I agree that errors are far better than silently ignoring it. I'll weigh in on the issue tracker if I remember later.
I think what we need is a tool for properly forking upstream modules and adjusting the dependencies accordingly (and of course, the adjustment can be undone when/if the upstream maintainer accepts and merges your change). Replace is problematic in many ways:
1. It doesn't propagate downstream. If module Z depends on module Y, and module Y depends on module X, and module X is faulty, it's logical for module Y to have a replace directive for a fixed version of module X. This does not affect module Z, which now subtly (or sometimes loudly) breaks; there's a rough sketch of this after the list. That helps nobody. It's good if you have an application that you build in CI and ship to your own private servers, but pretty much useless for everyone else. (And people don't seem aware of this; the author of module Y thinks they've fixed the problem for their consumers, but they haven't.)
2. It doesn't compose. If module Z depends on module Y and module X, and module Y and module X depend on module W, and the authors of module X and module Y independently discover that module W is broken and develop separate fixes for it, the author of module Z really has no option to make their software work.
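To make point 1 concrete, here's a rough sketch with made-up module paths. Replace directives only apply in the main module being built, so even if Y carries the replace, Z has to repeat it in its own go.mod:

    // module Y's go.mod -- this replace does NOT help Y's consumers
    module example.com/y
    require example.com/x v1.0.0
    replace example.com/x => example.com/x-fixed v1.0.1

    // module Z's go.mod -- Z has to repeat the replace itself
    module example.com/z
    require example.com/y v1.0.0
    replace example.com/x => example.com/x-fixed v1.0.1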
I think Rob Pike's "a little copying is better than a little dependency" is apt here. If you just copy the source code you want into your own application, the problem goes away. But, the problematic upstream dependencies are often large projects (Azure's API client is a big offender here; I've never successfully depended on it without a replace directive) that are impractical to simply copy-and-refactor. At some point, these upstream providers need to fix their shit or risk not being used.
This problem, by the way, is not unique to Go. I've noticed this problem come up with JavaScript when a bunch of modules depend on faulty-module-0.0.1 with a security problem. faulty-module-0.1.0 is released with a fix for the security problem, but your app directly depends on neat-feature-1.2.3 and nifty-addon-23.47.1 and those modules break with faulty-module-0.1.0. So inevitably you show up at the issue tracker for neat-feature and nifty-addon where 6000 other people are yelling at the authors to release a version that depends on the fixed version of faulty-module, find that the authors aren't around, and are just sad because there's nothing you can do. Basically, depending on other people's software sucks, and so does copying their code into your application. Go has the advantage of very clearly telling you "hey, you're fucked!" which annoys people, but you basically get into this situation no matter what programming language you use. It's just that you might not know, which is scary. (I hope you use something like Sentry for your JavaScript apps and have 100% test coverage.)
In the interest of fairness, shouldn't this finish with:
> In proper Go team fashion, they have declared that because sometimes replace directives are difficult,... they've asked for proposals that can be publicly vetted and discussed?
The line has been "submit a proposal, let's review it" as long as I can remember.
meh. It’s just the same response in a different color: the hard and useful problem doesn't get solved, since Google avoids it with their monorepo, meanwhile everyone writing real open source software has to endure half-baked garbage until some future proposal comes around and attempts to clean up the mess. Turns out it’s still hard, so let’s write another tool, that should fix it! More deflection and proposals ensue on the new tool.
New Go releases have broken a few workflows for me recently. I am spending more time answering 'Go questions' and not Pion questions.
`go get github.com/pion/webrtc` used to pull into $GOPATH/src/github.com/pion/webrtc. That worked really well for new contributors. They get the code with one command and run `go test`.
I wasn't able to just update the docs and recommend `go install` because it doesn't work with old Go versions. Is the expectation to put in the docs a sh snippet that does if/else on the Go version?
I am grateful for modules though. Surprising the number of people that still aren't using them (or aware of them at all!)
`go get` isn't a command you should be using just as a handy way to clone a git repo to your PC. Yes, that is literally what it does today, and that was effectively the workflow before modules.
But GOPATH is deprecated so you shouldn't be relying on that anymore anyway.
The expectation is that the user just clones your Git repo to _anywhere_ on their PC and runs `go test` there. If they want to consume this local development version in their application then they use a simple `replace` directive in their go.mod to point their dependency at their local version.
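A rough sketch of what that replace looks like, with made-up paths, in the application's go.mod:

    require github.com/some/library v1.2.3

    // use the locally cloned copy while developing
    replace github.com/some/library => ../library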
If they're still using GOPATH then that's something they can probably manage themselves honestly. That's just a simple folder move. It's been like, what, nearly 2 years since GOPATH was deprecated?
This is everything wrong with modern languages/toolsets. Constantly changing everything, pulling the rug out from under people who just want to get work done, and wasting the time of everyone who googles how to do something and has to go through 3 outdated answers to figure out the current way to do it. Deprecation and breaking changes should be an absolute last resort in the face of overwhelming necessity, not something done on a whim to please the aesthetic preferences of some new project manager who needs to feel important.
I don't know that I disagree per se, but GOPATH and modules mean that the tooling has to handle two different cases which are often surprisingly ambiguous and it creates real headaches for the user. There's a lot to be said for having a single path forward even if it means existing users will have to experience some mild pain (i.e., learning a new workflow, rewriting some build scripts, etc) one time. Ideally language developers would have perfect foresight from the outset such that no changes are ever needed to the language, but sadly such language developers are hard to come by.
> GOPATH and modules mean that the tooling has to handle two different cases...
No, the main thing that GOPATH meant is that Google (and everyone else) had to make their Go libraries open source, since the entire logic of GOPATH revolves around building stuff downloaded from GitHub.
Once Google management decided to be serious about pushing Go to the masses, they hurriedly rushed to erase GOPATH from history (just like they erased the "Don't be evil" slogan from the Internet after getting serious about doing business).
You don't know shit. I “hate” Go and Google with a mild passion (I mean this lightly, rhetorically) and still know enough to explain to you that modules have nothing to do with publishing binary blobs (which, like, isn't even a thing in the Go ecosystem) or private repos. Modules add a layer of indirection and solve so many actual problems with the GOPATH, like “how do you have a dev copy with local changes and a main copy of a library on the same machine?”, “how do you manage/scope different versions of the same dependency between different projects?”, etc. GOPATH was so abysmally short-sighted only Google could have created it.
GOPATH has nothing to do with GitHub. It is one of the places that has special case handling, but is absolutely not the purpose of GOPATH. Case in point: the VM I have to build some legacy stuff using GOPATH does not have a _single_ dependency from GitHub. Not one. Nor does it contain any open source code, besides the standard library.
Indeed: GOPATH does not imply using GitHub, and GitHub does not imply open source (even if it is one of the most widely used ways to host open source).
The logic of GOPATH and "go get" just so happens to revolve around building packages fetched from a source code repository. This logic used to be a stop-gap, written during the early stages of Golang's development, and it makes it awkward (at best) to distribute proprietary Go libraries. I assume that Google is introducing the module system (and aggressively deprecating GOPATH) to work around this issue (among others).
Yeah, the parent seems to think that GOPATH has something to do with pulling from GitHub and that modules implies some central repository which isn't the case at all. If you use modules, you're also pulling from GitHub directly in most cases, although nowadays there is a public caching proxy in the middle IIRC. And in both cases you could/can use private repositories.
It is Go, the language, that provides a backwards compatibility guarantee.
Another tenet of Go, the language, is specification before implementation. This is there specifically so that many implementations of the language are created and do not become dependent on the quirks of a single set of tools, and so that users can choose which tooling they prefer.
Point being, there is a very clear line drawn between Go the language and the Go distribution. Guarantees provided by one are not necessarily applicable to the other. Nor do they apply to gcc-go or tinygo.
> This is everything wrong with modern languages/toolsets
Go can hardly be called a modern language. The usual argument we hear on HN is that it has an absolutely stellar backwards compatibility story. Are we losing that too?
I'm not sure what this means exactly, but it's a significant advance in simplicity and productivity in my experience.
> The usual argument we hear on HN is that it has an absolutely stellar backwards compatibility story.
This refers to the compatibility of the language, not necessarily the tooling. Even still, this is a single change, there's no need to panic about losing backwards compatibility just yet.
To be clear, the whole GOPATH development workflow is deprecated. But the GOPATH environment variable isn't, since it's still used for configuring the location of other things like the module cache.
I use golang on and off and I always have to remind myself of the difference between GOROOT, GOPATH, and GOSRC. The seeds of confusion were sown from the get-go.
> I wasn't able to just update the docs and recommend `go install` because it doesn't work with old Go versions
I don't think your documentation has to be backwards compatible in this case. If you must split it into two sets of instructions based on Go versions, you can. But unlike many other ecosystems, in Go it's quite common to expect the developer to have moved to a somewhat recent version.
I will say I had projects using dep and/or relying on a GOPATH-style directory layout with my build scripts, but that ship has sailed with just about any new libraries one might depend on. I don't think anyone would blame you for your library moving on too.
> in Go it's quite common to expect the developer to have moved to a somewhat recent version.
People and organizations decide when they will move versions based on their own project lifecycles and requirements, usually (outside of the Go community?) not just because the language vendor says there is a new version and it's time. This is something that was hard for me to cope with as a Rubyist, who was always doing everything to be on the latest version.
I'm new to the Go community, but I can already see the effects of this in Kubernetes, and I don't necessarily think it's good the way things are. Kubernetes 1.16 was released in 9/19 and was already EOL and out of support in 8/20, less than a year later. (It took some time for many cloud vendors like AWS to catch up; EKS only landed K8s 1.16 in April 2020. That's four months of usable life before it was officially out of date.)
Integrated things that worked in August of 2019 probably need work again in 2021. As a software developer I understand this, and "yeah, we need to always be upgrading" is part of my mantra, and it has been clear to me since I was a teenager that I will spend time struggling with problems that I would not even know about unless I run Debian "unstable" or an equivalent rolling-release distro, where patches can be accepted from upstream at any time of year, whether or not they contain new features.
But I could never get my bosses to see it this way, working in a company that was not firmly embedded inside the "Cloud-Native" sphere of the world. The "normies" I've always worked with didn't want to see new features popping in at any time; they wanted their own predictable environment that works the same as it did yesterday, to stay that way all year long.
(Edit: and for what it's worth, Ruby does a pretty good job of being stable all the time I've used it professionally for the last 8-10 years, and in spite of having chosen it themselves, they still hate it. Anyway, that's what they say they want...)
"I'm new to the Go community, but I can already see the effects of this in Kubernetes,"
Well, then, good news, this is mostly an effect of Kubernetes being its own ecosystem. It's implemented in Go but it is not generally considered a good example of a Go project. First of all, there's a great blog post somewhere about not looking to the largest projects in any language in general, because they tend to become their own ecosystems, but I don't know where it is. It's right, though. Secondly, Kubernetes is an extra-specially-bad example of a Go project because it's actually a Java project that was ported into Go, and even today the architecture makes it very clear that's how it evolved.
The Go ecosystem in general is actually pretty good about backwards compatibility. If you're starting out new, you shouldn't even necessarily notice this "go get" change because for you the new way will just be how it works, and the new way is easier to understand for a new person than the older, more confused way is. Plus, you have to opt-in by upgrading your Go compiler, and that's really quite optional; if you choose to stick to one for a while, you don't miss out on many libraries, because the language hasn't changed much. It isn't until generics drop that you'll start seeing libraries that you must upgrade for, and if you ignore the libraries that are literally "we're deliberately using generics to do things we can only do with generics" I expect it to be even another year or two minimum before you start seeing other more generally-useful libraries start to require them. I certainly don't intend to go back to all my libraries and retrofit them with generics just because. I don't think there's a lot of pressure to keep up with the latest compiler all the time.
HN has a lot of people waiting with bated breath for Go stories to drop who like to come in and complain vigorously about whatever the Go team has done, despite not using the language themselves. I don't mean to discount people with real experiences, but HN's sound and fury are not always proportional to the real problem. (Just because someone doesn't like Go's type system and the fact that it lacked generics for so long doesn't mean that piling on and moaning about some not-all-that-significant change to tooling they wouldn't be caught dead using and don't know much about is a useful thing to do.)
"Don't even get me started on Generics" no I'm just kidding, but I've read this debate plenty of times before :D
Thanks for the perspective. That makes sense. I wrote about how they don't really like Ruby even though it is very stable, like they said they wanted, and rarely makes breaking changes (and takes great care to ensure they are well documented and easy to follow when they do happen)...
Then I thought about Rails, and decided not to say anything about Rails (because it might undermine my whole argument, as that's where arguing from this position breaks down a bit). Rails isn't really that bad with breaking changes, but it is where most of the breaking changes probably come from, and therefore probably the worst thing to use as an example... like you say with Kubernetes in Go!
> But unlike many other ecosystems, in Go it's quite common to expect the developer to have moved to a somewhat recent version.
You'll find that true for any language and framework created within the last 15 years (and some that are older than that too). Rust, Nim, node.js, TypeScript, .NET, etc. And it's to be expected too, for the following reasons:
It's really more down to development pipelines. If your pipeline requires additionally installing the language and/or framework (eg Go doesn't ship by default on most operating systems, it's an optional install) then you're going to expect a recent version to be installed since you're having to install it anyway and since you're not going to break a mass of existing OS boilerplate that might depend on an earlier version being installed (as is often the problem with Perl, Python, shells and others).
Also, if a language is newer, then there is often a higher pace of in-demand features with each new release. If Go were 30 years old you can bet most of the features people might want would either already be implemented, or the developers that want those features wouldn't be giving Go a second glance and would instead be working with other languages.
The longer you work in IT the more you see this cycle repeat over and over. It's almost comical how cyclic the industry is.
I think people will be happy if the instructions work with the latest Go version (as opposed to instructions that are outdated either because the project is not maintained anymore or it is still maintained but nobody bothered to update the docs).
So it will be exactly one major version that supports both the new and old ways, as I read it? Sounds tight for such a significant change. I'd give it at least 2. I don't see the rush.
I used to use `go get` for grabbing even non-Go repos but had to stop because of similar problems. I finally wrote a small tool to replace it, called `grab` (https://github.com/jmhodges/grab), to take over my "fetch the code and organize it into a tidy directory system" need.
It actually uses the nice VCS library guts that `go get` does!
Agreed regarding deprecation schedule. It’s hard to break old habits.
However I do totally understand why, as a person who has more than once become ridiculously confused because I accidentally ran go get in a directory that was a Go module. Somehow I even managed to make my home directory a Go module, leading to a lot more confusion.
Needless to say, they do need to fix that somehow. They probably should’ve gone the other direction and made a new go get command that was specific to modules. It’s a bit too late to do that now.
Go modules overall have been reasonably smooth but I’m a bit disappointed since this particular break will cause a lot of confusion. I think most likely in the long term it will pay off, but it’s probably unnecessarily abrupt. Maybe this is being done to lead into Go 2?
Honestly, I do have another wish: I wish there was a version of “go run” that could fetch, compile and run directly from the Internet. Now I realize that’s not particularly safe, but it’s not much unsafer than doing it in two steps, and man would it be slick for small demos.
The use of `go get` vs `go install` was always confusing to me. Especially with the release of modules.
I say this is a change for the better. The cli is more consistent. The argument made in the post seems more like a philosophical "wah, wah, I don't want to".
Agreed. They originally almost made this new command a special flag on go get, leaving go get and go install alone. Now they finally, clearly do one thing each. Might be painful but it's a clean break for the better now.
I am happy I changed jobs away from DevOps so I don't have to deal with stuff like this breaking all the time for no good reason at all (on more computers than mine). Backwards compatibility is gold.
Platforms that are not willing to deprecate parts of themselves in order to spur innovation and keep complexity low, eventually get deprecated as a whole.
Look at Java, the king of BC, even they've started deprecating packages in the JDK for removal.
It's inevitable. Otherwise it's like a brain that can't forget even the most useless piece of information, or an organism that just eats but can't expel the waste.
There's a way to do change slowly, gradually and reasonably, but change is inevitable. If you don't want small changes, big changes will come for you.
No one is arguing for no change. The problems here are the new single corporation languages from this past decade that behave like 6 months is a long time. It isn't. Every ~4 years is more the cadence of everything before.
HN person a few days ago: "It's pretty common that I have to deal with support issues that were fixed six months ago, but the client has never taken an update in all that time..." - https://news.ycombinator.com/item?id=27492147
I was surprised to see six months described as "never, in all that time".
Yeah, I understand that, but from a DevOps or sysadmin perspective all these small things add up. A 6-month deprecation is way too short to first realize this is changing and then fix all the scripts - most likely while still keeping backwards compatibility with older Go versions in the scripts, for projects stuck on older versions.
The remedy in practice is pinning versions per project or, even worse, org-wide, to be able to concentrate on the actual problem being solved when things are breaking too much.
They have only removed packages that were quite literally not used by anyone, after being deprecated for years (deprecation doesn’t break compatibility).
The reason 8 is preferred by many is due to a) not understanding the new model, or b) depending on a dependency that does some shady things with JVM internals it should never have done in the first place; this did get “sealed” by the module system. Most of these deps have actually already migrated to 8+, and even if not, access to internals can still be allowed with a command line flag.
‘go get’ is older than ‘pip’. The way Go packages are versioned and referenced has evolved massively since then. I get the need for backwards compatibility, I really do, but if you can’t modernise 11 year old tech then I think you’re on the road to much bigger problems.
Edit: correction, Pip is older than Go. But for those downvoting me, the dates were just an illustration and don’t really matter to the point I’m making. That point being: after 11 years it should be safe to update a legacy interface.
According to [1], version 0.2 of pip was released 2008-10-29. The 1.0 release which added Python 3 support was 2011-04-04 (which is what Wikipedia lists).
The earliest release I can find for Go at [2] is 2009-12-09, with 1.0 being released 2012-03-28.
The new behavior could easily have been `go mod get`. There are many ways to design things to not break people. When it comes to the tools, the Go team isn't concerned with breaking people. Their docs even note that.
You’re missing the point because ‘go mod get’ would cause the same deprecation notice. The complaint isn’t what the new command looks like, it’s that the old one is being deprecated.
It’s just an illustration to help conceptualise the point. Like when people talk about larger distances using comparisons like “lengths of a football field”. The purpose was just demonstrating that the tooling in question is as old as many of the other common tooling used in other languages, rather than some recent thing that’s prone to frequent changes.
I prefer this change. ‘go get $PKG’ will now only bring in source code. To install the related binary if $PKG is a tool, use ‘go install $PKG’. Honestly, that is honestly how I already operated and I didn’t like that you might auto-install a tool when all you wanted was its source. Never remembered the flags.
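Roughly, the split looks like this (the package paths are just examples):

    # inside your module: add/update the dependency's source and go.mod entry
    go get github.com/foo/bar

    # anywhere: build and install a tool's binary into your Go bin directory
    go install github.com/foo/bar/cmd/bar@latest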
I never liked "go get" approach to dependency management in Go. It always seemed wrong. Even npm or maven got it better.
On the other hand, deprecating a critical tool of infrastructure leaves a sour taste. Many people would be pissed of. Better not including it in the first place.
IMHO such tooling should not be part of the language itself, and be left to the community.
> IMHO such tooling should not be part of the language itself, and be left to the community.
I strongly disagree with this. "All batteries included" is one of Go's strongest points, especially if you look at gofmt, which set a standard for all code out there.
Yeah, I heard this a lot about go. Agree that gofmt is a good call, but it’s not hard to get that one right. I’m seeing more and more of these “included batteries” being deprecated, it’s almost as if standard libraries don’t stay shiny and new forever ;)
Speaking from the perspective of someone with a fair bit of exposure to packaging and shipping software in the Python world, I don't think leaving these matters to "the community" results in a satisfactory outcome.
In more practical terms, I expect this tooling is provided and evolved as a first class affair because Google needs it internally to build their golang projects.
I disagree on the idea of leaving tooling up to the community. Having one set of tooling that everyone uses is a huge advantage. It really helps projects interoperate with other projects.
"go get" was originally not intended as dependency management, just as a convenient way of getting the latest version of some source code you need from GitHub or other hosters. Ok, when it gets that source code, it will also get its dependencies, but I don't think you can really call that dependency management? Later, it was updated to "play nice" with the "modules"-based dependency management, but that was not the original intention behind it.
But I still prefer to have the occasional churn when "the official way of doing things" changes to having x competing tools doing the same thing, which has the potential of generating even more churn.
>The bigger thing is that this particular deprecation is probably going to mean that fewer people use utilities and other programs written in Go. There are quite a lot of open source programs written in Go out there that currently say to install them with 'go get ...'
I can tell you that in the wild, non-Go programmers do not install tools using a Go compiler that they do not even have installed. It's apt, brew, $WINDOWS_PACKAGE_MANAGER, flatpak, etc.
I imagine precious few people install docker or sysdig using `go get`.
I might agree with the author if `go get` wasn't such a terrible and confusing command in the first place. It actively deceives users new to Go, now that modules are a thing, because it does not do what you would expect if you are in a directory with a go.mod file. Projects that list “go get” as the install method do not work, in subtle and surprising ways, for users new to the language, period. It’s 2021; Go modules have been around for, what, 6 minor versions now? Flows have been convoluted (broken) for quite some time. The author fails to discuss the real issue and is simply pandering to some “but workflows might break” argument. If you have a workflow that depends on go get and you aren't using modules, you will have at least months to update it to go install.
Idk.. the alternative seems like a Python 3 nightmare. People start depending on things and then you’re faced with two choices: (1) break lots of people, (2) hold on to every bit of ugly broken legacy stuff in the name of backwards compat.
At this point, I feel like the entire solution space has been explored between all the different languages and there is not a great solution from any of them. Personally, I’m much happier with Go breaking things than becoming a C/C++ hole of legacy going back like 50+ years.
On the timing and speed (only one version warning before removal): we can debate the right speed until the cows come home. But, Python 3 took longer than a decade to kill Python 2 as they were trying to give people “plenty of time”. In the end (IMHO), people don’t change unless they’re aggressively forced to because there is always something better to do with your time than upgrade software to prevent some possible future hypothetical breakage. That calculus changes when it’s not hypothetical at all.
The upgrade cycle should be one major version per year (maybe less often, but not more).
The N+1 version does not break compatibility with the N version, but introduces warnings for things that are going to be removed. Every warning must have a very easy and clear way to resolve it, ideally an automatic one, if the language and tools support it. The developer must not have to redesign their application to get rid of a warning, but rather just make a few obvious mechanical moves.
N+2 version removes previously deprecated things and breaks compatibility with N version.
So you're going to spend a few hours once a year migrating to newer dependencies and resolving their warnings. If your software has been rotting for 10 years, you're going to spend a week to gradually upgrade, resolve warnings, etc. That should be acceptable to everyone. And those who prefer to live in rot should pay someone to maintain those outdated libraries (or just live with bugs and vulnerabilities; that's their choice).
With programming languages and foundational frameworks it might be worth to be a little bit more conservative and extend that deprecation period to a few years. Just don't keep deprecated things infinitely.
The timeline seems way too short for a lot of usecases to be honest.
For young projects, where it's kind of expected (trade-off of being an early adopter), or for small projects (which can be easily maintained by a third party), it's acceptable.
But for bigger/core projects this is far more problematic.
Just for comparison, a lot of Linux distributions will maintain a specific version for ~5 years if not more (Debian, RedHat). For smaller components they can do things like backport patches, or even create the patches themselves; however, for larger ones, they are reliant on upstream providing stable and maintained versions for the life of a specific version of the distribution.
Also, I've seen quite a lot of large and complex projects taking several years to reach production. In these contexts, having to constantly rework components (and re-test them, both individually and as part of the overall system) is not really feasible.
There is no ideal solution, maintaining backward compatibility is costly, and can lead to messy code bases, and it's legitimate to want to avoid that by deprecating API/functionalities. However, downstream (distribution, users of a given library) needs stability and generally cannot afford to rework their solution every 3 months (keep in mind that downstream generally relies on many upstream components, if a lot of these components break at the same time, even in minor ways, this can result in a huge headaches).
IMHO, a more desirable compatibility period, at least for big/core components, would be around 5 years.
There's definitely also a tradeoff. If you make the timeline too long then people will just assume that they can use the old version forever and get mad when that doesn't work out -- say, because the project that needs "several years to reach production" actually ends up being used for 5+ years.
The linux distro doesn't keep the head supported for 5 years though, they just support older versions for longer. Ergo, what you're basically arguing for is an LTS version of golang.
Your "move fast, break things" speed must be inversely proportional to the penetration of the language: penetration into books, academia, courses, hardware, firmware, software, software that depends on other software, and so on.
Soon I will be able to let go of Python 2.7, but not just yet. My industry is the precise opposite of "move fast, break things" because here the "things" happen to be what are called humans, and their "break" scenario involves shrieking as their lungs are burned to leather, and no amount of Jolt soda poured into the floating holographic head of Mark Zuckerberg will suffice.
If you have a piddly language you and five people circulate on Usenet, go for it. Break it twice a day. But once your language gets serious, be ready for inertia. It's a property of mass.
> Go is much more stable than Rust, you know how many Rust libs need nightly to work?
Rust libs that use nightly do so because they're opting in to new features, not because released features are unstable.
Even then with the releases of const generics, proc macros, and a couple other big-ticket features, fewer and fewer Rust projects require nightly anyway.
rustfmt is not a library, and folks don’t depend on it as one. The only nightly features it uses are “hey let me use compiler as a library please,” which is also not, in my experience, common in the ecosystem. Even furthermore, it’s distributed precompiled in your rust installation, so that doesn’t end up giving you a nightly dependency to use it.
Anyway, thanks. Nightly does exist to be used, but our users largely report using stable, so when people say things like this, I am trying to see if there’s any blind spots.
This can also backfire - no one updates Go because of breaking changes and now you've created more problems. Simply miss one or two upgrades and it cascades into an uphill battle. The code base is stuck at N-8, and while version N's popularity plummets, libraries are in disarray.
I know with C, I can rely on it for 45 years. There is something to be said about this and the peace of mind that comes with it.
Obviously, this is a spectrum with extremities and many ways to go about it. We should not discount "Move slowly, and not break things".
I'd differ on this a bit. Your N+1 scheme is fine with me if you want to use the newest and greatest features, but I think old code / libraries should still work so long as they don't use the latest and greatest features.
I think the compiler should just ship with older versions of the language and different translation units can be compiled with older versions of the compiler. So long as the newer versions can interact with the older ABI (which ideally rarely changes), there's no problem.
Go 1.17 will release with a lot of known bugs. Continuing to ship it a year later with known bugs is not different.
But more deeply, LTS ships buggy versions of the software. That's the point - only the truly dangerous bugs (e.g. compilation-triggered RCE or something) get fixed, everything else stays compatible even if it's otherwise undesirable or suboptimal. Code depends on those bugs.
I think you need at least two major releases (or two years) where you can do things the old way or the new way.
Some organizations have a long path to production for runtimes and would prefer to skip releases. Requiring a code change to be tied to the runtime update makes things harder and encourages staying on the old release forever.
Agreed. Python 3 and IPv6 are case studies in how not to do an upgrade.
While I can appreciate the care and engineering skill involved in upgrading a technology while still having backward compatibility, it turns out just to enable the worst possible behavior on the consumer end.
I think I like Apple's way of just forcing breaking changes down everyone's throats (e.g. removing CD/DVD, PowerPC->x86, 30-pin->Lightning, dropping headphone port, etc.). Sure the tech media complain for a cycle, but then everybody adapts and the world's a better place.
I feel like Apple's way of forcing is because you cannot buy new Apple products with old features. In software you can keep using the old version and complain about the new one. This problem is especially visible in poorly designed ecosystems like Go and Python. Do you think introducing generics to a language like Go won't cause a great divide in the community? Sure, it won't be a breaking change but it will still divide people into the "old way" and "new way" camps. I guess the language which changed the most along the years without breaking things is C# but then again .NET actually introduced a lot of breaking changes unrelated to the C# language.
Lots of us on Windows are still stuck with .NET Framework, because no matter how Microsoft keeps pushing it, even their own products aren't fully ported to Core, let alone 3rd parties.
There was literally no choice when it came to IPv6. Everyone who thinks they've got a clever compromise is forgetting that it's not a software problem where it matters.
Right. The specific problem is that 2^32 isn't enough. The most naive "solutions" were from people who hadn't even recognised why it's 2^32. For example, let's just have numbers bigger than 255 between the dots ... why can't we do that?
But once you get past those, another layer of "solutions" see the immediate technical consequences of 2^32 not being enough but don't grok the bigger picture. For example let's just turn existing 32-bit IP addresses into a 64-bit IPng address by adding zeroes, and then all the old addresses are now networks with up to 4 billion nodes. But wait, all those addresses are assigned too now, so we still have an address shortage. Oops.
The least stupid people saw the new address scheme itself as logical but just felt like surely IPv6 should have resisted the urge to fix anything else. Sure, you're embarking on a multi-decade project that likely cannot ever be repeated, but let's just leave all the sharp edges and weird anomalies exactly as they are to minimise upgrade costs. This belief is less stupid, but it is still pretty stupid, that cost difference is a drop in the bucket, and this was our only chance to fix some pretty huge mistakes made when the Internet was newborn.
It would actually be cheaper for a lot of big outfits to go IPv6-only today (and then use edge translators) than drag their heels and try to stay on IPv4 as long as possible, but contrary to popular belief almost nobody really does "cost-benefit analysis" to decide what to do - they make a decision based on emotion and any such analysis is purely there to support that decision.
Buying more addresses is a budget line item you can see, but a lot of big IPv4 costs are hidden in existing practices that "always" have cost money but would vanish under IPv6, and some IPv6 costs are only there if you did no preparation. If you just count the need to replace those IPv4-only printer-copiers you purchased a year ago across the company, and forget that you intentionally chose not to ask bidders if they were IPv6 capable you're going to persuade yourself that it's cheaper to keep waiting.
IPv6 could have been a straightforward expansion of the address field without replacing ARP (which could easily have been extended for v6 addresses; you'd just need to define an address type value) and header processing and address autoconfig and X and Y and Z.
It still would have taken a long time to deploy, because you still need hardware and software support in lots of places, but it might have been a little less long, and a little less bad along the way.
I don't think we got much of use from the new ways either. I can announce IPv6 ranges to my LAN from two different ISPs, but my clients will use an IP from one and send to the gateway from the other, so that doesn't help me.
ARP is a broadcast protocol, so that's pretty bad news, whereas IPv6 Neighbour Discovery gets to use (link local) multicast.
On the cheapest budget cobbled together network, either one turns into a network broadcast. But even a bargain basement 1990s Ethernet chipset does onboard multicast filtering, so irrelevant neighbour discovery packets needn't wake up your OS whereas every ARP query must be drawn to the attention of every host's OS to confirm it's not for them.
However, spend a little more money and the network switch understands multicast, so now the data isn't even sent across links which don't have anybody listening to that address, whereas of course nothing can be done about broadcasts.
You couldn't have easily done this trick in IPv4 because the address space is too small, but in IPv6 choosing the Ethernet multicast addresses to make this work falls out very easily.
You've listed a command line you'd like to run, but have you spent any time to decide what it is that you think it should actually mean?
Maybe you think it should just be exactly equivalent to ping6 ::1 - your ping6 could be updated to allow this but, like, why? That's not an improvement at all.
OK, so maybe it should be equivalent to ::127.0.0.1 - but that's very strange, an operating system could choose to have this do what you expect locally but it's not clear to what end and again it doesn't offer any wider improvement.
IPv6 is big enough that it contains the entire IPv4 address space - in several different places in fact although ::a.b.c.d is the most transparent. But, that's completely irrelevant to the problem, which I suspect means you still haven't quite grasped what the problem even is.
I expected (20+ years ago) that the IPv6 address space would be an extension of the IPv4 address space, so `ping6 IPV4_ADDRESS` would use the IPv4 protocol internally, bind6(IPV4_ADDRESS, IPV4_PORT) would use an IPv4 socket internally, and so on. It's easy to implement:

    an_ipv6_function(address) {
        if ((address >> 32) == 0) {
            // top 96 bits are zero (::a.b.c.d): use the legacy IPv4 protocol ...
        } else {
            // use the modern IPv6 protocol ...
        }
    }

Actual result: I cannot ping an IPv4 address using an IPv6 tool, or connect to an IPv4 server using an IPv6-only tool.
That already exists, it's called an "IPv4-mapped address". The IPv4 address 198.51.100.15 can be represented in IPv6 as ::ffff:c633:640f (normally shown as ::ffff:198.51.100.15). See in particular at https://man7.org/linux/man-pages/man7/ipv6.7.html the IPV6_V6ONLY option which can be used to disable that mapping.
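A small Go sketch of the mapping, for anyone who wants to poke at it (same example address as above):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        ip := net.ParseIP("::ffff:198.51.100.15") // the IPv4-mapped form of 198.51.100.15
        fmt.Println(ip.To4() != nil)              // true: Go recognises the embedded IPv4 address
        fmt.Println(ip.To4())                     // prints 198.51.100.15
    }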
Yes, I know about this mapping (I can read man pages too), but
a) ping6 ::ffff:127.0.0.1 doesn't work either;
b) it's not an IPv4 address, because the binary representation of ::ffff:127.0.0.1 != 127.0.0.1. It's the IPv4 address space mapped into the IPv6 address space (IPv4-mapped-on-IPv6).
In short, IPv6 and IPv4 cannot coexist, so they must be kept separate, so we must keep IPv4 (because it's mandatory) and we may skip IPv6 (because it's optional). So all the internet providers in my area are skipping IPv6, because they can.
All these problems are created by the lack of backward compatibility in the IPv6 protocol.
Why? «Because backward compatibility with IPv4 is unnecessary. IPv6 is so much better. Everyone will jump to IPv6 by the end of next year (1995).»
You're wrong here. Folks actually deploying wanted better co-existence options for folks on IPv6, and it has shown up. You basically route IPv6 as IPv6 until you hit a network border that doesn't support IPv6, then go over to IPv4 for the rest. This lets me do IPv6-only devices without giving up IPv4.
As it is, they locked the upgrade cycle, because (historically) you couldn't do IPv6-only without losing access to IPv4.
In the end folks did get workarounds going using 464XLAT.
IPv6 is great... the only problem is that IPv4 has been patched enough with workarounds to make IPv6 unnecessary for most "normal users", and there is nothing to sell there.
Yes, a simple voice call might use STUN through two NATs, fail to puncture a hole, and then be proxied over a server somewhere, but "it works". No one is willing to pay 1$/€ more per month to get IPv6, no one really needs it, half of IoT doesn't support it (ahem, esp8266/32), it's costly to implement and costly to educate users (especially those relying on NAT for "security").
Most of the IPv6 progress is because governments mandate it for a lot of things, big cloud providers are running out of addresses to buy, and vendors want to sell new routers and switches... basically, everybody except the users.
> But, Python 3 took longer than a decade to kill Python 2 as they were trying to give people “plenty of time”.
I disagree, I think you're learning exactly the wrong lesson from the Python3 debacle. It took a long time for Python 2->3 because the Python3 developers ignored the need for easy backwards compatibility, just like the Go developers are doing. Python3 required all software, including all your transitive dependencies, to be simultaneously updated. Python3 was dead-on-arrival for many years until there were finally concessions made in Python3 to make backwards compatibility less painful (in this case, so that code could more easily be written to work in both Python2 and Python3).
If the go developers want people to use "install" instead of "get", sure! But give time so that the change can be distributed across the ecosystem before removing the functionality. There's no hurry.
Except py3 needed to be a breaking change, because of unicode handling. It doesn't matter if the API rearranging had been delayed and the only 2->3 change was string/bytes handling; it still would have taken a decade.
What's the alternative? Deprecation-warning old-style string handling and slowly nudging it out? Great, that's not actually decidable, since you need to know how a str is being used.
strs are at the very heart of Python. It would be a gargantuan task just to write the linter.
Ruby made a similar change in character encoding starting at roughly the same time; it's (barely) remembered now as a non-event because they didn't take the "opportunity" to bundle in a bunch of other breaking changes which collectively required much more pervasive dinks to pre-existing code than the encoding issues.
> Ruby made a similar change in character encoding starting at roughly the same time; it's (barely) remembered now as a non-event because they didn't take the "opportunity" to bundle in a bunch of other breaking changes which collectively required much more pervasive dinks to pre-existing code than the encoding issues.
No, it's often misremembered that way; Ruby 1.9 had lots of backward-compatibility-breaking changes, there was a long time when lots of things were stuck on 1.8, and much of the other stuff that happened with the Py2 -> Py3 conversion was mirrored. The big difference was that the Ruby community and ecosystem were smaller and less diverse.
> Ruby's also a bad comparison because not many folks are running ruby on windows.
Especially around the 1.9 transition; Ruby was barely usable beyond the level of toy code on windows. The Ruby on Windows story is actually a lot better now.
Which things in particular? Rails supported both Ruby 1.8 and 1.9 simultaneously from Rails 2.3 through Rails 3.2, without it apparently being a huge struggle for the developers, dropping 1.8 only at about the same time that support for it was being dropped by the core Ruby devs altogether. And other posts from maintainers of smaller, but widely used, gems describe it as being a bit of a bother to set up, but not nearly the struggle that it took to run anything in both Python 2 and Python 3. See, e.g.,
One key here is that the common subset of Ruby 1.8 and 1.9 was large enough that you could ship a single library for both, and a lot of large ones did, as in the above examples. That wasn't the case initially in the Python transition, and by the time Python3 had started making enough concessions to back compatibility to make this practically possible, people were a lot less willing to try.
(I personally was managing upgrades or a large Rails app at the time; Rails upgrades were, and remain, a bear, but as an accident of code style which just avoided the tricky spots anyway, the changes required to move that code between any two Ruby versions have never been more than a day or two.)
Lots of things; there was for several years a “caniuse” equivalent for Ruby gems, tracking whether they were on 1.9 yet, because it was such a big issue. (It was a bigger issue outside of the Rails-centered ecosystem.)
> I personally was managing upgrades or a large Rails app at the time; Rails upgrades were, and remain, a bear, but the changes required to move that code between any two Ruby versions have never been more than a day or two.
Yeah, a lot of Rails use exercises a fairly narrow slice of the Ruby language and core/stdlib directly (that is, outside of library code), while leaning heavily on Rails-provided APIs, so that's not too surprising.
This and that have literally nothing to do with one another.
AFAIK Ruby's change was that they introduced a notion of encoding to strings. Python literally changed the string types and their semantics.
Adding a bunch of other BC breakages was a good idea. Having been responsible for the migration of a large codebase, I found the strings change by far the most disruptive and problematic one; the rest did not even come close (the distant second was the change to rounding, IIRC). Ripping off the bandaid and adding a bunch of other BC breaks to clean things up or prepare for the future was, I think, an excellent idea. Most of those were easy to shim or work around (the syntactic BC breaks were especially trivial).
The problem, as GGP's comment correctly notes, was the incompatibility between Python 2 and Python 3: when Python 3 was first released it was impossible to write a non-trivial program which could run on both versions.
Meaning if you wanted to port a project you had to convert everything at once. And as if the odds of that working were not infinitesimal enough, libraries had to fork their own codebases, and either try to maintain completely incompatible codebases concurrently or drop support for Python 2 while literally all their users were still on it.
You're completely missing the point of GP's comment.
Their point is not that Python3 should not have broken backwards compatibility, it's that Python 3 should not have been entirely incompatible with Python 2. That decision is what hampered migration for years, because it turned out not to be a feasible migration strategy, it would have required the entire ecosystem to migrate in lockstep which when you think about it made no sense.
Python 3 migration picked up steam once the core team backported what could be backported to Python 2(.7), and reintroduced syntactic compatibility features to Python 3.
At that point, and with a few shims for standard library stuff, libraries could create cross-version codebases, which meant they didn't have to try and fork their own codebases. And dependents could migrate at their leisure.
> Their point is not that Python3 should not have broken backwards compatibility, it's that Python 3 should not have been entirely incompatible with Python 2.
No, my point is precisely that py2 and 3 are not compatible. Even with the backports, py2 and 3 are not fully compatible. Hence the "py23" or "six" dialect, which is the subset of 2 and 3 that overlap, with some shims to facilitate it.
Py2 and Py3 string apis are not compatible. They cannot be made compatible. They can never be compatible. There's no "from __future__" that could make them compatible without a stupendous amount of work.
Not to be pretentious, but I'm developing the opinion that anyone who thinks that the Python team could have just shimmed their way between 2 and 3, does not really understand the Python compute model.
Google literally invented a py2 to go transpiler (grumpy) to avoid converting. Do you think for a second if there was a way to just "from future" their way to a compatible str object, they wouldn't have taken that route?
> No, my point is precisely that py2 and 3 are not compatible.
Which still misses the point: py2 and py3 not being compatible is different from them being completely incompatible, because there is a useful, workable, common subset of Python 2 and Python 3.
The entire issue with Python 3 was that there originally was not, the core team had to be dragged kicking and screaming into carving it out through backports and reintroductions because their original strategy was not workable.
> Py2 and Py3 string apis are not compatible. They cannot be made compatible. They can never be compatible. There's no "from __future__" that could make them compatible without a stupendous amount of work.
So?
> Not to be pretentious, but I'm developing the opinion that anyone who thinks that the Python team could have just shimmed their way between 2 and 3, does not really understand the Python compute model.
How can you claim that when "shimming their way between 2 and 3" is exactly what ended up happening?
> Google literally invented a py2 to go transpiler (grumpy) to avoid converting. Do you think for a second if there was a way to just "from future" their way to a compatible str object, they wouldn't have taken that route?
Have you considered stopping beating up your strawman?
I should define "shimming their way". I mean smoothly interoperating python2 code with py3 mechanics in a piecemeal way.
Writing 2/3 python is harder than writing either 2 or 3. Porting 2 code to 2/3 is about as hard or slightly more work than just converting straight to 3. But at the end of it, the benefit is your code can run in both environments.
So let's say instead of pulling the plug, py leadership says "Python '2.8' will only run 2/3 code." How is that any different? You're gonna have the people who already ported, who have no need for 2.8 cause they can run 3, and the holdovers who have yet to port from 2.7.
That's what I mean by there's no shim.
> The entire issue with Python 3 was that there originally was not, the core team had to be dragged kicking and screaming into carving it out through backports and reintroductions because their original strategy was not workable.
Maybe I got into the game too late then, but I don't recall it being this much of a dust-up.
"six" was released on pypi as 0.9.0 on Jun 28, 2010. 1.0 was in Mar 2011. Even if the py core team was dragged kicking and screaming, that's still 4 + 5 years of people who could have been porting, and weren't.
Basically, my primary point is, it was a damn hard problem, and I doubt there's much the core team could have done better at the time. All the porting tools in the world won't get people to upgrade if they are resistant for any reason.
E: py3 was released in Dec 2008. So I guess those first two years might have been tough. But to me that seems exactly like the ripe time to work on the translation infrastructure and things like six, where you have public release but no immediate pressure to switch.
> E: py3 was released in Dec 2008. So I guess those first two years might have been tough.
3.2 was released early 2011, and P2 compatibility features were reintroduced as late as 3.5 (% on bytes) in 2015, though for my money the last key one was the reintroduction of the `u` prefix in Python 3.3, in 2012. That's the point at which the migration really became feasible without suffering.
Furthermore, that's assuming all dependencies were ported, which was not the case: waiting until porting was feasible (and until they had the time and manpower to do it) was also something dependents had to do, e.g. Django first added Python 3 support in 2013 (IIRC PyLint only supported 3.4 onwards). It's difficult to port your own software if the stuff you depend on hasn't been ported over.
It didn't need to be such a disruptive breaking change that fragmented the ecosystem.
They should have made the unicode handling changes opt-in per package or file. Then, start warning about anything that hasn't opted in. Then, only after a significant migration period, change the default to the new unicode handling.
The changes that were needed were far too invasive for that to work in practice. Any library that touched anything even vaguely stringy or bytey would have needed to operate in either bad mode (2) or better mode (3), but that mode would have been an almost-invisible part of its public API. That starts causing serious trouble when you want to use bad-mode libraries and better-mode libraries simultaneously: the two are fairly incompatible, automatic translation would not be possible, so you'd have to know which mode each library was operating in, which undermines the whole endeavour.
In practice this isn't as big a problem as you're making it out to be, because UTF8 is compatible with ANSI.
Nearly all of computing has made a similar transition in the 2000's and 2010's, and I've certainly debugged my share of encoding issues, but in general it worked for a large majority of use cases.
I think most people would've taken a few encoding issues over the 10+ years long migration.
If it were ASCII and UTF-8 that this was all about, it wouldn’t be as big a deal. But it’s not.
Python 2 was a code-pages world, not an ASCII world. Also, Python's Unicode isn't and has never been UTF-8 anywhere; rather, it's sequences of Unicode code points (note: code points, not Unicode scalar values. This and the representation of str are a pair of things I find quite bafflingly bad about Python 3, because it means Python's Unicode strings aren't, y'know, Unicode strings, since they can be ill-formed; and they're stuck with bad fixed-width representations, so any even vaguely interesting string gets stored as (possibly ill-formed) UTF-32).
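To make the "code points, not scalar values" complaint concrete, a small Python 3 illustration (mine, not canonical):

```python
# Python 3 happily builds a str containing a lone surrogate, which is not
# well-formed Unicode and cannot be encoded as strict UTF-8.
s = "\ud800"                       # lone high surrogate; len(s) == 1, no error
try:
    s.encode("utf-8")
except UnicodeEncodeError as exc:
    print(exc)                     # "'utf-8' codec can't encode character ... surrogates not allowed"
print(s.encode("utf-8", "surrogatepass"))   # b'\xed\xa0\x80'
```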
Python 2’s str type was a mess, used for bytes data, for strings of uncertain encoding, and for Unicode strings. In a vacuum, you might say you could make that work by splitting it up (and indeed the types module was an early endeavour to do that sort of thing back before it became evident it was insufficient). The key is that this confusion and widespread misuse was rife through the standard library and other third-party libraries. If the Python 2 → 3 transition had been just str-as-bytes/str-as-who-knows-what/unicode → bytes/something/str, it might have been manageable. But the libraries were so shot that a straightforward migration was impossible, because str was too many things and it was quite impossible to tell statically what should be done about it. (And impossible dynamically to resolve as well.)
Python 3 IO isn't UTF-8. The entire point of the change was to support _arbitrary_ encodings (most of which are legacy code pages).
Python would sometimes guess the output encoding on Windows to be ASCII even when UTF-8 was supported by the output device. So you'd end up with encoding-oblivious Python 2 programs that worked perfectly reading and writing UTF-8, and Python 3 programs that failed upon encountering a non-ASCII character.
They've since improved the defaults on Windows, but the changes in Python 3 that caused me a lot of pain were the ones that made IO non-Unicode.
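The usual defence against that class of bug is to never rely on the platform-guessed encoding; a rough sketch of the kind of pinning that saves you on Windows (the file name here is just for illustration):

```python
import sys

# Pin the encoding explicitly instead of trusting the locale/console guess.
with open("log.txt", "w", encoding="utf-8") as f:
    f.write(u"delta vs \u0394\n")

# On Python 3.7+ the standard streams can be repointed at UTF-8 as well
# (PYTHONUTF8=1 / PYTHONIOENCODING=utf-8 achieve the same from the outside).
if hasattr(sys.stdout, "reconfigure"):
    sys.stdout.reconfigure(encoding="utf-8")
print(u"\u0394")
```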
Man, it's harrowing in hindsight to look back on porting 2->3 on Linux, and to remember the coworkers who had to do it on Windows with encoding-oblivious libraries.
They're still constantly finding bits of code that break because some customer decided to switch from "delta" to "Δ" in some machine-learning log output.
It's well known that you should make the "correct" choice the default. So unicode should have been opt-out (from __past__ import legacy_strings). Result: Everyone batch-adds unicode opt-out to their whole codebase, but new files use the modern way. Gradually people remove the opt outs as they go over old code.
This couldn’t work. For starters, the names of classes, attributes and functions are str in both versions, which is to say, bytestrings in Python 2 and Unicode strings in Python 3. This cannot be handwaved away with a pragma.
That ignores how python scoping works. Built-ins are actually the last thing python checks.
You can override the built-ins by re-defining the symbol in any of: local functions, enclosing functions, or the global scope and it will impact the entirety of that scope.
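A minimal sketch of that lookup order (local -> enclosing -> global -> built-in): rebinding the name `str` at module scope shadows the built-in for everything in that module.

```python
# Rebinding the *name* str at global scope; the built-in type itself is untouched.
str = lambda value: "shadowed: %r" % (value,)

def show(x):
    return str(x)      # resolves to the global binding above, not the built-in

print(show(42))        # shadowed: 42
del str                # drop the shadow; lookup falls through to built-ins again
print(str(42))         # 42
```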
You missed the point. It’s not about names themselves. What happens if I want to lookup bytestring-named attributes on a class defined in a Unicode-string module or vice versa?
I'm afraid I don't understand your question completely, so I'll answer what I guess you mean. Let's say you have a function in one module that expects a bytestring. This function is called from another module, and a unicode string is passed. What happens?
Well, Python is a duck-typed language, so it will handle the unicode string as if it were a bytestring, and that may or may not work correctly. It's up to the modern code to interface correctly with the legacy code. That could mean manually converting between bytestrings and unicode strings.
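A small sketch of what that looks like in practice on Python 3 (`legacy_checksum` is an invented stand-in for a bytes-oriented API):

```python
def legacy_checksum(data):
    # Expects bytes: on Python 3, iterating bytes yields ints, so sum() works.
    return sum(data) % 256

text = u"\u0394elta"
try:
    legacy_checksum(text)            # iterating a str yields 1-char strings -> TypeError
except TypeError as exc:
    print(exc)

print(legacy_checksum(text.encode("utf-8")))   # the caller converts at the boundary
```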
The correct way should be the default, but in the case where you already chose the wrong default, you shouldn't introduce the correct way of handling things and make it the default all at once.
Upgrading to a new major version should never require changes to your code if you fixed all deprecation warnings in the previous major version. Otherwise, you make incremental upgrades impossible.
The new alternative should always be introduced as opt-in with the old default deprecated, and then made default in the next major version.
Have an import switch that enables a legacy mode and converts your strings on the way into and out of the library? You could even require that application developers cast their strings to the correct type themselves if you don't want the conversion handled automatically.
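A hypothetical sketch of that kind of boundary shim (all names here are invented for illustration); it also hints at why full transparency is hard, since byte-level operations don't treat text the way str operations do:

```python
import functools

def legacy_str_boundary(encoding="utf-8"):
    """Encode str arguments into a bytes-only API, decode bytes results back out."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            args = tuple(a.encode(encoding) if isinstance(a, str) else a for a in args)
            result = func(*args, **kwargs)
            return result.decode(encoding) if isinstance(result, bytes) else result
        return wrapper
    return decorate

@legacy_str_boundary()
def legacy_upper(data):        # stand-in for a library routine that still thinks in bytes
    return data.upper()

print(legacy_upper(u"h\xe9llo"))   # 'HéLLO' -- the é survives because bytes.upper() only maps ASCII
```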
That's just not possible with Python's computational model. That works for `from __future__ import print_function` because that's just an additional function.
str is too deeply embedded and used in too many dynamic and unique ways for that approach to work without doing some really ugly or nonperformant or community-fracturing things.
We're not talking about deprecating language constructs though. This issue is only talking about deprecating a subcommand in favour of another subcommand from literally the same binary. This should take all of 10 seconds for the developer to adjust to. I'm all for pragmatism, but let's not exaggerate the significance of this proposal.
And it might well be that the deprecation notice will run for two versions. At the moment no official decision has been made as it is still an active issue.
I mean, the Python2 vs Python3 split was primarily caused by language changes, not tooling. And Go's 1.x compatibility promise is rock solid. It seems difficult to compare the two.
This exaggeration of Python3 migration is ridiculous and frankly has become boring. Yes, Python 3 broke compatibility. But we had more than a decade to migrate. It definitely does not deserve to be the poster child for migrations gone wrong.
The philosophy of incompatible upgrades is brutal for popular languages, IMO.
The main issue is not "I'm giving you X amount of time, now please fix it, since you've been given generous notice."
The cost of such a change is ENORMOUS.
As an example, at a recently IPO'd company, fixing the Python 2 to Python 3 migration issues alone took a team of 10 engineers 6 months of round-the-clock work, and the original creators of the code were long gone! I can only imagine the plight of thousands of other folks in similar boats, and of resource-constrained entities without a boat at all!
Then there are widely distributed packages and libraries, used by a large audience, that are dev-complete and no longer maintained as such. The cost of fixing those is much, much higher.
It makes one more than wince at the thought of going through this exercise again.
The python language developers did not do it wrong. The community of python users, which includes companies with ten-year-old unmaintained scripts, did not want to invest in transitioning until the absolute last second. The "Apple Solution" is to make the last second the first second. Who knows if that's the right thing to do.
The python language developers absolutely did do it wrong. You could not run mixed python2 and python3 code. Either all your dependencies were migrated and you could flip the switch and try things out, or some stragglers kept you running code with 2-and-3 compatibility shims while you waited.
And if one of your dependencies started using new features from 3.x it would break everyone stuck on 2.7, so you can't even use the new features in your library. So why would you bother migrating? So everyone was stuck in a game theoretical position where there was no first mover advantage because one straggler would invalidate all the investment.
> So everyone was stuck in a game theoretical position where there was no first mover advantage because one straggler would invalidate all the investment.
Clearly not, as most libraries are now Python 3 only, so the switch did happen.
I do see a lot of people complaining but I see no practical solutions being suggested. "Just don't do a breaking change" isn't a practical solution when the problem being fixed was such a core feature of every language, strings.
>Clearly not, as most libraries are now Python 3 only, so the switch did happen.
You don't build a convincing argument that the python developers did nothing wrong by saying this 1.5 years after python 2.7 went EOL.
>I do see a lot of people complaining but I see no practical solutions being suggested. "Just don't do a breaking change" isn't a practical solution when the problem being fixed was such a core feature of every language, strings.
A system should have been put in place to allow python3 code to import python2 code. Then people have an incentive to actually run python3 and use python3 features.
More people are stuck on Java8 which has been EOL for a very long time. They are often stuck because Android hasn't upgraded afaik. But Java11, 14, etc can use almost all Java8 code unless it uses I think sun.misc.Unsafe or something like that. The point is that people can use modern Javas with Java libraries targeting older versions.
> The community of python users, which include companies with ten year old unmaintained scripts, did not want it invest to transition until the absolute last second.
Try shipping python3 scripts to systems that only have python2 interpreters. Doesn't work very well unless you also start to ship the interpreter and all its dependencies. I learned my lesson and just avoid distributing any Python code to customers now.
Much different from Python, though. As long as you compile your C program against the same version of libc you're running against, the program will still work, no matter how old the code. You just can't run a program that was compiled against an incompatible libc. A Python 2 interpreter can't run Python 3 code, and vice versa, no matter what, because the actual language changed.
You can't always easily compile an old C program against the new C libraries you are running. Even if you can compile it, changes in libraries can break programs. E.g. say in the ancient library, memcpy with null pointer parameters worked, if the size was zero. The new library checks it and aborts. There are all sorts of issues like this.
Every time libraries and compilers advance, you're effectively porting all the existing code. You have to fix any build breakage, and re-validate everything.
Moreover, Python is a C program. Just get the old Python 2 C code and compile it against "the same version of libc you're running" and it will work, right? Where is the problem?
Why recompile it when the existing binary still works with the current runtime? At least on Linux the old version of memcpy will still be included in the runtime library to avoid this kind of issue.
But it will run on a modern system if you compiled it against the CentOS version. I have a VM running to do just that. Trying to run Python 2 code on a modern runtime on the other hand will have the Python community feed your remains to the pigs.
> What language lets you ship scripts that use features of version X and also works on versions <X?
I didn't care about Python 3 features, but you can't run Python 2 code on a Python 3 interpreter either, and the Python 2 interpreter was abandoned. Meanwhile you can run ancient JavaScript code on both old and modern systems, ancient C++ on both old and modern systems, ancient Java mostly on both old and modern systems, etc.
> Docker tends to be the easy solution for all this stuff, and it's language agnostic.
I think I will rather go with statically linked binaries before I dump docker on unsuspecting users. Just the pain of getting that dependency whitelisted by their corporate IT makes my skin crawl.
It is not an exaggeration; the Python3 move was one of the worst releases ever in terms of managing the process. It took a decade for library creators to get it all settled. It was not end users' fault that they depended on libraries that wouldn't upgrade for several years.
It is easy to look at Python 3.x today and say the migration should have been easy, but some of us were there, and remember how intransigent some core devs were about things like u"" and %, and how Python 3 offered nothing better upon release.
And it is still slower for small glue scripts. On my Windows 10 box a hello-world script finishes in 73 ms with Python2 (best of 5 runs) versus 238 ms with Python3. That translates into a noticeable slowdown of build times for a project that uses python code extensively in its build system.
The problem was not that Python3 had broken things. The problem was the scale of the breakage and the inability to mix python2 and python3 code. Moreover, it only became possible to write libraries compatible with both Python2 and 3 a few years after Python3's release.
This is an interesting side effect of open source. Companies cannot afford to break their customers' deployments and so tend to bend over backwards to guarantee backwards compatibility. But in open source that link is much more tenuous, and project reputation does not directly translate into monetary value in most cases, especially if there is no commercial model attached to a particular solution.
This is made worse by the fact that open source is typically developed with a very early public release to drive adoption, rather than the solution being battle-tested in-house for the first decade or so and only released once most of the kinks have been worked out.
Any type of backwards compatibility breakage provides a decision point to either decide:
* do I put in the work to upgrade all my code so that it works with this new version?
* do I jump ship to some other library/language that I've been interested in?
In the Python 3 breaking case, it provided good reason for engineers at major enterprises to start building in Go instead of investing in the Python 3 upgrade.
Another debacle was Angular being completely backwards incompatible, giving an opportunity for React to gain mindshare.
In the language proper, the Go authors have made very clear commitments about backward compatibility. (See https://golang.org/doc/go1compat.)
What's notable about this situation is seeing that their compatibility commitment extends only to the stuff inside the language, and not to the tooling provided to work with the language.
I tend to agree. I'm one of those developers maintaining a few open source projects on the side and sometimes struggle to keep up with updates to the tooling and best practices.
For me, it has caused more confusion than help that the old way of using go get still works, even without any warnings to tell me I should start using the new way. I wanted to upgrade myself to using the latest best practices ASAP, but struggled to learn the proper new way, because all the old ways I do almost automatically still work.
As long as the messages about what to do instead are really clear and helpful, I think it's better to help people move in the right direction as soon as possible.
To be fair, Python's way of handling versions is the worst, and that even extends to the package maintainers, especially with what happened in the python2 -> python3 transition. I don't like the logic of "no version specified = python 2". I have made the mistake of thinking I had the right python binary, only to get a syntax error because it was python 2.
This tells me that you've never used a well-managed language or framework. Those aren't the only two options.
Ruby 1.9 was released around the same time as Python 3, with a similar amount of large breaking changes, and no one was holding on to Ruby 1.8 the way the Python community held on to Python 2.7. You could keep using it if you wanted to, but no one did. Ember.js is another example that I've used that was refreshingly well-managed, with a clear (oftentimes automated) upgrade path between minor releases. Using semver and LTS versions, it didn't just break backward compatibility without a significant deprecation period. Those are just a few examples off the top of my head, but I'm sure other good examples exist.
When things are well-managed, you as a developer have the freedom to upgrade when it's a good time for you. When you have production software with real users or a business that depends on it, you can't always just drop whatever you're doing to upgrade, which is why backward compatibility is needed.
If it's done right, there's a self-imposed drive to want to move to the latest with all its improvements. On the other hand, with Python 3, they took away things that people cared about, leaving developers not only with no drive to upgrade, but an irrational impetus to hold on to what they had, even when the situation was fixed at a technical level.
In most cases, a breaking tooling change should only impact the root project (e.g., they need to change their readme from `go get` to `go install`), so there's only one codebase to fix and it's in the control of the person upgrading the tooling.
Whereas, a breaking language change could impact transitive dependencies of the root project, which a developer doesn't have control over.
Moreover, a breaking language change can force many changes across any given project, including insidious bugs. Breaking the tooling behavior means you have to update your build script and your workflow. Meh.
There's value in delightfulness in CLI tools. Having "go get" be the incantation for Go to go and get something is a nice experience for a dev, particularly when you're picking up the language/tooling.
This is not a deprecation of the language, or even a small part of the language, and it's certainly a lot less than dumping huge parts of an existing and functional language as we saw with python 3 and perl 6. This is simply changing the command-line options that you would do on a somewhat occasional basis, but it does not require changes to any code (except for, perhaps, installation shell scripts or install docs.)
This does not infringe on the Go backwards-compatibility promise that attracted me to Go in the first place in any way, and it helps clarify a confusing situation. It's a win-win.
> There are quite a lot of open source programs written in Go out there that currently say to install them with 'go get ...' in their READMEs and other resources.
My experience is that a fair number of these utilities are broken anyway, because they rely on "go get" to fetch dependencies, and the dependencies they use have changed too much.
(Yes, that shouldn't happen, but it does happen, and it's one of the major problems that go.mod fixes. You can shame library authors for making backwards-incompatible changes, but that won't fix your tools.)
> The bigger thing is that this particular deprecation is probably going to mean that fewer people use utilities and other programs written in Go.
I kept hearing this about Python 2->3, specifically in the context that people would start writing stuff in Go instead. And it was a little bit true, sure, but there's plenty of stuff that's still in Python that people still figure out how to use.
So I'm deeply amused to hear that people are going to switch away from utilities written in Go and curious what language they're switching to.
I agree with the author, that is a ridiculously short time for a breaking change in widely-used functionality.
Every "update that breaks things" is teaching your users to not update the software, ever. Frankly, it's hurtful to users.
By all means send the deprecation warning, but give a lot more time before removing the functionality. Tools require time to get widely distributed unless there's a huge emergency. In this case, there's no need for the hurry.
How do we solve the problem of clueless maintainers breaking everyone's shit? How do we make them understand that their urge to homogenize and rectify and beautify all APIs is causing people anguish. Do they just not care?
Are they just deprecating it, or removing it? Removing it goes against the backwards compatibility thing they're proud of. I have no objections if they choose to remove it for go 2.
1. `go get` will work as expected when run from a directory with a go.mod (or have I misunderstood?)
2. Given the above, the argument about people who include `go get` in their GitHub READMEs makes little sense. Those who blindly copy `go get` from a README will quickly learn to run `go mod init` to set up their projects.
3. There is a deprecation warning, giving everyone plenty of time to adjust to these minimal changes.
If the error message is something decent like “please run ‘go install’ instead” then it seems like users are going to learn the new command rather than giving up? It might be more a problem if ‘go get’ is used to install commands in installer scripts.
tbh it's not really a breaking change; it's about fetching binaries, and it has nothing to do with compiling Go code or the Go language itself. I don't remember the last time I had to use go get to download a binary (I'm using wget or curl for that).
Lessons (not) learned: Go never worked well with semantic versioning or packaging, didn't mature and evolve as a language fast enough for productivity at scale in ways users wanted, became stuck in its ruts, and was overwhelmed by an influx of trend-following noobs. I bounced because its stiltedness never seemed to improve.
I never understood the need for modules and versioning in the first place. You can handle that by forking your dependency's repo and keeping it the way you want it.
Either way, this is a terrible way to do it. I've lived it. It results in libraries that don't work unless you freeze to exact git hashes of dependencies and never evolve because it's too painful.