You can tell from this that gwern has never spent any time climbing up the ranks of a large organization.
Almost all of the hard problems in the world are straightforward coordination problems. The few that aren't come to mind much more easily because they're much more legible than coordination problems.
To paraphrase a saying: amateurs discuss effort, professionals discuss impact. Whatever field you're in, with a few notable exceptions, once you reach a certain magnitude of impact it becomes 95% about solving coordination problems.
And throw in Conway's law [0] and the scope of the coordination becomes easier to understand.
My career trajectory brought it home for me: I was intermittently contracting for/with a UK govt acquisition organisation buying systems for Govt customers.
Task: make these disparate systems speak to each other
* Q: Why don't the systems use the same protocols?
* A: They are delivered by different contractors
* Q: Why don't the contractors use the same (existing) standard?
* A: They aren't contractually incentivised to
* Q: Why do they have different types of contract?
* A: Because each one is contracted by a separate acquisition team
* Q: Why don't the acquisition teams use common requirements / contracting / technical / etc approaches?
* A: Because they are separately funded by different customers
* Q: Why don't the customers speak to each other?
* A: Decades of institutionalised rivalries and differences of opinion regarding apparently common problems
It took me about ten years to work down this list, understanding each layer. One of my most satisfying career moments was when I was asked to brief a senior customer decision maker who owned the resulting mess, and drew them a diagram showing the layers. He then set in motion a top-down alignment of the bit he could directly control: requirements writing. The belief was that getting consistent requirements would be a good starting point that would hopefully flow down the organisations involved.
[Edit] Another problem with increasing coherence in this way is that it costs money (e.g. to adopt / set up the standards / protocols involved). Money that could be spent delivering customer requirements, and which PMs are reluctant to spend on 'under the bonnet' techie things the customers won't see let alone understand.
I retired three years later so have lost touch. But before I left a couple of new procurements were starting up, and I attended at least one meeting where the separate customers were in fact in the same room and playing nicely with each other, which was a very good sign. The acquisition authority had also established some cross-cutting functions (design, contractual support) that would in theory support the new projects coherently, when they reached that point.
As my career progressed, I got a better handle on the genuine cultural issues behind some of the customer-level differences. For example, one particular category of user had very different career progressions / longevity in different parts of the overall organisation (heroes in one part, zeros in the other, approximately). This affected the relative prioritisation of the relevant requirements by the respective customers, and the perceived value and affordability of solutions, in a way that overshadowed the tech-focussed arguments younger-techie-me would have made.
This is a great illustration of:
a) what a relatively mundane value-add via co-ordination looks like. Something like this is impossible to communicate without a lot of context, but these stories are happening in every single pocket of a giant org.
b) what the artistry of it is. It's not easy, and it can take years to understand a single co-ordination problem well enough that you can push a single pebble and cause the right avalanche of change.
c) why anyone who is looking for impact inevitably gets drawn into it, regardless of whether they personally enjoy it or not (spoiler alert: I have never met anyone who told me this was their true passion from birth. All of us wade into it reluctantly and eventually develop the internal motivation to make it satisfying).
I’ve said for a decade now that there’s a huge difference between programming and software engineering and you’ve managed to put it in a single not too complex sentence, which generalizes well to other domains.
If you’re reading this as a junior engineer anywhere and don’t quite understand it, just print it out and hang it by the bathroom mirror right next to your operator precedence table.
It’s sad because they’re all just systems. People in a particular circumstance behave in a statistically predictable way given a system of incentives. If a behavior is unexpected, you’ve encountered a bug in your mental model and you should just chase it down like any software bug. Making a change to the system requires the same set of programming and analytical skills of navigating any complex code base.
There’s a reason why nerds who grokked this became masters of the universe for a few solid decades.
I have this intuition that we (people), even the really bright ones, are really bad at predicting the emergent properties of systems.
The phenomenon shows up almost everywhere, but I find that cellular automata (Game of Life, anybody?) and abstract board games with limited rules are the easiest way to show people that they are really not that far from children when it comes to imagining the consequences of a few rules interacting with each other on a scale greater than 10-20 moves ahead.
Go is my favourite example: a few almost self-explanatory rules, yet show it to someone who has never played before and they will be lost for days or weeks at a time when it comes to predicting the consequences of two players laying stones according to those few simple rules.
And then, once they have learned something about the game, change one rule (allow two moves per turn, for example) and watch them be lost all over again...
It's almost like the quote from Feynman (more of a paraphrase, because I don't remember the exact phrasing, only the general idea): "People do not understand what it really takes to know anything."
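If anyone wants to try this on themselves, here is a minimal Game of Life step in Python (my own toy sketch, nothing from the article): the rules fit in a dozen lines, yet guessing the population even twenty generations out is beyond most of us.

```python
# One Game of Life generation over a sparse set of live cells.
from collections import Counter

def step(live):
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A five-cell seed (roughly the R-pentomino).
cells = {(0, 1), (0, 2), (1, 0), (1, 1), (2, 1)}
for _ in range(20):
    cells = step(cells)
print(len(cells))  # try predicting this in your head first
```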
People, or rather any given individual, might react in a predictable manner. Guessing that manner would require a) building a model for said individual and b) feeding that model all the input the person has at any given time.
a) is close to impossible, while b) is deeply in the realm of never, ever, not even close. Now imagine doing that for any number of individuals across a complex network of social connections.
People who optimize for effort look for truth. People who optimize for impact look for effectiveness. Unless you need to predict individuals at high resolution, that level of modelling is wasted effort.
People are largely a statistically predictable bag of attributes in most of the areas that matter for impact. The few areas where humans are wildly unpredictable are what make us beautiful, and we focus on those as the "soul" of a human, but they're also not usually the ones that intersect with how we care about impact.
There will be a statistical prediction of all the ways people will choose to get to a train station based on distance, weather, percentage car ownership, quality of sidewalk etc. Something as simple as improving the lighting of the right road to the train station can increase the ridership to a significant degree because it helps people feel safe walking home in the evening which means they also feel safe heading to work in the morning.
You spend enough time with a sensitive enough radar in a large organization and you see the same statistical patterns. People are milling around and impacted by their environment and things you want to make happen aren't happening until you notice the corporate equivalent of the unsafe road and you put down a couple of lights and rapid change occurs. You do it enough times consistently and you get promoted and the patterns become subtler.
Many things that look like coordination problems are really optimization problems.
It's common for huge government programs like the Space Shuttle, F-35, California High-Speed Rail, etc. to try to satisfy the widest range of different groups and therefore be an increasingly poor solution for any one group. In such cases the root problem isn't finding the optimal solution to make everyone happy; the root problem is picking which groups get what they want, and are willing to pay for it, such that the project works well.
It could easily be that the correct solution is to abandon a project early rather than waste a great deal of time and money, either directly or by upsetting groups who will block the project. The same applies inside organizations, though it's less obvious from the outside.
Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration. - Stan Kelly-Bootle
The examples you listed above are what I would call 0.5 solutions. To me, these are straightforward co-ordination problems that failed to find a solution and then devolved into optimization problems that failed to find the right local minimum because the problem formulation was flawed in the first place.
Co-ordination problems aren't fuzzy and nice almost all the time. A lot of solving co-ordination problems is getting two people to agree to knife the third to make the problem go away.
Your example disproves your point. Array index works just fine if C programs use 0 indexing and Pascal programs use 1 indexing. There’s zero need for compromise between different groups here and the overhead of trying to do so is often just wasted effort.
Coordination only applies within some group, so your example is showing coordination bias. By assuming everyone needs to agree to something you’re failing to optimize.
Pulling the plug doesn't look appealing when the project is your livelihood, but looking after stakeholders' interests isn't always the same thing as continuing to collect a paycheck. Thus at the root of such projects should be an optimization problem, not a coordination one.
There are 3 separate aircraft in the F-35 program which, if we had designed any 2 of them, could have been coherent, well-designed, well-executed single aircraft. It doesn't matter which 2; pick any 2 of the 3 missions and we could have gotten 2 great aircraft.
But planners looked at the costs and said we simultaneously can't justify 3 separate programs and also can't simplify it down to one program, hence the ungainly mess that resulted.
Like I said before, if someone could have convinced any 2 of the branches of the military to knife the other one, we could have reached a much better place than we are now. That we didn't was a co-ordination problem.
I wouldn’t call the F-35 a mess, all 3 are solid aircraft and the F-35A was both cheaper than the F-22 and can be exported.
Rather than having the branches pick 2 of 3 the real question IMO is if making more F-22 instead of trying to include the F-35A would have simplified the F-35B and F-35C enough to be a net benefit. That isn’t about one side losing the fight, it’s a question of which approach results in all sides being better off.
However, from a military contractor standpoint the F-35 was a gravy train and ultimately any talk about efficiency means reducing profit. In that context things worked wonderfully for Lockheed Martin and it’s many suppliers and subcontractors.
The F-35A was fine, it was mostly the B that clusterfucked the project. Again, if we're talking about incentive problems, those are just straightforwardly co-ordination problems.
It’s complicated, the weight reduction redesign spurred by the B significantly improved the other 2 variants. So arguably the B was a major benefit to the project even if it drove up prices and added significant delays.
The only way to quantify whether it was a success or not is in comparison to having the B end up as a separate project. That comes down to a cost-benefit analysis, though we don't actually know much about that alternative because it didn't happen.
I think this is only partly true. I think it's more generally true that once hard technical problems are solved once, all that's left are coordination and prioritisation problems.
"The few that aren’t come to mind much easier because they’re much more legible than coordination problems."
How much of the world's effort is spent on truly greenfield R&D, versus how much attention do we pay to it in the media, in our discussions, and in the examples we reach for when we try to convey abstract concepts?
A more economical title: "Holy wars are coordination problems".
As an uncanny cosmic coincidence would have it, today is an anniversary of the Third Council of Constantinople, counted as the Sixth Ecumenical Council [1] by the Eastern Orthodox and Catholic Churches, as well as by certain other Western Churches. The Council met in 680-681 AD and condemned, wait for it: monoenergism and monothelitism as heretical.
Coordination problems are not even "problems". They are the stuff of Homo Sapiens social life. The underlying reality, to the extent it exists at all, is but an excuse to engage in enthusiastic formation of competing clans.
If people can engage in passionate discussion (all the way to actual destructive war) over "monoenergism" they can also do so about Python 2 versus 3 or any other harebrained perpetual holy war in tech.
Ever played the investment game? (or variations) [0]
I used it in a game theory class to teach distribution, thresholds and coordination.
Take a pile of pennies and give one to every member of the class. A group of about 100 in a large lecture theatre is best.
Now tell them that if they agree to invest in your scheme by pitching in their penny, you will guarantee to give them two pennies (2x sure ROI) just for participating... BUT IFF more than, say, 75% of the group invest. If less than three quarters of the group invest, the investors lose everything.
Watch in amazement!
Slowly lower the participation threshold.
You can get it down to 40% sometimes, with more than half the class literally screaming at each other, "Don't you fucking understand, you idiots!". The minority still holds out, claiming it's a con, and digs in even more as the threshold lowers.
Controversially, I sometimes then use that to teach leadership/demagoguery by acting as (or selecting someone else to act as a stooge) the coordinating authority and ordering them all to invest. It's actually terrifying to see how democracy succeeds or fails based on the flimsiest of information paths and communication factors.
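The dynamics are easy to sketch in code. Below is a toy simulation of the game as described (100 players, 75% threshold, double-or-nothing for investors); the independent participation probability `p_invest` is my own made-up stand-in for whatever the room actually believes, not part of the classroom exercise.

```python
import random

def play(n_players=100, threshold=0.75, p_invest=0.6, seed=None):
    """One round: each player independently invests one penny with prob p_invest."""
    rng = random.Random(seed)
    invested = [rng.random() < p_invest for _ in range(n_players)]
    success = sum(invested) >= threshold * n_players
    # Investors get 2 pennies on success, nothing on failure;
    # holdouts keep their single penny either way.
    payoffs = [(2 if success else 0) if inv else 1 for inv in invested]
    return sum(invested), success, sum(payoffs)

for p in (0.5, 0.7, 0.8, 0.95):
    n_in, ok, total = play(p_invest=p, seed=1)
    print("p_invest=%.2f -> %3d invested, success=%s, total pennies=%d"
          % (p, n_in, ok, total))
```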
The stakes there (a penny) are far too low for the game itself to matter.
I'd say what you're witnessing is instead a debate about whether people are trustworthy or not, plus some attention/status seeking, preexisting social dynamics in the group, and people just having fun stirring up trouble.
Could be. If only they paid professors enough I could afford to hand out some £5 notes. Oh well, for that reason I probably won't ever again get to try that fun game with higher stakes.
The game can't be played with higher stakes; the whole point is that the 2X payoff can only be guaranteed in the first place if the stakes are too low to make any real difference. In the real world, people who refuse to invest in such schemes when the stakes are enough to make a real difference are being rational, because in the real world, nobody can guarantee a 2X payoff with stake amounts that actually matter unless they are running some kind of scam.
That is the basic reason why these games played in academic settings are useless for understanding the actual world outside academia.
> The game can't be played with higher stakes; the whole point is that the 2X payoff can only be guaranteed in the first place if the stakes are too low
Also a good point about how contrived the experiment is. Double your money is always too good to be true.
> That is the basic reason why these games played in academic settings are useless for understanding the actual world outside academia.
I am always wary of assumed dichotomies between "the real world" and "academia". Please let me try to explain why I tire of that jaded trope...
Actually there's three domains of interest here. There is "academia", which is indeed a cloistered madhouse of egotistical and socially stunted Luftmensch. And there is the world outside it, which is often deflated into something like "Industry" - to which people attach pompous tags like "the real world". Rot. I've worked in both and neither encompasses the "real world". Neither captures the reality of the Favela, the terminal cancer ward, the battlefield... Both are microcosms with big ideas about their own scope.
Thirdly, separate and distinct, there is the abstract mathematical world of Game Theory, being neither the academy nor the factory boardroom, but a non-existent and inscrutable world of equations and theories. It makes few if any claims. One may derive utility from it, apply it, or not.
> Actually there's three domains of interest here.
None of which, as far as I can tell, are "the real world". So there is obviously a fourth domain that you didn't cover (although you briefly described some aspects of it).
And that kind of disconnect is what I was talking about. Abstractions are all very well, but unless they ultimately bottom out in something getting improved in the actual real world (not "academia" or "industry" or "game theory", but the world of Favelas and terminal cancers and battlefields and so on), they are useless.
How do we focus minds on beneficent applications instead of "museums of ideas for loafers in the garden of knowledge" and cheap, ignorant industrialism for its own sake?
I've taught thousands of students now, and looking back I'm haunted by their intelligent enthusiasm and burning passion for real-world problems. Those kids' education totally failed. By the time they were only 25 and had been through the "university" system, they were beaten down by the need to shoehorn brilliant ideas into merely profitable ones, to "make a living" despite their higher talents and God-given rights, and to prove themselves to gatekeepers of mediocrity and the status quo.
Business, academia, knowledge, money, prestige; all of these are insufficient drives for the emerging generations.
Well-timed article; I've been having a ton of headaches trying to explain this for Universal Analytics vs GA4...
It extends way further than programming and could be applied to metric vs imperial measurement, English as the de facto international language vs Esperanto, and so on.
There's a lot of hidden wisdom in just using the most popular thing and ignoring the fanfare.
Maybe also with a lindy-ness bias -- X might be more popular than Y at time T, but X came out of alpha at T-1 and Y released version 17 at T-5. But those two rules will carry you very very far if you want to focus on building instead of picking.
Not really OT, but what a beautifully formatted page. I especially love the convenient footnotes next to the text when on a wide screen, while being at the bottom for narrow windows.
This, however, is just hurtful; no need to call my tech stack irrelevant just because it's kinda true:
> People have ceased to debate Emacs vs vim not because any consensus was reached or either one disappeared, but because both are now niche editors which are more threatened by a common enemy—giants like Visual Studio Code or Eclipse or Sublime Text.
It's not even "kinda true". Emacs and vi are just part of the landscape, there for those that want to use them. People don't have much to argue about (beyond jokes) after nearly 50 years.
More people SHOULD have refused the python3 transition plan (the most common example in the piece). It was absolute garbage and the eager adopters were the ones responsible for it becoming permanent.
That's why Python 2.7.x was kept alive and updated for several years, the Python 3 Wall of Shame existed, new projects were started in Python 2 by default anyway, and so on.
In hindsight it may be easy to forget how BAD it was.
To provide a counter-example: the startup I was working for switched from python-2.7 to python-3.4 after that came out. It was fine, no major issues. I wrote a whole lot of python2.7 before that, and a whole lot of python3.5 afterwards. I maintained our deployment and local dev scripts, and a couple libraries and tools on pypi, which were compatible with both python3 and python2. It was very doable.
I think one mistake was promoting the "2to3" converter early on. It turned out better to add a couple features to python2.7 and python-3.2/3.3/3.4 which made it quite feasible to write code compatible with both.
It is perhaps ironic that writing code that deals with str/unicode/bytes across both python2 and python3 is a bit more complicated than just python2 - but, again, still doable. I did it for years, many popular libraries did it for years, until recently dropping python2 support. It worked.
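For flavour, the kind of shim many of those dual-version codebases carried around looked roughly like this; the names here are illustrative, not from any particular library.

```python
import sys

PY2 = sys.version_info[0] == 2
text_type = unicode if PY2 else str   # 'unicode' only evaluated on py2  # noqa: F821
binary_type = str if PY2 else bytes

def ensure_text(value, encoding="utf-8"):
    """Return text in the native text type of the running interpreter."""
    if isinstance(value, binary_type):
        return value.decode(encoding)
    if isinstance(value, text_type):
        return value
    raise TypeError("expected bytes or text, got %r" % type(value))

def ensure_bytes(value, encoding="utf-8"):
    if isinstance(value, text_type):
        return value.encode(encoding)
    return value
```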
My opinion is there were 2 major issues and a bunch of minor cuts.
The major issues (imo):
* Reliance on extending the runtime via the C-bindings (and that changing)
* A community that had largely gotten accustomed to stability being thrust into an enormous change all at once
I think people tend to be in a camp of either "this was good and needed to happen because Python had unshakable warts" or "this was bad and we should have lived with the language's mistakes forever". The conflict between those two camps is really what made it so painful: breaking changes were held back for years, and then once one got in they all came at once. I think the reality is that coming up with a migration plan and working towards it incrementally would have given developers time to focus on one upgrade at a time rather than a full rewrite. Node gets a bad reputation for being "chaotic" or "constantly changing", but the changes are small and manageable by comparison. Go, on the other hand, has managed to maintain strong stability, but with an ecosystem that works primarily in the core language rather than the implementation language (C, in Python's case).
I think the Python 3 migration did a ton of damage to the community's willingness to encourage breaking changes that are needed, and it's why packaging and runtime self-hosting have been comparatively weak despite a huge userbase.
Not breaking apis is a great ideal but if you have to, breaking them in planned, bite-sized, frequent bursts is often MUCH MUCH better than once a decade.
I finished what will hopefully be my last 2->3 transition ever back in 2021. 6 months, ~180 commits, and 3,000-odd files in that single PR, after years of preparatory work by others. I needed a sabbatical after that.
There was no way to use python 2 libraries in python 3. This, right out of the gate, made them feel more like different languages than a migration, and forced people to delay their own migration until all their dependencies had migrated.
(Example of how to do it properly: "netstandard" in C# libraries could be used in .Net Framework and .Net Core, despite those being very different runtimes. Heck, even the ancient Microsoft "_UNICODE" migration where all your functions got macroed to either functionA/functionW depending on which string type you were using was less miserable.)
Within that, it was difficult to migrate code file-at-a-time. If your project had really good isolation between modules and they all had separate matching unit tests, then you had a good chance. Otherwise it was "run program, whack bug, repeat x1000".
After a while libraries (six, future) were developed to allow you to write polyglot code that generally worked in both. This mitigated the print() issue. print() was less of a problem for large projects, more a problem for novices with outdated tutorials, and people with large script collections all of which broke individually.
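Concretely, the built-in `__future__` switches (which `six` and the `future` package layer more shims on top of) are what made single-source files practical. A minimal example:

```python
# Opt Python 2.6+ into the Python 3 behaviour so the same file runs on both.
from __future__ import print_function, division, unicode_literals

print("hello", "world", sep=", ")  # print is a function on both now
print(7 / 2)                       # 3.5 on both, not 3
print(7 // 2)                      # 3 on both: explicit integer division
```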
Changing the string type, and return type of lots of IO functions, in a non-typechecked language, caused total mayhem. Suddenly all sorts of code which never cared about character encoding was forced to.
The sad thing was that the transition could have been incremental - the "polyglot option" arrived somewhere in the middle of the transition, and a lot of debate was still raging as if there were no way to use python2 libraries in python3 (and vice versa), when in our migration we had already taken the initiative and adapted the few (relatively small) libraries we needed so that they supported both python2 and python3.
> As someone who wasn't around for the Python 2->3 transition, what made it so painful?
All the people who screamed and yelled so much that they prevented Python 2 from having significant breaking changes. That meant that the needed changes kept piling up until the list was gigantic.
People forget that the Python 3 thing wasn't done in a vacuum. All the important people in Python had direct memory from the upgrade of Python 1 to Python 2 and what a big fiasco that was.
So many people dragged their feet on that that Guido et al. made a point of making 3.0 have hard, breaking changes in an attempt to force the upgrade through in a timely fashion instead of the long, drawn out, painful process that was the Python 1 to Python 2 change.
We all know, in hindsight, that the forces of inertia were FAR more intransigent than Guido and Co. estimated. However, that wasn't obvious looking forward.
I'm not sure there was any good solution. People would have pissed and moaned no matter what.
IMHO there was a good solution: launching Python 3.0 only once there was a working way to build libraries usable from both python2 and python3 code, as became possible later on. That, IMHO, was a key factor in making the migration actually work.
I don't think "intransigent" is a fair description. At the end of the day, Python 3 was a different language from Python 2, and people weren't going to switch languages if there wasn't a benefit to them. That's a perfectly reasonable position. The Python dev team extended the end of life date for Python 2 from 2015 to 2020 because they realized that Python 3 simply hadn't advanced enough by 2015 to make it worth while for all the Python 2 users to switch.
As someone who had just started learning Python at the time, it was terribly discouraging that I couldn't even get a "hello world" script to work, following exactly the example in any tutorial. I kept thinking there must be something wrong with my system configuration, or my environment variables, my character encoding, or something. There's always a million things that can go wrong when getting started with an unfamiliar programming language.
I didn't imagine they'd made a breaking change to the print statement which, of course, no beginner-learn-python site had yet caught up to warn you about. The error message at the time was not as helpful as it is now.
One of the big changes was to make many basic tools into lazy iterators instead of greedy lists (map, filter, range, etc.). I can remember having to teach my fellow scientist/engineer colleagues not to just loop over an index variable when working with certain datasets, but this was a major bit of friction because they were so used to Matlab and other languages where direct indexing is the primary method. While lazy execution is great for many things, it is not something that is necessarily common knowledge among people who use coding as a means to an end.
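A quick illustration of the sharp edge (my own minimal example, not from the datasets mentioned above): on Python 3 these builtins hand back one-shot iterators, so code that indexed them or walked them twice had to materialise a list explicitly.

```python
squares = map(lambda x: x * x, range(5))   # a lazy iterator on Python 3

print(list(squares))   # [0, 1, 4, 9, 16]
print(list(squares))   # []  (already exhausted: a classic surprise)

evens = list(filter(lambda x: x % 2 == 0, range(10)))  # materialise when you
print(evens[0], len(evens))                            # need list behaviour
```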
Additionally, because so many of the core libraries were 32-bit only or Python 2 only, you ended up having to either write your own version of them or just go back to 32-bit Python 2. Numpy in particular (and therefore transitively anything halfway useful for science and engineering) took several years to stabilize and I have many memories of having to dig into things like https://www.lfd.uci.edu/~gohlke/pythonlibs/ to get unofficial but viable builds going for the Windows machines we used. It was enough of a pain to deal with dependencies that I actually ended up rolling my own ndarray class that was horrendous but just good enough to get the job done.
davidjfelix outlines some of the issues, but I wanted to add in some that may have been simple upfront headaches that made people resistant:
- Simply put, print was ` print "text" ` instead of ` print("text") `. Looking back, this was such an annoying habit to break; but if your codebase had thousands of print statements, that becomes thousands of changes
- range(3) used to return a list [0, 1, 2] while Python 3's range(3) returns an object. If your codebase relied on that explicitly created list, then it'd break in Python 3.
- Division was originally integer division, so again if you expected an integer and are now getting a float, then more crashes
- except used to be written `except (Error1, Error2), e` and Python 3 requires the `except (Error1, Error2) as e` form instead
All in all, tons of changes that you'd need to make before switching, otherwise your codebase would crash. It also meant you couldn't rely on Linux's default Python (2.7). I never needed to make the switch on a production codebase, but hopefully you can see why someone would drag their feet.
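A rough side-by-side of the items above, with the Python 2 spelling in comments and the Python 3 spelling as live code (a sketch from memory, not an exhaustive list):

```python
# print "text"                        # py2: a statement
print("text")                         # py3: a function

# xs = range(3)   -> [0, 1, 2]        # py2: a concrete list
xs = list(range(3))                   # py3: range is a lazy object

# 5 / 2  -> 2                         # py2: integer division
print(5 / 2)    # 2.5                 # py3: true division
print(5 // 2)   # 2                   # explicit integer division on both

# except (ValueError, KeyError), e:   # py2: comma form
try:
    int("nope")
except (ValueError, KeyError) as e:   # py3: 'as' is required
    print(type(e).__name__)           # ValueError
```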
[edit] Python was also still a "new kid on the block" among languages. Its popularity was growing, but since it was not yet an industry standard, these codebases were mostly maintained by hobbyists, so I imagine a lot of it came down to finding that mythical "free time" I keep hearing about.
Python 2 had b'byte string' and u'unicode string' with unmarked being 'byte string'
Python 3.0 had b'byte string' and unmarked being 'unicode string'
Because python 2 was so lenient about mixing and matching the two string types, it wouldn't error if you were only using ASCII values, and finding all the places where strings weren't quite right could get pretty difficult. It also meant libraries had to awkwardly always use b'byte string'.decode('utf8') if they wanted to create a unicode string and be compatible with both python 2 and 3.
Python 3.3 then reintroduced prefixed u'unicode strings' to make it significantly easier for libraries, simply by always using b'' and u'' instead of ever using ''. It also made any preexisting unicode-aware code "just work" without having to be converted from u'' to ''.
I think I remember hearing about other similar compatibility changes made in either 3.4 or 3.5, but can't remember what they would have been.
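In code, the awkward middle period versus the post-3.3 situation looked something like this (toy constants, just to show the shapes):

```python
# -*- coding: utf-8 -*-

# Portable across 2.x and 3.0-3.2, at the cost of decoding every literal:
GREETING = b'hello'.decode('utf-8')   # unicode on py2, str on py3

# Once 3.3 reintroduced the u'' prefix, single-source code could simply write:
GREETING = u'hello'                   # unicode on py2, str on py3
PAYLOAD = b'\x00\x01'                 # bytes on py3, str (i.e. bytes) on py2
```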
The most important benefit, which most people ignored until they needed to debug something, was fixing the horribly broken exception system in Python 2, where you would lose your stack traces when re-raising.
Most people who don't come from English-speaking countries also immediately benefited from Unicode awareness by default. There were a few folks who cried about having to prefix bytes with b'' when pushing text through sockets, a vocal but small minority.
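For anyone who never hit it: the difference shows up whenever you wrap a low-level error in a higher-level one. A small Python 3 example (my own, with a hypothetical config path):

```python
import traceback

def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:
        # Explicit chaining: the report keeps BOTH tracebacks. In Python 2 the
        # original one was silently lost unless you used the 3-argument raise.
        raise RuntimeError("could not load config %r" % path) from exc

try:
    load_config("/nonexistent/config.ini")
except RuntimeError:
    traceback.print_exc()  # "The above exception was the direct cause of ..."
```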
> there were a few folks who cried about having to prefix bytes with b'' when pushing text through sockets
It wasn't just that; any program with string literals in it that were being treated as bytes in Python 2 would have to have all those string literals prefixed with b in Python 3; otherwise the program would break.
Also, defaulting to unicode meant having to have a default encoding that became critical in many more places--for example, the standard streams (stdin, stdout, stderr) were now unicode by default instead of bytes, so there were now plenty of new footguns when the standard stream encoding that Python guessed was wrong and you had no way to change it. Not to mention that if the standard streams were pipes instead of ttys, unicode made no sense anyway.
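A small illustration of that footgun, assuming a stream whose guessed encoding can't represent the character (commonly a pipe with an ASCII guess):

```python
import sys

print(sys.stdout.encoding)        # whatever Python guessed for this stream

snowman = "\u2603"
try:
    print(snowman)                # raises UnicodeEncodeError under a bad guess
except UnicodeEncodeError:
    # Escape hatch: skip the text layer and write bytes yourself.
    sys.stdout.buffer.write(snowman.encode("utf-8") + b"\n")
```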
* The "python" command pointed to "python2" on many Linux system for a long time
* A lot of libraries were not ported for many years
* People kept repeating that "Python 3 is not ready yet", because they read an old statement on Stackoverflow and didn't consider that this might change in only a few years.
It was pointless. The python core team decided to obsolete every existing line of python code in exchange for basically nothing. On top of it a few features were changed in ways that felt outright vindictive (division, the u'' convention being the ones that affected every single codebase).
Unicode was a meaningful change to the language that it's difficult to imagine could have been done without breaking compatibility or repeating C's mistakes. There isn't much excuse for the rest, as forks with miniscule development resources managed to backport everything else. 3 did turn out a generally nicer language than 2, even if it was slower and caused a solid decade of pain.
Except that python2's `unicode` was a mistake. We already knew at the time that "just use utf-8 for everything, and never use numerical indexing" was the way to go, but instead python3 decided to leave us with no working string class, as opposed to python2, which at least had a mostly-working one.
That said, the major forces against the transition were that it was impossible to write code that worked with both versions for a long time:
* python 2.7 added support for a few python3 syntax/library features, but python 2.6 was very widely deployed on stable distros.
* it wasn't until python 3.3 (very late to actually be deployed) that you could even write a unicode string literal (you know, the thing they claimed the transition was all about) that still worked in python3
* python before 3.3 had a completely bogus idea of "unicode" on Windows anyway, even ignoring the API nonsense.
* python 3 completely broke the way indexing worked for `bytes`, making it produce integers (rather than single-codeunit instances) of all things.
* there was a lot of gratuitous package breakage. Instead of leaving deprecated shim packages, they just removed them, and you had to add a third-party dependency to get something that worked with both versions (and said dependency hooked into core parts of the interpreter in weird ways).
It wasn't until around 3.5 that there started being any actual advantage of python3 at all. But there is still tons of code that is no longer possible to write.
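The `bytes` indexing change in particular keeps biting people; a quick illustration:

```python
data = b'abc'

print(data[0])       # 97 on Python 3 (an int); Python 2 gave 'a'
print(data[0:1])     # b'a' on both, so slicing became the portable spelling
print(bytes([72, 105]))   # b'Hi' on Python 3; means something else entirely on 2
```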
Python is made for cobbling things together with string and duct tape, for users who would do such a thing. Any change, however good and trivial, will have them paralyzed, gnashing their teeth.
This is not an isolated one-off decision. In your opinion, what should people do after refusing the python3 transition plan? Refuse changing the string treatment forever, or block the transition until some better plan is proposed?