I think unfortunately the conclusion here is a bit backwards; de-risking deployments by improving testing and organisational properties is important, but is not the only approach that works.
The author notes that there appears to be a fixed number of changes per deployment and that it is hard to increase - I think the 'Reversie Thinkie' here (as the author puts it) is actually to decrease the number of changes per deployment.
The reason those meetings exist is because of risk! The more changes in a deployment, the higher the risk that one of them is going to introduce a bug or operational issue. By deploying small changes often, you get to deliver value much sooner and fail smaller.
Combine this with techniques such as canarying and gradual rollout, and you enter a world where deployments are no longer flipping a switch and either breaking or not breaking - you get to turn outages into degradations.
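To make the "outages into degradations" point concrete, here's a minimal sketch of a gradual rollout loop. Everything infrastructure-shaped in it (the error-rate lookup, the traffic split, the rollback) is a hypothetical stub standing in for whatever your load balancer and metrics system actually expose:

```python
import random
import time

def error_rate(version: str) -> float:
    """Stub: in practice, query your metrics system (hypothetical)."""
    return random.uniform(0.0, 0.02)

def set_traffic_split(version: str, pct: int) -> None:
    """Stub: in practice, reconfigure your load balancer (hypothetical)."""
    print(f"routing {pct}% of traffic to {version}")

def rollback(version: str) -> None:
    """Stub: in practice, shift all traffic back to the stable build."""
    print(f"rolling back {version}")

def gradual_rollout(version: str, baseline: float, tolerance: float = 1.5) -> bool:
    """Shift traffic to `version` in steps; abort early if errors spike.

    Because exposure grows step by step, a bad change degrades service
    for a small slice of users instead of taking the whole system down.
    """
    for pct in (1, 5, 25, 50, 100):
        set_traffic_split(version, pct)
        time.sleep(300)  # let real traffic hit the canary for a while
        if error_rate(version) > baseline * tolerance:
            rollback(version)  # only pct% of users ever saw the bug
            return False
    return True

# usage: gradual_rollout("v2", baseline=error_rate("v1"))
```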
This approach is corroborated by the DORA research[0], and covered well in Accelerate[1]. It also features centrally in The Phoenix Project[2] and its spiritual ancestor, The Goal[3].

[0] https://dora.dev/

[1] https://www.amazon.co.uk/Accelerate-Software-Performing-Tech...

[2] https://www.amazon.co.uk/Phoenix-Project-Helping-Business-An...

[3] https://www.amazon.co.uk/Goal-Process-Ongoing-Improvement/dp...
> The reason those meetings exist is because of risk! The more changes in a deployment, the higher the risk that one of them is going to introduce a bug or operational issue.
Having worked on projects that did full CD and also on projects that had biweekly releases with release-engineer meetings, I can state with full confidence that risk management is correlated, but it is an indirect and secondary factor.
The main factor is quite clearly how much time and resources an organization invests in automated testing. If an organization has the misfortune of having test engineers who lack the technical background to do automation, they risk never breaking free of these meetings.
The reason why organizations need release meetings is that they lack the infrastructure to test deployments before and after rollouts, and they lack the infrastructure to roll back changes that fail once deployed. So they make up for this lack of investment by adding all these ad-hoc manual checks to compensate for the lack of automated checks. If QA teams lack any technical skills, they will push for manual processes as self-preservation.
To make matters worse, there is also the propensity to pretend that having to go through these meetings is a sign of excellence and best practices, because if you're paid to mitigate a problem you have absolutely no incentive to fix it. If a bug leaks into production, that's a problem introduced by the developer that wasn't caught by QAs because reasons. If the organization has automated tests, it's hard not to catch it at the PR level.
Meetings exist not because of risk, but because organizations employ a subset of roles that require risk to justify their existence and lack the skills to mitigate it. If a team organizes its efforts to add the bare minimum checks to verify a change runs and works once deployed, and can automatically roll back if it doesn't, you do not need meetings anymore.
This is very well said and succinctly summarizes my frustrations with QA. My experience has been that non-technical staff in technical organizations create meetings to justify their existence. I’m curious if you have advice on how to shift non-technical QA towards adopting automated testing and fewer meetings.
Hi, senior SRE here who was a QA, then QA lead, then lead automation / devops engineer.
QA engineers with little coding experience should be given simple automation tasks, with similar tests and documentation / people to ask questions of. E.g., set up a pytest framework that has a few automated test examples, and then have them write similar tests. The automated tests are just TAC (tests as code) versions of the manual test cases they should already write, so they should have some idea of what they need to do, and then Google / ChatGPT / automation engineers should be able to help them start to translate that into code.
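For example, a manual test case like "logging in with a bad password shows an error" translates almost mechanically into pytest. The `login` helper below is a hypothetical stand-in for whatever your application actually exposes:

```python
# test_login.py - a manual test case rewritten as code ("TAC").
import pytest
from myapp.auth import login  # hypothetical import; use your real entry point

def test_valid_credentials_succeed():
    assert login("alice", "correct-horse").ok

def test_bad_password_is_rejected():
    result = login("alice", "wrong-password")
    assert not result.ok
    assert "invalid" in result.error.lower()

# One manual test case often fans out into many automated ones for free.
@pytest.mark.parametrize("user", ["", "a" * 300, "alice'; DROP TABLE--"])
def test_malformed_usernames_are_rejected(user):
    assert not login(user, "whatever").ok
```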
People with growth mindsets and ambitions will grow from the support and being given the chance to do the things, while some small number will balk and not want anything to do with it. You can lead a horse to water and all that.
> The main factor is quite clearly how much time and resources an organization invests in automated testing.
For context, I think it's worth reflecting on Beck's background, e.g. as the author of XP Explained. I suspect he's taking even TDD for granted, and optimizing what's left. I think even the name of his new blog—"Tidy First"—is in reaction to a saturation, in his milieu, of the imperative to "Test First".
I tend to agree. Whenever I've removed artificial technical friction, or made a fundamental change to an approach, the processes that grew around them tend to evaporate, and not be replaced. I think many of these processes are a rational albeit non-technical response to making the best of a bad situation in the absence of a more fundamental solution.
But that doesn't mean they are entirely harmless. I've come across some scenarios where the people driving decisions continued to reach for human processes as the solution rather than a workaround, for both new projects and projects designated specifically to remove existing inefficiencies. They either lacked the technical imagination, or were too stuck in the existing framing of the problem, and this is where people who do have that imagination need to speak up and point out that human processes need to be minimised with technical changes where possible. Not all human processes can be obviated through technical changes, but we don't want to spread ourselves thin on unnecessary ones.
So this seems quantifiable as well - there must be some number of processes / components that a business is made up of, and those presumably are also weighted (payment processing has weight 100, HR holiday requests weight 5, etc.).
I would conjecture that changing more than 2% of processes in any given period is “too much” - but one can certainly adjust that.
And I suspect that this modifies based on area (ie the payment processing code has a different team than the HR code) - so it would be sensible to rotate releases (or possibly teams) - this period this team is working on the hard stuff, but once that goes live the team is rotated back out to tackle easier stuff - either payment processing or HR
The same principle applies to attacking a trench, moving battalions forward and combined arms operations.
Now that is of course a "management" problem - but one can easily see how to automate a lot of it, and how other "sensory" inputs are useful (i.e. which teams have committed code to these sensitive modules recently).
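A back-of-the-envelope version of that check is easy to sketch; the weights and the 2% budget below are illustrative values from the conjecture above, not measured ones:

```python
# Conjectured "change budget": weight each business process, then flag a
# release period that touches more than ~2% of the total weight.
WEIGHTS = {
    "payment_processing": 100,
    "search": 40,
    "reporting": 20,
    "notifications": 15,
    "hr_holiday_requests": 5,
}

def change_load(changed: set[str]) -> float:
    """Fraction of total process weight touched in this period."""
    return sum(WEIGHTS[p] for p in changed) / sum(WEIGHTS.values())

def over_budget(changed: set[str], budget: float = 0.02) -> bool:
    return change_load(changed) > budget

release = {"hr_holiday_requests"}
print(f"{change_load(release):.1%} of the weighted system changed; "
      f"over budget: {over_budget(release)}")  # 2.8%; True
```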
One last point is it makes nonsense of “sprints” in Agile/Scrum - we know you cannot sprint a whole marathon, so how do you prepare the sprints for rotation?
I agree entirely - I use the same references. I just think it's bordering on sacrilege what you did to Mr. Goldratt. He had been writing about flow, translating Toyota Production System principles, and applying physics to business processes long before someone decided to write The Phoenix Project.
I loved The Phoenix Project, don't get me wrong, but compared to The Goal it's like a cheaply produced adaptation of a "real" book, made so that people in the IT industry don't get scared when they read about production lines and run away saying "but I'm a PrOgrAmmEr, and creATIVE woRK can't be OPtiMizEd like a FactOry".
So The Phoenix Project if anything is the spiritual successor to The Goal, not the other way around.
That's indeed how I wrote it, but I could have worded it better. Very much agree that the insights in The Goal go far beyond the scope of The Phoenix Project.
I am really interested in organizations' capacity for absorbing changes.
I live in the B2B SaaS space, and as far as development goes we could release daily. But on the receiving side we get pushback. Of course there can be feature flags, but then they would create a backlog of not-yet-enabled features.
In the end features are mostly consumed by people and people need training on the changes.
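The mechanics of that decoupling are trivial, for what it's worth; the hard part is exactly the people side. A minimal sketch, with an in-memory dict standing in for a real flag service:

```python
# Feature flags let code ship dark: "release" becomes a per-customer
# toggle you can schedule around training instead of around deploys.
FLAGS: dict[str, set[str]] = {"new_invoice_ui": {"acme_corp"}}

def enabled(flag: str, tenant: str) -> bool:
    return tenant in FLAGS.get(flag, set())

def render_invoices(tenant: str) -> str:
    if enabled("new_invoice_ui", tenant):
        return "new invoice UI"   # deployed weeks ago, enabled today
    return "old invoice UI"       # everyone else, until they're trained

print(render_invoices("acme_corp"))  # new invoice UI
print(render_invoices("globex"))     # old invoice UI
```

Of course, none of that makes the backlog of dark features go away; it just moves the bottleneck from deployment to enablement, which is the pushback you're describing.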
I think that really depends on the product. I worked on an on-prem data product for years and it was crucial to document all changes well and give customers time to prepare. OTOH I also worked on a home inspection app and there users gave us pushback on training because the app was seen as intuitive.
> ...there users gave us pushback on training because the app was seen as intuitive
I would weep with joy to receive such feedback! Too often the services I work on have long histories with accidental UIs, built to address immediate needs over and over.
> By deploying small changes often, you get to deliver value much sooner and fail smaller.
Which increases the number of changes per deployment, feeding the overhead cycle.
He is describing an emergent pattern here, not something that requires intentional culture change (like writing smaller changes). You’re not disagreeing but paraphrasing the article’s conclusion:
> or the harder way, by increasing the number of changes per deployment (better tests, better monitoring, better isolation between elements, better social relationships on the team)
You are not. The conclusion of the article is the same, you "need to expand the far end of the hose" by increasing deployment rate or making more, smaller changes. What was your interpretation?
This isn't even a software thing. It's any production process: the more work-in-progress items there are, the longer each item stays in progress, the greater the risk, and the greater the amount of work. Shrink the batch, shorten the release window.
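That WIP/duration relationship is just Little's law (average WIP = throughput × average time in system). A quick worked example with illustrative numbers:

```python
# Little's law: avg_wip = throughput * avg_cycle_time. At a fixed
# throughput, every extra in-flight change sits in the pipeline longer.
throughput = 10  # changes completed per week (illustrative)
for wip in (5, 20, 80):
    cycle_time = wip / throughput  # weeks each change spends in flight
    print(f"WIP={wip:3d} -> each change waits ~{cycle_time:.1f} weeks")
```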
It infuriates me that software engineering has had to rediscover these facts when the Toyota production system was developed between 1948-1975 and knew all these things 50 years ago.
It all just looks like ordinary black on white to me. Somebody else was complaining of blurry text, also not obviously in evidence on my PC (Firefox, Windows) or iPad, so perhaps it's browser-dependent.
It changes based on the `prefers-color-scheme` CSS media query. When the value is `dark`, the page shows a yellow font with an orange glow over a dark background, otherwise it shows black on white.
> You can replace any of Math definition with code.
You often (but not always!) can replace a math definition with code—but either your code is sufficiently precise that it's just another way of phrasing the definition, or you're in the analogous situation to using a language defined by its implementation instead of a specification. And there's plenty of useful space for such languages—but they aren't formally specified languages. Math that isn't formally specified isn't math, in any sense that a mathematician would recognize—which is not to say that it can't be useful.
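To make that concrete with a deliberately tiny example: the mathematical definition of gcd ("the largest d dividing both a and b") and Euclid's algorithm are different artifacts, and the claim that the code "is" the definition is really a theorem that the two agree:

```python
# The specification: gcd(a, b) is the largest d with d | a and d | b.
def gcd_by_definition(a: int, b: int) -> int:
    return max(d for d in range(1, max(a, b) + 1)
               if a % d == 0 and b % d == 0)

# The implementation: Euclid's algorithm. That it satisfies the
# definition above is a provable fact, not a restatement of it.
def gcd_euclid(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a

# Spot-checking agreement is testing; proving it for all inputs is math.
assert all(gcd_by_definition(a, b) == gcd_euclid(a, b)
           for a in range(1, 30) for b in range(1, 30))
```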
Sure, a bullet the same size or smaller will 'fit' in a given barrel, but depending on its weight and dimensions it will, at best, perform equally and, at worst, destroy the barrel.
But a bullet isn't the whole story; the rest of the cartridge has to fit snugly for the action of the gun to lock properly. Again, best case - the gun doesn't fire. Worst case, the action blows up.
This is a pretty simplified overview, but unfortunately the assertion above is a (pretty absurd) myth. Sorry :(
It’s technically possible though. The case of 38sp -> 357mag is a working example. The cartridge is longer and pressures higher so you don’t want to load a 357 into a 38 (and often can’t). But you can run 38sp in your 357 all day long.
One warning I received from my gunsmith though is that you’ll want to clean the barrel super well when switching so the powder ring of the longer round doesn’t cause problems. Hasn’t blown up in my face yet. Will report back if that changes.
Sure, there are select cases where it is possible or practical: .38 Special/.357 Magnum as you point out, and I believe there are revolvers that let you shoot .45 ACP or .45 Long Colt with a cylinder swap, as well as the same for .22LR and .22 WinMag.
However in the GP's context, the typical small arms ammunition for NATO (5.56x45mm and 7.62x51mm) and the Warsaw Pact (5.45x39mm, 7.62x39mm, 7.62x54R) are not close to being interchangeable.
I've fired .22LR out of my .22 WinMag. It's possible but the casing will sometimes split as there is more room for expansion. The particular cheap brand of .22LR I was using only did this about 10% of the time. Not a big deal for a bolt action but I could see that leading to jams in autoloaders.
I was thinking of the Ruger Single Six, where you swap cylinders to shoot the two rounds. Not sure I'd want to try it in an autoloader. The difference in cartridge length not fully engaging the bullet into the throat can't be good for it.
E: Not to mention the excess fouling in the chamber and even back into the action. Cleaning that is no fun.
> The case of 38sp -> 357mag is a working example.
That was intentional on the part of the people bringing tree-fiddy-seven to market. Backwards compatibility to something cheap and readily available is a great way to ensure more people buy guns chambered in your new and uncommon at the time caliber. They could have made it forward compatible too (i.e. chamber tree fiddy seven in your .38) but didn't because that would have been wildly unsafe.
Yeah, same. Their plans included a number of active screens/supported devices at a time. It seems pretty ridiculous to charge for the number of screens but then put restrictions on where those screens are.
Too right. I only got the 4-person NF account for my elderly Dad. If he's out, I'm unsubscribing and going back to the high seas for their content. What a shame - four years ago was truly the golden age.
Yep, paying $20/mo specifically to be able to play on 3 devices. If they stop allowing sharing, there is little to no value in that plan for me anymore.
Yeah, if you like being stuck with TV that looks like what we got in 2004.
Netflix should stop whining about account sharing when they demand I pay for two screens to get minimally acceptable TV, but can only use one per their terms.
Because there's no bullshit like this preventing me from sharing with family who are overseas (gets really fun when we want to watch something together but it's available in their country but not mine). Because I can download the content and store it locally for trips. Because I can now access a wider selection of content than Netflix offers.
It's not cheap, but the fact that I and other commenters here are willing to pay that money and spend the effort of setting up piracy engines is a pretty loud signal that the entertainment market has failed. Again.
There was a golden age when Netflix had a great catalog and didn't engage in dark patterns. It's long gone, and the streaming industry has become worse than the cable tv industry. I'm happy paying multiples of the netflix subscription cost for my piracy setup just to have the convenience and ownership of the content.
Why’s that? FWIW, this isn’t something new, this household sharing rule/explanation has been posted for years, and Netflix has been enforcing it softly, not very strictly except for obvious and egregious offenders for a long time.
Netflix is signaling that they're going to start cracking down on it.
>Later in Q1, we expect to start rolling out paid sharing more broadly. Today’s widespread account sharing (100M+ households) undermines our long term ability to invest in and improve Netflix, as well as build our business. While our terms of use limit use of Netflix to a household, we recognize this is a change for members who share their account more broadly. So we’ve worked hard to build additional new features that improve the Netflix experience, including the ability for members to review which devices are using their account and to transfer a profile to a new account. As we roll out paid sharing, members in many countries will also have the option to pay extra if they want to share Netflix with people they don’t live with. As is the case today, all members will be able to watch while traveling, whether on a TV or mobile device.
>As we work through this transition – and as some borrowers stop watching either because they don’t convert to extra members or full paying accounts – near term engagement, as measured by third parties like Nielsen’s The Gauge, could be negatively impacted. However, we believe the pattern will be similar to what we’ve seen in Latin America, with engagement growing over time as we continue to deliver a great slate of programming and borrowers sign-up for their own accounts.
While that’s true, what I said is also true: this definition of sharing has been posted for a long time. I know because I’ve linked to it before when people complained about Netflix sharing. The shareholder letter isn’t the linked article, and nothing in the linked article signals a change.
That's the most fundamental one, plus the lack of expression based syntax (!), macros, quoting, (truly) interactive development...
It seems to me that there isn't much there outside of proper closures and data literals? Go has those as well and it's hard to compare to a Lisp.
When JS was invented it was inspired by Scheme and apparently having those two features was more out of the ordinary then. But that's hardly true today.
But there is a _qualitative_ difference between interactive development in JS and in a Lisp like Clojure. And "pretty much" is also qualitatively less expressive than "everything is an expression".
I write expression based, simple, mostly functional JS and I use tools to make that as interactive as is practical. But there's still a lot of friction and differences between that and development with a Lisp (in my case Clojure).
The entire JavaScript language is not strictly homoiconic (then again, many Lisps aren't, if you're exceedingly hawkish about that term), but the "code is data" mentality is obviously present in the way JavaScript and JSON interact.
Sounds like you are being overworked. A 9-5 as an Eng Manager is achievable, but you might need to jump ship to find it.
Working late has a huge knock-on effect on your social life and ability to interact with society around you. That's probably contributing to your burnout.
If there's budget or willingness, having someone in that timezone who can perform your role for that meeting may be possible.
I'd be very wary of taking any regular work outside of your contracted hours unless there is a lot of $$$$ involved and your relationships can survive it.
You might get some mileage out of a long vacation, or aggressively pruning your work hours.
Usually the company will arrange a meeting time that is closest to both parties' working hours, say between 6-8 AM or 18:00-20:00. Otherwise, such very late meetings should be rare and reserved for special situations.
Having that kind of meeting be regular sounds like bad management to me.