I see what he's getting at, but I don't think it's right.
Meetings are caused by policies and the need for coordination. Even companies with continuous deployment still have meetings.
You can eliminate policies, which will reduce meetings, but you'll still need coordination. You can move coordination to your issue tracker or agile board, but then you've just replaced meeting time with the time you spend interacting with the tickets and the agile board.
There really is just no way around it. The bigger you get, the more coordination you need.
Running small teams responsible for small services is one way to reduce that need somewhat, which is part of why microservices are getting more popular, but at the end of the day I'd say coordination time, in whatever form it takes, is linearly (or maybe even exponentially) correlated with org size.
But then you just end up with a bunch of microservices you have to coordinate changes between. Microservices are more of an implicit recognition that the job your software does is complex enough that you need the overhead of managing them.
I do think there's an element of truth to what he's saying: organizations develop processes that can handle a certain amount of throughput, and trying to increase throughput means making changes to those processes, which means meetings about reducing the number of meetings. The amount of overhead is largely driven by executive visibility requirements (and by past times an executive has been "burned" by a bad deployment), so it can't really be changed.
I actually kind of agree with the mantra: "If you want to deploy more, deploy more often." We become accustomed to testing changes of a certain size, and if we try to cram more into a release, then release testing and certification takes longer. Big releases are the enemy of most Agile processes.
> But then you just end up with a bunch of microservices you have to coordinate changes between.
Not if you do it right. As long as the API doesn't change and you adhere to every service having its own data store (which is key and many people forget), you can make as many changes to your service as you want without any coordination.
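To make that concrete, here's a minimal sketch of the idea (hypothetical endpoint and field names; Flask used purely for illustration). The response shape is the public contract, and the data store behind it is private, so the internals can be rewritten or the storage swapped without coordinating with anyone:

    # Minimal sketch: the JSON shape is the contract; the storage is private.
    from flask import Flask, jsonify

    app = Flask(__name__)

    # v1 internals: an in-memory dict. Swap this for Postgres, Redis,
    # whatever; callers never see the difference.
    _users = {42: {"id": 42, "name": "Ada"}}

    def _load_user(user_id):
        # Only this seam changes when the service's own data store changes.
        return _users.get(user_id)

    @app.route("/users/<int:user_id>")
    def get_user(user_id):
        user = _load_user(user_id)
        if user is None:
            return jsonify(error="not found"), 404
        # The contract: always {"id": int, "name": str}. As long as this
        # shape holds, internal rewrites need no coordination with consumers.
        return jsonify(id=user["id"], name=user["name"])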
In any microservices architecture of a decent size, you're often making changes that the teams behind other services have requested, and you'll be asking them for changes to accommodate your needs in turn. That's coordination, and it requires overhead. But if you're going the microservices route, you probably already have the overhead, and splitting your app into many small services makes that overhead easier to quantify.
And very rarely can you implement a feature without touching multiple microservices. Hell, that's why the concept of epics even exists: you have many smaller user stories inside the epic that need to be coordinated to deliver a single feature. Your UX team needs the back-end service calls to exist before they can release their UX changes.
>you can make as many changes to your service as you want without any coordination.
I'm really not sure that's true. If I change my classifier endpoint, even with the same API, it's important people know that they might get different results for the same input. It might be really quite important for people's analysis.
Their system might not fall over, but that doesn't mean I don't need to coordinate with them.
Perhaps I'm missing something in what you're describing though.
I would say changing the resulting data in a meaningful way is still changing the API. The API contract is not only the verbs that they can use but the nouns that you send back.
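One way to pin that down is a consumer-driven contract test that asserts the shape and the meaning of the nouns coming back, not just that the call succeeds. A rough sketch, with a hypothetical URL and field names (the probability-style "score" is just an example in the spirit of the classifier case above):

    import requests

    def test_classifier_contract():
        # Hypothetical internal service; the point is what gets asserted.
        resp = requests.get("https://example.internal/classifier/items/42", timeout=5)
        assert resp.status_code == 200
        body = resp.json()
        # Shape: these keys must exist with these types.
        assert isinstance(body["label"], str)
        assert isinstance(body["score"], float)
        # Semantics: "score" is documented as a probability in [0, 1].
        # Silently changing it to a percentage would pass a type check
        # but still break every consumer's analysis.
        assert 0.0 <= body["score"] <= 1.0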
Well, if all you were saying is that, as long as the exact same requests produce the exact same responses, you don't have to coordinate changes, then I'd agree. I'm not sure that's a particularly controversial point.
Although that does involve knowing your responses are identical.
If my results are different at all I would need to notify people downstream.
Obviously, I can't start suddenly returning XML instead of JSON, because "I feel like it and it's cool"; it breaks everyone horribly, so that needs to be version 2.
But if you insist on upping the version effectively any time you change anything, you're essentially saying that any visible side-effect (not just the output) can't be fixed, except by duplicating the API as a new version, fixing it there, and moving the callers over (if you can; this, I've found, is the hardest part[1]).
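Concretely, "that needs to be version 2" tends to look something like this (a minimal sketch with hypothetical routes; Flask purely for illustration): the old contract stays frozen under /v1/ while the fix ships under /v2/, and both run side by side until the callers you can't edit have drained off /v1/.

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/v1/report/<int:report_id>")
    def report_v1(report_id):
        # Frozen contract: whatever quirks v1 shipped with, it keeps.
        return jsonify(id=report_id, total="1,234.50")  # legacy string formatting

    @app.route("/v2/report/<int:report_id>")
    def report_v2(report_id):
        # The fix lives here: totals are numbers, not locale-formatted strings.
        return jsonify(id=report_id, total=1234.50)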
Often, these are things the spec is supposed to hash out: what is the service actually supposed to do? If the behavior strays from the spec, fixing it doesn't necessarily require a new version number, especially if the fix doesn't break things.
However, it has been my experience that even the most benign changes will break things; you've got to be able to evaluate who isn't following the spec and, sometimes, how bad the effect of the change is for the clients that didn't like it. (Is it so bad that, even though it's to spec, I should roll back? Or should the client fix the buggy behavior they've been depending on, and we carry on in the meantime?)
[1]: Mobile clients are out there. Unlike a web page, there's no way to edit/update them — you can push a new version, but will the users upgrade? At some point, you have to cut them, I suppose, but after how long? 6 months? <2% share among the user base? the active user base? What's an active user? (and now higher ups are involved, and you're in meetings…)
> If my results are different at all I would need to notify people downstream.
Presumably your API says what types of data the caller should expect. As long as you don't change that, they should be able to deal with the response changing.
But really it's more about the fact that with a single monolith, if you want to make a change you have to coordinate with everyone at the company, whereas with microservices you only have to coordinate with those who are affected by the change.
> Presumably your API says what types of data the caller should expect. As long as you don't change that, they should be able to deal with the response changing.
The point I wanted to get across was that although a client won't break, I'd still want to talk to those pulling data from the system. I was disagreeing with the statement "you can make as many changes to your service as you want without any coordination."
> But really it's more about the fact that with a single monolith, if you want to make a change you have to coordinate with everyone at the company, whereas with microservices you only have to coordinate with those who are affected by the change.
Yes, it does lower the hurdles to getting something out.
For efficiency, to add new features, to change your data store: lots of reasons. As long as you don't break anything for your users, it would be fine. Adding features may require a heads-up to the team the feature is for, but not to everyone else.
I do think that the degree of coordination needed is reduced with increased frequency of deployment.
Mentally, I imagine an airport and there's a number of planes that need to be landed in a week. What requires more coordination: an airport that is open 24/7 and planes land when they arrive (continuous deployment) or an airport that is only open 1 hour a week?
Congestion drives the need for coordination. Or, put another way, when there is low or no congestion, sequencing through time is a sufficient form of coordination.
I'll happily trade time spent in meetings with time spent interacting with tickets and agile boards: I can fit the latter in my flow, while there's no chance that I'll be able to fit the former.
> but then you've just replaced meeting time with the time you spend interacting with the tickets and the agile board.
But that happens asynchronously, which is already an improvement.
> There really is just no way around it. The bigger you get, the more coordination you need.
True.
> (...) at the end of the day I'd say coordination time, in whatever form it takes, is linearly (or maybe even exponentially) correlated with org size.
It's usually quadratic. If you manage to make it linear then you've probably found an organizational goldmine...
It should be better than that. Orgs are hierarchical, so in theory each leaf node (worker bee) only coordinates up a management chain of O(log n) depth, which puts the total cost at O(n log n) in the number of leaf nodes.
This is how very large orgs like the military scale. Common sense shows it can't be exponential, since the military is not totally paralyzed.
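As a back-of-the-envelope illustration of the gap between those two claims, under toy assumptions (quadratic: any pair of people may need to sync; hierarchical: each person coordinates only up a management chain of roughly log2(n) depth):

    import math

    for n in (10, 100, 1_000, 10_000):
        pairwise = n * (n - 1) // 2             # everyone-to-everyone channels
        hierarchical = round(n * math.log2(n))  # chain-of-command links
        print(f"{n:>6} people: {pairwise:>12,} pairwise vs {hierarchical:>9,} hierarchical")

At 10,000 people that's roughly 50 million potential pairs versus about 130 thousand up-the-chain links, which is the gap the "organizational goldmine" comment is pointing at.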
The military isn't paralyzed, but it also hasn't proven that its method is the best. There are tons of places in the military (and in any large corporate org) where needless duplicate work is happening, because it's easier to do the work twice than to try to coordinate doing the work once across all the people.
Many developers and companies look at Facebook as a model for good software practices and principles, so it's interesting to see them struggle with the same issues:
"Increasing overhead initiates a positive feedback loop: less getting done -> more pressure -> more mistakes -> even fewer changes per deployment -> more overhead -> less getting done."
I think the jury is still way out on whether Facebook's model is something to emulate or avoid. In the end, it's going to depend on the company, the product, the customers' expectations, and the engineering talent/engagement. "Move fast and break things" just isn't going to work everywhere.
I've seen places where waterfall was totally appropriate and worked well. There's a lot of variety out there.
Yes, they deploy multiple times per day, but this is usually only a subset of the entire application. For example, it might be that the Messaging Component gets deployed today but not tomorrow etc. It feeds into the whole CI/CD, DevOps culture where teams are responsible for independently deploying their piece of the application as often as they feel the need.
Facebook gets to decide what Facebook is. They don't have any contract with the user to support a certain feature if they no longer want to, or if they want to change the feature in some breaking way.
Most production environments are much more constrained and/or operate under a lot of outside constraints. In those situations you have to be a lot more careful.