Your reply seems focused on size, but this line caught my attention:
>trusting them to make the right decisions.
I'm wondering if trust isn't the real issue. For example, wouldn't a large high-trust organization be able to provide good care? Or is there something intrinsic to scale (e.g. diffusion of responsibility) that makes "high-trust" and "large organization" incompatible?
My mom is a (retired) preschool teacher. By the end of her career, she was working for a school affiliated with a large chain, and maybe 1/3 of her time was spent on filling out all the paperwork required by management. That was time she absolutely was not spending with the kids.
My dad is a (retired) doctor. Similar story there. His hospital ended up associated with a regional health care network, and the paperwork load got so bad that they had to hire additional staff dedicated to helping the actual health care providers fill out all the paperwork, so that they could spend more time actually providing health care.
I switched primary care providers over this a few years back. I had a great doctor, but her practice got bought up by one organization, which then got bought up by another, and over time working with her office became a huge bureaucratic quagmire. I switched to a different clinic that's still local (though also a chain with multiple locations), and everything's easier again.
The same thing happens in my own field, software. The bigger the company I'm at, the more of my job consists of filling out paperwork about the work I'm doing, rather than doing the actual work.
>The same thing happens in my own field, software. The bigger the company I'm at, the more of my job consists of filling out paperwork about the work I'm doing, rather than doing the actual work.
There's some irony here considering software has been promising for decades to help businesses scale information sharing.
Oh, it has. The scale at which we share information has never been greater.
Concrete example:
15, 20 years ago, we tracked user stories, the status of the sprint, etc., with sticky notes on a wall. We didn't have to record a lot of information on them because they only needed to track status at a high level. Details were communicated through conversations. That did mean that a lot of that information was tribal knowledge, but that was actually fine, because it was of ephemeral value anyway. Once the work was done, we'd throw the sticky notes in the trash can and forget the tribal knowledge. Reporting out happened at a much higher level. We'd report the status of projects in broad strokes by saying what big-picture features were done and which ones were in progress. We'd fill operations in on the changes by telling them how the behavior of the system was changing, and then let them ask questions.
Nowadays, we put it all in Jira. Jira tickets are extremely detailed. Jira tickets live forever. Jira tickets have workflow rules and templates and policies that must be complied with. Jira rules make you think about how to express what you're doing in this cookie cutter template, even when the template doesn't fit what you're actually doing. Jira boards generate reports and dashboards that tell outside stakeholders what's happening in terms of tickets and force them to ask for help understanding what it means, almost like you're giving them a list of parameters for Bézier curves when what they really wanted was a picture. Jira tickets have cross-referencing features, which creates a need to do all the manual data entry to cross-reference things. Jira tickets can be referenced from commits and pull requests, which means that understanding what changed now means clicking through all these piles of forever-information and reading all that extra documentation just to understand what "resolves ISSUE-25374" means when a simple "Fix divide-by-zero when shopping cart is empty" in the commit log would have done nicely. etc.
We communicate so much more these days. Because we can, because we have all this communication technology to facilitate all that extra communication. What we forgot is that, while computers can process information at an ever faster pace, the information processing hardware inside our skulls has remained largely unchanged.
I think that highlights the issue I'm poking at. "Good communication" doesn't just mean a firehose of information at your fingertips. It means getting the right amount of information at the right time. Developing systems like the latter is much harder than the former, but they both get the same sales pitch.
This is also where I really dislike a lot of this more recent push toward automating communication.
One person deciding what needs to be said, to whom, and when, can have a LOT of leverage in the productivity department, by reducing the time that tens or even hundreds or thousands of other people lose to coping with the fire hose.
Microsoft Copilot has been yet another downgrade in this department. Since it got adopted at my job, I've seen a lot of human-written 3-sentence updates get replaced with 3-page Copilot-generated summaries that take 10 minutes to digest instead of 10 seconds.
At my company we are aggressively rolling out policies to forbid the use of AI. I'm one of the bigger folks behind it. I just see no benefits. I have no desire to debug AI generated code, I have no desire to read pages and pages of AI generated fluff, I have no desire to look at AI generated images. Either put the work in or don't.
If you use AI like a quick answer machine, or quick example machine, they all outdo Google by a large margin.
The friction of moving between, and knitting together, systems and languages that I don't use frequently enough to be fluent in has been lowered by an order of magnitude or two, because small knowledge gaps get filled the instant I need them to be.
The same with getting a basic understanding (or a basic idea) about almost anything.
My AI log documents many stupid questions. I have no inhibitions. It is a joy.
> If you use AI like a quick answer machine, or quick example machine, they all outdo Google by a large margin.
I mean, A) hallucinations still happen, and B) Google sucks anyway. I don't know of anyone at the company still using Google; we're largely an engineering outfit, and we all watched Google's search features slide into uselessness.
I find that the code I get from Copilot Chat frequently fails to do exactly what I asked, but it almost always at least hits on the portions of a library that I need to use to solve the problem, and gets me to that result much more quickly than most other ways of searching do these days.
Hallucination (more correctly labelled confabulation) is a property of human beings as well. We fill in memories, because they are not precise, sometimes inaccurately.
More to the point, once you know that, having a search engine for ideas that can flexibly discuss them is a tremendous and unprecedented boon.
A new tool, many (many) times better than Google ever was for many ordinary, sometimes extraordinary, tasks. I don't understand this "the new gigantic carafe of water is half empty" viewpoint. Yes, it isn't perfect!? It is still incredibly useful.
> Hallucination (more correctly labelled confabulation) is a property of human beings as well.
Yeah, and if, when I asked a coworker about a thing, he replied with flagrantly wrong bullshit and then doubled down when criticized, I wouldn't ask him anything after that either.
I will say I have warmed to GitHub Copilot's chat feature. It's a great way to look up information and get answers to straightforward questions. It feels similarly productive to how just Googling for information felt back in the 2010s, before Google went full content farm.
We can’t. Paperwork exists to transfer knowledge and liability. It isn’t meant for you, and it is mostly a cost for your company. It’s for lawyers, insurers, investigators, auditors, your future replacement, etc.
Ha, no, instead we're eventually going to have AI workers earning AI salaries to spend on AI products. Then the AI governments, and eventually the AI wars...
Related: every few years someone posts "Paperwork Explosion"[0] again, and people here rediscover that there's nothing new under the sun.
> In 1967, Henson was contracted by IBM to make a film extolling the virtues of their new technology, the MT/ST, a primitive word processor. The film would explore how the MT/ST would help control the massive amount of documents generated by a typical business office. Paperwork Explosion, produced in October 1967, is a quick-cut montage of images and words illustrating the intensity and pace of modern business. Henson collaborated with Raymond Scott on the electronic sound track.
Usually the discussion quickly converges on how automation in administration is a prime example of the Jevons Paradox[2] in action.
Well, Buurtzorg is a large organization; it just doesn't have a large hierarchy. I suspect you're really asking about "high-trust" and "large hierarchy". In that case there are plenty of causes to point at. I'll just give a few off the top of my head.
First, note that any organization that breaks into different departments (hierarchical or not) at least partially does so to let each department "abstract" away the other ones - if you don't have to worry about issues outside of your responsibility, you can focus more on yours. That is actually a form of trust.
In the case of a hierarchy, however, each layer abstracts away the layers above and below, and since going up multiple levels in the hierarchy happens indirectly, the further away, the more abstract things become. So we often need some kind of structure to regain the trust that is lost by dealing with abstract departments - leading to bureaucracy.
On top of that, usually more power resides higher up the hierarchy. That means that without explicit structures to compensate for this, people lower in the hierarchy lack individual leverage to protect themselves against bad decisions made higher up, that may not even be malicious or intentional but just a consequence of the aforementioned abstraction.
Of course, most structures created to fix this are themselves abstract procedures, meaning they barely help with our instinctual "I cannot attach a face to this" type of distrust. Bureaucracy can create leverage, but rarely creates trust. Which also explains why, quite often, talking to someone in person can make such a difference in being allowed to "go ahead" or not: it provides the more "natural" sense of trust that bureaucracy is supposed to provide but barely does.
Not OP, but orgs that scale usually rely on metrics. It's the metrics that are easier to measure (e.g. lines of code written per day, points closed per sprint), not the ones that measure system performance, that get selected. Then management lampoons workers for not meeting those metrics (they need to prove they're doing something, and can't lose control), regardless of how the system is performing. So trust erodes.
I honestly think most orgs would leap forward considerably (with some pain) by doing a severe reduction in middle management, basically making them prove why they are actually providing value, and removing them if they aren't.
Tons and tons of middle managers do absolutely FUCK ALL in terms of delivering product, meeting goals, and serving customers.
I generally dislike middle-management as much as the next IC, but I think these types of arguments tend to ignore latent or low-probability risks.
You see this all the time in discussions about quality or safety metrics. By definition, if those teams are doing a good job you won't see many quality or safety issues, which leads people to believe they are doing "FUCK ALL" and provide little benefit. Only in hindsight, after a low-probability but high risk event happens, does getting rid of them seem like a bad idea.
You’ve never worked in middle management, have you? Just because they do largely ‘soft stuff’, mediating between different departments, teams and layers of the organisation, and (hopefully) running interference so that their team can focus unimpeded on the actual fun part of the job, doesn’t render their contribution null and void.
I’ve dealt with bad upper management, bad project management, bad clients, bad suppliers, but only rarely bad middle management. (And no, I’ve never worked in middle management although at times I’ve been some of the other categories above. :P )
Most healthcare organizations in the US are profit-oriented, sometimes to the detriment of the patients. That is why we have large, unwieldy organizations surrounding the very few people that actually do hands-on healthcare. There’s also the issue of liability – the US can be pretty litigious, and companies frequently want to limit their liability, which means having the paperwork to back up their decisions. Unfortunately, it also means they have to restrict their decision-making to a very small matrix.
>Most healthcare organizations in the US are profit-oriented
According to the American Hospital Association, less than 20% are for-profit [1]. I'm sure all are extremely budget-conscious, but that's not the same as being profit-driven.
It seems to me that the US optimizes for quality to the detriment of cost and, more recently, access.
But also be aware that non-profit does not mean non-profit-oriented. Just that any profit goes to executives [1] instead of toward community/charity services [2].
I think there’s an error in conflating “not for profit” with “charity”. You could provide all care at cost and have zero charity care. That doesn’t imply all “profit” goes to executives, but rather toward keeping reimbursed or charged costs lower.
1. Thanks for the nuanced view. I agree that zero charity care doesn't necessarily mean greedy execs.
2. In my mind keeping costs lower is a form of charity. Especially with something as frequently difficult to understand as health care costs.
3. Executives do deserve to make a fair amount for their skills and effort. I'm not sure myself what salary I'd consider fair pay versus taking greedy advantage of not-for-profit status.
On your last point: I think it's useful to think in multipliers and desired outcomes.
Do you want the best doctors involved in care for patients and training juniors, or do you want them to spend time jockeying for a position in the hierarchy because that's the only plausible way to 2x their income?
This doesn't fully answer the question, of course, but it suggests that large pay disparities are extremely wasteful for society as a whole.
I couldn't find the 20% in your reference, but is it talking specifically about hospitals? I can believe that only 20% of hospitals are for-profit. But, if I do a maps search for all 'healthcare organizations' within 10 miles, the vast majority are for-profit.
Also, of the groups of ambitious hustlers that I know nearby, many are looking to get into running healthcare clinics because there's so much profit to be made.
Don’t be fooled by the “non-profit” label. Many are as greedy as for-profit hospitals, except that the money goes to execs and their friends instead of shareholders.
Legally, an organisation's exposure usually scales with its size: even if the misconduct was limited to one employee trusted to carry out work independently, the penalty will likely be related to the total size of the company.
The second is that if your scale is large enough that you are hiring a significant portion of the employees with a given skill in a region, you have a harder time selecting for anything other than "holds a qualification" during hiring. This leads to all sorts of policy to prevent someone with qualifications but less integrity from causing issues.
I don’t think it’s that they are incompatible by definition. I think the large organization is a result of low-trust.
A trust-breaking event occurs, so a new form gets added, a compliance process is created with a new team monitoring, etc. etc. Have enough of those and you eventually get your typical, modern healthcare bureaucracy today.
I live in Switzerland, and my girlfriend has been working as a doctor (surgery); from what I see, it has mostly to do with the politics that come with hierarchy. As in nearly all companies, hierarchy lets politics and favouritism enter the playing field, which attracts people who do not act in the primary interest the service should serve: the patients.
For example, you have leading doctors who prefer not to look at patients, even though the patients belong to them (from a specialization point of view), because they are "cumbersome" cases. That often leads to them "ignoring" a case for some time until someone else takes over, or completely delegating it to unsuitable people.
It's a huge pain for me to hear this every day, because it literally sucks any desire to work as a doctor out of my girlfriend. At the same time, it's infuriating: we pay a lot each year, and more with every year, for services like this. If I ever won the lottery, I'd use the money to build my own hospital without all of this crap.