I was involved as a remote contractor on Longhorn (Vista), working with the accessibility team on Narrator (the screen reader) and the Magnifier. I experienced all of this with the addition of a couple of layers of management at the contracting company I was working for at the time. It was a bit insane. Our code also depended (very tightly) on the new Shell team's code, but we were a ways away from them in the branch hierarchy, and every week it seemed some change was reverse integrated down the tree to us that broke something in our code. I'm not sure why the accessibility team wasn't part of the shell team.
Most of the cool code that we implemented was dumped on the floor. We had an experimental feature that would play a varying tone (frequency and amplitude) as you moved the mouse around and got close to or over certain controls. The idea was to give non-sighted folks a way to explore a GUI. It worked so much better than the final thing that was shipped, which just read all the controls in x/y order in the currently focused window.
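A rough sketch of the idea (not the actual Narrator/Magnifier code; the control positions, tone ranges, and function names below are made up for illustration):

```python
import math

# Hypothetical control layout: (name, x, y) of each control's center, in pixels.
CONTROLS = [
    ("OK button", 400, 300),
    ("Cancel button", 500, 300),
    ("File menu", 20, 10),
]

MAX_DISTANCE = 300.0   # beyond this distance, stay silent
BASE_FREQ = 220.0      # tone frequency when far from a control (Hz)
PEAK_FREQ = 880.0      # tone frequency when directly over a control (Hz)

def tone_for_mouse(x, y):
    """Map a mouse position to a (control, frequency, amplitude) triple.

    Pitch and volume both rise as the pointer approaches the nearest
    control, and peak when hovering directly over it.
    """
    nearest, dist = min(
        ((name, math.hypot(x - cx, y - cy)) for name, cx, cy in CONTROLS),
        key=lambda pair: pair[1],
    )
    if dist > MAX_DISTANCE:
        return None  # silence: nothing nearby to explore
    closeness = 1.0 - dist / MAX_DISTANCE           # 0.0 far .. 1.0 on top
    freq = BASE_FREQ + (PEAK_FREQ - BASE_FREQ) * closeness
    amplitude = closeness                            # 0.0 .. 1.0
    return nearest, freq, amplitude

# Example: feed it a few pointer positions and print what would be played.
for pos in [(395, 305), (450, 300), (100, 200)]:
    print(pos, tone_for_mouse(*pos))
```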
I eventually got off that contract and I was amazed that Vista was eventually released.
I was a contractor at Apple and had to do some stuff to authkit. It was a friggin nightmare. Running daily builds of macOS is a nightmare, and the amount of secrecy and "you only get to know what you need to know" is super painful. I disliked that part a lot.
Apple employees also look down on the “sad grey” badge employees. Was really great getting emails every week for beer and bbq events with the disclaimer I wasn’t invited.
Both of these sound like areas where Amazon's "API-first" approach would help?
The memo text:
> All teams will henceforth expose their data and functionality through service interfaces.
> Teams must communicate with each other through these interfaces.
> There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
> It doesn't matter what technology they use. HTTP, Corba, Pubsub, custom protocols — doesn't matter.
> All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
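For a toy illustration of what "service interfaces only, no direct reads of another team's data store" means in practice (this is not Amazon code; the team, endpoint, and data below are invented):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy "orders team" service. Instead of letting other teams read this dict
# (their "data store") directly or link against this module, the team exposes
# it behind a network interface that could later be hardened and externalized.
_ORDERS = {"42": {"customer": "alice", "total": 19.99}}

class OrdersHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Endpoint shape is invented for illustration: GET /orders/<id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "orders" and parts[1] in _ORDERS:
            body = json.dumps(_ORDERS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Other teams call http://localhost:8080/orders/42 over the network;
    # they never import this module or touch _ORDERS directly.
    HTTPServer(("localhost", 8080), OrdersHandler).serve_forever()
```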
To be clear, that wasn't the memo text. That was Steve Yegge's pithy summary of it in his infamous Google Plus post that was meant to be privately shared inside Google to call out how/why Google is worse than Amazon at web services.
It wasn't one specific memo, but rather a stream of guidance from leadership, passed down through the Principal Engineering community, to both incentivize and require this kind of interop.
Also, a lot of folks (not you, but inb4) focus on the last 2 lines because it fits their idea of what Amazon is like. For the record, nobody sent out (or sends out) a memo with "do this, or you will be fired" conditionals. (I'm sure there are extreme cases where HR is already involved and people are on their final notice. I'm also sure there might have been some union-busting b.s., or the way the climate change activists got fired, handled in ways I personally don't agree with. But it's not a standard practice.)
It was kind of a one off. One of Microsoft’s first 100% remote employees was getting bored working alone on the east coast so he got permission to hire a small consulting shop near him to work with him. I was part of that shop after a failed startup (they were also a startup accelerator that I was part of earlier). I don’t think they exist anymore - the partnership had a falling out around the same time as the dot com bust.
Ah, yes. A program blocks shutdown with a dialog asking if you want to save. Windows puts up a giant fullscreen message saying a program is blocking shutdown. And the only way to get to the dialog is to abort the shutdown altogether. And then when you click "don't save", the program doesn't close anymore, because the system isn't shutting down anymore. Great UX...
> So in addition to the above problems with decision-making, each team had no idea what the other team was actually doing until it had been done for weeks.
The Open Source development model addresses this. By removing meetings and forcing all work to be done "in the open", it is impossible not to know what is going on, because you can always just go look at what's going on. Subscribe to the mailing list and you will always know what is going on. Or don't and read the mailing list archives. Look at the patches, read the documentation, understand the context from discussions. There is no waiting for a team meeting, there is no arbitrary line of organizational hierarchy in your way. Just make a patch and send it in.
There are of course still delays and discussions and disagreements and whatnot; that's the nature of collaboration with multiple humans with agency and ownership. But you never need 24 people to design a menu, because you just start designing it, and people can discuss the work asynchronously on the mailing list. There is no ceremony. All organization is ad-hoc, dynamic, decentralized, and just-in-time. By the time a large "corporate" team has finished building one feature, the open source developers have already released and tested three alpha versions.
I think every product owner should read The Cathedral And The Bazaar and put the bazaar method into practice (or at least try it!).
You cannot reasonably expect people to keep up with a plethora of information streams, digest it all, and come to a meaningful consensus on action.
The meetings, the management, the prioritization, they all act as information filters of a sort. Otherwise, you experience death by information overload.
The most "open" of open source and remote companies still have meetings, product roadmaps, and prioritization. Resources and time are finite.
What you say is true, but the open source model does make it easier for people to subscribe to a subset of threads, issues, projects, etc that they find relevant to their goals, and to follow and contribute to those selectively.
In other words: core project team(s) may follow traditional organizational practices, but it's also possible for them to accept contributions from pro-bono specialist interests.
(that's theoretically possible for companies using inner-source approaches too, although the audience size in those situations is likely to be smaller)
Linux doesn't solve nearly the same set of problems that Windows does. One is a kernel. The other is an operating system.
Typically this is just semantics, but it's meaningful here because you still need all of the other components (GNOME and X and PulseAudio and SystemD and so on, as concrete motivating examples) to make up the whole OS.
And unsurprisingly, you end up losing out on features that would require vertical integration. Accessibility on Linux is more or less a non-starter; the design of every application and toolkit is blindingly variable; nothing about the system can be depended on (even the standard C library!) unless your app has been intentionally built and tested with that particular version of that grab bag of components.
Windows is one thing. People know what they're getting when they use it. I don't like it much, but it certainly achieves the goal it set out to accomplish. I certainly don't lie awake in bed wondering why everyone uses it over my favorite grab bag of widgets that more or less looks like an OS.
See the Linux documentation's "How the development process works" [0].
The way a changeset goes from a developer's machine to the root is reasonably similar (it usually gets pulled through a number of intermediate trees, from a development tree to a sub-subsystem tree to a major subsystem tree to Linus's tree), riding the train from the bottom to the top, with an additional time constraint on it (the merge window being open).
The kernel also has a "next" tree that is a snapshot of what the kernel would look like with incoming changes merged right now, surfacing early the exact long-distance coordination issues described in the article. Plus, of course, everything is in the open, so maintainers of different subsystems can coordinate directly on things that impact both sides, even if, where possible, the required patches make their way up separately.
Are you aware of any open source project of the size and magnitude of Windows (including, and especially, the number of engineers involved)?
And I don't think Linux is a possible answer here, because remember that Vista/Longhorn pretty much started as a ground-up rewrite (I'm sure there is a legion of blog posts out there about why THAT is a bad idea too), and the scope of "Windows" also includes all the applications that ship with it.
I agree with you that in the context of an open source development framework, you would never need 24 people to design a menu, but can you even build something the size of Windows in an open source framework on any sensible timeline?
My impression of the most impressive and successful open source projects - even large ones - is that they are a labour of LOVE of one specific genius who eventually figures out how to scale themselves for ongoing maintenance. But at that point it is almost certainly a) their personal passion, b) not their paid profession, c) not the key contributor to the stock price of a major billion-dollar multi-national company.
This is specifically addressed in the appendix of later editions of The Cathedral And The Bazaar. I think what people are missing is that traditional project management does not deliver projects any faster or better. "Oh, it's really big and complicated, so all this inefficient process is necessary!" But then you bring in a Six Sigma or Lean or Agile consultant, and they look around and go, "What the fuck are you people doing?" They write up a list of recommendations and leave, and then the corporation does exactly jack shit with those recommendations. But many of the recommendations are to stop doing so much bullshit process, because it's often making coordination on that big complicated thing even harder.
Also addressed by the paper is the fact that, yes, Open Source does benefit from people with very big brains motivated to do what they love. The traditional model of "throw 100 random engineers at the problem" is simply not going to result in the efficiency of 10 highly motivated, highly talented individuals. But that doesn't mean the Open Source way wouldn't result in just as successful a software product. The difference is in how you motivate people, and how to get the hell out of their way so they can be effective.
The closest thing I can think of is the Debian Project, which does have an organizational structure based on committees: https://www.debian.org/intro/organization. Debian is the base on which most Linux-based operating systems build themselves. A lot appears to be done by resolutions, which has its own set of problems. It's at least a counterpoint to the idea of the most successful projects being led by the personality of one person, as leadership has changed regularly.
And this is why Open Source has managed to produce paradigmatically beautiful user interfaces that everyone loves and chooses over chaotic proprietary ones.
I was kind of hoping this was some form of parody/sarcasm, but it appears not.
The open source development model does not address this in any meaningful fashion.
You are suggesting that the solution to communication problems is to force everybody to have complete knowledge of everything. That definitely doesn't scale.
Remember that the description given of the bazaar is "a great babbling bazaar of differing agendas and approaches out of which a coherent and stable system could seemingly emerge only by a succession of miracles"
which turns out to be true a lot of the time. Large open source projects don't produce stable systems without some form of additional social order imposed. They become less stable as they grow larger, and either someone notices and attempts to impose various forms of non-bazaar-like order (as happened to the Linux kernel, for example), or the project fails/becomes a mess (I will avoid slagging on projects here :P)
Beyond that, the way open source addresses the problems listed in the article is by ignoring them. That is by design.
"Differences between the two styles of development, according to Bar and Fogel, are in general the handling (and creation) of bug reports and feature requests, and the constraints under which the programmers are working.[2] In closed-source software development, the programmers are often spending a lot of time dealing with and creating bug reports, as well as handling feature requests. This time is spent on creating and prioritizing further development plans. This leads to part of the development team spending a lot of time on these issues, and not on the actual development. Also, in closed-source projects, the development teams must often work under management-related constraints (such as deadlines, budgets, etc.) that interfere with technical issues of the software."
I.e., in closed-source systems, time is spent thinking, planning, and prioritizing. Also, fixing bugs and handling feature requests.
The bazaar says "don't do that, doing that takes away from focusing on technical goodness." This is true. The strict bazaar model prioritizes technical goodness over everything. Unfortunately, technical goodness does not create good products. Thinking, planning, and prioritization do :). See, e.g., cascade of attention deficit teenagers (https://www.jwz.org/doc/cadt.html).
This is also why you rarely, if ever, see straight bazaar models behind successful consumer-facing products. Good products require careful, coordinated curation and choice. You do see straight bazaar models occasionally in infrastructure, but even there, not for large projects.
If you want to see the bazaar method applied to products, you don't have to look any further than the huge mess that was (and is) Linux desktops - it is only by moving away from strict bazaar models that they have made any progress at all over the past two decades.
That's fine if your proposed changes are small enough to require minimal intervention from other folks, but typically in these kinds of orgs there are all sorts of dependencies that you simply can't account for in code and in a pull request alone.
As a hypothetical example, let's say you want to send an email nudging your users about teams under their purview who could do better in adopting parts of your product suite. Suddenly other product teams are worried the recipient will choose to simply churn from a product that their teams haven't adopted, legal starts talking to you about the GDPR, etc etc, and things start to look a lot less hopeful.
Your best bet is to try and account for these sorts of issues before you even start writing the code. If they're reasonable issues and your code happened to land anyway, you would have done damage. Even if they're not actually issues, you have made other people's lives harder. And if your code never ends up landing, then you've wasted your time!
So, you get out ahead of it and set up a team that can interact with all of the stakeholders that might actually be affected. It might take time, but at least the risks are lower and your code isn't being needlessly thrown onto the cutting room floor.
And that's even without mentioning any sort of technical concern, of which I'm sure there will be many for anything larger than the smallest bugfix. Other engineering teams don't like the way you've duct taped your thing to their thing, others still are not excited to have to maintain your new addition to the infrastructure. Hopefully your open source bazaar thing has a decent way of resolving these issues, but most of them just have a BDFL that you can only hope will be polite as they smite your unwanted PR.
In a "cathedral" team with decent practices, you just have other engineering teams as stakeholders. It's just an extension of the set of connections you've already built.
How does this differ from a bazaar, exactly? Isn't all this possible under that model too?
Maybe, but you can't know for certain that you have exhaustively captured all the stakeholders or passed around your request to every relevant person. Additionally, decentralizing heavily enough will make roadmapping basically impossible, and your customers probably want to know when their desired feature is actually going to land.
Finally, the mapping holds the other way. The problem described in the article is that too many people hold veto power, and so nothing actually got done besides the lowest common denominator. It would be entirely possible for your OSS project to suffer the same fate if enough people in the mailing list complained that your new change breaks their workflow. You can take a look at all of these open protocols and projects that end up building the least common denominator product just because they do not dare break arcane promises made in a different time about some bits being in such and such place.
Centralized decision-making is its own sort of effective, at least in making sure stuff like accessibility and design are kept consistent. Microsoft gets a lot of crap from us for a lot of things, but some of these things are necessities that could just as well be forgotten otherwise.
The solution is kind of the same in both situations:
* Minimize the list of people who have veto power
* Work with the people who must retain it so that they trust you when you want to do something a little unexpected or different (and manage your own expectations when they disagree with you)
* Ensure someone above the conflicting parties exists that doesn't always resolve ties in a way that is risk-averse (or guide them to a solution that isn't always risk-averse if necessary)
For those who didn't click through the right chain of links, here's the original article this one was responding to. I do wonder what they were trying to do that didn't happen, though.
The thing that stood out to me is that this is from 2006. Joel's suggestion of the "b'bye button" is effectively what we got on the iPhone and later the iPad (minus user switching). The user no longer has to care about all those different states. They can either go away for a little while, or go away for a long while and have the hardware turn off.
None of our problems are technical. They are the direct consequence of the political state of affairs.
A few ideas for more productive organizational structure:
* Temporarily merge the two teams, so the members share the exact same leadership chain.
* Define clearer contracts between parties for what needs to get done, and in what order of priority. Find volunteers who can help ensure important task dependencies aren't skipped.
* Narrow the participant pool.
No situation will make everyone happy, but some organizational rigor would allow for higher productivity.
I mean, a monorepo is basically the modern answer to this: use technology to solve the coordination problem. It's just a different (better) answer. At Windows scale (like Google and FB etc.) it would require effort but is doable, and has been done elsewhere.
A multi-tiered tree of repositories sounds very complex though, and would require its own effort; it's amazing it worked at all.
You wish it took a year to code up a menu, and the result of that year's effort is a menu that nobody likes? I don't think that's a good outcome for anybody.
Also, Vista is not regarded as a high point for quality; I think it's pretty clear that modern Windows is an improvement in terms of quality, even if some of the choices they have made are questionable.
The ads and forced Microsoft accounts and forced upgrades have nothing to do with development methodology. All of those features can be implemented either with a meeting-hell broken-waterfall model or with an Agile® model. The problem is that the higher ups at Microsoft want those misfeatures.
If you restricted yourself to only the telemetry thing you'd have a point, since some models rely on spying on users to figure out what works and what doesn't, while other models rely on good UX people thinking hard about the space and doing user testing. Even there, though, I bet the main problem is that user testing is more expensive than spying on users.
A proper critique of the development model would look at the huge and increasing number of broken, half-broken or idiosyncratic features. Pointing to well-implemented but user-hostile features is a broader critique of the direction of the company.
There was an issue in the first versions of Vista with file copying, especially inside Explorer. It was eventually fixed, but this ridiculous bug existed for quite a while.
After a year or two Vista became quite okay. But the initial release was just a huge pile of bugs, held together with a lot of duct tape.
Games that take 6+ years of development are more often higher quality than the ones that take 2-3 years to develop. So yes, I'd be willing to wait half a year or year for updates that are not garbage.