As a development manager for a quarter-century, and an active software developer for a lot longer than that, I can definitely say that every place there's a "meeting of the minds" is a place for bugs.
In the software itself, the more complex the design, the more of these "trouble nodes" (as I call them) there are. Each interface, each class, each data interface is a bug farm.
That's why I'm a skeptic of a number of modern development practices that deliberately increase the complexity of software. I won't name them, because I will then get a bunch of pithy responses.
These practices are often a response to the need to do Big Stuff.
In order to do Big Stuff, you need a Big Team.
In order to work with a Big Team, you need a Big Plan.
That Big Plan needs to have delegation to all the team members, usually by giving each one a specific domain, and specifying how they will interact with each other in a Big Integration Plan.
Problem is, you need this "Big" stuff. It's crazy to do without it.
The way that I have found works for me is to have an aggregate of much more sequestered small parts, each treated as a separate full product. It's a lot more work, and takes a lot more time, with a lot more overhead, but it results in a really high-quality product, and also has a great deal of resiliency and flexibility.
There is no magic bullet.
Software development is hard.
It had a 3-tier architecture.
I asked: Why?
And they answered: Why not?
I answered: Because layers must only be introduced if needed. Is there a need?
They answered: The standard design is the need.
I clarified: Is there a technical requirement? Or perhaps an organisation one, such as disparate teams working on the two components?
They answered: No! Of course not! It's a unified codebase for a single app written by a single person! But it is not Enterprise enough! It must be split into layers! And then, you see, it will match our pattern and belong.
I verified the insanity: Are you saying that this finished, working application isn't currently split into layers, but you want it split into layers simply so that it can have layers?
They chorused: Yes.
This was the timeframe when more and more manual work was automated. Hence it was a common situation where input used to be given by a human, but now came from another application. The simplest way to do that kind of retrofit was to drive the UI from the application: the application fills in its own GUI fields, which triggers the validation, then simulates a click on OK.
This caused all kinds of ungodly messes. You needed a GUI for background processes, reliability was low, etc. 3-tier architectures were a way to say 'never again' to this style of programming. Forcing people into it was necessary.
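To make that retrofit style concrete, here is a minimal sketch with a toy widget layer standing in for a real GUI toolkit (all names here are invented for illustration): the calling application "types" into the form, which fires the same validation a human keystroke would, then presses OK.

```python
# Toy sketch of the "drive the GUI" retrofit described above. The widget
# layer is invented; a real system would poke an actual toolkit's fields.
class Field:
    def __init__(self, validator):
        self.validator = validator
        self.value = None

    def set(self, text):
        # Setting the field fires the same validation a human keystroke would.
        if not self.validator(text):
            raise ValueError(f"rejected: {text!r}")
        self.value = text

class OrderForm:
    def __init__(self):
        self.qty = Field(str.isdigit)
        self.submitted = None

    def click_ok(self):
        self.submitted = {"qty": int(self.qty.value)}

# Another application drives the form instead of calling any proper API:
form = OrderForm()
form.qty.set("12")
form.click_ok()
print(form.submitted)
```

The fragility is easy to see: any change to the form layout or validation rules silently breaks every "integration" built this way.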
But that was another time. Mindlessly applying an architecture without understanding why is of course dumb. But not applying an architecture without understanding its pros and cons is just as dumb. It all depends on the quality of the architects in question.
Not that I want to call you dumb, of course. IT today is different from 20 years ago.
It wouldn't have been too bad, except the ODBC interface inadvertently led to abandoning schema-aware programming models like VB's ADO, Paradox, FoxPro, etc.
At the same time, object oriented became fashionable.
So we ended up with ORMs, ActiveRecord, and various offshoots.
Mostly because no one remembers life before client/server.
Workgroup: I/O thru file system, clients responsible for locking, concurrency, etc.
Client/Server: I/O thru DB's protocol (eg TDS), server responsible for locking, concurrency, etc.
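A sketch of that distinction in Python (Unix-only because of `fcntl`; the file name and record layout are invented for illustration). In the workgroup model, every client must take the lock itself, and nothing enforces that it does:

```python
import fcntl      # Unix-only advisory file locking
import json
import os
import tempfile

# Workgroup model: shared state is just a file on a network share, and each
# client is responsible for locking it.
path = os.path.join(tempfile.mkdtemp(), "customers.json")
with open(path, "w") as f:
    json.dump({"42": {"name": "Acme", "balance": 0}}, f)

def credit(customer_id, amount):
    # The client, not a server, takes the lock. The lock is advisory:
    # nothing stops a buggy client from skipping it and corrupting the file.
    with open(path, "r+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        data = json.load(f)
        data[customer_id]["balance"] += amount
        f.seek(0)
        f.truncate()
        json.dump(data, f)
        fcntl.flock(f, fcntl.LOCK_UN)

credit("42", 100)

# Client/server flips the responsibility: clients speak the DB's wire
# protocol and the server owns locking and concurrency, e.g.
#   UPDATE customers SET balance = balance + 100 WHERE id = 42;
```

The client/server version is a one-line SQL statement precisely because the hard part moved behind the protocol.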
As for what was lost, I spent way too long (10+ years) trying to figure that out, trying to fulfill the desire to recover the ADO (ActiveX Data Objects) programming paradigm. I think I succeeded, more or less, and am currently reorienting my life to work on it full-time.
FileMaker may be an alternative, but I never understood why it hasn't caught on.
And I am not even sure there is any similar product that is good on the Web.
Most multi-user apps were two tiered: client and database. The way they worked was by connecting directly to the database and/or a shared filesystem in the local network. All validation happened client-side.
Database credentials and fine-grained file/directory permissions were the only security measures. That and the NAT. ;)
There's still software in the market that uses this model, like some niche ERPs from the 90s and software catered to small/mid businesses.
It baffles me that the CCNA materials are still based around these use cases.
> Not that I want to call you dumb, of course. IT today is different from 20 years ago.
The meteoric rise of RPA within IT shops is recreating the same situation. The more things change, the more they stay the same.
It's pretty easy to see why this would cause problems, but the consulting companies have been pushing hard on RPA because when it blows up in 5 years, who are you going to call? I say this as a consultant who has to sell this awful crap because "partnerships".
Copy&paste is the enterprise API/data integrator of last resort. Image/video is another integration point. iOS can screen capture full-page images of web pages, with tools for human annotation. Soon the local ML/bionic processor and AR toolkit can perform text recognition on those images, which means they can be live edited, re-composed and fed into another system.
> fancy service desk platform the suits want may not have a connector to your 15 year old heavily-customized SAP instance (which is run by another team you have no influence with)
This intersects with DRM and the title of the OP story. When OrgA and OrgB fail to partner/cooperate (e.g. no formal integration) or are actively hostile (implement DRM to prevent data movement between OrgA and OrgB products), it creates pain for customers and new business opportunity for OrgC and OrgD.
Which is why scraping and reverse engineering are never going away, they are society's last line of defense against vendor org dysfunction.
It was one thing scripting mainframe terminals, but the equivalent today are SaaS apps. The major enterprise vendors like Salesforce are pretty good about roadmaps and release schedules, but a lot of the smaller ones work on more of a continuous deployment model. This means your RPA integrations are constantly breaking, and suddenly you have to hire a whole bunch of RPA analysts to deal with fixing them. Or you can just hire a few more data entry people to do it manually.
One: more often than not, applications embed a ton of business logic in the client which is not easily available in the API.
Two, and honestly I feel dirty writing this, the UI is usually a lot better tested. Or tested at all.
I'm just the messenger, here. We were very adamant about NOT doing this in my previous company or our current projects, but I can totally understand the "it's good enough for humans, it's good enough for machines" attitude. It makes me sad though.
It might have gone better if you had also stated why that is the case, e.g. "every additional layer exponentially increases the likelihood of bugs being introduced, so their introduction must be worth that risk, or the higher cost of mitigating measures".
Of course, the challenge will still be that "likelihood of bugs" is rather abstract, and often people believe they can be prevented just by paying more attention, and assume that that will happen of its own accord.
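A back-of-the-envelope way to make that "likelihood of bugs" concrete (a heuristic model, not from any cited study): if each layer is implemented correctly with probability $p < 1$, and failures are independent, then an $n$-layer stack is correct with probability

```latex
\Pr[\text{all layers correct}] = p^{n}
```

so at, say, $p = 0.95$, five layers already drop you to $0.95^{5} \approx 0.77$. The decay is exponential in $n$, which is exactly the "exponentially increases the likelihood of bugs" claim.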
If they hadn't layered it, the new retrofitting wouldn't even be possible and the company wouldn't have that contract.
Was it insanity?
In your example, the team tasked with retrofitting would have introduced the layers when the need to make changes arose.
I sure hope all of y'all's companies are 'business purpose focused' or you're gonna be looking for a new job pretty soon.
Prelayering raises initial complexity while lowering eventual complexity. It also protects the core layers, where the IP (presumably) is, from regressions during retrofit.
Yes, it was complicated, but I think there is a benefit: it's very clear where certain functionality should live. The C# REST layer was application-facing, so it took care of SSO and basic validation. The Java webservice contained the business logic to validate things from a broader enterprise perspective. The ESB was a piece of trash that did provide authnz so the Java webservice didn't have to.
Was it worth the complexity? Probably not, in this case. But those sorts of applications tend to have long lifespans and evolving requirements, so the standardization can be helpful.
Off-topic, but your post reminds me of this Michael Feathers article, where he argues that while programming languages have tools to support encapsulation, they don't really work, but the thing about microservices (or layers in this case) is that they actually force us to encapsulate our code.
Not for a single person though, but it will force developers to think more deeply.
Otherwise it could easily result in a code-mesh/hell.
They usually don’t deliver on time and are too stressful to work with. It’s not worth it both personally and from a career perspective.
However if they manage to deliver a complex design on time, I’ll have lost a great career opportunity. It’s a gamble either way, but high complexity both in organization and design, usually yields a high failure rate on just about every metric I can think of except the metric of “I’m going to use this complex design to get another job and bail myself out of the dumpster fire I’m creating before I have to deliver anything of value”.
... astronaut proceeds to fill two whiteboards with ungrounded nonsense.
All you can do is smirk and just wait until the whole project gets mysteriously canceled because no mortal could ever implement it successfully.
Honestly as long as I don’t need to directly depend on them, these are some of my more favorite meetings. So surreal. That and when the team infights the entire time.
I walked past their desk one evening after hours and noticed a printed A3 page of their high-level design diagram. It was... spectacular.
It had over a hundred tiny icons, each representing various systems. Triangles for directory systems, cylinders for databases, and little arrows connecting these things.
You have to picture a spider web of connections between dozens of each type of system.
The notion that this could be implemented was absurd beyond all comprehension. Just one of the tiny little arrows was connecting SAP to a custom system vaguely similar to Siebel. This arrow represented on the order of thousands of tables and API endpoints that need to be hooked up. Another arrow connected an Active Directory with a million accounts to an Oracle directory of the same scale. Another arrow represented synchronisation between a cloud-hosted payroll service to an on-premises equivalent product.
Half the systems didn't exist. Three quarters of the connections didn't exist. Most would have to be written as bespoke code. Some of the arrows would in turn require load balancers, distributed systems, and change tracking databases of their own. We're talking thousands of man years of effort to implement this thing.
It was audacious in the breadth of its scope to the point of going past insane into the brilliant daring art that's only possible if you can appreciate it at the right level of understanding.
That understanding was that these two brilliant people had been collecting something like $4K/day each for years and produced something that dares you to call them on their bluff. But nobody dared say that the emperor has no clothes. They pulled it off.
I was truly impressed.
That being said, I took a picture of the two whiteboards after this dude filled it up with their god-like wisdom. It was truly a marvel of ungrounded architecture.
Yet they all still introduced the headaches of having to update the abstraction layer whenever you wanted to make schema changes
Maybe I'm nitpicking, but the article points out the #1 predictor of software bugs is not the complexity of the software but of the organization itself. A single person can make a hugely complex piece of software, and a relatively large team can make a conceptually simple system.
As for software complexity itself, there's an interesting research result that the thing that matters the most is line count. Not cyclomatic complexity, not the type system, not modularity or test coverage, not the programming language -- looking at the line count alone trumped all the other metrics in predicting flaws. (I can't look for this paper now, but I'm sure with a bit of googling anyone can).
In practice I think you tend to hit Conway’s law -- organizations build software that mirrors their own organizational structure. So it’s hard for large teams to make simple designs.
I’m very skeptical about that line count metric; in my observation bugs tend to sit between modules due to bad interface design. But I could certainly believe that bad modularisation is correlated with line count (in the form of excessive boilerplate).
Maybe it's counterintuitive, but it's what reality shows :)
Agreed. As I pointed out in another post, a good candidate for this metric is the amount of unnecessary code, which is often a proxy for unnecessarily large/complex teams.
Smaller code fits in your head better, and stays more predictable. You don't have as many weird if-conditions to remember.
More LOC doesn’t necessarily mean more if-conditions, though! It depends on the style the code is written in.
It’s a strong statement that LOC and LOC alone is the best known predictor of bug rate, but that seems to be what several people in this thread are saying.
For example, I think most people (though not all) would agree that very dense code full of tricky one-liners (think Perl Golf) is more likely to be buggy and hard to maintain. But if so, “number of control structures used” or some such ought to be a better metric than plain LOC (I assume we’re using “LOC” literally here, not as a shorthand for something else).
Maybe there’s some second-order effect going on? Like very dense code discouraging modifications, so it gets less maintenance, and therefore accrues fewer bugs over time?
This is an interesting topic! Any research links appreciated.
Yeah, I wasn’t claiming to be right, more noting my reaction! I’ll look up the research -- any links appreciated.
Yep, and when you start ripping the monolith apart into separate version controlled projects and deployment pipelines without addressing the interface issues you've significantly increased the complexity of your work products.
I was on a team once where one of the senior people was such a jerk no one wanted to work with him. This led to him and the team carving out a piece of the system that he alone worked on and interfaced with the rest of the system through a single queue. This was certainly not the best design and added all sorts of unnecessary complexity.
Another team I was on had a person who was a good programmer and tended to blow off design meetings. The organization rarely reprimanded this person. In turn it led to various APIs being built that were close, but never quite right.
Big company examples abound. Contrast an Apple keynote to a Google one. Sometimes I wonder if the people presenting at the Google one even work at the same company.
But it's not the only factor, and, quite frequently, it's a matter of correlation, as opposed to causation.
That kind of thing can be very tricky to determine.
When I write software, my first stab at a function tends to be a fairly linear, high-LoC solution, which I then refactor in stages; reducing LoC each time, and ensuring that the quality level remains consistent, or improves.
As far as quality goes, my first, naive stab, was just fine, and I have actually introduced bugs during my refactoring reduction.
So if you try to predict software bugs using modularization (or lack thereof) or "if-then-else" branches, or whatever complexity metric you can think of, you'll get one result. If you try to predict them using simple line count, you'll get another result. The second one will have better precision & recall. So no metric so far has been shown to be better than simply counting lines. That's not an obvious result, but it's the truth.
Sadly, you'll have to believe me because I cannot find the studies right now.
But there are many code lines that don't contain bugs. If only one could somehow make software only from those...
I'd take a slightly more mathematical approach to the code lines predictor: zero lines of code contain zero bugs, an infinite amount of lines of code contains an infinite number of bugs. It follows immediately that yeah, LOC is massively important and no, we are not interested in that, what we do want to know is how everything else is influencing the derivative.
(and on that digression about longish linear solutions: I completely stopped feeling bad about writing long, linear functions for long, linear problems. I've come to greatly prefer those over the indirections of forced subdivision. If there's a reason to subdivide other than "long is bad", great, go for it, but never subdivide just for having smaller parts. Use assign-once, nested scope etc to make the length more palatable)
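The "zero lines, zero bugs" framing above can be written down as a heuristic model (my notation, not from any study): treat defect count $B$ as a function of size $L$, with

```latex
B(0) = 0, \qquad \lim_{L \to \infty} B(L) = \infty, \qquad B(L) \approx \int_0^L k(x)\,dx
```

and the interesting question is what moves the local defect density $k = dB/dL$: language, tooling, modularisation, team structure. LOC dominating as a predictor just says that $k$ varies less across projects than $L$ does.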
See minute 39:30: https://vimeo.com/9270320
Depends on your definition of Big Stuff. If you mean send a rocket to Mars, then yes. But the vast majority of us are working on simple web apps that might call a few apis, yet these seem to require Big Teams. Compare that to what a single game developer might produce, and compare the complexity and performance of the product.
I think we need Big Teams for Small Stuff precisely _because_ of these 'modern development practices' that you mention. Getting things done in these paradigms takes _forever_, so you need a Big Team.
I do think that we are in a sort of "dependency hell," that is sorting itself out. In the end, a few really good dependencies will still be standing in the blasted wasteland.
Dependencies mean that a small team can do Big Stuff, but that relies on the dependency being good.
"Good" means a lot of things. Low bug count is one metric, but so is clear documentation, community support, developer support, and even tribal knowledge. It doesn't necessarily mean "buzzword-compliant," but sometimes aligning to modern "buzz" means that it benefits from the tribal knowledge that exists for that term, and you can deprecate some things like documentation and training.
People often think that I'm a dependency curmudgeon. I'm not. I am, however, a dependency skeptic.
I will rely on operating system frameworks and utilities almost without question, but I won't just add any old data processor to my project because it's "cool." I need to be convinced that it has good quality, good support, and a high "bus coefficient," not to mention that it meets my needs, and doesn't introduce a lot of extra overhead.
Nothing sucks more than building a system, based on a subsystem that disintegrates a year down the road. I suspect many folks that have built systems based on various Google tech, can relate. I have had that experience with Apple tech, over the years (Can you say "OpenDoc"? I knew you could!).
Perhaps. But what I've also seen is the head count of a given project is a direct reflection of the intra-org status of the person heading the project.
There's a belief - that's a myth - that if 3 ppl is good then 6 is twice as good and time will be cut in half. I think we also know - with rare exception - that productivity slides as heads increase.
That's a given.
Then there's also a belief - again a myth - that some mod dev practices can fix the increased head count issue. It might mitigate it here and there. But MDP can only do so much to fix a dysfunctional org/group.
Ultimately it's a leadership/management issue. Process and technology are too often lipstick on a pig.
This goes back to Brooks and has been true since longer than most of the programming industry has been alive. I do wonder why people are so resistant to learning from the past and just assuming "the way we do things now" must be an improvement.
This 6 person development team is promoted by the Agile Industry. They say 6 people is a sweet spot, so that if someone goes on vacation then some other developer can "cover" them.
You may say, "but this problem doesn't need a billion dollars!", to which I say "your corporate ownership structure isn't complicated enough, you need to make sure that as much of the billion dollars sticks to your hands as possible after you fail". WeWork passim.
I assume that some investor was told that "We are going to have 100 developers while our competitor has only 20" and the investor bought into that plan.
My best guess as to why things have become this way is that middle management in "The Enterprise" reckoned "Agile" was an opportunity to commoditize software development.
With open-source, languages like go/rust, excellent IDEs and basically free compute, the amount that a single developer can produce is 10x/20x more.
I write in Swift. I love it.
I started with Machine Code (not ASM -Machine Code).
Also, all those lovely system frameworks are wonderful.
I used to use MacApp (Google it), and PowerPlant (Same).
AppKit and UIKit knock them into a cocked hat. SwiftUI shows promise, but it may be a year or two before it can really match the standards.
It is NOT obvious to someone who hasn't thought about it for a while. Suppose someone is trying to persuade another person and just assumes that they already realize the costs of organisational complexity. There's a good chance they'll run into a wall and not get the message across.
If you think realizing it is a no-brainer, then your 25 years of experience is showing.
Bring it on. Share your experience with youngsters. And let them confront you with methodologies you maybe didn't have experience with.
* Using many of the GoF/OOP patterns, because you may need extensibility at some point. Basically YAGNI.
* Complex, hard to mentally map, build systems (e.g. CMake).
* Designing for purity over simplicity (I'm actually big on FP, here I'm thinking of the Haskell crowd which IMHO sometimes overdoes it).
* Writing a complex architecture without prototyping. Often your prototype will tell you what you need. If you start architecting too much beforehand then you often waste time on some details that don't matter, and even worse, afterwards you try to force it into your architecture which doesn't actually fit the problem. The beauty of software is that it's easy to change things. Architecture on buildings is different because you need to make sure that you're not building the wrong thing. In software building the wrong thing can give you the right insights and still be faster than planning for every eventuality.
One million times yes. In my experience using a prototype as an input to a specification works much better than the other way around.
> Complex, hard to mentally map, build systems (e.g. CMake).
Also yes. It's almost to the point where one has to understand every detail of how CMake works to get it to do one (1) specific thing you need in your build process.
Just to clarify. I have been down this road. I am not interested in sacred cows or third rails.
I'm trying to do all my writing and commenting, based only on my own experience and insight.
I'm done with fighting on the Internet. I don't have the energy for it anymore.
The above comment can devolve into a flame war because non-technical managers see Agile and Scrum in a different light. They believe that without proper management developers will be unproductive.
What does it mean exactly? I feel you are trying to share a nice idea but I can't comprehend it. What are those small parts? Classes? Modules? Services? What does it mean to treat them as a separate full product within an organization?
Many people don't seem to understand when to use microservices. They're not for small teams.
I believe the real benefit of them is that you can have a team at say, Amazon, who works on their product prediction engine. They have well defined input data, and they have well defined data consumers need as an output. Beyond that, they just have to coordinate within their own team to build what they need to.
They don't have to meet with stakeholders across the organization and get into debates with ten other guys in other departments about adding a database field. They have their own database, of their own design, and they do with it what they want. If they need more data they query some other microservice.
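A toy sketch of that boundary using only the Python standard library (service name and data are invented): the owning team's storage schema stays private, and consumers only ever see the HTTP contract.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "prediction" service. The team-internal storage can change
# at will; only the response shape below is a commitment to other teams.
_PRIVATE_DB = {"user-1": ["book", "lamp"]}

class PredictionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        user = self.path.strip("/")
        body = json.dumps({"user": user,
                           "predictions": _PRIVATE_DB.get(user, [])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging for the demo
        pass

server = HTTPServer(("127.0.0.1", 0), PredictionHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A consumer team integrates against the contract, never the database:
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/user-1") as resp:
    result = json.load(resp)["predictions"]
print(result)
server.shutdown()
```

The point of the sketch is organizational, not technical: the only cross-team negotiation is over that one JSON shape, not over database fields.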
Even if your modules are very separated, if you can't individually use and play around with them they become part of a big blob of software. Services may be products, but only if they're independently usable.
If you have a small product that's useful in and of itself (e.g. git) you can much more easily make it work well and then integrate with other good tools and replace those if necessary (e.g. if you have problems with Bitbucket/Jira/Confluence, you can switch them out for other solutions, e.g. Gitea).
But if you have a huge complex product then at some point it becomes organizationally impossible to move away from it.
However, bug control is absolutely vital with these systems, and you definitely need some kind of quality-assurance system, or you will be hatin' life.
There are some overheads involved, and you lose optimisation possibilities.
Most of my coding work has been done as a lone programmer. Even when I was working as a member of a large team, I was fairly isolated, and my work was done within a team of 3 or fewer (including Yours Troolie).
I have also been doing device interface development for most of my career, so I am quite familiar with drivers and OS modules.
When I say "sequestered," I am generally talking about a functional domain, as opposed to a specific software pattern.
Drivers are a perfect example. They tend to be quite autonomous, require incredible quality, and have highly constrained and well-specified interfaces. These interfaces are usually extremely robust, change-resistant and well-understood.
They are also usually extremely limited; restricting what can go through them.
The CGI spec is sort of an example. It's a truly "opaque" interface, completely agnostic to the tech on either side.
There are no CGI libraries required to write stuff that supports it, there's no requirement for languages, other than the linking/stack requirements for the server, etc.
It's also a big fat pain to work with, and I don't miss writing CGI stuff at all.
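For flavor, a minimal CGI-style responder in Python: the entire "interface" is environment variables in, headers and body on stdout out, with no library or language requirement on either side. (The query string is injected here so the sketch runs outside an actual web server.)

```python
import os
import sys

# A CGI responder needs no framework: read the environment, write bytes.
# QUERY_STRING is set here only so the demo runs without a web server.
os.environ.setdefault("QUERY_STRING", "name=world")

params = dict(
    pair.split("=", 1)
    for pair in os.environ["QUERY_STRING"].split("&")
    if "=" in pair
)

# Headers, blank line, body: that's the whole contract with the server.
sys.stdout.write("Content-Type: text/plain\r\n\r\n")
sys.stdout.write(f"hello {params.get('name', 'stranger')}\n")
```

That opacity is exactly what makes it both robust (any language, any stack) and painful (you hand-roll parsing, escaping, state, everything).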
It is possible to write a big project this way, but it is pretty agonizing. I've done it. Most programmers would blanch at the idea. Many managers and beancounters would, as well. It does not jibe well with "Move fast and break things."
But there are some really big projects that work extremely well, that don't do this. It's just my experience in writing stuff.
When you write device control stuff, you have the luxury of a fairly restricted scope, but you also have the danger of Bad Things Happening, if you screw up (I've written film scanner drivers that have reformatted customer disks, for instance -FUN).
Sometimes you do. But many times big stuff gets written for reasons other than need. One of the best wins in our industry is to recognize that the big stuff isn't needed and to never start the project in the first place.
At Google when your project begins to scale up you can ask for more money, more people, or both. Most teams ask for both.
What you can't ask for is different people. You can't solve your distributed systems problems by adding 5 more mid-level software engineers to your team who have not worked in the domain. Yet due to how hiring works, this is what's offered to you unless you want to do the recruiting yourself. Google views all software engineers as interchangeable at their level. I have seen people being sent to work on an Android app with hundreds of millions of users despite never having done mobile development before. That normally goes about as well as you'd expect.
So you end up with teams of 20 people slowly doing work that could be done quickly by 5 experts. In some cases all you lose is speed. In other cases this is fatal. Some things simply cannot be done without the right team.
At Amazon, Sr. Leadership and HR love to pretend all SDEs at a given level are interchangeable, level actually indicates competence, and leetcoding external hires with zero domain knowledge have far more worth than internal promos. All of the above assumptions seem completely insane to me and have resulted in the destruction of many projects.
Honestly I don’t know. I agree it’s weird. But these companies keep succeeding doing it this way, so I’m not sure what to make of it.
That doesn't necessarily mean anything. The fact that a system might be working, doesn't mean it's anywhere near optimal. I think these companies are successful in spite of these types of policies, not because of them.
I bet we can improve predictive power by considering the degree of overengineering, i.e., the number of engineers working on a task (edit: or lines of code) relative to the complexity of the task they’re working on. 100 people working on a task that could be accomplished by a single person will result in a much buggier product than 100 people working on a task that actually requires 100 people. The complexity of code expands to fill available engineering capacity, regardless of how simple the underlying task is; put 100 people to work on FizzBuzz and you’ll get a FizzBuzz with the complexity of a 100 person project. Unnecessary complexity results in buggier code than necessary complexity because unnecessary components have inherently unclear roles.
Edit: substitute "100 people" with "10 million lines of code" and "1 person" with "1000 lines of code" and my statement should still hold true.
The sad part is, it would seem like all the engineers we have are overkill, but in my little silo, we could easily split our work into even more sub-teams, hire 12 more people, and still keep churning just to stay afloat. Sorry for the rant, I'm not sure exactly what I'm driving at. I guess I'm just trying to give a cautionary example of how not to manage large-scale software projects.
Maybe I'm wrong though. When you are charting new ground, building new shit that has never been build before--which is what your product teams should be doing--you don't have years long backlogs because you can't see that far out. Good, productive feature work is iterative.
If you can see with a high degree of clarity what you will be working on 5 years from now, it probably means it's been done before and you are better off cutting a check for it.
Hopefully this makes sense :-)
Also, the study doesn't really take "tasks" into account at all, it seems. Just modules and data relating to the modules.
From an existing codebase this would be very difficult to objectively assess. I think you’d have to study it empirically — come up with a set of tasks (“A”) that each takes a single programmer “P” on average a week to complete. Then come up with a set of tasks (“B”) that each takes a team “T” of 10 programmers on average a week to complete (ensure that 10 programmers is a lower bound, i.e. decreasing the number of coders causes the project to take longer). Across multiple solo programmers and teams, compare the quality of the code produced by programmers P on tasks A, teams T on tasks B, and teams T on tasks A. I’d bet P/A > T/B > T/A.
Some day, I'd love to participate in the NASA / JPL style. Everyone reviews the entire code base together. Bugs are assumed a failure of process. I guess the thinking is all bugs are shallow given enough eyeballs.
Realizing now that I'm a hypocrite (again). I hate pair programming. But do kind of enjoy code reviews. Now I don't know what I believe.
That at least is my experience anyway.
Larger and more complicated software both requires a bigger team (therefore more organizational complexity) and is more likely to contain bugs.
It's why Conway's Law exists, and points towards the importance of well-designed and -specified APIs.
Big projects require big teams, and also have a lot more "trouble nodes," so there are many more places for bugs.
The big team is not the cause. It is simply a natural coincidence.
But if you have a medium project, and you put a big team on it, then I suspect that you will get big team problems (communication issues causing bugs) even though the project itself didn't cause the issues.
After one of the early big software project failures (maybe Multics?) there was a quote about software projects going around (maybe John R Pierce?) that "If it can't be done by two people in six months, it can't be done."
One of the functions of good software design is to break the system down into pieces that a couple of people can complete in a reasonable length of time.
That will take you to healthy and productive places.
I guess my point is that running decent software on what today would be considered very little hardware is a solved problem, but it's not what the economy is optimized for.
For me, Vista was slow as molasses, which was enough to upset me and make me hate it.
For a lot of people, it also had driver problems.
Every decision you make is second guessed by 8 other people, and anything you do impacts 5 other teams. It's infuriating unless you're the type of person who loves working through people problems, and a lot of developers, including myself, aren't that kind of person.
Also, maybe it has to do with the fact that failure is an expected part of startups, but in the business world there are perverse incentives to build something mediocre, expensive and ultimately useless just to give the appearance of success.
With a startup, you're free to do whatever the hell you want. If it works, it works, and if it doesn't it doesn't.
I guess it would be possible to give in-house teams free rein to try out new ideas without all the organizational friction, but the startup model works, so why not just throw some money at some kids and see what happens?
The traditional name for this is a skunkworks project:
They tried something like it somewhere I worked but messed it up. Everyone needed to pitch ideas then the 'approved' projects were the pet ideas of various suits, put under the same restrictions as regular projects, and choked to death.
In my current project (big co.) we have a technical PM, a non-technical PM, a non-programmer dev lead, a scrum master and a lead business analyst, all involved in managing the work of a team of 2 and a half (a sr ba/qa guy, a part-time ssr dev and me). Wasted work is probably around 90%.
Not just middle layers and up. Freeloaders and gatekeepers everywhere.
This happens at smaller companies in smaller ways but the effect is the same.
It's worse than the "Mythical Man Month" in that production is not simply slowed down but it is slowly made rotten until it gets burned, buried, or passed off to out-sourced maintenance.
Their "code complexity" means nothing. If you compare a simple "todo web app" to, for example, a sha256 hash implementation, the web app will have orders of magnitude more "code complexity."
It's very unlikely, however, that you figure out every single combination of variables while writing tests. So, in practice, I don't think it's possible to avoid bugs, unless you have a really tiny code base, maybe.
Now, sure, such things happen less when it's me talking to me than when it's me talking to you. They still happen, though.
Blaming the means for the ends is a classic n00b mistake. A mistake that's being made over and over and over again.
> In the replicated study the predictive value of organizational structure is not as high. Out of 4 measured models, it gets the 2nd highest precision and the 3rd highest recall.