"The greatest people are self managing, they don't need to be managed. Once they know what to do they will figure out how to do it. What they need is a common vision, and that's what leadership is. Leadership is having a vision, being able to articulate that, so the people around you can understand it, getting a consensus on a common vision."
"We were in a stage where we went out and thought, Oh! We are going to be a big company, so let's hire professional management. We went and hired a bunch of professional management but it didn't work out at all well. Most of them were bozos; they knew how to manage, but they didn't know how to do anything!"
"If you are a great person, why would you want to work with someone you can't learn anything from? You know what's interesting? You know who the best managers are? They are the great individual contributors who never ever want to be a manager but decide they have to be a manager because no one else is gonna do a job as good as them."
Programmers/developers are only effective if either the developer himself or enough people in the team have sufficiently deep domain knowledge.
That means you can only write accounting software if you are an accountant, in addition to a developer. You can replace "accounting" with anything else you want.
Software developers don't want to hear this because it means that being a developer is near useless: it allows them to express themselves in code but ... they have nothing to express.
Accountants don't want to hear this because it means no generic software developer (or firm) can deliver on the software they want.
The real bad news for software devs is this: you'll do a lot better as a bad developer with expert domain knowledge than vice versa. This is why Excel sheets and VBA macros can run for decades when great and easily maintained software cannot: the knowledge they were written with is what makes the difference.
Of course both situations are what you constantly see in the real world: software developers making software that doesn't support the function it was written for, and really, really badly written pieces of crap software that work amazingly well.
That's overstated. You don't have to be a full-blown accountant; you just have to have enough accounting knowledge to do your job. I worked at a funds company for a few years and I didn't know anything about the business when they hired me. But the amount of domain knowledge I had to learn to be effective in my little area wasn't that difficult to pick up.
> The real bad news for software devs is this: you'll do a lot better as a bad developer with expert domain knowledge than vice versa.
Nobody is going to thank you for producing software that would have been great if only it worked the way you intended.
Nope, ideally, you should be better than your average "full-blown" accountant.
> Nobody is going to thank you for producing software that would have been great if only it worked the way you intended.
Look, there are minimum levels of competency for a lot of things before you can do anything. In similar fashion, you also need to be able to walk, or at least get around, have some modicum of understanding of how to run a business (even if you're just a TL) ... and so on and so forth.
If you focus only on deliverables and deadlines, you'll end up with developers using a mix of different approaches, libraries and even languages. It hurts team cohesion and makes the logistics of project management much harder since Joe can't take over Tom's code now that Tom has the flu.
As I see it, one of the main tasks of a technical manager is to set conventions so that everyone feels at least semi-comfortable with everyone else's code. That's not micro-management, that's management.
This is actually an extremely inefficient and demoralizing environment for those involved. Yeah, it's easy to take over when someone leaves, because they were so hamstrung by the environment that they never built anything interesting. And you're going to be doing a lot of this taking over, because people are always leaving, because they were hamstrung. So this idea that we can't trust individuals to stick around and do a good job, and we have to make sure they never have enough power to do damage when they make a mistake: it's a self-fulfilling prophecy. It makes them untrustworthy and drives them away.
It really is about ownership. Programmers who are achieving at a high level move much faster and do much greater things. It's worth letting them make mistakes to retain the best people and get their best work. The only catch is figuring out how to keep them accountable for their decisions. OK, you want to use this new tech or try this new architecture. How do we tie your compensation and career progress to the success of those decisions?
No amount of ownership is going to cover for an otherwise brilliant developer who jumps ship once the project has shipped its MVP. And if that brilliant developer wrote that project in, say, Haskell (because they wanted to learn it), that project is probably doomed from a financial point of view - a quality Haskell developer willing to do maintenance on an existing project will likely cost more than the project is worth in the first place.
The root of this problem does lie in management, but it's a self-fulfilling prophecy at this point: employees have taken the lesson to heart. Even companies which do give fantastic benefits are still going to see high turnover, and need to account for that.
That's not what I was talking about. Employees are neither bricks nor cogs. But having said that, if you run any long-term project, you should expect people to come and go, it's part of the game. Thinking that this will not happen in your project is simply negligent.
> This is actually an extremely inefficient and demoralizing environment for those involved.
It doesn't have to be. Working with common conventions and tools doesn't have to be demoralizing. Conventions should be set for a good reason, after an open discussion, and possibly even a vote ("Do you think that it's worth bringing in this library?", "Should we all use the same IDE?"). Also, conventions aren't set in stone, they can change over time.
This gives the team ownership of their project. It makes passing knowledge between team members easier, it makes it easy for team members to help each other when they've run into an issue, not to mention that it makes code reviews and various technical discussions a lot easier.
If Bob is writing using a different language or a set of libraries than the rest of us, who can review Bob's code? Who can help him out when he's stuck on some bug? Who can help him flesh out his design ideas for some feature he's writing?
Without team cohesion, you're creating not a team of developers, but a bunch of individuals who happen to work on related stuff. I don't know about you, but to me that sounds a lot more demoralizing. To me, one of the great benefits of working on a team is that we all get to learn from each other, plus we get to share a common goal. Agreeing to some common conventions seems like a small price to pay for that.
I worked at a place that essentially standardized on Vim. They emphasised pair programming so IDE standardization helped with that. If you already use Vim, sounds great. If not, probably sounds terrible. So what looked like team cohesion was homogeneity. They subtly turned away a lot of people who didn't fit the mold in what should be a minor detail.
I've worked at two companies that basically banned Redis. Having a conversation with an SRE who had a bad experience, trying to convince him it's worth having this tool available, is one of my worst professional memories. Consensus driven technical decisions suck. It's better to give people authority and responsibility. I would have happily admined it myself.
Nobody's going to review Bob's code, not really. If they're not working directly with him on the project and it's doing anything reasonably complex, then they don't have the context to say anything useful beyond catching typos. Things get rewritten every few years anyway.
The bigger problem is that many companies do not reward that, they reward doing it quickly and not asking too many questions.
(Somewhat seriously) are you hiring?
I'd be more inclined to say one of the main roles of any manager is facilitation more than anything else.
Any manager that "sets convention" is somebody who wants the easy part of technology without the corresponding hard part.
Talking about technical solutions is easy. Implementing them is much harder.
I've come across multiple "technical managers" who were last hands-on with code more than x years ago, and they always end up talking out of their arse.
And that's not to mention the numerous wrong choices they've committed to or spoken about at a high-level meeting with zero notion of what's involved.
Common ownership does not end up with a random patchwork of technologies. Teams make collective decisions on them.
I might like to write CGI programs in Prolog with a MarkLogic db, I'm not going to unilaterally decide to write my bit of a team application like that when everyone else writes WSGI Python applications with Postgres.
They don't exist to make good people more productive but to make mediocre hires marginally productive.
To be honest, my own senses were numbed by being a yes-man at a corporate job. My job title made it sound like I ran the world, but much of the work I was made to do was simply idiotic. And I was very happy to toe the line because it was easy, you could always spread blame and I didn't have to think much. Then I realised I have a limited amount of time on this planet and decided to do something more useful with my time...
Another example I've talked about a lot with my colleagues is company-wide coding style definitions. They usually have stuff like "never ever use goto", but then you have Linux kernel code using goto, and it seems all weird. But here too, the coding convention that feels "stupid" to the rockstar coder is in place to make the code of the summer intern even remotely usable after they are gone.
I actually found a goto in a C# project I took over this year and one of the first things I did was change it to a while loop with a break.
Most situations (e.g., early returns) for which resource management would be handled by a goto are already addressed by RAII.
Maybe I am one of those people? The first time I hired a developer (contract, and senior contract at that) I had written a prototype of the software I wanted with the core features working, but it was clunky. It was to interface a modern piece of software to a legacy system with a poor API. I gave him the code and said that I wanted that same thing, but written in a professional and maintainable way. I asked him to let me know anything he was not sure about, and I would document it further. 'Let's be Agile about it, we don't need to write heaps of documentation', he said. Apart from having to make him recover from various flights of fancy about new features I hadn't asked for, he kept blundering on with things he hadn't understood properly or lacked the specific domain knowledge for. I had those things, and many were working in my prototype. Several times the project stalled due to a problem he couldn't fix, and I recovered it with my limited (at the time very limited) coding knowledge. In the end we went live with his solution, which never quite achieved its aims, but when the business found new requirements I couldn't face this again and wrote the whole thing from scratch.
One bad dev? Well, he was a lot like many of the ones I met subsequently, tbh. Far too eager to find the one vague thing in what you asked for and interpret it the wrong way. Far too quick to think that users should bend to fit the software, and far too willing to plow on with code when they should have been looking at a flow chart. The great development managers I have met are the ones that have spent considerable time exploring the domain they are working in, know how to talk to business people, and stop and ask when something is ambiguous. Sometimes you need to get out of the tech stack and think more in terms of processes.
My methodology is now something like this.
1) I write in plain English what I want (thanks Joel Spolsky)
2) I bullet point my definite requirements
3) I explain a process in simple flow chart blocks
4) I send this to my devs in good time
5) I sit down with them and explain it again, drawing charts as I go if necessary.
6) I invite and expect questions/challenges and note them down
7) I amend my docs and reissue it
8) I let someone else translate whatever I wrote into 'user stories' or whatever else they want to do
9) I test these against the requirements I first wrote; now I know that I have conveyed my meaning correctly
10) I meet with them regularly and take plenty of time to just talk through where we are. I like to have a mix of business people and engineering in the room, because it makes the devs talk at different levels.
11) I get into UAT with the least tech-savvy people, who have no understanding of the project, as soon as there is something to show them. Secretaries, clerks and call centre operators all find different faults than the tech people, because they do those jobs every day. They ask all the straightforward questions that you never thought of.
Sounds obvious, but I meet with a lot of third party agencies and developers who look like they have never seen anything like requirements from a client! I have had them say things like 'but we use Agile, we will collect our own user stories'. How dumb is that: 'we are the smart ones, we will cut out the people with domain knowledge and guess'! I tell them they can do what they like, but I will test their finished article against my original requirements when I am deciding if I am going to pay for what they did!
The main thread of this is to force as much human interaction between the developers and the business as possible, all the way through the project.
That still happens, but it seems the "cogging" of developers has largely made that a rarity as cheap offshore developers aren't expected to do that sort of thing anymore.
As an aside Spolsky should be required reading for any person who oversees any department that interacts with developers in any way. Most people think of development as a scientific endeavor, but it's largely artistic with mathematical tools.
An interesting and sort of related aside: I do some work with a company that makes old fashioned 'Enterprise software'. Their process seems to be that the back end dev gods write functionality that is convenient for the database, and then the front end people make that accessible to humans in the most convenient way for them. So instead of working the way humans do, the humans have to work the way the database does! When you talk to their back end devs, they talk down to you like you are an idiot for not following the way they work! Their db implementation is actually pretty good; the application, however, is only really usable by people who can write SQL! If their API didn't also suck, it would be tempting to re-skin the whole app.
The first job of a dev is to define the requirements with the clients. But most devs don't know how to do that, or don't want to.
What do you do if someone says they don’t know how to estimate a particular task?
If as the manager, you don't know enough about the codebase to get a sense of "last X development efforts on component Y took about Z weeks of effort", then it is your job to get to know the components better.
Note that doesn't mean building up the expertise to be able to do the changes yourself. Just that it should not be completely Greek to you.
I've never worked on a project where I was estimating something at a scale of 6 months. But if you ask me whether something will take 1 week or 4 weeks, before I've broken the task down, my intuition is going to scream "I don't know the answer to that."
If I told you "Somewhere around 1 week to 2.5 months", would you accept that answer? Or would you think I was trolling you and we should have a conversation about my performance and place at the company?
If I instead told you "2 weeks", how would that be anything other than a lie?
That said, my main point is that time estimates lead to bad negotiations. If someone says it will take six months and you need it in five, what is on the negotiation table? Just a month? This is how our industry often finds itself in crunch time, making up for time estimates gone awry.
Whereas if there is a list of things that can be negotiated, you can order the construction so that there are natural cut points.
If you are able to turn every estimation session into a series of back and forths where it is "how long?" followed by "why?" if you aren't satisfied, then I feel we are essentially agreeing. Whether you are asking for it or not, you want them to estimate the work required and to summarize it into a time.
And to be perfectly clear, once you are two people removed from the work, this is required. Similarly, at three people removed from the work, the relevant question will not be "how long?" but "how many dollars?"
Similarly, it would be nice to think everyone will eventually need the skill of estimating the value of a feature or product. Because, at the end of the day, that is what is most important.
However, I'm assuming anyone asking someone specifically for an estimate is one of their managers. And they should have more familiarity with what they are asking their colleagues to estimate. And I'm also asserting each of these skills is not trivial. And that they build on each other.
I agree no estimation process is perfect. Nor do I think you should do comprehensive estimates on new work. However, the thing you want to know is how much work there is. Not necessarily when you'd like to release a year from now.
Odds are high you have a deadline. So the incentive is high to keep the estimate below that line.
That aside, the incentive is high to keep the amount of work planned below the amount of work that can be done before the deadline. It doesn't matter how many abstract units or concepts you use, that's what you want to know if you have a deadline.
That is, if you give me a high confidence and a low confidence estimate, how do I know why you missed it when the time passes? More, how "close" to making it were you? If you got half of the work you estimated done by that time, I'd know you were about halfway there. If the date just passed, all I know is that you didn't make it.
And I used "I" up there. But it is really even more personal. All you know is that you didn't make it. You literally don't have anything to learn there.
Contrast that with any work you do. Look back and quantify what you did. Not how long it took to do it.
And only 90% of the work remaining!
It sounds like you are imagining an estimate as a standalone number that isn't revisited or given more detail. If that's how you do it then of course you can't learn from it. It's supposed to be an ongoing process - if you said '3-6 months' when we started then after 2 months I'd expect a different answer. I'd also expect the initial estimate to be accompanied by an explanation that talks about the work to be done and which areas are causing uncertainty.
If you know of anything to read, watch, or work through to learn to estimate tasks, I would be extremely grateful if you could link or describe it here.
As it stands, I’ve only ever been able to estimate tasks if I’ve done similar ones a few times before. If asked about a new type of task or one with a new toolset, I would currently have to refuse an estimate more fine-grained than a week -- the stress and shame of lying to you would be too much otherwise.
Or would you feel more comfortable asking how much they will have to actually do? For example, they would have to disconnect the fridge and move it out of the way, disconnect the oven and move it out of the way, then buy enough tile and mortar for the space, which would be X boxes, and then clean and do the work.
Benefit of this way is when you come back, if none of those tasks has been done on day two, you know it isn't likely to get done on day two.
So, try and use the same approach for programming. Don't just say "it will take 2 weeks." Instead, say, it requires updating X component, modifying Y tests, incorporating Z new dependencies, etc...
As a team, you can try and portion size estimates on each of these. But don't spend too much time on that. Experience is the secret, not perfect estimates. (That is, the more things the team has done, the better they will estimate what they can do.)
I’m just worried that at some point, somebody will come along and ask for an estimate and I’ll say I cannot give one and that this means I am not really a professional.
Can't say if that helps strengthen your claim to professional, but that practice certainly sounds professional to me.
If it can be that far off (or more), what is the business value? And how do I know that the person I'm talking to will realize when I say "this will be done by the 20th" that I am thinking "I have no clue."?
Sounds pretty much like:
> Individuals and interactions over processes and tools 
I don't know why, but somehow people don't get that agile is not a methodology but a spirit.
Because of all the agile coaches, boards, trainings, conferences and companies, it ends up feeling more like a religion.
Oh wow amen to this. I've started calling one of our managers "Reverend". Particularly when he begins a meeting with a statement that's starting to sound a lot like "we are gathered here today to...".
And this is not helped by the true believers constantly saying "you're doing it wrong."
So, point me at someone doing it right then! Because the landscape is coated with people "doing it wrong", and since I'm doing this as a job, I don't have time to sift through piles of pyrite for a single nugget of gold.
Even now, when we ask for something simple like a new Confluence board, we have to actively push back against new rules, additional restrictions, and more gimmicky Atlassian plugins. It pains me that these misguided parasites are paid to make my life worse.
A cult, more than a religion. I believe "cargo cult" is the exact term.
The change to Agile is often led by management, not by developers. And when it's led by management, it's done in a way that keeps management central to the development process (which is the opposite of original Agile), which means an over-focus on process.
Part of this is cynical survival skills: management wanna manage. The more forward-thinking ones probably realize that original Agile is an existential threat, and they can 'get ahead of' that threat by controlling how it's implemented. But most commonly, it's just plain simple myopia: Process guys will tend to view Agile as a process because they're process guys. And when they implement it, they will make management of the process the central role in everything.
I've found the best "methodology" to deliver decent results is sticking with short iterations. Software is often about doing something we've either not done before, in a way we've not done it before, or with people we've not done it with before. So we will have surprises (aka delays) on the way. The more frequently we check just what these delays are, the more realistic we can be about whether we can make it on time, or whether we need to cut scope or pull in more help to make it on time.
This can be true but can also be completely false. Massive differences in productivity are possible depending on how individuals work together on a team.
Great teams can produce much more than mediocre ones, but they too have a limit. When the deadline is set too close, one of these 4 things has to give, and it is good to know in advance which one that is, so the team can set its priorities accordingly.
So while you can argue it's part of the scope, it's not part of the scope that anybody else seems to think about.
> Massive differences in productivity are possible depending on how individuals work together on a team.
Addressed by this:
>> Exceptions I've seen are with mature and well-bonded teams working on familiar scope they understand clearly, with a timeline they themselves defined.
Time, cost and scope. Pick two.
Do you work with me?
I've been fighting this battle for the last 3 months. They've added almost an hour of meetings every morning at 9am. Half the office shows up at 8:30. That's enough time before the meeting to check email and get coffee, so essentially no work starts until 10. At 10 they get back to their desks, there's a bit of whining about whatever management has changed that day and how stupid the meetings are, some email correspondence and by 11 nothing is done and they go to lunch. They get back by 11:30-12 and finally start doing work. So your 8 hour workday turns into maybe 4 hours of real labor time, and no one works 100% of that.
The religious people (mentioned in the article) were harmed by false adherence. They adhered to the headlines and warped the substance of what the Methodology said. I remember (with pain) a place that wouldn't develop development scaffolding. They had rules for software development, good ones, motivated by achieving near-perfect uptime for customer-facing services. Implementing a scaffolding service or crontab to that standard was a lot of work.
Then there's the non-adherents who eroded the Methodology. Like the scrum shops that eroded scrum by deemphasising the product owner and stories until the result looked more like a waterfall.
The Methodologies may be broken as a whole but the practice I've seen was generally so distorting that I feel it's unfair to blame the Methodologies.
This reminds me of the people on the far right or left who believe, "[Capitalism/Communism] can't fail; it can only be failed."
Some of the failure I've seen can be partly explained by people who wanted to have their cake and eat it. Who wanted, say, the promised advantages of Scrum but were not willing to pay its costs (lack of long-term plans and fixed finish dates).
That's not all. It's part of the explanation for some of the suckage I've experienced.
I do blame people for not making up their minds. The people who invented scrum were willing to give up some parts of long-term planning, and got remarkable results for that. They are not to blame when others later failed by not giving up blah.
Maybe some blame should go to conslutants who oversold the benefits of Methodologies without stressing the costs. "YES YOU ACTUALLY HAVE TO DO THIS, IT WILL WORK BADLY OTHERWISE".
Being introspective of your capabilities and your achievements is an extremely valuable skill that I wish more of us had. However, selling your capabilities and giving vague promises that it will help with development if you only followed these practices is a deceitful way to make some money off of your reputation.
Worse, many of them do this by attacking those that came before them, but then taking the stance that their "teachings" are above attack. And that anybody that isn't getting the same benefits they did just aren't applying themselves correctly.
Note: I understand "failure mode" to mean that rules are being ignored, or guards have been disabled.
* Frequent releases (i.e. do iterations or 'sprints' or whatever).
* Accept that your methodology has bugs and 'fix' those bugs between releases. Most software is horrible and buggy. Don't trust the "methodology gods" that they wrote a perfect piece of working software. It's probably half assed and worked semi-well for their specific use case so they 'released' it along with a reality distortion field.
* Accept that different use cases require different methodologies. Writing space shuttle code? You need vastly different team dynamics to a group of 10 people at a marketing agency running short lived campaigns.
* Follow the UNIX philosophy: don't have ONE methodology that you follow to a T - string together a bunch of small, self-contained rules and team processes that serve your purposes and iterate upon them.
tl;dr fuck scrum. it's the internet explorer of methodologies.
They will not attract the most talented software developers (on average, not in all cases), and the business people for whom the software is a means to an end care more about consistency and predictability than about quality.
As a result, fungible resources (humans), deeply regimented stories, regular delivery milestones (sprints), and consistent velocity IS the best possible outcome.
I don't think it really matters what kind of company you work for. I've worked for many software and non-software companies and the same issues crop up in both.

The main one is that scrum accelerates the accretion of technical debt, which "business people" can somehow not care about right up until the point where it drives them out of business.

It has some good ideas (retros, sprints, no deadlines) and some terrible ideas (treating team members as fungible, story pointing/velocity, too many meetings, PO has to make decisions about specific pieces of tech debt).

My main problem with it is the teachers, coaches and promoters who take an all-or-nothing view of it and who treat deviations from the official 'scrum' policy as, by default, a problem with the team rather than, potentially, a bug in SCRUM.

I used to think that it was a good base to work from, but after arguing fruitlessly with the people who take a religious approach to it, I've come to the opinion that it just needs to be trashed wherever possible, because the problems it does have will only be resolved by moving on to something else. Better to move on sooner rather than later.
So, fuck scrum.
* Your team did more work this week.
* Your team worked more efficiently this week.
* Your team inflated its story point estimations this week.
Any one or all of those things could have happened in varying degrees.
Given all of that, what useful knowledge is it actually supposed to impart?
The best part is the number of story points / velocity can vary wildly depending upon what the team believes the measure is being used for.
Yes, things change: productivity, morale, team members join and leave, some teams are shit at estimating, etc. - but at least in my experience, if you take an average you do arrive at a useful figure.
That inevitably means that developers have an incentive to inflate their estimates, which means story point inflation.
Averaging does not fix this dynamic.
There's no certificate, role, set of tools or prescriptive process. There's no specification, it's not a product, or job title. There's no one true voice on what DevOps is or isn't. It's about attitude, ideas, customs and behaviours. Culture, paradigms and philosophy. It's a way of thinking, a way of doing and a way of being. Practicing as well as preaching. It's a conversation. It's about taking the best experiences and sharing those with others.
I feel like devops to most just means "ops but closer to the code now that there are so many code/release management tools and someone has to manage it all".
Consider Waterfall where development happens, then it's tossed over the wall to testing. Almost everyone acknowledges that this is a bad idea and so testing and development happens concurrently now, but not necessarily by the same people (can still have testers and developers). I guess DevTest doesn't have a good buzzword sound to it, but it's what we do.
DevOps sees the same issue with how developers finish the work, it gets tested, then it's tossed over to ops who don't really understand what the code does (not their fault, they didn't build it). DevOps brings the two groups closer together so ops still does ops, dev still does dev, but as a group they shorten the cycle so that dev can deal with the real issues faced by the operators instead of always lagging months and years behind. It's not necessarily an organizational change, the critical part is opening communication between the two groups.
In a way, it's an extension of agile (little-a, because I don't mean the shit the coaches sell), where the operators become the customer and development gets feedback from them. It's one of those "obvious" things that for some reason isn't very commonly practiced.
Now, that said, in practice it has issues. Management sees it like your comment and tries to make dev do ops or ops do dev, or mash them into one team (which may or may not work). More likely they try to make the devs do ops work and it's a disaster. It may run well, but features are added slowly. Or features are added, but the operations story is a nightmare. They end up understaffing the group (1 devop = 1 op + 1 dev, right?) and creating problems.
The original agile manifesto didn't have a certificate or set of tools either. There is plenty of DevOps tooling, and I've little doubt that management types will hijack DevOps for their own gain and offer certifications. It'll probably be easy for them, since there is no single "DevOps"; everyone has their own idea of what it means to them.
I don't know why people buy into the "once we're agile enough, everything's going to be OK; until then, let us use this whip and self-flagellate for not being agile enough" mindset.
The problem is not software methodologies per se, it is trying to apply a software methodology to software development where the priorities of the methodology are fundamentally at odds with the requirements and goals of the software being built. The root of the problem is the notion that there is one software methodology that is efficient and productive for all possible types of software development. I would argue that there is an optimal methodology for most software but it is a different methodology for different types of software.
If we discard the oft-argued proposition that a PHP website, an embedded system, and a high-performance database kernel can -- and should -- all be developed with the same software methodology then this entire discussion goes away. A software methodology is a tool; they work best when you select the best one for the job.
The methodology cannot be effective without creating the right personal dynamics.
Good interpersonal dynamics are important, but the sheer level of irony that the first rule of the Agile manifesto emphasizes deprioritizing the main thing that will actually get you away from waterfall-like development is pretty staggering, IMHO.
Never mind. Forget the tests. If you have a meeting at 11am every morning where anybody who sits down is shouted at then you've done it. You're "agile".
Personally, I am of the opinion that a strong emphasis on test-driven development in the long run will cause waterfall-style development. Tests are all about risk-prevention, instead of risk-mitigation. Prevention eventually becomes exceedingly expensive, whereas mitigation is all about building robustness into the running system. Due to that, the scalability of mitigation systems, such as true micro-services or actor-systems, are inherently more dynamic and cause less latency in development.
I don't understand your last statement. It seems to confirm my position: "... anybody who sits down is shouted at ... ". The process (standing up, not sitting down) is less important than good team dynamics (not getting shouted at).
(edit: down-voters, please share why you down-vote! I'd like to know. Also, please don't down-vote based on opinion, but on weakness of argumentation instead.)
Anyways, I would really like the avenue of thought you've planted in me to flesh out: risk prevention vs mitigation and how it applies to software development.
The one thing that I religiously downvote on HN is complaints about down votes.
I agree. However, this is about tools and processes, so it's not really relevant to my argument, which is that you shouldn't explicitly deprioritize tools and processes.
>I don't understand your last statement.
I'm making fun of people who cargo cult 'agile'. Actually standing up is the least important feature of stand ups. Sitting down during a stand up and seeing people's reactions is a good litmus test. It highlights those people who put a great emphasis on non-functional rituals.
Obviously, the true spirit of the agile manifesto is the non-prescriptive, ambiguous wording.
Unfortunately, 'scrum' has come to be known as 'agile', which I think is an unfortunate consequence of all the cargo-culting going on. Rituals are fine, as long as they have some depth and identity to them. The Scrum rituals are shallow, simplistic and feeble. Without identity it is no wonder many participants want them to be over as soon as possible...
But it still strongly implies that tools basically don't matter, and I've worked on teams where people did choose to interpret it that way, which ironically led to waterfalling...
I could really go on about those underpinnings, saying that they are mystic and grounded in certain Western cultural ideals (that are deteriorating at a rapid pace at the moment). But HN is probably not the right place to discuss it, and it is 1 AM here.
Is it fairly safe to say that the goal of most [software] companies isn't to just make good software or to develop [good] software fast but to find a repeatable process to make good software quickly? And this is why tools and processes are still needed.
We always used a pseudo-waterfall but with much shorter iterations. We called it cinnabun. For example, we would cut 4 CDs a year, so our iterations were about 3 months. We would plan what we wanted to do in the 3 months (bugs + features), code it, developer-test it and throw it to QA. Once we had a good build, we would distribute it, have a small celebration, then start over again.
It's similar to agile, but with the 3 month cycles, you could actually plan and design a lot better because you could see out a little further.
I suspect Agile is so popular because the business doesn't want to think of, make and stick to a decision for 3 months.
These days it still happens, it's just that few people call it that unless they want to be rude.
IMHO waterfall is a natural state to revert to if you have a test deficit, technical debt and you don't trust releases that haven't gone through a round of manual testing first. The level of faith you have in your code/tests basically dictates the length of the mini-waterfall iterations.
Continuous delivery (the exact opposite of waterfall) comes equally naturally if you have no test deficit. It doesn't require some kind of paradigm shift or a special 'agile mindset', it just requires an automated regression test suite you can trust.
(this usually only happens when you write the tests first, but it's not strictly 100% necessary)
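What "trust the suite" means mechanically can be sketched like this (the checks below are toy stand-ins, not a real test framework): the release decision reduces to "did every regression check pass", and once that holds for every commit, release schedules stop mattering.

```python
# Minimal sketch of the "release gate" idea: a change is releasable exactly
# when every automated regression check passes. The checks here are toy
# stand-ins for a real suite.

def run_suite(checks):
    """Run every check; collect all failures instead of stopping at the first."""
    return [name for name, check in checks if not check()]

checks = [
    ("totals add up", lambda: sum([1, 2, 3]) == 6),
    ("rounding is stable", lambda: round(2.675, 2) in (2.67, 2.68)),
]

failures = run_suite(checks)
releasable = not failures   # green suite == releasable, no schedule needed
print(releasable)           # True
```

The whole "continuous" part falls out of that one boolean: if it's true after every merge, there's nothing left to batch up for a release date.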
Waterfall, as described, carried on through the PC revolution but was falling out of favor to version scoping and planning (iterative). Iterations could be whatever you wanted. There were backlogs, usually a list of tickets in some software system like StarTeam or whatever. I think Visual SourceSafe had this as well. It's fairly simple, so our UI designer just built one in a week or so.
Agile and its short sprints (2-3 weeks) fit nicely with SaaS (websites) because their distribution costs (comparatively none) don't lend themselves to versioning (or aren't encumbered by it). I just find that the short iteration isn't ideal because it, together with the lack of long-vision planning, doesn't allow designers to develop in a "big-picture" way.
>it just requires an automated regression test suite you can trust.
Testing doesn't really fit anywhere in the equation as a differentiator. There was automated unit testing before Agile; there just weren't any common frameworks, and it usually involved custom test harnesses (usually a console/terminal app to run and log stuff).
It also requires automated deployment which can be tricky, especially when there is a single / clustered relational database involved.
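One common way to tame the database part is versioned migrations recorded in the database itself, roughly the shape that tools like Flyway or Alembic follow (this is a toy sketch, not their API): re-running a deploy becomes a safe no-op.

```python
# Toy sketch of automated, repeatable database deployment: migrations are
# versioned, applied in order, and tracked in the database itself.
import sqlite3

MIGRATIONS = [
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE users ADD COLUMN email TEXT"),
]

def migrate(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version, sql in MIGRATIONS:
        if version > current:           # only apply what's missing
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)   # re-running is a no-op: safe for repeated automated deploys
```

The clustered/replicated case is genuinely harder (backwards-compatible schema changes, expand/contract), but the tracking mechanism is the same.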
In conclusion, I like the Agile methodology, I just think the short 2-3 week iterations don't fit every organization and if you can handle 1-2 month sprints, you'll have better designed software because you can see further into the future.
I don't think that kind of planning is particularly valuable. I've worked in many really unsuccessful companies that had a long term plan (which never really came to fruition) and some really successful companies which may have had a blurry long term vision about their general direction but mainly just reacted to customer input and market conditions as they saw fit.
The Linux kernel is developed in exactly this way too and it's hardly a model of an unsuccessful software project. It's not just me.
>Testing doesn't really fit anywhere in the equation as a differentiation. There was automated unit testing before Agile
Automated testing was barely used though. XP, the progenitor of agile, did emphasize it though.
It 'fits' because once you do enough of it (and you have automated releases), it just sort of stops making sense having release schedules. Every change to the code base that gets through code review and the test suite is releasable so why not release it?
>It also requires automated deployment which can be tricky
I think 2009 was the last time I worked on a system that didn't have automated deployment.
>I just think the short 2-3 week iterations don't fit every organization and if you can handle 1-2 month sprints, you'll have better designed software because you can see further into the future.
I've always aimed for iterations that are as short and tight as possible because the future - meaning how your software is really going to be used - is by far and away the least predictable thing you'll have to deal with. The most you can do is react to that quickly before veering too far down a dead end.
Hell, even when I develop software for myself I end up being surprised about how I actually end up using it.
As an example, let's say 100 devs jump in. The task is to create a simple Android app, with a requirements statement provided, with server back end, launch it into the app store, support it for some period with bug fixes and improvements, and then declare it '1.0 released' to wrap up the experiment.
What you'd wind up with is a variety of team sizes, a variety of team experience, a variety of development systems used, a variety of outcomes. But all building the same software.
The key would be that as many attributes of each team's efforts as possible would need to be recorded and entered as data to be studied in search of patterns.
Repeat this n times and I believe valuable insights could be gained.
Rather than trying to control for all the variables of team size, experience, method, you control for the end product being targeted and then look for insights into the variety of approaches that teams took.
From the other direction, even if you get the value of controlling for the project itself, that might also add some bias. Could be that for a project with that setup waterfall actually works pretty well, but is it representative of projects overall? Are most software projects comparable to developing a simple Android app with a well defined specification up front?
I do agree that it would be good to do this kind of experiments where multiple teams get tasked with building similar systems to figure out what works. But I don't think it makes sense to actively avoid controlling for variables. That would make the results very hard to interpret and much less usable.
When you don't understand how or why something works, this is how you go about it. Let's make airplane-shaped things out of coconuts.
Good, Fast, Cheap. Pick two.
That's what you are doing when you are pitting requirements, a time frame, and a budget against each other.
The first problem is that this is a company-wide process, not just software development. The only thing development tells you is how long it will take given the budget. Development doesn't define the requirements or the budget.
The third statement in the Agile Manifesto, which tends to get overlooked:
"Customer collaboration over contract negotiation"
That is entirely a business process and ultimately determines both the requirements and the budget. It is something that is sorely lacking at most companies, regardless of how hard their engineering department tries to follow agile. It doesn't work without the full company buy-in.
On the concurrency side, there were also SCOOP for Eiffel and Ravenscar for Ada, which eliminated race conditions by design. Some methodologies in the high-assurance sector used tools like the SPIN model checker for this. People spent a long time talking about those bugs while some design methods just removed them entirely. A lot less debugging and damage might have happened in industry if the aforementioned methods had matured further with industry investment.
Another thought experiment: imagine getting two teams of programmers using the same methodologies and everything else and expecting the results to be the same. It's just not practical to perform studies like this because there are too many variables.
Of course, not many people have the resources to do that kind of experiment.
That's what I meant about it not being practical. Who would invest that amount of money? There's infinite variations of different methodologies as well.
I'm not sure what the solution is but it's tiring seeing bad studies used to promote certain approaches.
Tons of organizations and governments could.
20 ten-person teams * 5 methodologies = 1,000 people; for a 1-month project, that's 1,000 * $100,000 = $100M.
In the grand scheme of things this is an insignificant amount -- in a world where businesses spend $15 billion buying Instagram. A military could do that kind of spending on a single airplane -- and such research could potentially have a huge impact / savings on future soft-eng projects.
And it could be even subsidized or be tax-deductible. Or could easily drop to like $70K or $50K per month compensation.
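Spelling the back-of-the-envelope out (all figures are the comment's own assumptions, not data):

```python
# The proposed experiment's cost, spelled out.
teams_per_methodology = 20
people_per_team = 10
methodologies = 5
monthly_comp = 100_000          # dollars per person-month (the high estimate)

people = teams_per_methodology * people_per_team * methodologies
total = people * monthly_comp
print(people)   # 1000
print(total)    # 100000000, i.e. $100M for a one-month experiment
```

At the $50K-per-month figure the total halves to $50M, still small next to a single military aircraft.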
The first team does A in "Agile" and B in "Waterfall"
The second one does A in "Waterfall" and B in "Agile"
I bet working like this would pull out at least some interesting stuff
Does “scrum for surgery” exist? What is an equivalent of “waterfall” in warfare?
Does something like this exist at all?
Leaving aside the problem of always finding "above average people" - at least for now - I think that this has a fatal flaw: what happens when someone leaves and/or someone else joins the group?
I suppose that hospitals and armies have this happening fairly often; they must have "methodologies" catering for a wide spectrum of talents, and also accomplish satisfactory results even when dealing with thorny, unexpected problems.
What do they use? Is this a “methodology”?
(One important thing that I am not sure is adequately represented in IT methodologies is having an established vocabulary to describe situations: we have "Patterns" but these are low-level, and divorced from the actual business-specific scenario - this is just one example, but I think it helps point out that IT methodologies are trying to standardize the wrong elements).
Also famously, Kanban originated at Toyota for manufacturing.
The most important thing any software team needs is proper logging, monitoring and metrics. No matter how great your process and engineering culture, you'll need logging, monitoring and metrics; things will happen. The worst part is that this is relatively cheap and simple to do (at the scale that most of us operate on), with huge rewards, and most teams still do it wrong. Whether it's logging too much noise, collecting metrics that show what's going right instead of what's going wrong, swallowing exceptions, etc. This is the litmus test.
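A minimal sketch of the swallowed-exception failure mode versus logging with context (the function and logger names are made up for illustration):

```python
# Contrast: swallowing an exception vs. logging it with enough context to act on.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("orders")

def charge(order_id, amount):
    if amount <= 0:
        raise ValueError("non-positive amount")
    return "charged"

def handle_order_bad(order_id, amount):
    try:
        return charge(order_id, amount)
    except Exception:
        pass                     # swallowed: the failure is invisible forever

def handle_order_good(order_id, amount):
    try:
        return charge(order_id, amount)
    except Exception:
        # Context (which order, which amount) is what makes the log useful.
        logger.exception("charge failed: order=%s amount=%s", order_id, amount)
        raise                    # surface it so monitoring/alerts can fire
```

The "bad" version is what swallowed exceptions look like in the wild: the function quietly returns nothing and the on-call engineer has nothing to go on.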
Next are automated tests. Unit tests, integration tests and fuzz testing. The downside with this is that it takes a long time to master. Yes, it costs time at first, but that's why you have senior developers who should be able to use tests to save time and teach others from their mistakes (like too much mocking).
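A toy illustration of the over-mocking point (the function is invented): tests that assert on behaviour survive refactoring, while tests that pin internals prove nothing about whether the result is right.

```python
# A behavioural test asserts on inputs and outputs only; it keeps passing
# across refactors and fails when the result is actually wrong.

def total_with_tax(subtotal, rate):
    return round(subtotal * (1 + rate), 2)

# Over-mocked style (in spirit): "assert total_with_tax called round(...)".
# Brittle, couples the test to internals, and a wrong formula would still pass.

# Behavioural style:
assert total_with_tax(100, 0.2) == 120.0
assert total_with_tax(19.99, 0.1) == 21.99
```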
Finally, code reviews and pair programming. Almost every line of code is an opportunity to teach and to learn. No methodology or tool can help if you hire junior programmers and don't do pair programming (or some other really involved mentoring, but I don't know of any).
Technical debt is real. Most of the time people don't have time to do things right is because they didn't take the time (or didn't know how) to do things right in the first place.
Most of the struggle with methodology research is due to the difficulty of objectively measuring code productivity or quality.
Agile/waterfall/etc. are PM methodologies used to manage software development projects. Agile especially has limited use beyond software development.
Methods of developing software are things like: domain-driven design, data models first, TDD, etc.
Then there are programming paradigms: OOP, FP, Actor based, etc.
So the set of "techniques" the article lists in one list are of different types. All of these have to be evaluated against the people that have to use them (don't force OOP on an FP team, or vice versa) and the type of problem to be solved (TDD is less useful for a simple UI project than for a complex algorithm involving time and lots of corner cases).
You start with the premise that requirements, time constraints and budget are all magically set right before the project begins.
Ultimately a lot of people tend to lose track of higher-level objectives when working on the (admittedly complex at times) implementation details. This is probably the biggest productivity killer in the business.
How many of us have had that first demo with some people external to the dev team and for all the feedback to be super obvious things that could have been caught before a single line of code was written?
It's not a regular pipeline kind of workflow, like some Taylor-inspired assembly line, or regular old civil engineering.
Besides, all those methodologies are unscientific BS invented by consultants, not something derived from actual studies (even when there are some comparative studies involved, they are laughable in scope by scientific standards).
After all, if you were repeating yourself, you'd just re-use the methods and classes and packages you'd already written; worst-case, you could copy+paste the code and tweak it.
And since you're doing something novel, of course you're not going to be able to predict how long it will take, beyond extremely broad guesses.
There seems to be a false premise here. Methodologies don't deliver consistent results on time and within budget because they attempt to help figure out those software requirements, so that an actual problem can be solved rather than useless requirements satisfied.
What I have seen work is methodologies adopted to get the benefits of what the methodology actually delivers. Not a checkbox so say "we are X".
With all respect, I don't misunderstand TDD or OOP. I agree that those aren't methodologies in the strict sense. But rigid adherence to OOP design or TDD can paralyze a team and focus development on goals that aren't the customer's priorities. OOP and TDD can influence how the team works and what shape the project takes just as much as waterfall or agile. When I read articles claiming that TDD is the sure path to reliable development, that's a methodological claim, not a technical practice.
Better, faster, cheaper, you get to choose 2 and only 2, we've selected faster and cheaper.
The one true methodology. Preach!
This is, in fact, the answer.
You haven't truly SEEN office politics until you've worked on a team of developers. I'm shocked a reality show hasn't come out yet about software development. It would make Survivor look like Family Matters.
I get along with my co-workers too, but social skills are more than just playing nice together.
This runs counter to a lot of my experience. I don't tend to see much office politics at all with software developers. Maybe because salary isn't tied as closely to rank and it's so easy to shop around and jump ship if you feel like you're getting fucked.
It's the same story at every company: developers are under constant pressure from management, we aren't given the tools or help we need at the right time, we get no credit, but all the blame when things go wrong. When people feel threatened, the open and creative part of their brain shuts down and it becomes fight or flight and every man for himself. Devs become extremely rigid and dogmatic, as every bad work experience leaves a scar on us, and they resolve to never EVER make the same mistake again.
"Hell is other people" is like the personal motto of some programmers. You don't agree with me, fine, I'll just refactor your code when you're sleeping. People read random blog posts and then take it as holy writ, undisputable proof, and if you disagree then let me write a long email lecture to educate you about why I'm right and you're wrong. I've seen people get into physical FIGHTS at scrum, resulting in broken bones. I wish I was making this up.
But that's the problem: to be successful you have to be good at politics.