DIB Guide: Detecting Agile BS (2018) [pdf] (defense.gov)
107 points by gashad 17 days ago | 53 comments



Great document overall!

> The purpose of this document is to provide guidance to DoD program executives and acquisition professionals on how to detect software projects that are really using agile development versus those that are simply waterfall or spiral development in agile clothing (“agile-scrum-fall”).

Actually doing 'waterfall' properly would probably be fine? Or at least not the bogeyman it's made out to be.

The real danger is that the project is simply managed badly, independent of the professed approach: just muddling through.

(I am not even sure waterfall was ever actually a thing; I only ever hear about it as a thing to avoid?)


> Actually doing 'waterfall' properly would probably be fine? Or at least not the bogeyman it's made out to be.

Doing waterfall in the exact sense that is usually described will almost certainly never work well. People usually focus on the design phase, but I think the much more catastrophic part of 'waterfall' is doing testing only once development is complete.

Now, you absolutely can successfully run software projects that are design heavy, that more or less freeze requirements and designs early on, and that seek to execute on the design, instead of iterating. However, if you truly develop for 6 months before any kind of external QA, as described in most Agile talks about what Waterfall means, that is a recipe for disaster. If you do test things by feature, invest in component testing and practice that all along the way, you can succeed.

This essentially describes a 2-stage waterfall: design and execute, where the execute phase includes both development and testing all along the way. Is this 'doing waterfall properly'? Or is it a hybrid methodology? That's a matter of definitions in the end.

It's also important to note that the extent to which this works depends a lot on the project itself. If the requirements are volatile (e.g. when developing a new product with an uncertain market), or if the project is too large (e.g. if the design phase suggests it will take 5 years to finish given the current team), then it's likely that the project would benefit a lot more from an Agile style of development, where you deliver smaller chunks of the project to its end users to gather early feedback on the actual usefulness.


I think the parent post has a point, but for any reader who is a software engineer: take 15 minutes out of your life to read W. W. Royce's "Managing the Development of Large Software Systems."

In this brief and engaging paper you will find the diagram used by many agile enthusiasts to describe the "waterfall method" and will be shocked to discover that it is held up as an example of a process that never actually works in reality.

You will then read quotes like this, which could have come out of an agile book:

"For some reason what a software design is going to do is subject to wide interpretation even after previous agreement. It is important to involve the customer in a formal way so that he has committed himself at earlier points before final delivery. To give the contractor free rein between requirement definition and operation is inviting trouble."

http://www-scf.usc.edu/~csci201/lectures/Lecture11/royce1970...


http://beza1e1.tuxen.de/waterfall.html

Incremental and iterative development is not an invention of Agile either. It was used before "Waterfall" existed.


I'd been using incremental and iterative processes in R&D for many years before "agile" became the current go-to. Unfortunately, the obsession in tech with following trends has pushed many to modify what is, IMHO, a better-refined process for domains where agile just doesn't quite work. In R&D, prototyping work is quite challenging and doesn't lend itself well to full-out agile pipelines. CI/CD can actually hamstring you far more than help you, as can other typical PM tools. You need to be more agile than "agile."

For larger projects with adequate resources, agile can make sense, although it's typically modeled after quarterly-focused business planning and can miss long-term opportunities. The core issue with agile is that its structure is ripe for abuse by everyone involved except the actual developer(s). Instead, developers have to reconcile all sorts of poor choices together in a fairly formal environment, leading to headache after headache.


It's important to realize that Agile didn't invent anything. Agile was coined after years of work by numerous people to develop (or incorporate) different ideas into their methods. That is, as far as incremental & iterative development is concerned, history-literate Agilers wouldn't claim to have invented it, but perhaps to have placed greater emphasis on it.


Indeed, that's some great added context to the linked paper and how the whole waterfall strawman was created.


Yes, basically people never actually seem to be doing 'waterfall' or even aspiring to do so.

I do agree with tsimionescu that leaving testing (and debugging) for last invites trouble.

Btw, that's an excellent paper.


Agreed; all large, very traditional, very legacy waterfall projects I participated in had:

* Unit testing immediately upon developing any component.

* System & Integration testing starting at some point during development - overlapping phases

* User/Client Acceptance testing also overlapping with end of development (but with planned fix cycles)

* Performance testing... unfortunately depends heavily on the project and its perceived performance but usually in the latter third. Well managed projects see it as "Performance Testing & Tuning", and plan for several iterative cycles of testing and improving performance. Poorly managed projects see it as "Performance checklist" and have a single cycle with no time for tuning.

This is as old-school a methodology as it gets, and yet it's been consistently refined and works well when managed well and in the right situation/client. A lot of multi-year back-office ERP or infra work for large corporations or the public sector inherently needs to adopt a similar approach, because it's just the nature of the client environment. At the highest of high levels it's about 5 Gantt-chart phases with heavy, heavy overlapping and lots of cycles for fixing/improving.

A two-stage dev + test without even an arrow from test back to dev... I have no idea how that'd work in either theory or practice. Waterfall doesn't have to ignore the realities of the world... it's just suited to different realities than Agile.


I think that you bring up a great point regarding a hybrid between agile and waterfall, within the context of government.

This is my own experience, but I was involved with a multi-year project for a federal agency that did exactly this. I think what many people don't understand is that the design phase is a byproduct of the internal processes necessary for even getting to the point of developing a system in the government. What I mean by this is: systems don't appear based on good ideas or pitch decks; they start with policies and, based on those policies, procedures.

Why is that necessary in government? Because typically, there are laws, executive orders, regulations, and various other policy devices that have to be implemented at the agency level, there is a review period that allows for input from affected external agencies, etc. In some cases, there is even involvement from congressional committees and their staff members. This is especially true if appropriations are necessary for funding a program.

I can honestly say it took a year and a half just to develop the policies and procedures, to coordinate them, and to begin work on the system. That was ridiculously quick, given the scope of the project. The good thing about doing it this way, is you have a very clear roadmap at the outset. The downside is, the process for making significant changes to the system's functional requirements can be a challenge (e.g., changes to policy/procedures, another review period, etc.).

That being said, once we actually began the development phase we took a much more agile approach. We would hold daily stand-ups, regular testing/feedback sessions with customers, established product owner(s), etc. I would say it worked really well, but it was not without difficulties. Those difficulties are far bigger than agile vs. waterfall though, it's just the way the bureaucracy functions (for better or worse).


Software development perspective here, from the receiving end of project management.

In my experience, waterfall needs unlimited time and money. Waterfall fails hard when money or time are limited, in which case the likely outcome is that the head of waterfall will be massive and the tail will be rushed, so that one gets a design-heavy and rushed implementation with little or no testing and scant documentation etc.

A timeboxed waterfall with an inflexible, carved-in-stone schedule is a recipe for burning out in the ensuing death march. The planning either cannot or will not anticipate all unknowns, and/or the buffers dampening the impact of the unknowns shrink because of outside pressure. (What do you mean 8 months to make this thing, can't you do it in 2 weeks, haggle haggle, sold for 2 months; available time reduced by a factor of 4.)

In contrast, (a theoretically ideal) agile or some other iterative method works fine if time or money are limited. The iterative nature allows for a cut-off after a sprint. Pull the plug and have a result of state-of-the-art at that point; of course the result then might not quite reach the viable dimension of MVP nor even resemble a product.


The Zen of waterfall:

At some level everything is waterfall. If anything is to get done at all, at some point the programmer has to make a plan, then put his head down, arse up and implement it.

At another level nothing is waterfall. If the plan is successful, the result is shipped to the consumer (who may be the programmer himself), and its very presence changes things for the consumer in ways they didn't foresee. They then realise they need a new set of changes.

What we call the waterfall model is really a matter of scale. If the plan is grand, the specifications for such a grand plan need to be detailed, the implementation is long, and the time between putting the head down and evaluating results is large, and we call it waterfall.

But if the strategy is to explore the solution organically, the re-evaluations are frequent and the waterfall periods are short, and we call the strategy something else.

So in the end everything uses waterfall. The programmer would not get the long periods of intense focus he needs to be productive without it. But also nothing is waterfall, because no plan can foresee everything; it must be continually re-evaluated in the light of the unexpected changes it brings as it is implemented.


I once ran a 45 person dev organization that used waterfall to ship a half dozen commercial applications a year, including dozens of localized and enhancement versions, and never missed a ship date over five years. And won numerous industry awards.

It can be done with waterfall.


This raises several questions:

1. How experienced were these people in the particular problem domain?

2. How long was a single development effort and how many people?

3. What do you consider Waterfall? Are we talking "pure" Waterfall where everything is truly done in set phases? Or do you have feedback loops in place, like testing integrated properly into the development phase?

4. What were the relationships like with customers? Was it one (or a small number) of consistent customers, or a diverse set of customers (closer to contract/bespoke software work)?


1) Pretty young work force, we essentially trained everyone who came in.

2) typical team was 2-3 devs, 1-2 QA, project manager, product manager. Typical dev time was 6-9 months.

3) Waterfall has a pretty amorphous definition; our implementation was not very pure, which is probably why it succeeded. Each component of a new release would start testing as soon as engineers had something testable. When all components passed QA we’d go into alpha, then beta.

4) It was consumer software, specifically targeted at graphics professionals on Mac/Windows. We had hundreds of thousands of customers, and delivered physically on floppies and CD-ROM.


That's impressive. Was it your shop?


I built the process. Or should I say, we built the process to best meet the specific requirements for our products and customers, as communicated by our product marketing team.

Our main strength was actually our Product Management director. He was excellent at collecting and communicating highest priority customer requirements. He was always questioning and pushing, and helping my engineers come up with better approaches and implementations.

He was also excellent at building external relationships, so we had really good partnerships, and at training/leading team PMs so they were good team leaders. He was so damn good at it that eventually he moved on from our little company to run all mobile for a $100B+ company.


Waterfall was a thing, it actually is a thing. It is a terrible thing. It is an idealized version of running a project that assumes you can actually plan everything out for the next 6-60 months with teams from 5-5000. I have only ever seen it work when the projects were small or the teams were experienced and working on a mature system.

"Work": I don't mean that the projects fail but that they fail in some aspect. Like they deliver late, or don't deliver the full requirements, or they don't deliver the real requirements (because the requirements were written 5 years ago by a consultant, this might be a total failure in many cases).

"agile" (not Big-A Agile the faddish cult) resolves a lot of this by one simple thing: frequent feedback between developers and customers while developing smaller increments. Which, ironically, was actually Royce's point in the paper: Feedback loops, not necessarily with the customer, need to be incorporated in order to develop large scale systems.

One of the issues in discussing this is that it turns out that when most people say "Waterfall" they mean a modified version. When you dig into what they're doing it's either a small modification (we bring the customer in during testing, which is good but that's still 4 years into the project) or a major modification (V-Model which incorporates all of Royce's feedback loops and then some). Others have gone to doing incremental & iterative development or evolutionary but still call it Waterfall because they don't know any other term.

But yes, Waterfall exists, it is a nightmare, and I hope to never be involved in it again.


If folks remember, one of the seminal Waterfall documents was actually a "Don't do it this way." document[0].

That said, sometimes, Waterfall is the best way to do some projects, but I'd say very, very few. It does work reasonably well for hardware production. Many hardware companies apply Waterfall to software, because it's the process they know.

TDD is also a technique that can encourage a "waterfallish" approach, as the design needs to be fairly complete right at the beginning (to be fair, it is possible to do TDD iteratively, but that takes effort, and many shops like to reduce effort as much as possible). I tend to keep my designs as fluid as possible, refining in a JIT manner[1], and prefer using test harnesses over fixed unit tests[2].
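
To make the distinction concrete, here's a minimal sketch (hypothetical parse_date, not code from the linked articles): a fixed unit test pins one case to one function, while a harness keeps the checking machinery in one place and treats the cases as data that can grow as the design firms up.

    import re

    # Hypothetical function under test.
    def parse_date(text):
        m = re.fullmatch(r"(\d{4})-(\d{2})-(\d{2})", text)
        if m:
            return tuple(int(g) for g in m.groups())
        m = re.fullmatch(r"(\d{2})/(\d{2})/(\d{4})", text)
        if m:
            day, month, year = m.groups()
            return (int(year), int(month), int(day))
        raise ValueError(text)

    # Fixed unit test: one pinned case per function.
    def test_parse_date_iso():
        assert parse_date("2020-01-02") == (2020, 1, 2)

    # Harness style: the cases are data; rows get added as the design evolves.
    CASES = [
        ("2020-01-02", (2020, 1, 2)),
        ("02/01/2020", (2020, 1, 2)),
    ]

    def test_parse_date_table():
        for text, expected in CASES:
            assert parse_date(text) == expected, text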

I personally have a beef with the way many shops handle the concept of MVP. I feel that a significant number of shops use it as an excuse to shove out a hastily-built lashup, favoring adding features over ensuring quality.

I have come to believe in the concept and purpose of MVP, but I am also one of those "grizzled, cranky oldtimers" that has seen many, many prototypes become the "heart" of applications, and even infrastructures. I won't mention some rather obvious examples.

I feel that it's important to ensure quality from the first line of code, and to accept the fact that the MVP will become the core of the system.

[0] https://en.wikipedia.org/wiki/Waterfall_model#History (Look at Royce's presentation)

[1] https://medium.com/chrismarshallny/evolutionary-design-speci...

[2] https://medium.com/chrismarshallny/testing-harness-vs-unit-4...


A few years back, I had a friend in another department ask me to build a prototype tool to help manage their team's work more efficiently. I told them very directly that this was a hastily built prototype to help them see if a business case was viable. 3 years later, their entire department still uses that prototype, and they have a small contractor team that makes annual updates to it. Nothing more permanent than a temporary solution.


The Correct Answer is: chart the critical path, balance the triangle, mitigate risks. Everything else is details. AKA, PMI.

Our industry really jumped the shark with the Agile and XP nonsense. I flip the bozo bit for anyone who uses "kanban", "sprint", liar's poker (err, planning poker), and other Dilbertspeak non-ironically.

20+ years later, best I can tell, the anti-methodology-methodology Agile cargo-cult bandwagon was for people who couldn't be bothered to sit through a PMI seminar. The unholy synthesis of pop-biz fashionistas and corporate climbers bouldering along the bullshit-jobs facades. Artistes, pointy-haired bosses, "consultants", post-dotcom-era geek wannabes, and other assorted hangers-on and poseurs.

I've always struggled to articulate the core problem. Agile is for those proudly, belligerently ignorant people who reject expertise, wisdom, or anything else that would expose their grift.

A cult, more or less.

Buy me a beer sometime and I'll tell y'all how I really feel.


> Or at least not the bogeyman it's made out to be

Well... why do you think it's made out to be the "bogeyman"? The reality is that people tried (and continued to try) to run software projects that way: state up-front everything you're going to do, and then do it! What could be simpler?

Anybody who's ever tried to do that has run headlong into the reality that: they didn't know up-front all the things they were going to need to do. For a long time, people believed that this was just a matter of experience and perspective and, after a reasonable amount of practice, software developers would be able to not only recite each task ahead of time, not only predict how long each was going to take, but would be able to do so in orders of magnitude less time than the actual task would take.

This view of software development imagines programming as mostly just typing without much more thought involved than, say, laying bricks. That this model, if accurate, could be automated away seems to escape the attention of the project managers who insist on managing software projects this way. If it were possible to specify software in such a way that it could be predicted and planned out the way "waterfall" demands, it could be automated to take humans out of the equation completely. (And the project managers themselves could be replaced with a spreadsheet).

If you go back and read the original Agile manifesto, it was written by people who were trying to explain that software is inherently unpredictable or - more to the point - that the parts of software that you need humans to perform are the unpredictable parts. There's an old saying, though, that nobody ever went broke telling people what they wanted to hear, so a cottage industry of agile "consultants" who'd never tried to develop software themselves made fortunes telling upper management that software is, as they wished, completely predictable, and the only reason schedules slip is because they're not mistreating their programmers harshly enough.


Approximately 13 years in the field. At least four to five different organizations. Worked on everything from embedded aerospace systems to web apps and everything in between. Start-ups, mega-government labs, and again everything in between.

I've never seen a project use "Waterfall". It's just not a thing. People use it as a hypothetical boogeyman to "Agile".

As with everything "Agile", all words lose meaning.

Edit: There DO exist the CMMI systems engineering processes, which generally involve various design reviews (PDR, CDR), IOCs, FOCs. These are essential for large-scale procurements that perform mission-critical functions. Superficially similar to "Waterfall", but it still isn't Waterfall. For example, you don't want to be on an aircraft or spacecraft that was "Agile"-ed to completion.


Waterfall is certainly a thing - although it's taken most of my career to run into it. Where I work currently (banking) some of the projects are waterfall, some are agile. The waterfall group has requirements gathered for them by the stakeholders, they build to spec, it's tested by the stakeholders after development is complete, issues are fixed and then it's deployed, all managed by a project management team. There's a series of "phase exits" that need to be signed off on by executives in a big meeting, and the project doesn't move backwards. Massive amounts of documentation are generated (and never read, except by external auditors) for every aspect of the application - every workflow, procedure and logic built into the application. Iteration doesn't happen at all, everything is a massive project, or bug fixes.

All the "boogeyman" activities happen - scope creep goes crazy, testing is run short to hit deadlines set, etc. Like you I never really thought it was a thing until I saw it with my own eyes.


CMMI isn't a process, it's a process model. Or that's what they like to emphasize.

The distinction is this: You can't take the CMMI model and execute on it, it lacks sufficient detail. Superficially, if you read the model and try to execute on it you will end up with Waterfall. What you're supposed to do (and why they're making CMMI 2.0) is map your existing processes (or develop new ones) to the model. That is, if you look at the things required for verification it's not complex: you need test cases, test reports, and some other things. It doesn't have to be heavy weight, point out your test scripts and reporting system (CI/CD platforms all have these) and how you maintain them and train people to use them. Done. But if you're not careful, people will write your process per the CMMI model and it's absolutely junk (witnessed in last job, one of the reasons I left).

CMMI 2.0's problem is that it's written as if Agile (Big-A) is the one true God. But it will almost certainly suffer the same implementation issue, it's not a process but people will try to make it one. As such, the resulting processes they do make will be process theater and hinder work, or at best have no effect but to waste some corporate resources. It does lay out a case, better than the previous CMMI model, for picking and choosing parts of the model to implement and get certified on. So that may help a little bit (less all-or-none attitude).


I don't know; the paper that described waterfall used it as an example of what not to do. They covered the better method on the following page, but it seems people only read the "headline".


Which of course is not an argument in itself against parent's point.


Very true, but I thought it was worth mentioning. The original paper is very readable, and it's amusing that waterfall lasted until the '90s when it originated in 1956 and the paper covered its flaws in 1970.


> Actually doing 'waterfall' properly would probably be fine?

It's not fine if there is a need for the particular things that agile serves, which is why contracting requirements would specify agile and contractors would, to be responsive, claim to be agile, necessitating that the people reviewing the submission be able to detect agile BS.

Government requires waterfallish process in contracts all the time, but a document on identifying Agile BS isn't addressing those cases.


Interesting that all these issues are completely different from my own experiences with Agile BS. We would have 10 hours of exhausting meetings every 2 weeks in order to plan our sprints. Unconsciously, we just ended up hyper-inflating estimates, so our team would joke about how the only thing we did each sprint was "slap a box on it" (in CSS, or some similarly simple task).

I left that job when all the developers completed their tasks a few hours before the end of the sprint, giving me (QA) just a few hours to test, merge, and deploy their code, which, because of our terribly clunky and manual deploy system, just wasn't possible. I was placed under an internal investigation for not being productive, because I had held up the sprint of the "most productive team in the company" and made us look bad to out-of-state executives.


I worked at a failing startup that had embraced agile BS. 4-week sprints, with 2-3 days spent on retro and planning meetings. They eventually moved to 2-week sprints, but 2 days of that still went to planning. Each day had a 45-minute standup (with about 20 people from the whole "back end" team). Each week, there was a department-wide standup with over 100 people, from all of engineering! It was insane.

They used an enterprise-grade source control system run by the IT department (Perforce). It was almost impossible to create a branch; in fact, I saw only one created during my brief one-year stint there.

Since there were no branches, you had to "shelve" your changes and get an "in person" code review to merge. If you added something minor, like a new getter, but didn't have a unit test for it, you'd get flagged (even if it was used elsewhere in the code). It basically took forever to get anything done.
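
For anyone who hasn't used Perforce, the shelve-and-review dance looked roughly like this (the changelist number is made up):

    p4 change              # open a pending changelist; say it becomes 12345
    p4 shelve -c 12345     # park the opened files on the server for review
    # the reviewer pulls the shelved files into their own workspace:
    p4 unshelve -s 12345
    # ...in-person review; once blessed, drop the shelf and submit:
    p4 shelve -d -c 12345
    p4 submit -c 12345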

Oh yeah, they never shipped any of this stuff, either.

I could go on...


This is because words lose all meaning with "Agile".

Functional teams and companies will make good products and set up their staff for success, with or without agile.

Dysfunctional teams will have all sorts of perverse incentives and set each other up for failure, with or without agile.


testing should have been part of the estimates and you should have thrown that right back at the rest of the team during retro


Exactly. But my team didn't think it was a problem, we just forgave ourselves and moved on and I finished testing and deploying early in the next sprint. The problem came when external executives noticed how little we completed compared to our usual throughput and insisted that I be punished for it since I was the bottleneck. My team was just as confused as I was, but I had been meaning to move from QA/Test automation to being a developer for some time, so that just hurried me along and I left within a couple weeks.


I love the document and will distribute to my co-workers. The short story, however, is not really that there's a checklist to determine BS Agile, but rather that all/most Agile is BS.

In my career I've seen "Agile" throw a wrench in the works for so many projects. Embedded systems; data center distributed systems that are air-gapped; aerospace safety systems; R&D work. Agile in these cases just isn't the thing to do, but unfortunately the culture these days is that everyone MUST be Agile, and so it creates another bureaucratic nightmare of dysfunction.

The funny thing is, people will always go to bat for Agile (MUCH more so on Reddit than HN)... and I don't understand why. Furthermore, I think the discourse around this has become so weird. For example, I asked someone kindly: what is "Not-Agile"? There's no answer, other than "bad ways of making software". The discourse has become caveman-like: "Agile good. Not-Agile bad."

At the end of the day, Agile is probably great if you're making a mobile app or a web app with a small team for a client with a small-to-medium budget, which accounts for most of the work software developers do, which is why it's so popular. But it is inherently too short-sighted and unable to address technical challenges that go more than superficially deep.


I love this document as well for very similar reasons. It's interesting how attached people become to the ritual of doing things instead of thinking about why they are doing it.

In my experience it seems like what a ton of people want to do with agile is skip the writing and design work that has to happen to make deep technical projects happen. There is a really strong desire to just start implementing and then be able to refactor toward working code. Of course, there is always pressure to deliver faster, so the refactor only ever gets half done and then there is an architectural mess.


The advantage that Agile brings to the table is that it enables more tracking and measurement at finer-grained time intervals than other approaches. Tracking and measurement is the key feature of Agile or any other software development methodology. It enables the business to determine what needs to get done, if it's on track to completion, and if not, what the pain points are. Having a short OODA loop for these three points is paramount to the business's success and in order to do that you need to track and measure. You can't manage what you can't measure, and Agile processes give the business data and analytics about how their development efforts are going at least at a sprint level of granularity; depending on local practices it could be as fine as one day. That's huge. It could potentially add visibility to all aspects of the business -- so get ready to see Agile everywhere.

TL;DR, if you believe Agile was created to help you do your job better, you might also believe that open-plan offices were really adopted to "foster collaboration" and not make you easier to spy on at work.


Agile really scratched that "cult" (weird names, daily rituals, special roles, people who don't do it are either ignorant or evil) part of the brain that programmers are susceptible to, the part that starts so many flamewars over emacs versus vi or top-posting versus bottom-posting. It's as if people forgot that programming existed before Agile came along.


This is actually brilliant. Fantastic way to put it. Especially the part that people who don't do it must be some kind of outcast: ignorant, evil, pathological...


I think it's a case of "nobody ever got fired for choosing an agile methodology".


>data center distributed systems that are air-gapped

So do the users visit the datacenter to connect them, or what?


What's DIB? Just trying to understand who the target audience of this guide is.

It's a practical set of traits to spot. But inevitably the question comes up: "what to do next?" Re-educate, enforce, hire/fire, disband?

One needs to remember how Agile processes were being "installed" back in the day in organisations/teams of various degrees of dysfunction. Lots of those teams went through trials of "templates", including waterfall, with just the same outcomes.

Too often, the failure is not at the team level but at the org level. The base tenet of Agile success is buy-in at all levels. Yet it's easier for management to "buy into" a structure and attributes than into actually empowering and trusting the teams.

So, this detection approach may find all the right attributes, tools, lingo, roles... but not the actual practices. A well-worn example is the morning stand-up that disguises dreaded subordinate status reports; the best indicator of such theater is the presence of a "clipboard" or note-taker person.

I'd think that for such a guide to be of better practical value, there should be a section outlining ways to detect the constraints and obstacles to adopting a process that would be effective in a given team's case. It does not have to be Agile-or-wrong.


DIB = Defense Innovation Board, https://innovation.defense.gov

Made up of various tech industry leaders (mainly CEOs, it seems). The purpose was to try to modernize the way defense software systems are developed and maintained, or at least to present a path to modernization. Because it's presently a cluster fuck.


DIB = Defense Innovation Board, I believe


My current problem: customers who don't want to participate in the agile process but do want to have a "simple" pre-agreed specification to use to determine project success/failure. Of course they can't write such a spec and want me to do it - and of course I can't without doing large parts of the work to implement it (because if I'm wrong, I'm on the hook for a large sum of money wasted).


Why can't they write it? Conduct a user story workshop, slice out an MVP release, and that's your "spec."

You can't force customers to "get" Agile. You can force them to understand the risks of not participating in the process.


>Why can't they write it? Conduct a user story workshop, slice out an MVP release, and that's your "spec."

Well - yes, this (with variations) is what we do, but guess what the outcome is... "I'm not convinced that we've got this right...", "I was never fully signed up to that...", "I think we have invested a lot of effort in a process that isn't generating business value..."

The problem with Agile is that it doesn't account for politics. If people play nice and are all signed up to get the best done with the tools and people available, it's brilliant. If you've got to deal with corporate politics, it leaves those with good intent exposed in hundreds of ways.



It started well, then it completely fell over into corporate bullshit.

For example: "Some current, common tools in use by teams using agile development".

This is the kind of reasoning that gets a lot of people required by management to use stacks that are useless overkill for their needs, like Docker or Kubernetes.

Also the "questions to ask" are typical ridiculous agile corporate bullshit like "have you a product charter", or common forced process oriented questions.


I don't think that listing some example tools is a problem. Especially for the audience of a primer like this, a categorized vocabulary list like this can be indispensable for giving people a quick lay of the land and a preview of some names they'll encounter.

My complaint would be that, "Tools listed/shown here are for illustration only: no endorsement implied," should have been inline instead of buried in a footnote.


Given agile is 20 years old and most of those tools are somewhat younger, it is obviously possible to use other techniques to manage development and deployment without having to tick the 'at scale' type boxes that are so loved by the current batch of web deployment models.


I interviewed recently at a defense contractor and experienced something like this. I know you’re supposed to ask questions during an interview but I usually only really come up with two: what is your git/hg workflow and what does your automated test coverage look like.

Usually the second one has an answer along the lines of “not good, but we’re working on it”, which is fine. This place, though, tried pretty hard to convince me they were using git to manage their code, right up until I asked the second question. The senior engineer sort of mumbled a few things, ending with something along the lines of “we’re still figuring out exactly what the transition from svn will look like.”

I’m not sure why they decided not to hire me, but I feel like that interaction really upset someone, and that may have been a big part of it.


For me the biggest clue is always a fixed "agile" process. By definition any notion of fixing a process is anti-agile.

In my current team, the only thing I asked for when "going agile" was a biweekly retro to discuss what to improve. It seems to be going pretty well, even if most of the problems have solutions in the various documented "agile" process templates.



