The way government does tech is outdated and risky (washingtonpost.com)
169 points by madelfio 1142 days ago | 136 comments



Federal Gov employee here. Can't speak for a project with this scope, but the procurement middlemen get into everything, and almost always for the worse.

Two years ago our team wanted to buy a small cluster (~300 cores, ~$50K). We talked directly to two good vendors (good recommendations from university partners) and came up with a fine machine and 2 bids for it. Sent recommendations to procurement.

Procurement put it out for bid, and a fly-by-night company undercut the bid by $10K... by noticing that procurement had not specified details of service level (that were in the bids we'd gotten and forwarded). Procurement, once it goes there, is a true black box. No communication, no understanding.

Five months later, we were basically delivered 2 pallets of unassembled parts and no instructions. Believe me, we spent 3-4x as much in labor as the $10K savings to get it working, and it's been plagued with issues that would have been under the onsite service warranties for the better companies.

The biggest irony is: I firmly believe that procurement acts this way not because the government is fundamentally incompetent, but because the Public, and thus Congress, BELIEVES we are incompetent, so puts so many levels of "check" bureaucracy in place that the people who know what they want can't participate directly in getting it.


I just had a conversation with a federal employee last week about the issue of procurement offices being a black box and procurement officers not understanding the technical specifics of what they're bidding out. His solution was just to become certified as a procurement officer (apparently it's 72 hours of coursework) and deal with the bids himself. He said these kinds of problems have pretty much disappeared for him since then.


> His solution was just to become certified as a procurement officer (apparently it's 72 hours of coursework) and deal with the bids himself. He said these kinds of problems have pretty much disappeared for him since then.

sounds like a loophole one can procure a truck through :)


According to US federal procurementology, would it be a conflict of interest to be both procurer and recipient of what is purchased?


He's overseeing the bidding process for contractors for projects that he will supervise; he doesn't personally benefit from the outcome of the selection (i.e., he doesn't own stock in companies that are bidding or anything like that), so I'm not sure how it would be a conflict.

So no, I don't think so.


> I firmly believe that procurement acts this way not because the government is fundamentally incompetent, but because the Public, and thus Congress, BELIEVES we are incompetent

There's almost 3 million federal workers. Many more if you include people who work on government contracts. The Federal Government is by far the biggest enterprise in the US by both employees and revenue.

With such a large organization, there are undoubtedly large swaths of both incompetence and competence.

The challenge with any large organization is that the rules are there to rein in the bad people, but are equally enforced on the good. (For example, most people won't abuse their company's T&E policy, but some will, so everyone has to be treated as suspect.)

Honestly I'm not sure what a good solution would look like, but I don't think it's as simple as "trust us."


> Honestly I'm not sure what a good solution would look like, but I don't think it's as simple as "trust us."

Didn't mean to imply that any sufficiently large organization shouldn't have an audit trail and reasonable accountability!


I would offer that those two concepts are optimized for ex-post-facto blame. It’s not simply gov’t, as you point out, it’s large organizations.

To contrast this with more modern techniques, ‘audit trail’ is simply source control. And most of us don’t go into source control looking for a smoking gun.

It’s rarely necessary if a process is agile/iterative. Bugs will be (relatively) small and recent in time. So the notion of going back six months and figuring out ‘what went wrong’ is just not a thing. Wrong happens every day, in small amounts, transparently.

Conversely, if a bug is truly large, goes undetected, and explodes a year after its creation, then the whole team is to blame. We’ve all looked at the code hundreds of times in that period.


> Honestly I'm not sure what a good solution would look like, but I don't think it's as simple as "trust us."

Maybe not, but "treat me as a liar" doesn't seem viable either.

Any time you take an extra measurement you introduce a chance for that measurement to be in error. If you make it so that any single measurement is a show-stopper, then every time you add an extra check you make things a little bit worse, right up to the point where your false positive rate overwhelms your data.

Even with an extremely low error rate, the number of best deals in the world for a given thing is 1. If the goal of your system is to get that, then making any measuring system a single point of failure that tests once is a death sentence. If the number of good and honest software houses that will respond to your call is low, then your false positive rate is going to be extremely high... and again, having such a system is going to be ill-advised.
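The stacking-checks argument can be sketched with a little probability (the 5% per-check false-rejection rate below is an illustrative assumption, not a measured figure):

```python
# Sketch: if each of n independent checks wrongly rejects a good bid with
# probability p, and any single rejection is a show-stopper, the chance a
# good bid survives all checks shrinks exponentially with n.

def survival_rate(p: float, n: int) -> float:
    """Probability a good bid passes all n independent checks."""
    return (1 - p) ** n

for n in (1, 5, 10, 20):
    print(n, round(survival_rate(0.05, n), 3))
```

At a 5% false-rejection rate per check, twenty checks reject roughly two out of three good bids, which is the "death sentence" the comment describes.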

Especially if the system itself suffers from not having the people who are actually going to be using the system, and people who understand how the systems should be created and run, making at least part of the decisions.

If there were a greater degree of feedback between procurement, requesters, and providers, with the ability to modify the plan, then you could potentially check your work - reducing the consequences of such failures. Not "absolute trust," but at least "hear my side of the story, maybe you've just misunderstood something."


> Honestly I'm not sure what a good solution would look like, but I don't think it's as simple as "trust us."

I agree -- this is not a simple challenge. But I don't think stultifying bureaucracy is the answer, either. There must be some government out there, somewhere in the world, that has sorted out an efficient, effective procurement process.


> But I don't think stultifying bureaucracy is the answer, either.

Something that seems hard for people to grasp is that "stultifying bureaucracy" wasn't an "answer". It's the natural consequence of not having an answer. We have the luxury of sitting back on an Internet forum pontificating on the drudgery; they have to enact the laws that Congress passed.

> There must be some government out there, somewhere in the world, that has sorted out an efficient, effective procurement process.

The main examples of the same scale and scope as the US are Brazil, China, Russia, and India. All of these have been regularly painted as worse: more corrupt, more stonewalling, more favoritism. Whether that's just American propaganda, or whether there are specific processes that could be imported without breaking things, is worth examining.


Perhaps you should look at the smaller countries. Like Singapore?


I like that idea. There might be a good process in place at a smaller scale that can be scaled up.

What is it about Singapore's procurement process that stands out to you? (I don't know anything about it.)


I don't know enough about the process to comment, but judging by the results it must be pretty good. They have a good track record of finishing government projects on schedule and budget.


'Honestly I'm not sure what a good solution would look like, but I don't think it's as simple as "trust us."'

This might be a good starting point:

http://www.amazon.com/Liars-Outliers-Enabling-Society-Thrive...


Procurement rules are more to guard against corruption and cronyism than incompetence.

Without them, you could reward your friends with fat government contracts, regardless of what's in the public's interest.


The irony is that they don't really do that. It's common in both procurement and hiring in the federal government to see bizarrely-specific requirements whose fairly transparent intention is to limit the potential bidder or applicant pool to a single company or individual.


* Must have a company name which SHA-1 hashes to e8b06511aa36381fc2306eb6f8181204585c5453.

Hey, there's a theoretically infinite number of company names which meet this requirement...
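Playing along with the joke, here's how one might "verify" a bidder against such a requirement (the digest is the one from the comment above; the candidate company name is made up):

```python
import hashlib

# The "requirement": the bidder's company name must SHA-1-hash to this digest,
# which effectively names exactly one intended company.
REQUIRED = "e8b06511aa36381fc2306eb6f8181204585c5453"

def meets_requirement(company_name: str) -> bool:
    """Check a candidate company name against the digest 'requirement'."""
    return hashlib.sha1(company_name.encode("utf-8")).hexdigest() == REQUIRED

print(meets_requirement("Acme Federal Solutions LLC"))  # almost certainly False
```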


> Procurement rules are more to guard against corruption and cronyism than incompetence.

Former federal IT contractor here.

The procurement rules were designed for that, yes. But in real world scenarios, those rules effectively do exactly the opposite. Since there are so many hoops for potential vendors to jump through, only the most established players get to bid on most contracts. And in my experience corruption and cronyism is still alive and well in federal IT contracting.


Yes. I should have added that I understand the need for basic checks and auditing (and that the public expects us to be uniformly fraudulent as well as incompetent), but that the level of checks and removal from the process of experts who know requirements goes far beyond this need.

It's really all about optics rather than reasonable checks - no department wants to be the one that Congress targets, so especially in poisonous political environments the "checks" are significantly more expensive IMO than the actual "waste, fraud, and abuse" they guard against.


"Without them, you could reward your friends with fat government contracts, regardless of what's in the public's interest. "

Except this happens anyway. Much like patents, people just become better at drafting. What is currently done is not an effective mechanism for stopping cronyism and corruption at all. In fact, it makes it easier in a lot of cases, because it provides plausible deniability ("It's not that we gave it to our friend, it's that you didn't meet the requirements!")


And like most things with government, you end up getting both: the bureaucracy to prevent cronyism as well as the cronyism. Speaking from experience. Sure it might be more obvious if the bureaucracy wasn't in place, but it's still there.


Well, federal government worker here too, but not on the US government.

Those rules are there because a malicious worker can cause a huge amount of damage. They are a pain: one entity I worked for once spent about $15k (in people time) contracting $100 worth of SSL certificates, in a process that took more than a year (so, no certificates for the site during that period), and we were forbidden from contracting the service for more than a year... and the contractor was another governmental entity. The rules are maddening, but they are necessary for a democracy.

The problem is that governmental IT is out of place. The government will never be competent in contracting software development - the only known tool that works in keeping government contracts honest is auctioning, and agile is simply not compatible with auctioning. The only possible way out is by doing IT in-house.


An in-house team serving several departments can work well: providing infrastructure, defining communication protocols and quality and security standards, and helping with subcontracting.


Gov't procurement is actually no different from any other bureaucratic procurement organization in Big Co. They are always seen as a blocker (rarely an enabler) and the people who sit in their roles usually hate their responsibilities. (I'm generalizing of course, but you get the point.) The consulting company I used to work for implemented techniques for procurement orgs based mainly on behaviors.[1] It's the first time I've seen such a thing and I believe it is something very necessary to scaling procurement organizations.

[1] http://www.youtube.com/watch?v=7Xgh-A1ZCfg


The federal procurement process is one of those cases where separation of concerns hurts things. I know that things are this way to try and prevent bribery and cronyism etc. But it's clear that the pendulum has swung too far.

I've been part of the procurement process a few times from the vendor side, and the layers of nested black boxes make solving procurement issues virtually impossible; once the procurement is made, massive overruns are almost inevitable.


"But it's clear that the pendulum has swung too far."

While I think that's likely, I'm a little bit hesitant to claim things are clear when we can't observe the alternative.


Yup. The heavyweight process creates an additional barrier to entry. If you want to bid for a govt contract, you need to hire someone who has done it before, otherwise you'll drown in paperwork. What's really sad about this situation is that it is stable: existing stakeholders (contractors & govt procurement personnel) will work hand in hand to ensure that the process stays in place, the former to make more money and the latter to retain their jobs.


Out of curiosity, why aren't you naming the no-support supplier? Are there any rules forbidding you to do so?


Honestly, tiny company, couple years ago, don't remember now, I wouldn't be surprised if it was a couple dudes who look at Fed announcements and say "hey, I can buy that off the shelf and ship it to the feds for a profit".


maybe watson should handle procurement


This is distressing.


That diagram for the "waterfall" approach that they yanked from Wikipedia is a complete straw-man representation. It's nonsense.

Here is the actual, original source for the Waterfall approach, first published in 1970:

http://leadinganswers.typepad.com/leading_answers/files/orig...

If people would just bother to scroll past the first couple of pages, they will notice that the approach already includes some iteration cycles between steps.

In other words, this whole "agile vs. waterfall" debate which has wasted countless hours of human effort is based on a complete misunderstanding of what "waterfall" is in the first place. No one ever seriously proposed a model without iteration. It simply never existed in the first place!


"What often happens when you have these big requirements up front, is the people who are specifying the product are afraid of not getting all their ideas in, so they overscope the project. And then the development team is on the hook for delivering everything, not just the essential elements."

Isn't this the essence of the critique, though? It's not the lack of iteration, but the logic of the spec-formation.

To put this another way, you are right that it has nothing to do with the waterfall model per se. But it has everything to do with accepting the same set of behavioural assumptions. Namely, that the people who spec the model have perfect foresight regarding the spec itself.

What needs to be accepted is the "incomplete spec". The problem with this is then the hold-up problem: you get held to ransom to fill in the incompletions. So what is really needed is the capability to execute this more in-house. That would prevent the re-negotiation of the economics (the hold-up), because the project manager would just execute properly, directing resources (already paid for) by fiat rather than re-negotiating against a modified spec.

One problem with this is accountability. There would need to be more accountability, because execution of the incomplete spec will not be the same as farming responsibility for the spec out to a third party vendor.

Project managers need to get used to the idea of working intensely with a development team rather than asking for a specific thing and then walking away.


Hear, hear!

It would be lovely if vendors would bid based on their experience, and be compensated for it, on a time and materials basis. The more experienced and better you prove to be, the more we're willing to pay for a better outcome, sooner.

Except that world is rife with bait and switch. And writing the code that delivers the spec is just a small part of the picture. Companies are terrible at specifying the non-functionals, delivery process expectations, operational requirements, supportability requirements, etc. When they do get it right, the costs go up, because many places quote without any concept of these things.

The sad reality is that the people who get asked to quote for these things in the software world are generally clueless about the actual domain, out of touch, and wildly wrong most of the time. And the people who specify these things are often barely any better. And let's face it, software developers are also terrible at quoting times to do a task, though that can be mitigated with a lot of experience of that task and the code base to do it on.


The conflict isn't between "agile and well understood waterfall" it's between "agile and waterfall as understood and implemented in the real world"


Of course, there's a fourth quadrant: "agile as understood and implemented in the real world", and that probably has as much to do with the way Agile is promoted as gargantuan waterfall catastrophes do with '80s style software engineering practice.

In my jaundiced view, none of this shit works right, and we'll be much the better off the sooner we all stop pretending that buying bags of magic beans will do anything other than make a bunch of cynical grifters rich and their naïve dupes miserable.


No, I wouldn't call that a complete misunderstanding; I'd call it a minor simplifying inaccuracy.

Quoting Dr. Royce:

> Management of software is simply impossible without a very high degree of documentation.

How much documentation?

> In order to procure a 5 million dollar hardware device, I would expect that a 30 page specification would provide adequate detail to control procurement. In order to procure 5 million dollars of software I would estimate a 1500 page specification is about right in order to achieve comparable control.

5 million 1970 dollars is equivalent to about 30 million 2013 dollars, so for a $30 million project, his advice is:

1. Write a huge stack of documentation. Literally stop everything until that documentation is written.

2. Get feedback from reality and from your stakeholders exactly once.

Sure, having one opportunity to act on feedback is better than never having any, but it doesn't fundamentally change the process or the risks involved.

What the Agile folks realized is that reliable software development is an experimental process. If you only do one experiment, you're still missing that point.
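The dollar conversion a few paragraphs up can be sanity-checked with a rough CPI ratio (the index values below are approximate assumptions, not official figures):

```python
# Rough sanity check of "5 million 1970 dollars is about 30 million 2013
# dollars" using approximate US CPI-U annual averages.
CPI_1970 = 38.8   # assumed approximate annual average
CPI_2013 = 233.0  # assumed approximate annual average

amount_1970 = 5_000_000
amount_2013 = amount_1970 * CPI_2013 / CPI_1970
print(round(amount_2013 / 1_000_000, 1))  # roughly 30 (million dollars)
```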


Yes, but the waterfall paper goes out of its way to say that iteration between non-adjacent steps is a bad thing. It's still the antithesis of Agile.


For the MIT alums out there, I remember a 6.170 exam that had the question: "When is it appropriate to use the waterfall model of development?"

The answer was any time you are developing software for the government! The professor specifically mentioned it in lecture once, so that alone was enough for full credit on the question (other reasonable answers were fine too).

Later I TA'ed the class twice and made sure to eliminate these pure lecture-attendance-check questions.


Interesting fact - the original design of "waterfall" isn't what we perceive as "waterfall" today: http://leadinganswers.typepad.com/leading_answers/files/orig...

My theory - agile/iterative development rarely gets sold because we continue to believe we aren't susceptible to planning fallacy.


What do you perceive as waterfall? That's exactly the way we learned it in school...


Much of our industry believes Waterfall means a single-pass development process. Dr. Royce explicitly said to do it twice. (It was a military officer who later took the process and made it into a single pass.)

If you learned it as a two-pass process, then you learned it correctly.


This is simply historically incorrect. Waterfall means a single pass by definition.

Royce described the pre-existing state of the art - the single-pass model (Waterfall) - and suggested a modification to a 2-pass model. (This can be seen as a precursor to Boehm's n-pass Spiral model.)

To suggest that the single-pass model was invented later as a corruption of Royce's paper is nonsense. Virtually all software was developed this way both before and after the paper.

What is odd is that the earliest and most commonly cited reference to the Waterfall methodology is a paper that explicitly says that it doesn't work. Let this be a lesson on not burying the lede.


In the version of the paper linked earlier in the forum, I take the following to mean two passes: build a model and then use lessons learned to build the final product. But it's a matter of interpretation and semantics. My own interpretation is contradicted further down in my comment by someone from that era (see the Larman paper linked). C'est la vie, I'm sticking with my interpretation. I think we can agree that Royce did not mean our modern "Agile" approach.

"If the computer program in question is being developed for the first time, arrange matters so that the version finally delivered to the customer for operational deployment is actually the second version insofar as critical design/operations areas are concerned."

At least 2 methodologies existed before Royce's paper, as described in this paper: Waterfall and "iterative and incremental" (http://www.craiglarman.com/wiki/downloads/misc/history-of-it...).

Note that, in the section referencing DoD-Std-2167, the author of the DoD standard does state explicitly that he understood Waterfall to be one-pass. Certainly he implicitly promoted it as such.


The mistake I believe you're making is in assuming that because most people reference Royce's paper for a description of Waterfall, that means that the definition of Waterfall is whatever the main subject of that paper is. This does not follow.

The paper describes a number (more than 2) of possible models. The main subject of the paper is probably the "2-pass waterfall" diagram in Figure 7. However, when people refer to this paper in the context of the Waterfall model, they mean Figure 2. The Waterfall model is called the Waterfall model because Figure 2 looks like a waterfall, with the water never flowing uphill.

Figure 7, however, was largely ignored. (Though Brooks later recapitulated it as "Build one to throw away; you will, anyhow". I don't know whether it was influential in the development of the Spiral model or not, though it seems like a logical progression.) As far as I know this model never even got a name attached to it. Its influence pales in comparison to Figure 2, which was the first and best published reference to a design process that almost everybody accepted as "ideal" at the time.

Note that when Brooks tells the story in The Mythical Man-Month about making the mistake of getting a large group of mediocre engineers to write specifications instead of giving the job to a small group of elite engineers because he didn't want the larger group just sitting on their thumbs for a year, this was all happening in the mid 60s. Here's another contemporary quote:

The most deadly thing in software is the concept, which almost universally seems to be followed, that you are going to specify what you are going to do, and then do it. And that is where most of our troubles come from. The projects that are called successful, have met their specifications. But those specifications were based upon the designers’ ignorance before they started the job. —Doug Ross, 1968

Winston Royce did not invent the Waterfall model in 1970, he was just describing the already-dominant paradigm as a prelude to proposing something different. The model had been around forever, Royce provided a convenient diagram (Figure 2!) in the course of trying to critique (or even replace) it, and the name "Waterfall" got attached later.

> Note that, in the section referencing DoD-Std-2167, the author of the DoD standard does state explicitly that he understood Waterfall to be one-pass.

Of course he did, and he understood it correctly. The thing he didn't understand was that it was a terrible idea.

Had he (properly) read Royce's paper, he would have seen that it actually advocated a slightly better (but still pretty terrible) idea, but that idea was not and is not the Waterfall model. The Waterfall model is the one in Figure 2 that looks like a waterfall. Hence the name.

> My own interpretation is contradicted further down in my comment by someone from that era (see the Larman paper linked). C'est la vie, I'm sticking with my interpretation.

Um, OK then.

Thanks for the link though, it was really interesting.


Note: in the 20th anniversary 2nd edition of The Mythical Man-Month (http://www.amazon.com/The-Mythical-Man-Month-Engineering-Ann...), in one of the new chapters, Brooks backs off of his original "Build one to throw away" advice.


I always thought waterfall was inherently iterative: you could jump back up the stack at any point in the process. One successful project I worked on at BT (a management system for the UK's SMDS network) had several instances of going back and redoing stages before we even got to the final development phase.


Have you ever seen water jump back up a few stages of a waterfall? (Hint: no.) It's called the Waterfall model exactly because that kind of thing is explicitly not part of it (see Figure 2 in the Royce paper linked above). That's the whole point of the name.

It goes without saying that no software development effort has ever lived up to this standard. Nonetheless, the fact that it is not possible to develop non-trivial designs (for software or anything else) like this in no way prevented people from advocating it as the "ideal" design process.


The first formal description of the waterfall model is often cited as a 1970 article by Winston W. Royce,[4][5] although Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a flawed, non-working model.[6] This, in fact, is how the term is generally used in writing about software development—to describe a critical view of a commonly used software development practice.

http://en.wikipedia.org/wiki/Waterfall_model


You're being a bit too literal. The name came from the resemblance of a diagram in the paper to a waterfall. The name wasn't meant to constrain the methodology to strict characteristics of a waterfall. The steps are meant to be followed in order. It is gated. But, nothing constrains the steps from being repeated. That is the crux of the difference between Dr. Royce's paper and what the military formed as a standard based on the paper.

I took a college comp sci course a few years ago. The professor talked for a few minutes about the Waterfall methodology. In short, she said "we in academia gaffed by promoting the Waterfall methodology for several decades. It's now cause for laughter in academic circles when someone proposes the Waterfall method for software development."


I wonder if they will say the same about Object Orientation or the "cloud" in ten years' time *innocent smile*


One case I've heard is when you don't have any changing requirements and the domain is well known. E.g. engine control software.


If the requirements don't change and the domain really is well-known, why are you writing new software? Why isn't there pre-existing software you can reuse?


Because you can save a quarter cent per unit by changing microcontroller vendors, but you have to use a new programming language for the new one?

Because you are involved in a lawsuit with the contractor that produced your previous version and using it would possibly be an admission it met requirements, even if your in-house developers had to completely rewrite it to make it safe?

Because new safety regulations mean you have to use a FIPS approved compiler and testing procedure?

Because it's a niche product and the only people in the world who know the problem domain work for you?


If you are designing a new car engine there isn't always off-the-shelf software already designed and tested. There are instances like the Ariane 5 maiden flight[1] where Ariane 4 software was reused.

[1] http://www.di.unito.it/~damiani/ariane5rep.html


Because it doesn't exist yet.


Not in my backyard syndrome.


You mean Not Invented Here.


I kind of like Not In My Backyard as an opposite to NIH.


They are not synonyms - NIH is correct in this case. NIMBY means something rather different.


I said they made good opposites. I suspect you misread.


The government will often request an RFP that basically entails doing the entire project so that the vendor can do the 5 minute demonstration to show that they are capable of completing the project. This greatly inflates prices, since the stakes for the vendors are so ridiculously high.


This isn't going to change until there is a thought process shift for leaders on both the government and contractor sides. I worked on a project that fought tooth and nail to use an agile development process, and it was one of the best projects I worked on for the government. It was killed due to politics, but the feedback, functionality, UX, and collaboration up to that point were great. Everything else I did was waterfall, and we always had the same cycle.

  while (true) {
    // Contractor working for a year
    Government: This isn't what we wanted!
    Contractor: We met all of the requirements... See all of the boxes are checked.
    Government: Well we want to change 1,2...n things.
    Contractor: Okay, Let's do a follow on contract.
    Government: Okay, Here is the money; Go.
  }


Sounds a bit like the contract I sent to India a few months ago.

One of my requirements was that it was able to send text messages.

What I got was a ticked box, and an application that could send text messages to one number. Not a number I could choose. One number, as in one cell phone.

And it wasn't even my own cell phone.


On their end they are probably saying "Mission Accomplished." I'm laughing because of the situation, but that is just terrible.


Has anyone noted that the government perhaps just shouldn't be in this kind of business to begin with?

Any market based solution website has to be very agile and responsive [edit: to succeed at its goal], but the government can't be super responsive and in many ways shouldn't be super responsive. The state spends all of our money and enforces mandatory decisions concerning our lives. The state shouldn't have the agile qualities needed to produce the beautifully flexible websites created by the private sector.

In general, I'd claim the state should certainly be smaller, but that it shouldn't be less bureaucratic, shouldn't be more like a corporation. Civil service is boring and bureaucratic by design; specifically, it was created to combat the "spoils system" that plagued the early American state [1] (though the prizes of the modern state eclipse what Tammany Hall etc. could have imagined). Modern corporations are agile by having a command structure which lets them quickly maximize profits, which is great if we believe the market system benefits everyone when operating properly. But states with the ability to trample the fences of the ordinary market shouldn't also be given the ability to move quickly and agilely to do so. Corporations have no internal limits to their "greed", but we citizens of democratic market capitalism are assuming that's OK, indeed desirable, as long as corporations face the strong external limits of markets and individual choice.

The current fashion for what could be called "state-enforced private consumption" is sold as giving us the best of all possible worlds but in reality gives us the worst (i.e., the wealth of this society is indeed being vacuumed out by a kind of private-public rent seeker limited by neither the traditional bounds of the democratic state nor the traditional bounds of the market).

Note: I'm not a conservative rooting against Obamacare. It seems like it was a terrible approach for achieving affordable healthcare, but I still would prefer that it succeeded rather than failed because, well, I and many friends need it.

[1] http://en.wikipedia.org/wiki/Spoils_system


> Any market-based solution website has to be very agile and responsive

This is provably false by visiting any number of old, large company websites, esp in healthcare or banking. In addition the government isn't in this kind of business, it outsources almost everything to "free market" companies, who extract as much margin as they and their lobbyists can get away with.

Complexity has much more to do with size of and number of evolved entities than whether they are profit motivated or not.


Large companies in well established industries sometimes operate with near monopoly power and are only threatened when the entire industry undergoes a long term, permanent change. Most people don't consider such a situation a functioning, healthy market. Healthcare and banking are two great examples of industries whose dynamics tend to result in a small number of powerful entities calling the shots, the former due to legal reasons and the latter due to economies of scale.


Bad software development processes are everywhere. I've seen more than one successful company that isn't a massive corporation futz around and produce garbage. I've also seen highly effective government organizations produce some awesome products.


While I wouldn't call it "awesome" (bit too clunky for that), the Medicare.gov site that CGI Federal is responsible for (I don't know if they built the whole thing or just the recent Plan D shopping and enrollment part) is pretty good, very solid, and gets the job done.


I should have said something like "needs to be ... to succeed". That many "market" sites aren't agile isn't the point; the point is that to be useful for the purpose of impelling lower prices for consumers, the Obamacare website needs to be agile. To the extent it isn't agile, it won't be helping its cause.


I'll second this one. I spent five years at a major consulting company working in large banks and insurance companies, and there was NOTHING agile about development.


"A tank is a tank is a tank, pretty much, plus or minus a few bells and whistles."

Geeze, such amazing ignorance. If you're vaguely interested in this sort of thing, and want to learn all the process and engineering reasons the Abrams M-1 became the King of the Killing Zone, get a copy of the book by that name: http://www.amazon.com/King-Killing-Zone-Orr-Kelly/dp/0393026...

Written by someone who initially expected to castigate it due to early (mis)reported teething problems (e.g. the whole "it throws tracks (more than other tanks)" claim was due to a proving ground's faulty tension meter), who got completely sold on the tank, which has since totally proven its worth.

Lots of fun stuff, from their modeling everything with strong constraints like weight (i.e. what bridges it can cross) - e.g. they didn't want to provide a heavy M2 .50 BMG but the tankers demanded it - to the successful development team's leader, a grizzled Chrysler car exec who drove them crazy with "that doesn't look good" sorts of complaints.

Which often turned out to be a boon (ignoring that weapons should look good so their users feel good about them, which the M-1 delivers on). He said it was too high in an ugly way, so they figured out how to shave a foot off, which is very important for the European theater (not so good for deserts). He didn't like how the armor skirts didn't extend all the way to the back. So they gave in (I'm sure the modeling said it was only a minor net loss) ... and found that made a critical difference in keeping cruft thrown up by the tracks out of its turbine engine.

Very much an iterative process, in a domain where you truly "bend metal" to get things done.

So take the author's words with a big grain of salt; she's woefully ignorant of a huge domain in which we've been building the world's most sophisticated artifacts for a very long time, and learning how to, and how not to, do it ... with stakes no less than national survival. Digital computers used for IT are a very recent development as these things go.


The US (and rest of world) should take a leaf out of the UK's recent initiative: GDS (Government Digital Services)

http://digital.cabinetoffice.gov.uk/

Aside from creating https://www.gov.uk/ which laid down a lot of principles on how to fulfil a government contract (as well as the foundations of what government websites should look like and how they should be developed), GDS is also looking at the problem of procurement.

The GDS team essentially are wrestling back from the big contractors the major contracts, breaking the work down into a large number of bitesize contracts and then farming them out to a wide variety of smaller vendors.

So instead of finding a Fujitsu/Siemens JV team, or an IBM Professional Services team, operating a £50m project, the plan is to offer 100 x £250k projects to a large number of smaller suppliers instead. Each project having a clearer purpose that is more able to be fulfilled.

Of course there are obvious overheads in managing so many projects, and of course some of these projects will fail. But... overall the savings will be such that the overheads are cheap, and a failed project will have a much smaller impact on a major programme than a failure would today.


This week a UK parliamentary watchdog described a failed National Health Service patient IT programme – the cost of which has spiralled to £9.8bn – as “one of the worst and most expensive contracting fiascos in the history of the public sector”. Earlier this month the Department for Work and Pensions admitted that it had written off £34m of IT costs, incurred in an attempt to overhaul how social security benefits are paid. A week earlier Co-operative Bank said it had written off the £148m cost of a new IT system that would no longer be implemented.

http://www.ft.com/cms/s/2/794bbb56-1f8e-11e3-8861-00144feab7...


All outsourced.

GDS is trying to in-source a lot of work that should be controlled by a central publishing and transaction design group, and contracting out the relatively boring task of following their rules. Good idea, as long as their architecture/design/program management review boards are well staffed and motivated.

I don't think they will stop these large failures. And many of these large failures do have clear up-front requirements that cannot be changed. Iterative waterfall is not so terrible.


Funny thing is, as far as we can tell this is how that project was done, minus the contract-size sorts of splits. The government didn't hire an integrator; HHS's CMS took on that responsibility, including integration testing.

Of course the minor fly in the ointment is that CMS didn't even vaguely have the expertise to pull this off; the Pentagon can do this for medium-sized weapons projects (which are a rather different field), but no one else in the US government is said to have that capability.


All projects that were definitely not under GDS' management, more likely Crapita.


I would also add that the project had too many cooks in the kitchen, by some accounts. I have heard there were upwards of 50 distinct companies subcontracting on this project.

I work on projects that are probably on par in terms of complexity. We typically only involve a handful of firms. And even then, coordinating them all is a challenge. I can't fathom making the process work with 50+ firms.

Maybe that number was hyperbole. I don't know. But if it's true, I shudder at the thought.


This is just the way government contracting works. The "Prime Contractor" is often explicitly required to subcontract out a "meaningful" portion of the work to sub-contractors for a variety of reasons. Mostly because, as touched on in the article, the DoD/lobbyist revolving door has legislated a huge system of what is effectively corporate welfare in the form of contracting. The government throws money and a few "senior engineers" and management staff at overseeing the Prime, and the Prime does a similar thing to oversee the subs, plus some higher-level integration work (which may or may not require a significant staff), and the subs frequently either buy pre-made "Commercial Off The Shelf" (COTS) products and sell them at a markup (a big markup) or sell their own "COTS" product to the Prime (which they develop with in-house/contracted staff), also at a substantial markup.

The public reason for this crap is that the system is supposed to be structured to make sure that accountability exists, favoritism is minimized, and employment is boosted (by spreading out the work). The reality is that the system is designed to maximize the enrichment of the lobbyist-connected owners of the contractors.


I think there are a lot of secondary objectives in those rules. Being secondary, they don't push for a working website. Stuff like town and country planning (putting a building in an area that needs jobs), economic objectives (Made in USA, small business), social objectives (hire old people, the disabled, women in tech, etc.), a competitive process to keep the prices in check (haha), bulk buying to get a discount, etc.

Obviously when you have so many targets, you don't hit a lot. Maybe the obamacare website was perfect in some secondary objectives.

But it also arises from a perfectly true premise: that big spending is a force on society, that it has secondary effects, and that maybe we can twist those effects in a political manner. The whole of Silicon Valley lives on federal actions.


That doesn't seem to be the problem with healthcare.gov, or at least not in the way you suspect it might be.

The government, that is, HHS's CMS, took on the role of integrator, including integration testing (perhaps "Prime Contractor" as mentioned elsewhere). They aren't known for expertise in this (the Pentagon can do this with medium-sized weapons projects, which are a rather different field anyway), and ... really screwed up:

They and those above were late with specifications and requirements, kept changing them (7 major ones in the last 10 months per the NYT), were making changes in the week before launch, and when they did a simulation test of 200 simultaneous logins just before launch, the modules locked up. As did the site shortly after its midnight launch.

Oh, yeah, three days after the launch CMS panicked and proposed to fire Quality Software Services Inc. (QSSI, a unit of United Health Group) and punt their identity backend system (based on an Oracle package that's known to work), but eventually decided that would take longer than QSSI getting it to work. Who knows, but that's another sign of CMS as the integrator failing hard while distracting both QSSI and CGI Federal.

Now, maybe it ended up being too many cooks because CMS didn't provide strong oversight and coordination, but....


This is called "teaming", and it's a big game of you-scratch-my-back-I'll-scratch-yours in gov't contracting.


This also has to do with the laws that mandate projects over a certain size need to have x% of small businesses, y% of veteran owned business, etc.


Yeah, and that's one of the reasons I'm a bit wary of this "open-source is magic" mantra; that's putting a lot of cooks in the burning kitchen. Open source means community management, public relations with opinionated people, Linus-grade emails, and if you have really big participation but no strong leader, it ends up like GNU Hurd (is it dead yet?).

It's all about organization: trying to have just enough people to get the work done and nobody more, and the right people. And this has nothing to do with open or closed source.


GNU Hurd died because Linux appeared and folks like RedHat combined the GNU userspace with the Linux kernel.

It was a massive success -- everyone put in the part they did well -- kernel + userland + distribution = WIN.


> folks like RedHat combined the GNU userspace with the Linux kernel

Once Linux was good enough, RMS himself set Hurd aside, put the Linux kernel into the GNU project that he started, and (almost literally) declared mission accomplished. It wasn't folks like RedHat (that only came years later), the developers of Hurd were the first to kill it.

But then, since Hurd has quite an interesting architecture, people keep developing it, like dozens of OSs out there that'll never get anywhere, but are fine with that.


Making the code open source isn't going to solve all those problems - but it would make things more transparent. People both inside and outside the government would be able to see what they got for the money spent, and if things are being done in stupid ways it'd be an issue earlier.


It's not just the US government that has crazy, weird, inefficient technology. I'd be surprised if every government wasn't like this. The biggest IT failure in the world was the UK's attempt at healthcare computing that cost 12Bn GBP and didn't deliver a functioning system.

I work as a contractor for the Australian government. I personally know of multiple project failures in the 10s of millions of AUD range and a few in the hundreds of millions. These stories don't even make the news.

I've worked at small companies, research institutes, universities, and now in government. I've not worked in big corporate but have heard that it is similar, although more efficient than government. Size means you get less feedback on what is really useful.

Government fundamentally lacks feedback on what really matters. In the US the department of health cannot be driven out of business by another department that does what is important 10% better. In private industry that discipline and feedback makes things work.

If you build a widget X and it isn't something that people want you go under. That doesn't happen in government. If you build a donkey but it's the donkey they paid for it could be in service for 20 years.

It's hard to see how to make it all better. Perhaps keeping components small, having multiple groups build them, and selecting the best might help. Then at least 2+ groups would have to compete to build a better system.


You make some good points.

Thinking a little more about the situation, consider that the majority of startups fail. I would not be surprised, if one were to take a look at the success rates of internal projects in large companies, to learn that those are fairly low as well. So if the private sector is more efficient than the public sector in IT, the differences are probably more subtle than one would at first think. In both cases you have lots of capital being spent on projects that won't come to fruition. Perhaps the incentives in the private sector are a little better aligned towards a successful outcome.


Yes, it's believed that most of those internal projects fail: canceled; declared victories because of the internal political stakes but quietly not (much) used; delivered with a fraction of the original features; etc.

For some time I've thought this was one of the primary attractions of offshoring: if you must maintain the pretense of developing new programs and systems, it's a cheaper way to inevitably fail....


I kind of like the idea of every government function having two independent providers that compete for funding, with citizens able to choose which provider to use.

I'm sure that system would blow up in some other way, however.


> I kind of like the idea of every government function has two independent providers that compete for funding

This is what the current government contractor system looks like.

> and citizens could choose which provider to use.

And how would we receive these choices? Balloting is a government function... Should we ask Diebold?


I think you'd need a lot more than 2. It's hard to get meaningful competition from such a small number because it ends up being much easier and more profitable to collude (see telcos/cable companies/etc).


Just vote with your feet.


Also, it might be worth mentioning that where I work, Agile development is used. It does help some, but it isn't a silver bullet.

Also, I didn't explicitly say so but small companies are more efficient. If they are not, they tend to go insolvent. That's what is so good about them.


If this is the software that's developed for millions of public users, imagine the software developed for in-house use, where the users are too few and too unsavvy to see how the software could be better (this is the case with most businesses, not just government)...this applies to basic information processing and to software interfaces for our sophisticated weapon systems.

And even the software for info systems can have dangerous consequences. Does anyone remember the underwear bomber, who almost brought down a plane and caused a nice surge of invasive security measures afterwards? His own father exposed him, but the State Dept's visa system failed to find the terrorist because someone misspelled his name when entering it into the system:

http://www.cnn.com/2010/POLITICS/01/08/terror.suspect.visa/i...

Think about it... the State Dept has been dealing with foreign terrorists since well before 9/11, terrorists whose names are easily misspelled by Westerners... there's not even a consistent way to spell Osama bin Laden, depending on how you interpret the phonetics. And yet no one thought that a fuzzy spellcheck would be useful, apparently. And a whole bunch of people almost died because of it (and the security apparatus greatly increased).
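For what it's worth, the kind of fuzzy lookup being wished for here is nearly trivial. A minimal sketch using Python's standard-library difflib follows; the watch-list entries and the 0.8 similarity cutoff are illustrative assumptions, not anything from the State Dept's actual system:

```python
# Hypothetical sketch of fuzzy name matching for a watch-list lookup.
# difflib.get_close_matches ranks candidates by SequenceMatcher ratio,
# so a spacing or transliteration difference can still find the record.
from difflib import get_close_matches

WATCH_LIST = ["Umar Farouk Abdulmutallab", "Usama bin Ladin"]

def lookup(name, cutoff=0.8):
    """Return watch-list entries whose spelling is close to `name`."""
    return get_close_matches(name, WATCH_LIST, n=3, cutoff=cutoff)

# A variant spelling still matches the intended record:
print(lookup("Umar Farouk Abdul Mutallab"))  # ['Umar Farouk Abdulmutallab']
# An unrelated name matches nothing:
print(lookup("John Smith"))                  # []
```

An exact-string lookup, by contrast, silently returns nothing for the first query, which is exactly the failure mode described above.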


Software for in-house use might be fine.

Bureaucracy reduces efficiency.

The more organizationally significant the software the worse risk there is.

Processes and procedures are the ways institutions manage risk.

So the more significant the software is the less efficient the production of it is.


Ex-CGI contractor here, and I am not surprised. They did the Massachusetts health care system that cost millions of dollars and didn't deliver on day one. The government had to shell out even more to keep it running because Gov. Patrick didn't want it to fail on his watch.

The way they work is purely in waterfall project-management mode. Project managers are gods and spend insane amounts of time in MS Office calculating hours for each task two years out. Then they bring in sub-par CGI programmers from India on L1 visas to save on costs. Technology is the least of their concerns, since it's just about shipping code. Also, the blame shouldn't fall just on CGI; the government is at fault as well. A simple request for information would take about 4 business days. Everything is slow, and the gov IT staff has no clue how to scale. Anyway, when I heard CGI won this project, I knew it would fail.


I've had federal employees tell me with a straight face that waterfall development is the only model that works, and that is why 'all of the tech companies use it'. These people have often gone and gotten certifications for stuff like Six Sigma and CMMI. They will never change their tune. You basically have to wait for all of them to retire. The average government tech worker is so different from the commercial tech worker that they may as well be a different industry altogether.


I once worked for a very successful software company with a customer base that included government (IT installations), academic (college courses lab work, hardware research), financial (banks), and industry (computer hardware manufacturers).

We built a platform, and had a consulting wing that built custom "apps" on top of it.

We did Waterfall and CMMI. Waterfall was most intense for the consulting projects. I remember being assigned to build feature 3.2.2.1 in the spec.


The problem is not necessarily Waterfall; it's people's unimaginative approach to it. I've done plenty of projects for clients that wanted a Waterfall methodology, and I did it by writing the documentation and the prototyping code at the same time. In other words, Agile fits inside Waterfall. The requirements gathering phase in Waterfall projects is so incredibly long that you can definitely afford to make a prototype or 5. And you win huge points with your client when you're done with the requirements phase and get to say that development will take "only two months".

You have to treat prototyping as part of the requirements gathering process. Then, when the requirements phase is done, you have to treat "development" as really "testing". Because, for the types of clients that are going to insist on a Waterfall project, the final testing is really only cursory user acceptance testing, and they don't really have the skills necessary to determine if you've met their requirements or not.


With the government, depending on how "involved" your customer is in the contract, they might have a shitfit if they find out you are doing this. As another poster in this section noted, there are plenty of government and quasi-government[1] employees who seriously believe that you can't start writing code until you have defined all your requirements and prepared a design to meet those requirements.

[1] People who work for companies like MITRE that are basically privatized extensions of the government.


You are right, which is why I don't actually do government contracts anymore. It's just too easy to end up working for a complete, abject moron.


I should say, this is also why I don't work for Fortune 500 companies anymore, and also why I don't do work for fly-by-night, no-technical-cofounder startups, either.


It's interesting that the UK seems to be getting around this by doing stuff in-house. So now we have the open government license (http://www.nationalarchives.gov.uk/doc/open-government-licen...) and government websites that take pull requests for content.


It's the only way that works. You can contract out parts of your implementation, but you need a certain level of in-house technical expertise just to not get taken for a ride by your contractors.


Does anyone else find it ironic that Obama's campaign was a picture of web execution but in his administration it's the opposite?


Campaigning and governing are two very different things. One is sexy and draws the attention of very talented people for a limited period of time. The other is boring, bureaucratic, and (probably) extremely frustrating.


Not really. The campaign is centered on a single objective, that of victory. The government tool has to be forward-compatible for decades to come, not to mention that it is subject to Congressional supervision and so on. If it were run like an agile development project then there'd be loud screams about the lack of accountability, process, etc. This is a 'big iron' project if ever I saw one.


Not really; neither the environments nor the personnel directly involved in software development are even remotely similar between the campaign and the executive branch.


The health care site involves a web application with a high number of business rules and data structures.

The campaign's site is fluff.


bchjam said "Obama campaign" while you dismiss just the static website. That wasn't where the impressive IT investment in the Obama campaign was directed. You underestimate the amount of technology used for canvassing, volunteer management, exit polling, etc.

"Someone counted nearly 10 distinct DBMS/NoSQL systems, and we wrote something like 200 apps in Python, Ruby, PHP, Java, and Node.js."

http://arstechnica.com/information-technology/2012/11/how-te...


The campaign's site was also static (see http://kylerush.net/blog/meet-the-obama-campaigns-250-millio...)


Only people who don't know how the government works.


Or the laws that are put in place mandating certain requirements for contracting and acquisition. It's a fucking mess.


Campaigns are private entities designed by a couple of people to win.

Governments are huge entities, with laws that look like they're there to protect the public trust but are often really written to reward key players (for this area of government).


Well he promised to close Gitmo too...


If someone does a bad job on a campaign, he gets fired and replaced with someone that can do the job better. Campaigns run on a limited budget - they don't have all the money in the world. They have a well defined goal and strict timeline.

Pretty much the opposite of a government project.


There's a big difference in the amount of bureaucracy needed to organize thousands of volunteers doing one thing versus millions of employees doing an uncountable number of things.


Have a look at what's happening in the UK:

https://www.gov.uk/transformation (fully responsive design pages together with new service backends delivered with Agile / Scrum)

and the teams doing it: http://digital.cabinetoffice.gov.uk/ (they are hiring more than a dozen people at the moment)


This is not a bad article. The reporter manages to cover a complex subject in an easy-to-understand way. I liked it.

I will point out, however, that there's a huge assumption lurking in there that wasn't explicitly stated: somebody on the government side has to know what they want and be willing to take the heat if they get it wrong. _This_ is the reason so many agencies prefer waterfall -- there's enough obfuscation and paperwork involved that when somebody complains, and in high-risk projects there'll always be complainers, nobody is really at fault. The coder guys can point back to the designer guys. The designer guys can point back to the requirements guys. You'd think that the requirements guys, the guys at the front of the waterfall, would catch all the blame, and they do. But they just write bug tickets because some aspect of the process wasn't followed well enough.

You can spend hours or days trying to figure out what went wrong and not know anything more than you did before you started. Which is exactly why the system has evolved the way it has.

I hear a lot more government projects are going to be Agile. Here's wishing them luck. If done correctly, Agile will 'debug' the organizational problems that lead to this bad performance over and over again. If they just sprinkle a little Agile nomenclature on top of things, it won't do anything at all.


So, for everyone who feels that they have anything resembling a better solution, I'd suggest that you go and actually try implementing it at a smaller scale first. Start with your homeowners' association, your neighborhood, or your nearest town. You'll have relatively few people to convince, more access to decision makers and funding sources, and less capable contractors gaming your system.

Get that down, and then get several neighboring communities--again, HOAs, neighborhoods, or towns--and get them to adopt your ideas as well. With that amount of variation, you've got a strong base from which you can convince a major city, or a county, to adopt your ideas: after all, many of their constituents are already on it and can endorse it.

This isn't meaningfully different from founding a startup taking on governments as clients.


I don't know why websites can't run on magic and fairies. The rest of the government does.


The most important part of this article is paragraph two: without the exclusivity present in the government procurement process, it's unlikely we'd see such a lack of innovation in development practice.

I've led or worked on tech contracts and grants for ED, HHS, NSF, CDC, and others. Several people have pointed out some important points that are not getting enough attention in my opinion:

by @mcone: "The procurement rules were designed for that, yes. But in real world scenarios, those rules effectively do exactly the opposite. Since there are so many hoops for potential vendors to jump through, only the most established players get to bid on most contracts. And in my experience corruption and cronyism is still alive and well in federal IT contracting."

It's true, incest between government and industry is rampant and has led to widespread cronyism despite the system's best efforts to limit the effects. People that once worked for company X now serve on the proposal review panels when company X competes for work. No, they don't receive direct compensation and thus there is no immediate conflict of interest, but the reality is humans are drawn to (or don't want to disappoint) the people they know (former colleagues) and thus pick their old companies. In addition, they know there is a chance they may once again return to said company (so there is long-term conflict of interest potential).

Another point that hasn't been discussed is how the government's procurement process provides next to no incentive for companies to efficiently produce good products. If our industry loves the DRY concept, everything about the gov procurement process points to a !DRY (or do repeat yourself). We built and rebuilt the same database for offices of the government that shared a building with one another. But because everyone is in a silo, they don't collaborate well and don't realize that they could pool their needs to develop more universal products. (And on the industry side, as long as gov continues to work this way, they don't even have an incentive to re-use their work or propose innovative, generalizable solutions.)

And for those of you that might say, 'but can't you win a gov contract by bidding lower by working off of existing work?' The truth is price has very little to do with who gets selected for a gov contract.


Maybe the US should look at the number of successful websites launched by European governments and public sectors. There were obviously a lot of failures along the way, but that is what you learn from. And if the US can skip some failures, that's maybe a few billion not lost. Worth looking into.


I have respect for people who are doing waterfall and admit to it. In my personal sphere I see far too many people talking agile all day while actually running waterfall projects. It seems like people think agile means "waterfall, but skip most of the upfront planning".


The DoD does not know what it wants ahead of time; they are in a tight feedback loop with contractors.


Actual problem: government is incompetent at leadership because it is saturated with politicians whose main skill is convincing you they are competent leaders. No leadership skills actually required.

If government were competent, stuff like the capitalist economy and privatising public services would always be bad (provably - actual proof, not just overwhelming evidence - but there is overwhelming evidence too).

Leaders intuit things like agile and don't label them, instead of picking them up from a book and implementing them badly because they don't understand first principles.


Coming from a 6+ year stint in defense contracting, I can safely say that the issue with this approach happens well before the testing. The problem more often than not occurs at the requirements level.


Bingo. The government, which took the role of integrator, kept making requirements changes right through the week before launch. They also did integration testing ... and of course ignored that it failed hard.

I can't see any way CGI Federal et. al. could have won.


> The government ... kept making requirements changes

So, just like every client every developer ever had?


If they were all that bad we'd never get anything done.

In drafting the above I removed some thoughts about CGI Federal pushing back, since that struck me as something that probably wouldn't work in this context. But I for one have pushed back on clueless customers before, saying the usual "this change will cost time, money and/or quality".

Or, heck, I'll bet CGI did some push back, but there's no reason to believe the inexperienced bureaucrats and political appointees in HHS and CMS would have listened to them. We've been reliably told they were told it was going to hell, and there's that one fed who switched in March to "I hope it won't be a Third World experience", which I take as a sign he saw trouble on the horizon.


I would argue it happens at an even higher level: the 'planning' level.

One of the basic philosophical differences between the agile and waterfall approach is that agile assumes that you cannot know all the requirements at the beginning of a project. You must start building things before all the little dirty edge cases become obvious. Additionally you don't actually know if you're going in the right direction until you have something concrete to work with, even if it's mock ups or wire frames.

What I think they should have done in this case is roll out the site in stages, probably starting with one state that had an easy backend to work with and gradually adding complexity to the site.


Oh really, ya think?



