“The real problems are with the back end of the software” (marginalrevolution.com)
125 points by wwilson 1427 days ago | 148 comments



"The front end technology is not the problem here."

Let me fix that statement: "The front end technology is not the worst problem here."

Looking at the resources loaded for the sign-in page, I counted 58 separate Javascript files, including one whose name implied it was minified but which, on inspection, clearly was not. I didn't bother counting CSS or image resources. When I returned to the page two days ago, it indicated it was down for scheduled maintenance. It remains in that state.

CGI obviously borked this project. The government deserves its own special classification of criticism, but poor planning, change management, etc. from the government is no excuse for CGI not building an architecturally sound web site.

The contract was $350 million? Good grief, they overpaid. Nonetheless, if we could go back in time AND assuming we needed to spend this budget, here's what I would have done:

1. We make investments of $15 million in 20 different startups, and tell them to implement the initial phase -- let's say we call it the "minimum viable product" or MVP. Each startup has the same deadline for delivery.

2. On the delivery date, all companies meet with us to review their MVP. We call it a "demo day" and view all 20 demos.

3. Through some set of criteria, we create a short list of five companies from the 20 demos. Those five companies receive an additional $5 million investment, and another delivery deadline.

4. The companies iterate on their MVP and come back for another demo, this time with a deep dive.

5. We pick a winner from those five. The winner gets another $25 million investment and is responsible for any additional work to be completed.

TechStars for government, essentially.


I don't see how giving 20 startups $15 million each would have led to success instead of failure. Even if you found a better company to do the implementation than CGI Federal, there were several enormous problems that no company would have had any control over.

1. Requirements were delayed so much that development didn't start until March of this year. That spells doom for a system with this kind of complexity, regardless of who the implementer is. You need several months of functional, load, and integration testing, so that effectively means you would only have had 4 to 5 months to code healthcare.gov. And that's assuming there weren't any big requirements changes.

2. The people responsible for integration (the Centers for Medicare and Medicaid Services) had no large IT project integration experience. These are people who thought one week of full integration testing would be enough.

3. Healthcare.gov had to integrate with legacy systems from the IRS, Medicare/Medicaid, Social Security, in addition to the various state exchanges. Any number of those systems could have serious flaws that would make it extremely difficult to interface with. On any project a poorly implemented legacy system can dramatically affect the effort needed to be successful. Again, even the best companies would've had a sizable challenge dealing with that.

4. One of the biggest challenges in government IT is the customer. The government decision makers often don't know enough about software engineering to make sensible decisions on requirements, timelines, testing, you name it. In this case there was the added political pressure of, "This cannot fail," even though it should've been clear at least a year ago that there's no way they were going to make the deadline. But you get people who think that you can just deploy and fix it as you go along. Or you get people that think you can just add developers to make up for lost time.

5. And what does being a "startup" have to do with it anyway? Either a software company can do the work, or they can't. Whether they are a startup or an established entity really has nothing to do with it.


It looks like you and jroseattle are demonstrating the main two competing views about how to approach new software projects: 1) deterministic, controlled, strong planning, avoidance of failure vs 2) nondeterministic, flexible, embracing uncertainty, expect and handle failure gracefully. There are many different versions of this in technology: one big expensive server vs many commodity servers, waterfall vs agile, ACID vs BASE, BigCorp in-house R&D vs distributing risk across startups.

Large parts of the technology world operate according to the latter model, and they do so for a variety of very valid reasons. Obviously, the government and government contractors do not.

jroseattle's comment presents a speculative model for how to apply a distributed, fault-tolerant model to this kind of technology project.


I'm not saying that you can't or shouldn't distribute risk across many small projects; I'm saying that in this case doing so would not have helped you at all. If you had a competition to build an MVP of healthcare.gov where the requirements didn't get delivered until 48 hours before it was due, and then the requirements changed substantially 30 minutes before the cutoff time, none of the teams would have succeeded.


The distribution among startups is about spreading risk for the project. The existing approach was one integrator, and they failed. Given how many reasons have been offered for why this had no chance of success (e.g. "Even if you found a better company to do the implementation"), it makes strategic sense to me to try multiple approaches.

But I would contend the problems weren't unmanageable. Problematic, yes, but manageable. I will try to address your points:

1. Requirements delay - sure, specifics were delayed. But, there was a basic premise for what this system would entail, and core work could progress without incurring much risk or waste. (I'm unsure when project work by CGI was started versus development work.)

2. IT integration experience - this is dependency management 101. If this is a recognized risk, then CGI should have raised the red flag. This looks like very poor project management.

3. Dependencies - yes, there is a possibility that external dependencies can affect your system. You design and architect for these scenarios, including failure; see the sketch after this list.

4. Technical know-how - I would not expect anyone in government to know one iota about software development. CGI's job, in this scenario, is to elicit those requirements in a way that's useful.

5. Good point, but this project would benefit from some competition. Why not let more teams try this, rather than wrapping it all up with an entity whose only proven competency is simply receiving more contracts?
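
To make point 3 above concrete, here's a minimal sketch (in Python, with an invented endpoint and field names, so purely illustrative) of one way to design for a failing external dependency: time-box the call and degrade to deferred verification instead of letting the whole transaction fail.

    import urllib.request
    import urllib.error

    VERIFY_URL = "https://verify.example.gov/income"  # hypothetical endpoint
    TIMEOUT_SECONDS = 5

    def verify_income(applicant_id):
        """Try the live dependency; degrade gracefully if it's down or slow."""
        try:
            url = VERIFY_URL + "?id=" + applicant_id
            with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as resp:
                return {"status": "verified", "payload": resp.read()}
        except (urllib.error.URLError, TimeoutError):
            # Dependency failed: accept the application anyway and queue the
            # verification for a later retry, like a bank processing a loan.
            return {"status": "pending-verification", "applicant": applicant_id}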


The integrator in this case was the government (CMS is a division of HHS), there's no getting around them as a single point of failure.

1. Having a basic idea of what the system will do is helpful, but in a system that has a lot of very specific business logic that has to be defined and HAS to be right in order to comply with the law, you're not going to get very far without good requirements. Furthermore, depending on how the contract was structured, CGI Federal may not have been allowed to start development work until it was authorized by the customer.

2. CGI may very well have raised multiple red flags, and CMS and HHS could have promptly ignored them. That is one of the maddening things about government IT. You can tell the decision makers that they are heading for disaster and they can simply ignore you.

3. Let's say you plan for every one of your dependencies to fail. What useful product could you deliver in that case? If you don't get certain information you cannot give the user a health insurance product according to the law. This is a system where you cannot reasonably ignore a component if you can't get it to work in time. Let's say the interface with the IRS is going to take longer than you thought. You can't just cut that out or substitute dummy data. That would amount to failure to deliver a functioning product.

4. Again, you can elicit as hard as you want, if they don't give you the requirements because the law is still up in the air, or because the president might change his mind, or because someone wants to meet with all the stakeholders in the other agencies first, there is absolutely nothing you can do. I've been on projects where the stakeholder with the final say couldn't be bothered to give their input until the software was done. I've also been on projects where the decision maker prevented us from getting the input of the employees that would actually be using the application. Sometimes the people you're dealing with are not rational actors.

5. Fixing the contract award process is certainly part of the solution. Normally many teams could submit a bid for the work, but due to the insane time scale it was skipped for healthcare.gov. I think having a proof of concept competition after some sort of initial proposal selection gate could be useful. It could sort of be like how the military has contractors build prototypes for weapons systems competitions.


Comments added inline here.

> The integrator in this case was the government (CMS is a division of HHS), there's no getting around them as a single point of failure.

This makes it easy to identify where the roadblocks are, and to present that information to interested parties at the appropriate points in time.

> 1. Having a basic idea of what the system will do is helpful, but in a system that has a lot of very specific business logic that has to be defined and HAS to be right in order to comply with the law, you're not going to get very far without good requirements.

Without requirements, how would one expect very specific business logic? And what is so incredibly precise that it derails any possible planning for it? I'm not familiar with that type of problem.

> 2. CGI may very well have raised multiple red flags, and CMS and HHS could have promptly ignored them. That is one of the maddening things about government IT. You can tell the decision makers that they are heading for disaster and they can simply ignore you.

This is very true. Of course, with basic project management 101, these things are duly noted so that when you're hauled in front of Congress, you've documented the situation. Based on CGI's responses the other day, they surely didn't do that -- otherwise, they would have raised it. Unless government IT work of this nature is classified?

> 3. Let's say you plan for every one of your dependencies to fail. What useful product could you deliver in that case?

One that allows you to fix those problems independently, rather than tying the success of the entire project to the success of all integration. In a project where the requirements are constantly shifting, I'd say the "useful product" definition is dynamic and iterative. A strict all-or-nothing definition makes little sense in that environment.

> 4. Again, you can elicit as hard as you want, if they don't give you the requirements because the law is still up in the air, or because the president might change his mind, or because someone wants to meet with all the stakeholders in the other agencies first, there is absolutely nothing you can do. I've been on projects where the stakeholder with the final say couldn't be bothered to give their input until the software was done. I've also been on projects where the decision maker prevented us from getting the input of the employees that would actually be using the application. Sometimes the people you're dealing with are not rational actors.

I get it; I've worked with federal agencies as well. It all goes back to project management and documenting interactions. As for logistics, there is always someone else to go to when a stakeholder is a bottleneck. Here's something that worked in my experience with some federal agencies: identify the people who are a project dependency, and make it visible that project progress relies on them specifically. When facts are presented simply and project management details are documented, individuals get motivated NOT to see their name in lights. As always, your mileage may vary.

> 5. Fixing the contract award process is certainly part of the solution. Normally many teams could submit a bid for the work, but due to the insane time scale it was skipped for healthcare.gov. I think having a proof of concept competition after some sort of initial proposal selection gate could be useful. It could sort of be like how the military has contractors build prototypes for weapons systems competitions.

Absolutely. Basically, it's an iterative phased approach, and let multiple teams go for it.


"> 2. CGI may very well have raised multiple red flags, and CMS and HHS could have promptly ignored them.

Of course, with basic project management 101, these things are duly noted so that when you're hauled in front of Congress, you've documented the situation. Based on CGI's responses the other day, they surely didn't do that -- otherwise, they would have raised it. Unless government IT work of this nature is classified?"

Depends on the stakes. At the moment they're kind of low; I doubt they're sufficient to justify pissing off your monopsony customer. I mean, when CGI's VP was handed this straight line (http://www.washingtonpost.com/politics/house-panel-grills-co...):

"Later, Rep. Leonard Lance (R-N.J.) asked the contractors whether they could conceive of “a more incompetent administrator” than CMS."

And replied:

"“I have no opinion on that,” Campbell replied."

Yeah, I'll bet she has no opinion on the clowns that put her front and center before the nation's Grand Inquisitor (the US Congress).


What it boils down to for me is that the best contractor in the world can't make up for the worst government decision makers. After a project goes down in flames the contractor can point to specific failings at the government level, but at that point it's too late, you've already wasted a lot of money and you don't have a working system.

In my opinion, multiple facets of government IT have to be fixed:

A. Government leads need training on how to successfully manage IT projects.

In practice this is a huge undertaking, because there are a lot of organizational challenges that have to be tackled before many IT projects have a chance of success. Good managers in government will also have to tackle dysfunctional bureaucracy in order to be successful.

B. Government employees need to be held accountable for their failures.

C. There need to be more full-time IT professionals in government in positions of leadership.

D. Hard deadlines should not be set politically before it has even been determined that the deadline is achievable.

E. The contract award process needs to be reformed.


Sounds like delusions of tech bro intelligentsia... [1]

[1] http://jacobinmag.com/2013/10/delusions-of-the-tech-bro-inte...


I don't see how this article is relevant at all.


It is not so much the specifics of the BART strike described in the article that I find relevant as the general attitude displayed by the “lucky elite class of tech workers”. [1] There is a certain mix of privilege, arrogance, and ignorance reflected in the idea that all it would take to implement Obamacare is to give 20 start-ups $15M each, and the problem would be solved. Voila, a quick, easy, and financially viable technical solution! Sadly, the problem runs much deeper than just the technology (see other comments reflecting on procedural and political issues hampering development, changing requirements, unrealistic expectations, and so on), and to ignore the systemic issues is naive.

[1] Quoted from the linked article


I get where you're coming from, but it's not just the privilege issue. Fundamentally youth often has the ignorance to see the path to success where the more experienced can only see the roadblocks. Usually (especially in government) they'll quickly get clobbered by reality, but every once in a while they'll do something all the graybeards thought was impossible.

That's not to say hand-wavy armchair criticism of what is obviously a quagmire isn't annoying...


Jacobin is such an annoying magazine. Every time I commit to seriously reading (as opposed to skimming) one of their articles, I still come away with the same conclusion: that it contains almost no substance. Though I guess it's a pleasure to read the flowery language if you already agree with the premise.


While I'm sure you're partly correct, until we know how bad the management, from CMS on up, was for this specifically, can we really say that for sure?

What we know is:

The project didn't get seriously started until February/March (e.g. the election created a 3+ month freeze on HHS publicly visible work).

The NYT reported that in the last 10 months, 7 major requirements changes were made.

We've been told the "no window shopping" one was made in August or September.

We've heard from multiple sources that changes were ordered through the week before launch.

Given all the above, how much do you see incompetence, and how much "just not done yet" pre-alpha stuff? I'm mostly a back end developer and not up to date enough to judge this; I'm really interested in whether the above refines your judgment at all.


e.g. the election created a 3+ month freeze on HHS publicly visible work

That's not precise. There was no legal requirement that HHS freeze publicly visible work around election time. The administration, for campaign reasons, didn't want ongoing work to trigger anything that might get in the way of the re-elect. Remind me again which party is obviously oh-so-serious about governing.


I was trying to be polite.

But in all seriousness, this is the first major entitlement program passed without bipartisan, widespread political support (more from me here on that: https://news.ycombinator.com/item?id=6622456). The Administration was insane not to factor in that there would be all sorts of "glitches" in the development process because of that. Someone deadly serious and competent, like the shivved Tom Daschle (https://news.ycombinator.com/item?id=6622284), should have been working on this the day after it was signed, if not before. Then again, very few people seem to have that much power in this Administration ... see again the possible object lesson of Daschle.


I'd take Tom Daschle over Harry Reid running the Senate (or at HHS over Sebelius) any day, but I am skeptical of the theory he was "shivved". You're right that the car service issue should not have been enough to sink him, but I think that was a convenient out: IIRC, Republicans were getting ready to ask some uncomfortable questions about his wife's lobbying efforts, and I can understand why neither he nor the President were excited about going down that road.

Also keep in mind that his tax troubles came to light after Geithner's and others', so there was less slack left to give.


Urk, I left out "possibly" in front of "shivved"; follow the link and I'm much less certain, and in fact I still lean towards my initial opinion, which you've fleshed out/reminded me more about.

Still, suppose this was the one decision that sinks Obamacare....

Well, that's 20/20 ... erm, forecasting???

And, yeah, Sebelius was a known disaster when picked (I live right across the border from Kansas); I mean, she let the state run out of money one month (http://en.wikipedia.org/wiki/Kathleen_Sebelius#Tax_revenue_c...), rather a surprise as I remember. I strongly suspect she follows the usual Obama Administration pattern of Cabinet members being figureheads and cheerleaders instead of true executives, which we'll likely get to guess about when she testifies this week.

Reid ... I'll just go with a Wall Street Journal writer's description of his coming across like a "slightly overeager undertaker" ^_^.


Context is everything, so it depends on the "change". But never ascribe to malice what can be explained by incompetence. Pre-alpha is not an excuse for improper architecture upfront.

As I understand it, there is a lot of background integration with various agencies, systems of differing reliability, non-standard interfaces, etc. There are plenty of strategies that can deal with that environment (SOA, asynchronous queueing, etc.)
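
As a rough illustration of the asynchronous queueing strategy (a generic sketch, not a claim about how healthcare.gov is actually built): the web tier enqueues requests instantly, and a worker drains them at whatever rate the legacy back end can actually sustain.

    import queue
    import threading
    import time

    work_queue = queue.Queue()

    def slow_legacy_call(request):
        time.sleep(0.5)  # stand-in for a slow mainframe/SOAP round-trip
        print("processed", request["id"])

    def worker():
        while True:
            request = work_queue.get()
            try:
                slow_legacy_call(request)
            finally:
                work_queue.task_done()

    threading.Thread(target=worker, daemon=True).start()

    # The front end just enqueues and returns immediately:
    for i in range(10):
        work_queue.put({"id": i})
    work_queue.join()  # a real front end would poll for results instead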

It's quite apparent this first version was not built with much flexibility in mind. Not flexibility regarding requirement changes, but flexibility for dealing with the infrastructure already in place.


I'd add that all participating startups would also need to develop out in the open on something like github or bitbucket. This way we can publicly observe the progress and quality of the work from the 20 different startups, and claw back any unused money from startups that are clearly going to fail along the way. Furthermore, we could mandate that all the code produced be effectively free and open source, and that any of the 20 startups can appropriate particularly well-built parts from each other.


You're aware that the Obama administration are already strong backers of open source?


Personally, I'm very disappointed with the level of support for open source I've seen. A Github account with a dozen or so projects all related to a CMS is not what I call supporting open source. At best it's a concession to the developers you've attracted to help with your political platform. You just won't get certain developers if you don't also allow them to participate in open source. It's calculated and expedient here, more than altruistic or in the name of transparency, openness, and letting the American people own the works for hire they have paid for through taxes.

If he (or any politician) wants to really support the idea of open source, they would require that everything that can be open-sourced will be open-sourced. The only exception I see would be limited almost strictly to military or defense-related software and hardware projects. For every other system, the source code should be out there in public. Start with every non-critical piece of software that is not part of infrastructure like transportation and public utilities, and once security practices and norms improve and stabilize among the non-critical stuff, slowly and carefully open source all the rest except for military and defense projects.

At the price tax payers are paying for software, we should own the code outright many many times over.


Like for Healthcare.gov: https://news.ycombinator.com/item?id=6539349 ?

I haven't looked into this, but can you point to any concrete examples, as opposed to the usual political platitudes?



poor planning, change management, etc. from the government is no excuse for CGI not building an architecturally sound web site

In the world that most of us live in, this is true. But for companies like CGI their business is not really building architecturally sound systems, it is keeping the doors open to an endless stream of government contract dollars. When the government rewards failure by granting the same vendor another contract to fix the problems, the predictable happens.


Given our current federal procurement rules, is it even legal for the government to discriminate between bids on the basis of the past competence of the people making the bids?


As I understand it, it's allowed to a degree. Of course an award protest is even more likely....

Companies can also be debarred from government contracting, but obviously that never? happens to the big, politically entrenched ones. But I wouldn't want to work for CGI Federal today....


Until you've got that legacy system that the project has to integrate with. At that point, the legacy system is dealing with twenty different new large projects trying to integrate with it, instead of one.

MVP doesn't work well with deep integration. You can break this down to a form that takes one input and returns one result on a following page. From a UI perspective, this seems like one Agile story. But that one round-trip can spawn many integration steps. I just finished a health care IT project like that. One round-trip step involved integration with a single-sign-on service (which needed to be reconfigured), a rickety SOAP service provider (which had limits on how many test boxes they could set up, was controlled by a different bureaucracy, and needed approval processes to turn on each required API method), a separate box returning chunks of patient data wrapped in HTML (don't ask, this was again out of our division's control), and our own backend system through REST so our resultant data would not be stored on the same server as our webserver (cluster). If some of these backend servers were told "okay, you now have twenty implementations to deal with instead of one", it would have drastically reduced the probability of completion.


Actually, I would say that MVP is critical with deep integration. In my experience with deep-integration projects, often the most important aspect is cutting through the clutter.

I've worked with some of the federal agencies that have been listed here, and it's true -- some of these things are nightmares. But, there are architectural patterns and development approaches to deal with those.

Integration makes things challenging, but not impossible.


Unfortunately, it can be hacked by taking the initial $15m, making a totally half-assed effort (worth, say, $1m), and raking in a handsome profit of $14m.


This is why you invest in twenty companies, not one.


But what's stopping the majority of companies from just phoning it in and collecting free money? There would be only one winner so 19 of those companies are wasting their time anyway.

If you really want companies to build prototypes, they have to have some skin in the game. Otherwise you are all but ensuring fraud.


I might be misunderstanding. Is the suggestion here that funding several companies would induce them to be less competitive, but a single large company would be highly motivated to succeed?


No, I'm saying that in order to get the $15 million there has to be some sort of measures in place to make sure that people don't game the system. If you just give them $15 million with no strings attached, they might decide to do the bare minimum and pocket the rest.

Maybe you could stipulate that each company has to meet a minimum bar in order to get that $15 million. Maybe advance some of the money up front, then pay the rest if they meet the bar. If they fail to meet the bar, they have to give back some of the money.

In any system you implement you need to give a lot of consideration to how someone might abuse the system because someone absolutely is going to at some point.


I didn't expect it was necessary to spell out the details of the arrangement to support the validity of the approach. The purpose of the suggestion is to approach the problem differently to achieve a successful outcome.

Administering this type of procurement arrangement, while certainly important to all the points you have raised, has very little to do with the success of building and delivering healthcare.gov.


Surprise! I have twenty companies that all look about right. Net profit: 280 million. And the result is still shit.

The solution is obviously to pick good companies, not lemons. Except, that was the whole problem in the first place.


Ok, thanks for the snark.

If we're so clueless that we can't differentiate among companies in this fictional scenario, then we're hosed anyway.


I don't know about US gov procurement but I do know a bit about European, which I don't think has a particularly better track record.

On these big gov projects you would not believe how terrifyingly thin the margins are for integrators due to politicians being very sensitive about being seen to be responsible with the public purse strings, particularly given that they don't understand the technology. In addition to the thin margins, you therefore also see extensive offshoring and very low blended rates.

The projects are still incredibly expensive in the end and barely perform because the cheap labour incurs massive technical debt - and the thin margins means that the integrators try to insulate their risk with many layers of project management, and huge amounts of rigid enterprise architecture and planning up front.

Given this mentality of cutting cost to the bone via forward planning, even proofs of concept are hard to get through, let alone investing in 15 prototypes. I think the way to do this would be as part of a gov tech investment scheme rather than attached to a specific project or program.


One of the questions I have is why/how no one thought of the back end issues at an early stage of development. (I assume a $350 mil project involves a lot of experienced people.)

I have seen an infographic on how large the code base is, and at the beginning of the parent article I thought the guy was going to argue that the code base is huge because they had to circumvent/work around the back end problems.


"I assume a $350 mil project involves a lot of experienced people."

That's a safe assumption.

What's not is that the people calling the shots listen to them. We've seen mountains of evidence so far that they didn't.

E.g. CMS should have started integration testing of back end components, and therefore surfaced issues, long before the 1-2 weeks before launch, as testified to at the House hearings by representatives for CGI Federal and, more critically, QSSI, which is responsible for a lot of those back end connections. Here's the latter guy (http://www.washingtonpost.com/politics/house-panel-grills-co...):

"Andrew Slavitt, an executive vice president of Optum, said the testing did not occur “until the last few days.” He said that “ideally, integrated testing would have occurred well before that date.”

Pressed on how long in advance of the launch such testing should be done, Slavitt replied, “Months would be nice.”"

These contractors are experienced, they've seen it all before, including customers from hell like CMS.

Fortunately, as noted elsewhere, CMS is out, QSSI is now the integrator and the fix-it czar is saying the right sorts of things.


You could do it as an SBIR project and get your MVPs for $150K, and still get better results.

"Phase I awards are typically $70,000 to $150,000 in size and the period of performance is generally from six to twelve months"

http://www.acq.osd.mil/osbp/sbir/solicitations/sbir20133/pre...


Hyperbole aside ("... an act which would border on criminal negligence if it was done in the private sector and someone was harmed ..." - what does that even mean? So all of us who have shipped buggy software for our customers are borderline criminals?) - this doesn't surprise me, having dealt with the VA. They have legacy upon legacy upon legacy, with all sorts of fun limitations, like not being able to have a "\t" in your content because that'll screw up their backend, which relies on tab-delimited data. Health care in the US is playing catch-up, technology-wise, to almost every industry. And not for lack of technology, but for lack of political will.
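
As a toy example of the defensive scrubbing that kind of back end forces on integrators (field names invented, purely illustrative), you end up stripping the delimiter out of every value before it leaves your system:

    def sanitize_for_tab_delimited(record, field_order=("patient_id", "notes")):
        """Replace tabs and newlines in values so they can't corrupt a back
        end that treats tab as the field delimiter."""
        cleaned = {k: str(v).replace("\t", " ").replace("\n", " ")
                   for k, v in record.items()}
        return "\t".join(cleaned[k] for k in field_order)

    print(sanitize_for_tab_delimited({"patient_id": "123", "notes": "a\tb"}))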

My favourite example of this was trying to deploy an app within the VA that was written in Django. I was told "Python is not on the list of acceptable languages." So we came back to them and said, "Good news everyone, we ported it to Java." Of course, it was just Jython, but that's the sort of stuff you encounter.

Multiply this by the complexity involved in trying to herd all these cats into one backend like healthcare.gov and it was doomed to fail.


Such projects are doomed to fail because Big.corp, as well as USA.gov, doesn't get that they're not using the right people.

High profile project? Want it to work? Hire the right people. Want the right people? Pay whatever it takes.

I've been involved in more such projects than I care to remember, and the problem is always the same. A project manager with rudimentary delivery process knowledge owns a large technology project. What's needed is a technically astute lead that knows how to abstract away from delicate backend dependencies, knows that some projects need big design up front, and knows people that have the specializations he or she doesn't.

.Gov projects unfortunately are turf wars, where people scramble for a piece of the cake because money smells good, and success is someone else's problem.


Pournelle's Iron Law of Bureaucracy tells us this will almost always be the case (http://en.wikipedia.org/wiki/Jerry_Pournelle#Iron_Law_of_Bur...):

"In any bureaucracy, the people devoted to the benefit of the bureaucracy itself always get in control and those dedicated to the goals the bureaucracy is supposed to accomplish have less and less influence, and sometimes are eliminated entirely."

And this amplified restatement:

"...in any bureaucratic organization there will be two kinds of people: those who work to further the actual goals of the organization, and those who work for the organization itself. Examples in education would be teachers who work and sacrifice to teach children, vs. union representatives who work to protect any teacher including the most incompetent. The Iron Law states that in all cases, the second type of person will always gain control of the organization, and will always write the rules under which the organization functions."


I guess the alternative is to subject the implementers to a truckload of meetings. Implementers don't like that. I can see a pretty clear cause-effect chain here. You let the implementers focus on implementing. An agreed-upon definition emerges of what sort of matters are in the implementers' sphere of focus - anything outside that "sphere" is a distraction from the work they need to focus on. Usually those items outside the sphere are a step up in abstraction - product requirements, etc. The implementers then start to get blocked by matters outside their sphere, and they rely on people outside their "sphere" to get them unblocked. People are assigned to roles to handle the issues that are outside the implementer sphere... and they are usually not implementers (because someone with implementing skills is needed on the implementing team), and presto, bureaucracy.


You have some interesting musings, but let's reify this to one particular "political" (especially in the pejorative sense we use that) decision:

In CMS's integration testing in the week before launch, they tried 200 simultaneous simulated logins, and the system seized up.

It was decided to launch anyway, knowing the system was non-functional. By contrast Oregon and Hawaii declined to launch their state exchanges on Oct 1st.

I submit that those three decisions are intrinsically political and you can't change that.


I think the administration was worried that if they didn't launch on schedule at the start of the shutdown/ceiling fight, they might have to accept House Republicans' demands of a delay in the ACA. Apparently they were gambling that the problems wouldn't be discovered until after the shutdown/ceiling fight was over. They won the gamble and the ACA survived, but the problems remain and fixing them looks challenging.


Thanks for completely supporting my point that this was a political decision.

But I'd say the jury is out on the wisdom of this decision. A Healthcare.gov launch delay would have joined a bunch of other delays and waivers, and ... well, you and I don't seem to live in the same universe; I don't see any scenario where Obama and the Senate Dems would have been forced to accept anything they didn't want.

There's in fact more than a little worry from the liberal side of things that this most recent crushing of the eGOP was a step too far, e.g. http://pjmedia.com/richardfernandez/2013/10/16/the-third-par... which I'm just starting to try to grok.


Even without being cynical about the motivation of the players: it's the unknown unknowns that eat them alive, even if they have the best of intentions.

It's also a tough problem to solve because very competent people don't want to work on a boondoggle, and any large government IT project has boondoggle written all over it before it even goes out for bid.

I wonder if Obama asked for any advice from some of his campaign people. If he didn't before I'm sure he is now.


I don't buy the unknown unknowns. Big system integration problems follow patterns, and an experienced tech will recognise them. The problem with any half-assed strategy is that people gloss over the knowns, which is fatal. The first thing anyone should do on a large (actually, any) project is to assess the current situation. [1] When you discover a big nasty piece of legacy you need to integrate with, you look at how it works. The rabbit hole must be followed to its end, or something will bite you. Blaming unknown unknowns in IT is the same as telling the teacher that the dog ate your homework.

[1] https://www.wittenburg.co.uk/Entry.aspx?id=46870dcd-70cb-4ef...


It was a poor choice of words on my part. I meant to suggest that they were knowable unknowns, but just that the people involved didn't have a clue.


The project is full of 'known knowns' that have bitten them very hard; their excuses might be valid if they had been harmed by the unknowns while handling the basic stuff at least somewhat appropriately - which they have not.


That last paragraph is the most interesting.


It's actually deliberate that they're not using the right people. It has very little to do with accidental incompetence, mistaken application of domain knowledge, knowledge applied from incorrect domains, or inexperienced project managers and engineers (though the latter can be symptoms of the real issue).

This is the result of the correct observation indicated by the last sentence in your comment.


an act which would border on criminal negligence if it was done in the private sector and someone was harmed

For example, someone thought they got coverage for something and they didn't. That mis-communication can lead to actual harm when it comes to medical and financial issues.


The good news is that the head of the new management of this project, the fix-it czar Jeffrey Zients, is saying the very top item of his punch list is to stop feeding garbage to the insurance companies.

The bad news is that it's taken this long for anyone in authority to even admit there's a major problem there, when there are only 7 weeks left to correct it for those who lose their insurance Jan 1.

Someone in another discussion brought up the Space Shuttle accidents. I'll bet you more people will die because of this monumental monstrosity ... except they didn't volunteer for something obviously dangerous.


While I agree with the rest of what you said, it's jumping to a few conclusions to say that lots of people will die because of this. People will think they are insured when they aren't, but the usual outcome of that is bankruptcy -- or a bunch of ugly and public lawsuits over who pays the bill.


I'm making a wild guess that more than 14 people will fall through the cracks and die. Certainly such stories have been used to sell Obamacare.

Ah, I just remembered that economists can predict over large populations how many will prematurely die if you remove X dollars from the population. I forget the now obsolete figures, but I'll bet on that basis alone there will be that many casualties.

Any time the visible foot of government makes such big steps people will die, it's a given.


I know those stories of "X number of people die every year from lack of insurance," which were sincerely believed by their speakers but lacking in rigor.

I'm expecting over the next few years, as the number of uninsured drops close to zero, that there will be no discernible drop in the number of annual deaths.


"I'm expecting over the next few years, as the number of uninsured drops close to zero"

Errm, it's as I understand it a well established fact that tens of millions will remain without coverage. Try a search like this: https://www.google.com/search?q=obamacare+won't+cover+everyo...

The first two links said 30 and 26 million, which agrees with my general memory. It was obviously a very early criticism of the ACA.


What I find is that the people who criticize the ACA on this point are also often opposed to the solutions that would shrink the number. It's dishonest.

1) Offering insurance to undocumented immigrants would shrink the number.

2) Strengthening the mandate would shrink the number.

3) Expanding Medicaid to the states that "opted out" due to the Supreme Court decision would shrink the number.


You also have to factor in life expectancy of various cohorts. Poor people will be healthier. Also note that the goal of ACA wasn't entirely to drop deathrates. Unlike before, the combination of pre-existing conditions, and elimination of annual/lifetime maximums means that people are now able to get essential health care without risking bankruptcy from medical bills. There really is a true out-of-pocket maximum now for essential care, for anyone that has health insurance. This was not true before the ACA.


I think that is very much setting the wrong bar. This sounds awful and cynical, but you have to compare the deaths the website might lead to, to the estimated counterfactual - how many would die in the status quo. The ACA prohibits "pre-existing condition" rejections, and it also prohibits "annual and lifetime maximums" (where the insurance companies say, "Oops, looks like we already paid out a million bucks on your cancer, you're on your own now!") And there's also the expanded Medicaid. So you have to factor in the lives that will be saved by the law.


Indeed. Although I'll note that not all plans really eliminate maximums; some require 20-30% (from what I've seen) copays after you max out your deductible.

People are going to be trying to figure this out for a long time (even if the ACA gets terminated with extreme prejudice in a few years). There are a LOT of factors:

The cost of guaranteed issue and community rating is huge price increases for the under-50 set to subsidize the inherently more expensive 50-64 set (or so I've read, but I'm sure it's roughly true).

A lot of people are losing the ability to work more than 29 hours at one place, or are just plain getting fired because they now cost more than they can produce, and that loss of wealth will have significant effects. Heck, the #1 piece of advice to get more subsidies is to make less money.

Then we get into the messy dynamic effects, where "Europe" is the object lesson: due to the changed game, a bunch of companies are not going to be founded, or are not going to make it ... and in the longer term people are going to notice that there's no scenario where being married doesn't result in higher costs. Accounting for the effects of the latter is supremely difficult.

And then there's the argument that this, with all else Obama et al. are doing (most certainly including the eGOP), is moving the US rapidly to failed-nation status. It's hardly beyond the realm of possibility that that will kill 10s of millions. If it denies me the medications I need to live, I'll be one of them....

ADDED: now that you've labeled me as dishonest (e.g. for not wanting to provide health care for illegal aliens (which, yes, is a harsh position)), I'm not sure there's much point in continuing to discuss this.

But also add the effects of reduced medical innovation. Starting with the 2.3% excise tax on medical devices, which we know from the history of "socialized medicine" will continue. One reason health care is so expensive in the US is that we pay nearly all the burden of new drug development....

If we manage to curtail the already feeble new antibiotic development efforts the results world wide will be measured in ... biblical proportions.


Just to respond to your first sentence. All ACA plans have a "coinsurance maximum". Meaning, you pay your deductible. Then you pay your 20%-30% coinsurance. And then when you reach your coinsurance maximum, your health insurance pays 100% of your health care costs. The coinsurance maximum varies with the plan but they're not very high. Mine is around $3500, and the biggest I've seen is $12,500 for "out of network care". On top of that, "annual/lifetime maximums" (where the insurance company STOPS paying for your care after a certain amount) are prohibited. And that is different than before. The upshot is that everyone on a health insurance plan truly has an out-of-pocket maximum now. (There's always the chance that I have missed something here, but this understanding is consistent with the intent of the law.)

I'm in my low 40's and my premium went down over 20%. I think the premium increases for people are also very much about the improved quality level of the plans, and some of that quality increase is "hidden" due to the old plans having lots of against-the-patient "tricks" that are now outlawed.

BTW, you have referred to my other comment a couple of times now. I am not saying it is dishonest to not want to provide health care for undocumented immigrants. I am saying that is intellectually dishonest to have that position, and simultaneously criticize Obamacare for not covering that number of uncovered patients. I believe that if someone criticizes a problem, it is their responsibility to also be in favor of the solution. That's a basic value in rational discourse/dialectic.


I'm covered by Medicare so I don't know these details; I do agree it would be insane to have unlimited coinsurance, and a little research shows that it's capped through the out-of-pocket maximum. Which in the Humana plan I just looked at includes the larger deductible.

WRT dishonesty: "e.g." means "for example"; illegal aliens aren't the only ones included, just the most clear-cut and harshest example for me. Trust me, you're labeling me as dishonest.


> what does that even mean?

It means that if you are writing health-critical software, where lives are involved, and you deliver low quality dreck (as in this case), then you should be held criminally liable.

There is a lot to be said for SIL-4. (http://en.wikipedia.org/wiki/Safety_Integrity_Level) It seems that someone goofed by not giving healthcare.gov a SIL-4 requirement ..


Well, but healthcare.gov is not health-critical software, just like most other health-related tools. Software on an X-ray machine is health-critical, but the software used for billing X-rays is not.


I don't know about that - this system is supposed to help people track their expenses on drugs and treatment, and so on .. so to me it seems it's not 'just' a billing software package, but rather a healthcare management system. I'd certainly feel better about using it if it had some sort of safety-critical level certification, but then again I don't have the problem of needing healthcare in America ..


No, it's a glorified insurance application system. The most complex piece is that they try to use your real info (taxes, etc.) to calculate subsidies before presenting the pre-canned policies available in your ZIP code. Then you can fill out the application and it goes to the specific company to continue on. There is not a whole lot there; if they had instead deferred the real income info part, just asked the user for that info, and verified it later like a bank loan, they could have launched a much more scalable system.


Would that level of quality be possible in 3.5 years, given all the other constraints?

I agree about the lives, BTW, this screw up will kill people: https://news.ycombinator.com/item?id=6622634


To answer your question: I believe it would be, yes. There's nothing 'hard' about SIL-4 programming - it's just a process that you have to follow in order to ensure the quality of your software..

Of course, SIL-4 may not be the solution .. but it sure seems to me that there is no way healthcare.gov would have gotten through the process if SIL-4 was applied to it..


Another reason is the fact that these large projects rewrite the business processes of the organisation (instead of merely automating them). This obviously opens up a huge turf war over what the post-deployment world is going to look like (esp. if there are to be layoffs). Additionally, in .gov, the business processes are always at least partially defined by law, which in consequence needs to be changed. The actual software project is the easy part.


Slow, legacy backend systems are not an intractable problem. You can do things such as copying the data to a faster cache, or using some kind of queuing system so that queries are processed only as fast as the backend can handle (of course the frontend needs to be able to "check back later" for the results).
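
A minimal sketch of the caching option (hypothetical lookup, in-process cache for brevity; a real deployment would use a shared store with an explicit refresh schedule): answers that rarely change, like last year's tax figures, get fetched from the slow system once and served from the cache thereafter.

    import time
    from functools import lru_cache

    def slow_legacy_lookup(taxpayer_id):
        time.sleep(2)  # stand-in for a multi-second legacy/mainframe query
        return {"taxpayer_id": taxpayer_id, "agi_2012": 52000}

    @lru_cache(maxsize=100_000)
    def cached_lookup(taxpayer_id):
        return slow_legacy_lookup(taxpayer_id)

    cached_lookup("123-45-6789")  # slow: hits the legacy system
    cached_lookup("123-45-6789")  # fast: served from memory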

This does support the widely held doubt that this system will be fixed anytime soon. Clearly the management of the project and the design of the architecture are/were fundamentally flawed, and it's very unlikely that it can be fixed in 30 days or whatever at this point.


Regardless of how hard it was to make a functioning "data hub", they should have, at the very least, built a functioning registration system that is independent of all other fail points. This way they would at least have the names and contact information of everyone who wanted to fill out a form. Then they could go back and process the forms at a later date.


Supposedly the problem there was a very late change from somewhere in the White House to CMS, in August or September, to preclude the possibility of window shopping and require registration with a lot of verification up front.

E.g. we know Experian is part of that process, said to be for identification and income verification. It's been reported that people who don't have credit records (e.g. those sufficiently ill they've stayed under the care of their families) get told by Healthcare.gov to contact Experian, who then upon discovering they don't have a record of them tells them they have to get the Healthcare.gov people to create a “Manual Process Identity Proof” ... which few if any of them know how to do, or whom to send someone to.


That's what I hated most about healthcare.gov. On other sites I could get a quote quite easily and decide whether to go with them or not. healthcare.gov had a bunch of forms that didn't work, so I couldn't get a quote after hours of trying. Even when I let long-loading pages sit there for long periods of time, some failed out and other times my data was lost.


Indeed, as you note there are a bunch of fixes possible for slow back ends. A lot of the data is sufficiently static (e.g. from the IRS, what are the numbers for your 2012 taxes, what's your withholding to date for 2013?) that occasional exports to a database inside of "healthcare.gov" might do the trick. In many cases setting up a fake server mimicking the legacy one with such a static data store behind it could get around some of the "can't change the architecture in 7 weeks" issue. Maybe (that deadline is for those who are learning Obamacare outlawed their old policies, 16 million by one estimate for that subset).
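
For what it's worth, here's a hedged sketch of that "fake server" idea: a stub that answers with the same URL shape the legacy system would, backed by a static export. The route and data fields are invented for illustration.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Periodic static export from the real legacy system, held in memory here.
    STATIC_EXPORT = {"123-45-6789": {"agi_2012": 52000, "withholding_2013": 4100}}

    class FakeLegacyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # e.g. GET /taxdata/123-45-6789
            record = STATIC_EXPORT.get(self.path.rsplit("/", 1)[-1])
            self.send_response(200 if record else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps(record or {}).encode())

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), FakeLegacyHandler).serve_forever()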

We've also read that Experian is doing both identification and income verification, so they're probably not as hopeless.

The management was, as just about everyone is noting, fatally flawed. However it's been reported to have changed, to QSSI becoming the integrator, and the fix-it czar is saying the right reality recognizing things, like the top item on his punch list is to stop sending garbage to the insurers. Presumably the managers still in the chain of command at the White House on down have been convinced to stop making requirements changes....

We'll see.


I've worked in and out of the public sector for the last 10 years and unfortunately, this is actually _PAR FOR THE COURSE_.

This is not the contractor's fault. It's the government's. Before I left to work with a startup, I was appalled by the lack of ownership on the client's side. Everybody is looking to shuffle responsibility, keep the lowest profile, and do the least amount of work.

It doesn't matter who's writing the code; unless they find somebody competent and passionate on the government side, large projects are destined to fail and are better off not being written by the public sector at all. This is government waste at its best.

I'm neither Republican nor Democrat, but just to add: if the rinky-dink app I was working on for the Dept. of Commerce gets shown to the president while it's in 'ALPHA' state, there is no way the most informed person in the world didn't know that the site was going to fail from the get-go.


"unless they find somebody competent and passionate on the government side"

The newly appointed fix-it czar, Jeffrey Zients, definitely sounds competent from his current remarks (see elsewhere in this topic his top priority), hopefully he can muster enough passion for the Maximum Effort required.

And, yeah, I've done some work for the public sector and it's that bad, sometimes worse. In the last case, an entity had Lockheed build an at least half-bespoke (custom) system, which worked pretty well. Then it put continued maintenance out for bid, Lockheed didn't win that contract, and years later, as the DEC Alpha systems were nearing their end of life (the line was of course killed by Compaq/HP), it was discovered that the sources Lockheed left behind wouldn't compile into the binaries in use (e.g. they used SCCS (!!!) ... until a few months before launch). That was discovered first by a guy they had to let go after a month or so because of budget screwups, then by me over a year later. But of course the plan and budget were predicated on an estimate of the work to be done, made long before by another contractor, that didn't match reality.

Needless to say the clients didn't understand the difference between source and binary code, or how they'd painted themselves into a really difficult to exit corner.

As for failing, CMS in its role as integrator did do integration tests 1, maybe 2 weeks before launch. They of course failed hard.


It's the contractors' fault, too. Both parties are playing the same game, here: corporate welfare for the contractors, who usually are owned or operated by people with connections to the government, and long-term job security for the government project management officials who oversee these things.

I don't know if you've ever worked for a contractor, but I have, and I guarantee you the same responsibility shuffling, profile munging, least-amount-of-work attitude exists there. Without it, these contractors wouldn't be able to keep feeding at the trough with the rest of their corporate welfare recipient friends (while they bitch to each other about how evil liberals are, how disgusting entitlements are, etc.).


The argument I'm making is that no matter who comes in, so long as they don't have a dedicated and passionate lead, it will be all for nothing.

I don't argue that the contractors are/can be part of the problem, but from my experience the government _IS_ the limiting factor, especially when they say that they're the systems integrator, not the contractor, as in this case.

I did my damnedest to write/architect/design the best system that I could, but if the government forces you to use RH JBoss two versions behind the latest, just because they have a support license, then refuses to buy JRebel because it was too expensive after paying tens of thousands of dollars on said JBoss, and then changes the system requirements every 6 months, you're doomed. Worst of all, we were _NOT_ allowed to talk to end users. We could only talk through our Task Order Managers and supposed Technical Leads (who have little or extremely dated technical experience).


"There are no easy fixes for the fact that a 30 year old mainframe can not handle thousands of simultaneous queries. And upgrading all the back-end systems is a bigger job than the web site itself. Some of those systems are still there because attempts to upgrade them failed in the past. Too much legacy software, too many other co-reliant systems, etc."

30 year old (1983) mainframes and databases were designed to handle large transaction loads. For example, airline reservation systems and banking systems were built on them.

And upgrading a mainframe (at least an IBM mainframe) to a faster mainframe isn't such a daunting task, since all the code from 30 years ago (or even from the 1960s) is still object-code compatible with the new machines - you can make it run even if you've lost your source code. There's still lots of 30 year old (and older) Cobol code running on mainframes today.

I agree that re-writing the 30 year old software would be hard, but simply getting it to run faster could probably be done just by spending money on the latest mainframes and disk drives. But if nobody ever did a load test on the site, they wouldn't have known that they had to do this. They probably just thought: "Oh, we have to write a web site that talks to a bunch of databases, how hard could that be?" (By the way, they could have written test code to do a load test on those legacy systems without even having a web site running. In retrospect, that's the first thing they should have done, and it would have shown them that their critical path wasn't the user interface.)
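
That kind of standalone load test needs nothing more than a script. A sketch of the idea (the legacy call here is a stand-in you'd replace with a real client call): ramp up concurrency and watch where throughput collapses.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def legacy_query(i):
        start = time.time()
        time.sleep(0.05)  # replace with a real call to the back-end system
        return time.time() - start

    for concurrency in (10, 50, 200):
        start = time.time()
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            latencies = list(pool.map(legacy_query, range(concurrency * 10)))
        elapsed = time.time() - start
        print(concurrency, "workers:",
              round(len(latencies) / elapsed), "req/s, max latency",
              round(max(latencies) * 1000), "ms")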


In theory you could at least improve the hardware the legacy systems are running on. In practice that may not have been anywhere near enough to ensure success.

1. In this case you would've had to start benchmarking the performance of the legacy systems early, far earlier than when they started development in March. If you determine that hardware upgrades are needed, then you'd need to initiate procurement and upgrade projects at one or more of these other agencies. Projects like that may not necessarily be quick to implement.

Maybe the physical space can't accommodate new hardware. Maybe there isn't enough budget to do an upgrade like that. Maybe there aren't enough personnel resources to plan and implement an upgrade of that scale quickly. Maybe those organizations are just barely keeping their heads above water with the way those systems are currently functioning. Maybe they don't even have a handle on what their hardware configuration is. I know someone who worked at an agency where they had to start unplugging stuff to figure out what server did what.

2. This is all making the assumption that the data in those systems is correct and well-formed, and the business logic in those systems is free of bugs. Maybe you get the database schema and find out that A. It's out of date, B. There's no data dictionary, and C. There's 250k lines of business logic tied up in undocumented triggers. Good luck.

Load testing might just be the tip of the iceberg in situations like this. But the bottom line is, if the people leading your project don't even think to start looking into this kind of stuff very early on, you might be screwed before you've even started.


Having integrated with these kinds of systems, I can say they aren't up to internet-scale traffic. Apart from raw capacity, they are usually architected for transactional consistency, not large indeterminate numbers of concurrent users. This is why we have things like ESBs and async enterprise integration patterns.

I'd be surprised if those weren't already in place on this project, but I bet they're really poorly done. My money is on a totally manual test process, which means a deployment misses loads of cases and takes weeks, and where a load test is 10 guys in India hitting F5.


You make very good points about early load testing. Unfortunately, as detailed in so many places including the OP, the integrator, the government's CMS, clearly didn't have the expertise to realize this. E.g. any organization vaguely competent at software development knows you have to freeze the requirements well before the week before launch, and not make major changes less than 2 months out (like the heavy registration process instead of allowing window shopping).


"Failure isn’t rare for government IT projects – it’s the norm. Over 90% of them fail to deliver on time" Is this really much different than the success rate of startup culture where VC's count themselves successful if 10% of their investments yield a return? The startup environment has the "success rate advantage" that if the venture really isn't getting traction, you can walk away from it, or change directions a do something related, but not your original objective.

Government projects like the healthcare exchange don't have that degree of freedom - if they go down the wrong track, the only choice is put in more resources until it's back on track. Giving up or changing objectives isn't a decision under the control of the project - it's a legislative or budgetary question.


The answer is that you can't structure the transaction as a realtime query. You have to structure it as something that's sent and gives you a ticket, and the reply associated with that ticket will come back in its own time.

Stick the processing pipeline in Twitter Storm (which can retry any step until the whole pipeline is done) and structure the requests as nearly-idempotent (so a repeated reply is harmless, and the first arrival associated with the ticket wins). Finally, you have an "inbox" where people can wait for and see their answer, with optional SMS and email notification.
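
Roughly this shape, as a plain-Python sketch rather than actual Storm topology code (every name below is hypothetical):

    # Take-a-ticket pattern: submit returns immediately, the answer
    # lands in an "inbox" in its own time. All names are hypothetical.
    import uuid
    import queue
    import threading

    class TransientBackendError(Exception):
        pass

    def process(application):
        # Placeholder for the slow pipeline steps (legacy-system calls).
        return {"eligible": True, "application": application}

    def notify(ticket_id):
        # Placeholder for the optional SMS/email notification hook.
        pass

    tickets = {}          # ticket_id -> result: the user's "inbox"
    work = queue.Queue()  # pending pipeline jobs

    def submit(application):
        """Accept the request and hand back a ticket immediately."""
        ticket_id = str(uuid.uuid4())
        work.put((ticket_id, application))
        return ticket_id

    def worker():
        while True:
            ticket_id, application = work.get()
            try:
                result = process(application)
            except TransientBackendError:
                work.put((ticket_id, application))  # retry the step later
                continue
            # Nearly idempotent completion: the first arrival for a
            # ticket wins, and a repeated reply is harmless.
            tickets.setdefault(ticket_id, result)
            notify(ticket_id)

    def check_inbox(ticket_id):
        """The reply shows up here in its own time (None while pending)."""
        return tickets.get(ticket_id)

    threading.Thread(target=worker, daemon=True).start()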


Interesting, but at some point people have to be shown a selection of offerings. That selection must allow seeing if your doctors are in the network.

Your suggestion would allow for that, but not, I sense, in the "instant gratification" way we're used to dealing with web sites. I.e. "thanks for your input, wait for SMS/email/N hours till you log in again for the next step".

BTW, I've read it's a 30-step process. Not all of them will require "take a ticket and wait to be called", but more than 1 or 2 will, I suspect.


It's not delivering instant gratification now.

More importantly, it's very standard practice to do asynchronous jobs and notify users on completion, as seen in the data export or input features on many web services. Whether users get results immediately or need to be notified is a UX issue, and a requirement that the entire system be designed to deliver results immediately sounds exactly like the kind of speculative, unconventional UX requirement that non-UX people come up with around a conference table.


Well, they aren't getting instant gratification now, are they?

I suspect a lot could be cached - that is, you take a list of questions up front, and what comes back with the "ticket reply" contains any further questions you'd need to answer, plus enough canned data to process the replies immediately. So you wouldn't need to take a ticket more than once.

But even if you did need to, at least it would be something people could fill in and walk away from. They wouldn't be kept around, drumming their fingers and worrying about being late to work.


"Well, they aren't getting instant gratification now, are they?"

Indeed!

But I was referring to the initial architecture of the site. E.g. the March objective, “Let’s just make sure it’s not a third-world experience,” expressed by Henry Chao, CMS CIO, who in theory should have been on the front lines. Sigh, I bet he was, knew what was coming, but didn't have the authority to make it work, or, as you implicitly suggest, to downgrade the architecture/expectations into something that might possibly work in the time left.

And then there's the last-minute (well, last-month) change to requiring a heavy-duty, validated-up-the-wazoo registration before doing anything. Doubt they could have made that work in less than 2 months ... maybe. This was needed, but at lower transaction rates, e.g. otherwise window shopping would allow a lot of people to see they couldn't get a better deal through the exchange (e.g. no subsidies for you, which only the federal system is allowed to calculate).

Suppose prior to that they arranged for "X rate of idempotent reads" with e.g. the IRS, and now they need a 10/100/1000X rate, meaning they should have cached everything that doesn't change quickly. Almost certainly too late to do it by then.

My, my, years from now this is going to be one of the most significant case studies in project development.


Queues solve some problems. Others that are read-heavy but rarely updated (e.g. queries for which doctors are on the system) can be cached and read from slaves. Problems that benefit from being async should be async; those that don't benefit shouldn't be. It's usually pretty easy to do both in a system.
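
For the read-heavy case, a cache-aside sketch (everything here is hypothetical; in production the dict would be memcached/Redis and the lookup would hit a read replica, not the primary):

    # Cache-aside for a read-heavy, rarely-updated query such as
    # "which doctors are in plan X's network".
    import time

    CACHE_TTL = 6 * 3600   # networks change slowly; hours of staleness is fine
    _cache = {}            # plan_id -> (expires_at, doctors)

    def query_read_replica(plan_id):
        # Placeholder for a SELECT against a read replica.
        return ["Dr. Example"]

    def doctors_in_network(plan_id):
        now = time.monotonic()
        hit = _cache.get(plan_id)
        if hit and hit[0] > now:
            return hit[1]                        # served from cache
        doctors = query_read_replica(plan_id)    # primary is never touched
        _cache[plan_id] = (now + CACHE_TTL, doctors)
        return doctors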


I have not followed the development of this news closely, but skimming these updates has been amusing. I do have a couple very basic questions. If these are stupid, I apologize ahead of time.

My understanding from previous coverage is that some of the state exchange sites, such as California's, are performing acceptably. If that is true, do those state sites also connect to and query the same legacy systems as the federal site? If so, why doesn't the federal government simply ask for or take that code? Surely it's been made available to them? If not, are the legal requirements for the states' exchanges somehow different from those for the federal site? That seems unlikely, since my understanding is the federal site is simply standing in for states that elected not to create exchange sites. I don't see why it would be subject to extra requirements.

What am I missing here?


First thing, only 4 state sites are reliably reported to be working well, and California's isn't one of them, although theirs might be fixed by now. I haven't heard of any of the working ones that skipped real integration testing, and Hawaii and Oregon declined to launch their not-ready sites on Oct 1st.

The obvious answer is that until late last week, the grossly incompetent CMS (and some people in the White House and perhaps others in HHS) were running the show. Now that it's under new management, maybe that possibility will be investigated. On the other hand, with a 7-week deadline for those who must get new insurance for 2014, like the 16 million with now-outlawed, non-grandfathered individual policies ... there's not much time to do something that disruptive. But it's an idea worth checking out in the next week or so.


> Amazingly, none of this was tested until a week or two before the rollout, and the tests failed.

This is absolutely incredible ... two weeks?! Dealing with these legacy systems should have been the absolute first thing tested; is it not the most likely point of failure/bottleneck? Someone on the team had to have been screaming about this and being ignored, all the while shitting their pants waiting for go-live and for the whole thing to crumble.


I'm wondering why states were allowed to build their own systems and opt out of the federal site. From the Washington state site we get passwords emailed in clear text, a failure to even allow people to enter all components of their income (resulting in inflated tax credit decisions), using monthly income figures where annual ones should be used (again, more incorrectly inflated tax credits). In Oregon they say they can't even log in or get through the application. Each of these state-specific sites cost tens of millions, each resulting in their own unique set of defects on launch, to implement a federal program.

The press seems very focused on the obvious availability and performance problems, as well as the errors that come up within the sites that prevent someone from completing their application. There is a whole slew of second-order defects that make it appear your application was successful and correct when it was actually based on incorrect calculations, incomplete data, or other bugs that are not obvious to the user at the time they complete the process.


The original intention was for all the states to build their own exchanges. The more liberal democrats wanted a single federal exchange, but compromised to state-level exchanges to get enough votes for passage. Then the conservative states declined to make their own exchanges so the federal government had to do it anyway, because the American political system makes no sense.


Minor correction that proves the rule: "Then all conservative states except Idaho declined to make their own exchanges...."

I'd correct your latter statement to "most recently, the American political system makes no sense". Prior to Obamacare, every major entitlement program, from FDR's Social Security to G. W. Bush's Medicare Part D prescription program was passed with large bipartisan majorities, strongly suggesting they had wide majority support.

Obamacare famously did not, heck, it only passed within the Democrats by a whisker, and a lot of them found themselves spending more time with their families after their next election.

So any sane plan by Team Obama to implement it should have factored in very strong resistance in a lot of places. Heck, I along with 71% of the people in my Purple, very much not Red, home state of Missouri voted in 2010 to outlaw mandates (http://ballotpedia.org/wiki/index.php/Missouri_Health_Care_F... ) and then in 2012 62% voted to outlaw the creation of a state exchange unless authorized by the people or the legislature (http://ballotpedia.org/wiki/index.php/Missouri_Health_Care_E... ).


Curious why you voted against the mandate. System-wise, a vote against a mandate is basically a vote for allowing rejection of pre-existing conditions.


Note, your having labeled me as dishonest ... frees me a bit from my usual constraints (not that that's costing you upvotes from me for posts like this one: https://news.ycombinator.com/item?id=6623010):

How about, "Because I have a fucking clue and could see that the ACA would make things at net a lot worse, and very possibly result in my premature death?"

Did you even note the vote was almost certainly symbolic? Not that making the firmest possible statement (almost 2/3rds of Missouri voters) was pointless.

Anyway, you're focusing way too much on the micro-details of political promises. I prefer macro reality.

ADDED: ah hah, I now see the ACA is in theory going to save you 20% over your previous costs. No wonder you're defending it. I just hope for your sake you don't e.g. experience a shift from access to healthcare to access to a waiting line, or, as Obama himself said, Liverpool Care Pathway style, get a pain pill instead of treatment that would save your life.


You're asserting that my defense of the ACA is because my premium went down. That's a pretty uncharitable interpretation of my views. In that same comment I wrote that the chief goal of the ACA was not to lower our premiums. To make it explicit, I would also support the ACA if my premiums had gone up.

It sounds like the rest of your views are based off of your beliefs of how things will play out, which is of course difficult to argue about.


In the context of your labeling me as dishonest, I think it's a fair counter-jab. More seriously, not that you can be bought for so few pieces of silver, but let's suppose your costs tripled, as so many others' have?

These particular long term views are relevant in the context of my likely (at the time) symbolic Proposition C vote; it's indeed difficult to impossible to argue about them, but I hope I've at least demonstrated a basis for my "Obamacare belongs in the ash heap of history" mid-2010 opinion.


The American political system makes quite a lot of sense. It isn't supposed to be top-down. The writers of the law wanted a top-down centralized approach, and that isn't how the US Constitution or US states are structured. You can get away with it for a lot of things, but anything this intrusive just doesn't work. It's not the first time nor, sadly, the last that the federal government will try this sort of thing.


Focusing on the performance or scalability of these ancient backend systems is beside the point. It's simply not a great idea to connect a significant number of backend systems run by different organizations in one synchronous online transaction. The overall probability of failure may simply be too high, irrespective of any scalability issues.


I think the easiest fix at this point is to simply design around the known delay in synchronizing all the 3rd party data calls.

Have people enter their info, then show them a screen that says "your quote will be emailed to you in 24 hours." Then the integration system has 24 hours to retry any failed data pulls, match up all the data, and generate a quote.
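
Something like this, roughly (source names, the window, and the fetch signature are all invented for illustration):

    # Retry each failed third-party pull with backoff until everything
    # succeeds or the promised 24-hour window closes.
    import time

    SOURCES = ["irs", "ssa", "medicaid"]   # hypothetical data sources
    WINDOW = 24 * 3600                     # the promised 24 hours

    def gather_with_retries(applicant_id, fetch):
        """fetch(source, applicant_id) -> data; raises on failure."""
        start = time.monotonic()
        pending, results, delay = set(SOURCES), {}, 60
        while pending and time.monotonic() - start < WINDOW:
            for source in sorted(pending):
                try:
                    results[source] = fetch(source, applicant_id)
                    pending.discard(source)
                except Exception:
                    pass                   # still pending; retry next pass
            if pending:
                time.sleep(delay)
                delay = min(delay * 2, 3600)   # back off, cap at an hour
        return results, pending

A batch job at the end of the window either emails the quote or flags the application for manual follow-up if anything is still pending.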


Problems:

An insistence on a heavy (thoroughly validated) registration process.

People want to have choices. Maybe they don't want to get insurance from company A. There are also monthly payment vs. deductible tradeoffs in the Bronze vs. Silver etc. plans. And some demand cost sharing after you've used up your deductible (I wouldn't really care if the plan had a 10 million dollar limit if I was expected to pay 20-30% of that...).

ADDED: plus you must be able to see if your doctors are in a plan's network.

Despite the "one size fits all" new minimum gold plated plan, there's still a lot of tradeoffs ... and then email isn't necessarily reliable enough for the response. I can't see them avoiding a "or check back on the site tomorrow..." option.


I don't know how much of this is true, but I bet the truth is no less hilarious. It wouldn't surprise me if this system has no concept of usability and offline processing queues. No matter how complex it is to process an application, it's common sense to just give the user immediate feedback: "Thank you for your order. We'll contact you by email within N days to follow up and report your application status." Do these people expect Amazon to process orders in realtime and fling physical goods at their door in minutes? Should buying health coverage be zero-conf, one-click instantaneous?


Well, I've heard there was a "7 second" response time metric that was part of the plan.

But, yes, Amazon's "eventually consistent" system takes its time. E.g., while it's never happened to me, I know that the somewhat delayed confirmation email is only sent once the whole system has reserved a book for me, even if there's only one copy in stock and someone else ordered it at about the same time. Etc.


The frontend assumed the backend was fast enough. That's the problem. If the frontend had been made to handle really slow responses from the backend, it would look different. It would not make people wait while transactions occurred. Or it might have a page that displayed your progress: in order to do this for you we need to contact 10 databases - here is the progress of each:

    Database One:   [=======----------]
    Database Two:   [============-----]
    Database Three: [==---------------]
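
A toy rendering of that display, just to make it concrete (the fractions are made up and would really come from per-source status reported by the backend workers):

    progress = {"Database One": 0.45,
                "Database Two": 0.75,
                "Database Three": 0.10}

    def render_bar(fraction, width=17):
        filled = int(round(fraction * width))
        return "[" + "=" * filled + "-" * (width - filled) + "]"

    for name, fraction in progress.items():
        print(f"{name + ':':<16} {render_bar(fraction)}")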


Heh.

But I would half joke that that would result in a bunch of angry people with torches and pitchforks showing up at sites of the owners of Database Two....

Which might be totally unfair if it's not really their fault.


My ASCII graphics were trying to show Database Two as fast and Database Three as slow. That's where the pitchfork toting mobs should be.


Oh, yeah, progress toward completion, which would be obvious to most everyone using the site.

But I'm sure some would get it wrong even then!

Visions of a family gathered around the monitor:

"Go IRS!"

"Social Security, we KNOW you can do it!"

Etc.

(I do really like your idea, it just seems to be inevitably politically impossible.)


I don't believe in this anymore

"Everyone outsources large portions of their IT, and they should. It’s called specialization and division of labor. If FedEx’s core competence is not in IT, they should outsource their IT to people who know what they are doing."

These days I believe each department of government that needs an iPhone application would do better to hire an iOS developer full time to maintain and polish the fuck out of it, continually.


I tend to agree here.

The siren song of 'outsourcing' sounds great, but it's just moving the goal posts, really. The dept/org/staff still need to understand their internal problems well enough to document and communicate them properly, including translating to the outsourcing company.

Hiring IT people internally to be there long term to really understand the agency/dept/org/staff and their problems from an IT perspective should be a requirement for any org. Without a competent person who understands IT and the business needs, and has a longer term investment in the business itself, there's little chance of being able to choose an appropriate outsourcing company to do the job (or indeed to define the job competently in the first place).


The key word is "portions". Likely the iPhone app you have in mind is a core business application and shouldn't be outsourced.

Procuring and maintaining your Exchange server, Active Directory, fileserver, HR and finance applications, networking, routers, wifi, security, desktops, laptops, printers, and help desk for all of the above, for a fixed annual price per seat? Oh, yes please. Of course you should still have a couple of competent guys who can make sure the contracts aren't stupid, so you don't end up with 50 MB quotas on the mail server and no way to increase them without huge penalties, etc. But that's 1-3 guys on the COO staff, not an entire department.


I agree with you. I have worked for a number of companies where you often hear "Our business is not software". Then they wonder why they cannot do all the projects that would allow them to move fast, provide better service, or save money. Any company that is large enough needs to make software a core competency.


It's interesting that a core competence of government isn't IT. How much does the government depend on IT to run its day-to-day business?

If the answer is 'not much,' I can understand the opinion that work should be outsourced.

If the answer is 'a lot,' then IT should be a core competence and the government should invest in acquiring in-house IT expertise.

I'd guess the answer is 'a lot'.


Currently government IT is operating under this idiotic idea that almost all IT should be contracted out because:

"We can fire contractors if they don't do a good job"

"Contractors stay up to date on technology"

"Contractors are cheaper because we don't have to pay for their government benefits"

So for one, contracting companies that do a bad job may lose out on future work, but the reputations of the big companies are never hurt enough by this to force them to improve. Even then, the fundamental problem is that government employees are very difficult to fire or even hold accountable, so people think it's easier to hire contractors. When really the problem that needs to be solved is accountability in government.

Secondly, even if developers who work for contractors stay more up to date, it's really irrelevant because the government is extremely slow to upgrade on anything. We're still running Windows XP in a lot of places.

Third, if the bill rate for a developer is about 3-6 times what you'd pay a full-time government developer, you're not really saving money on benefits. The thinking is that you only pay a contractor for a few years while they develop something for you, but IT is a continuing need. If your operation relies on software (which almost all of them do), you're going to be paying 3-5 times the cost per developer for the foreseeable future. Over time that certainly costs more than government employees.
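
A back-of-envelope version of that math, with every number assumed purely for illustration:

    gov_cost_per_year = 100_000    # assumed fully loaded federal developer
    contractor_multiple = 4        # mid-range of the 3-6x bill-rate claim
    years = 10                     # IT is a continuing need, not a one-off

    contractor_total = gov_cost_per_year * contractor_multiple * years
    employee_total = gov_cost_per_year * years
    print(f"contractor: ${contractor_total:,}   employee: ${employee_total:,}")
    # contractor: $4,000,000   employee: $1,000,000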


Strangely, based on the title I thought this was going to be about future startup trends. I.e., for the last 6 years or so we've seen a revolution in interface design as a competitive advantage when creating a new startup, but as the low-hanging-fruit opportunities are used up, a lot of the really meaty opportunities are going to be in software where there is a significant backend component performing a lot of heavy lifting and magic.

I'm not in the least bit surprised to see that a lot of the work and resulting problems with healthcare.gov are on the backend.

I just wish the government realized that we have all these amazing developers over in the Bay Area who can do a better job than the majority of those developers currently writing software for government contracts. I'm shocked no one in government has said to themselves, "What do we have to do to make our software problems accessible to the types of engineers working at the Googles and Dropboxes of the world?"


If you've ever done government contracting, you know that as things stand now a large fraction of those "amazing developers" wouldn't put up with its insanities when they have good alternatives. E.g. would you be happy filling out time sheets for exactly 40 hours, no matter what you did or should do?

I've done this twice. Once for a NASA project where my smaller company was doing work NASA judged as too hard for CSC; that my manager was one of the best ever certainly made a difference, as did getting any tool I asked for (ah, I was among other things cleaning up after another consultant to this company who abjectly failed). Not sure how long I would have kept doing it if that had been an option (the NASA consulting contract was eliminated altogether when Clinton downsized NASA).

The other one I've mentioned elsewhere in this discussion, and I quit in disgust after less than 2 months.


Having the best developers in the world is useless if you have incompetent management.

In this case, requirements were delivered extremely late, and they were still in flux up to a week before deployment. The system integrator went forward with only ONE WEEK of integration testing. There were 55 companies involved in this, not to mention all the state exchanges and other federal software systems that had to be integrated.

Better developers would have made a positive impact, but nowhere near enough to make this project a success.


If the managers are not developers then they are by definition incompetent to manage a software project like this. I have met a lot of competent managers who were not technical, but I have not met one who was competent enough to manage a project involving legacy systems with poorly written code and little to no testing or automation. If you are not a developer, you simply cannot understand these types of problems well enough to manage them.


We can infer from their mistakes, which GVIrish succinctly lists, that the managers from CMS on up to the White House weren't developers; otherwise they would have instinctively known that e.g. requirements had to be frozen for some time to give the developers time to finish implementing them.


"Unless it is enjoyable or educational in and of itself, interaction is an essentially negative aspect of information software. There is a net positive benefit if it significantly expands the range of questions the user can ask, or improves the ease of locating answers, but there may be other roads to that benefit. As suggested by the above redesigns of the train timetable, bookstore, and movie listings, many questions can be answered simply through clever, information-rich graphic design. Interaction should be used judiciously and sparingly, only when the environment and history provide insufficient context to construct an acceptable graphic."

"Interaction considered harmful", by Bret Victor http://worrydream.com/MagicInk/


The issue I see here is that the author of this article has marginal experience with federal contracting. "The people who wrote the code for these systems are long gone...they are prone to transaction timeouts" ... wrong, wrong and wrong. There are plenty of coders still around maintaining these systems, even the ones with obscure technologies like MUMPS, and they are running on rather robust hardware in huge datacenters.

Second of all, the government should NEVER outsource integration - the systems integrator requires an authority to manage other contractors that only the government is capable of holding.


Are you really sure about your first point? I have a state government counterexample here: https://news.ycombinator.com/item?id=6622223

As for the second, whatever might be right, we've been told that only the Pentagon has retained that ability for "medium sized" weapons projects. Anyone know anything to the contrary?


As an aside, when I hear about the problems they're having understanding the legacy data formats, it makes me wonder how far you could get with a high-powered, big data NLP system to "parse" the data. Sort of like how Google translate works.

There are rules, after all, they're just not written down. Why not let the computer figure them out, with continuous training from people until the computer's accuracy is high enough?

I suspect instead they tried to write parsers and trusted "the spec", which was never even right the day it was written down. :)


Can anyone verify this info? I was a bit surprised to see it wasn't better sourced than the comments of a previous Marginal Revolution post. (Nothing against MR, but this seems like huge news if true.)


Well, it's been established all these external data sources are being used.

So it's either on-line, maybe with caching (maybe someday), or using a periodically refreshed off-line copy inside Healthcare.gov, right?

I'll bet the House hearings made clear which is happening.


"Or IBM, which has become little more than an IT service provider to other companies?" Come now, that's absurd. Incredible things happen at IBM research.


Yes, and the problems are probably still worse than this.

Because integration means integrating _requirements_, leading to determination and prioritization of requirements. The current organizational structure doesn't seem to have anyone responsible for even coordinating that. But even if there were, they would need terrific knowledge of each agency's internal systems and legal requirements to determine what is and isn't necessary. And enormous authority, meaning both credibility and the power to dictate, to make their determinations stick.

Absent someone looking over the process, each agency will just "require" everything they might need or want. Leaving something out is risky, unless you know a lot about what you are doing and what will happen next and trust your management. Even if they had all those latter characteristics, bureaucracies don't do risk.

We all know how complexity grows exponentially. I bet the requirements document for this thing doesn't exist, and if it did it would be a clusterfuck of epic proportions.

Here is my wild theory: The possibility this could succeed died the day Tom Daschle withdrew his nomination for Secretary of HHS. Not that Daschle himself is special, though he is pretty bright. But he was slated for an unusual joint role, running HHS and a White House appointment running the health care effort. A position like that might have had access to the specialized knowledge to know what needed doing and the Presidential delegation of power to get it done. If IRS says "we must have X" and Daschle KNOWS they don't because a real expert knows they don't, he can get them in line or they can explain the problem to the President's chief of staff.

Here is the wild part. Daschle was canned, inexplicably, over a truly stupid tax issue (he didn't declare a car service as income), while others had far more serious issues waived (Geithner lied about CASH income despite instructions to declare it). Why? I speculate, precisely because the role he designed for himself was remarkably powerful, and effectively outside any review because of the complexity and specialization of its task. Wouldn't the President want someone with the power and knowledge to implement his most important policy? Yes, but not someone beyond his control. Politicians are about power. JFK didn't use the legislative skill of Johnson because he feared Johnson would serve Johnson's interest, not Kennedy's. Once Obama and his people realized that Daschle could become the effective President, and Obama something of a titular head of state, they shivved him.

It's all speculation. But it is all plausible enough to suggest why government doesn't work. Massively complicated organizations like Google work because their people are, by and large, working for a common purpose on tasks that are commonly understood, under common accountability. Government and bureaucracy are fundamentally divided in purpose and understanding. The components can be united by power and knowledge, but by its very nature the system resists establishment of such power and knowledge.


"I bet the requirements document for this thing doesn't exist...."

The NYT reported that in the last 10 months, CMS on up made 7 major requirements changes. We've heard that the one requiring up-front, thoroughly verified registration was made rather late, August or September. And we've heard from multiple sources that changes were made *through* the week before launch.

Someone in CMS probably drafted one ... well, maybe; they had no experience in the integrator role. I'll bet it has less of a relation to reality than any requirements document I read in my quarter-century career as a programmer....

Very interesting theory about Daschle. At the time, I thought it was "one scandal too many for an appointee" ... but as you note, the administration did let others through, with Geithner, as head of the department that collects taxes, being intergalactically beyond the pale. So, yeah, why drop Daschle? And I can't think of anyone in the Cabinet with as much juice as he had, excepting perhaps Holder, an old Clinton Justice Department hand, and the special case of holding over Gates at the DoD, which is not unheard of when you're in the middle of a war (while it was intraparty, Truman and LBJ did it).


"...and if it did it would be a clusterfuck of epic proportions"

I've worked on tiny government projects and their requirements docs have such proportions.


"... for some inexplicable reason the administration decided to make the Center for Medicare and Medicaid services the integration lead for a massive IT project despite the fact that CMS has no experience managing large IT projects"

Time Magazine's "Bitter Pill" article stated Medicare had an IT system that made them more efficient than private health insurance providers. Isn't such a system large enough?


Doesn't mean CMS was the integrator for any of the systems it uses, or any really big ones.

E.g. I don't rag on CGI Federal as much as some others, because they're responsible for CMS.gov and Medicare.gov, and the latter is more than "good enough for government work" ^_^.


I just hope that when this whole mess of a project goes online and is hacked to death, the NSA & friends won't use it as an opportunity to tell us, "See, this is why you need to give us bigger funds and let us spy on everyone - to protect you against those hackers (offensively)!" - even though the whole issue would be the bad programming and security of the system.


...and the website is not the worst part of the ACA ride we are now on.

Part of me has been ignoring a lot of the chatter around the ACA as potential right wing fabricated drama. Too much noise and bilateral bullshit being thrown about these days.

That was until a few days ago, when I learned our insurance has both more than doubled in cost and is scheduled for cancellation. Doubled and cancelled. All as a direct result of the ACA. Brilliant! To say this was shocking is an understatement. Our annual cost will go well past $15K.

There's a tragedy of unintended consequences, side effects and direct effects, being played out in the background that hasn't completely come to the surface yet. We certainly can't be the last family to get news of this kind. That means in the coming months it is likely hundreds of thousands, if not millions, of additional individuals and families are going to receive these dreaded letters. Apparently hundreds of thousands already have. Last week was our turn.

At some point this and other issues will be difficult to ignore. And they will dwarf the IT issues. The website, as much of a disaster as it is, is likely to pale in comparison to all of the other, non-IT, issues.

Some of what's happening is related to the incredible disconnect between Washington and technology. All you need to do is listen to some of these folks talk about the website issue to see how little they understand. I heard one senator say something akin to "they just have to re-enter a list of five million codes". In other words, the term "code" to some of these guys means "numbers" and that someone made a data entry error in copying "codes" into the website.

BSS (Balaji Srinivasan) covered some of this in his excellent Startup School talk:

http://www.youtube.com/watch?v=cOubCHLXT6A

A talk which, he comments, has been mutated by the modern equivalent of the "broken telephone" game into something far different from what he said.

https://news.ycombinator.com/item?id=6619068

I agree very much with his suggestion that an "exit" is required. Not meaning that we ought to pull up roots and go, but rather that the tech community ought to almost ignore the dinosaurs and go ahead and evolve a society more aligned to modern realities. In his talk he gives examples of various US cities that have been "exited" to some extent through technologies developed in the free market.

To some extent, it's an Innovator's Dilemma kind of a problem.

http://www.amazon.com/The-Innovators-Dilemma-Revolutionary-B...

The only way to make step changes is to do it well outside of the organization looking after the status quo, because that's all they know and that's all they can focus on.


"as potential right wing fabricated drama"

Perhaps you might want to start looking at reputable alternative news sources that e.g. would have told you you're among the 16 million whose existing policies were outlawed by the "Affordable Care Act" ... 3 and a half years ago.

To those of us who've been paying attention, this is no surprise at all ... maybe not even an unintended consequence. E.g., why aren't you part of some collective (company or whatever) getting group insurance? Yeah, that's going in the direction of "right wing" fever swamps ... but it's entirely consistent with a lot of undisputed things we know about Obama et al. E.g. could you be considered a modern-day kulak?

As for getting out of the game? I think not. E.g., assuming you and yours are less than 50 years old, one of the things you'll be paying for in whatever policies you get is the expensive 50-64 set (note: when someone spent some quality time with an ACA calculator, he could find no scenario where being married didn't result in a higher total bill). And you're not saying you can't afford to pay twice as much ... do you think the Federal government will make it easy for milch cows like you to escape?


This is a huge story that is really difficult to get a handle on, because the premium changes are affecting everyone very differently, and the patterns behind them are very tough to tease out. I myself saw my premium go down by over 20%, with better benefits. I've also seen a lot of accusations of lying on either side, which isn't helpful either.

Some factors to keep in mind:

Health insurance plans have to be ACA-compliant. Plans that aren't have to be canceled and replaced.

Just because your new plan is ACA-compliant doesn't mean it's on the exchange. Check the exchange, too, as the price differences could be extreme.

Some companies (Humana in particular) have been sending out inaccurate cancellation letters, with inaccurate "new premium" amounts. They got in trouble a few weeks ago in Kentucky for implying that people had to switch to the replacement plan even before the Kentucky exchange was up. Humana also recently got in trouble in Colorado and had to send out a second letter retracting their first one.

It's possible that some companies may be sending out inflated premium amounts to lower risk, while knowing they will have to send out refunds in a year.

Your old plan may not have been ACA-compliant. Specifically, it may have had annual or lifetime maximums (where the insurance company says, "Sorry, we have spent enough money on you; you are on your own now.") These are now prohibited. Even though this raises premiums, this is a good thing.

Some states have crappy competition - Wyoming for example. There's no one else there to encourage an expensive hospital to lower prices, which means expensive premiums. That's a failure of the free market, not a failure of the ACA. (I say that because if the solution to an ACA's woes would mean MORE ACA instead of less - say a public plan - then it doesn't really fit the narrative of the ACA causing these problems.)

And most importantly, the chief goal of the ACA was not to lower your premium! It was to lower health care costs over time (not compared to now, but compared to what they would otherwise be). It was to raise society's health over time (compared to now). And it was to protect people from going bankrupt from health care costs. (This is already successful thanks to the elimination of pre-existing condition rejections, and the elimination of annual/lifetime maximums.)


"It was to lower health care costs over time"

I'm not sure how this is possible. The other goals you mention are feasible, but only to the extent that they prevent cost saving.

"the chief goal of the ACA was not to lower your premium"

Well, it was intended to lower some, and raise others. People with preexisting conditions cannot be insured. They can only be subsidized with something falsely called "insurance". The losers in ACA are pretty much the same as with single payer: The young, healthy, and productive. Call it fair or unfair, the situation is much different than with single payer in that ACA is going to let people see just how much they themselves are getting boned.


I think the CBO has already projected a lowering of the future "cost curve". I guess we'll have to see how it actually plays out.


Only supports 200 simultaneous transactions, eh? Well, just put a big ol' queue on the front of it with a hard thread limit and tell people they'll get an email when it's ready.
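
A minimal sketch of that shape, with the limit and helper functions made up for illustration (the pool's internal queue holds the backlog while only BACKEND_LIMIT calls ever hit the mainframe at once):

    from concurrent.futures import ThreadPoolExecutor

    BACKEND_LIMIT = 150   # stay safely under the backend's ~200-transaction cap

    pool = ThreadPoolExecutor(max_workers=BACKEND_LIMIT)

    def call_mainframe(application):
        # Placeholder for the real legacy transaction.
        return {"ok": True, "application": application}

    def send_email(address, result):
        # Placeholder for the "you'll get an email" notification.
        print(f"mail to {address}: {result}")

    def submit_application(application, email):
        future = pool.submit(call_mainframe, application)
        future.add_done_callback(lambda f: send_email(email, f.result()))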


But it crashed at 200 in the testing the week before launch....

For all we know it crashed at 2 "truly simultaneous" attempts. I've watched more than one project run beautifully until they attempted a second simultaneous logged-in user, which is not quite the same thing, but....


The real problem is with the weird indirection of the US government providing payment to doctors/hospitals/pharma through subsidies, tax breaks, and tax penalties granted to or extracted from citizens that can only be spent at private insurance companies or mitigated by spending at private insurance companies who then, in turn, pay for your healthcare.

The sheer complexity of this rent-seeking indirection makes it impractical to keep track of the millions of distinct participant-instances that can play out in hundreds of different ways, all while integrating tens of massive legacy systems with new, flexible business logic (for a law in flux).

With single-payer, they could have scrapped the vast majority of this complexity.


The problem is government contracting. CGI will continue to get contracts.

The problem is a system where if you don't deliver you get paid millions of dollars and still get jobs.


Since when have they even been good enough at this to critique?

They're going to continue to suck royally, as royalty does.



