
A lot of posts here talk about open source projects or the relative simplicity of implementing something like this at its core. I agree with the idea, though I understand that emergency systems require a high level of reliability, similar to mil-spec hardware, I imagine. If there were any justification for that $88+ million price, that would be it. Looks like that went right out the damn window. All systems have bugs, I think we can all agree on that, but I'd like to know that they followed a sensible procedure for rolling this out. Did they test it prior to release? How did they test it? Did they bother with unit tests at the lowest level? Did they perform thorough integration tests? Did they consider rolling the new system out incrementally (if possible) so that they wouldn't be completely left out in the rain when it failed? What contingency plans, if any, did they build into the system for when it fails? What was wrong with the old system, and what did the new system promise to deliver?
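
To be concrete about what I mean by an incremental rollout with a contingency plan, here's a rough sketch in Python. It's purely hypothetical (the function names, routing scheme, and percentages are mine, not anything from the actual Intergraph deployment): gate a small, deterministic slice of call centers onto the new dispatch path and fall back to the legacy path the moment it fails.

    import hashlib

    ROLLOUT_PERCENT = 5  # start small; ramp up only as confidence grows


    def legacy_dispatch(call: dict) -> str:
        # Stand-in for the old, battle-tested dispatch path.
        return f"legacy system dispatched {call['id']}"


    def new_dispatch(call: dict) -> str:
        # Stand-in for the new dispatch path under rollout.
        return f"new system dispatched {call['id']}"


    def in_rollout(call_center_id: str, percent: int = ROLLOUT_PERCENT) -> bool:
        # Deterministically bucket each call center so the same centers stay
        # on the new path while the percentage is ramped up.
        bucket = int(hashlib.sha256(call_center_id.encode()).hexdigest(), 16) % 100
        return bucket < percent


    def dispatch(call: dict, call_center_id: str) -> str:
        if in_rollout(call_center_id):
            try:
                return new_dispatch(call)
            except Exception:
                # Contingency: never drop a call just because the new code failed.
                return legacy_dispatch(call)
        return legacy_dispatch(call)


    if __name__ == "__main__":
        print(dispatch({"id": "call-001"}, call_center_id="example-psap-1"))

Even something this simple would mean the whole region isn't riding on the new code at once, and a unit test around in_rollout() and the fallback branch is trivial to write.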

These are a lot of questions I wish more journalism would answer. On another note, it's interesting to think about open source. As some have mentioned, it's difficult to imagine the feasibility of actually doing an open source version and getting it adopted (though that may be because I don't know how EMS stuff is structured). Still, I think everyone would benefit from a source code release of this system so we could actually crowdsource a better friggin' version, because this one clearly isn't doing its job... since May.




I work in the call center branch of a fairly large corporation. Obviously we're not saving lives and dispatching fire trucks, but for some reason it seems like we're dead set against doing incremental rollouts and user testing in any meaningful way.

Don't get me wrong, I see things get to user testing phases and we have reasonable ways of logging bugs and problems, but almost every project I've seen has kept on trucking to meet an arbitrary deadline regardless of how poorly the testing goes. Then the system hits the floor, chaos ensues, and everyone runs around in a panic until someone manages to staple together a solution.

Kind of depressing that it looks to be the same in government, with calls that are a little more important than ecommerce sales.

I want to say that the guy who is in charge of the 911 phone network spoke a few times at a call center conference I recently attended.


"but almost every project I've seen had kept on trucking to meet an arbitrary deadline regardless of how poorly any of the testing goes, then the system hits the floor"

And the reason for this is simple: the one person who could pull the plug on a project of this magnitude won't, simply because s/he doesn't want to have to answer for having already dropped $88 million with nothing to show for it. At that point there's a single person to blame, whereas a failed system that gets implemented suddenly allows for MANY people to be thrown under the bus.


Totally agree. Even in projects at work, the worst possible thing people think they could do is admit that something they're working on doesn't work right the first time.

I will say, though, that on most projects I've worked on that are failing at some point, we do have something to show for it. We just have things that need to be fixed. Even a totally failed attempt is something you know doesn't work.

It seems like people equate admitting any failure with admitting total failure.

It's one of the only reasons I like my job. My boss is comfortable saying "this didn't work, that's fine, let's figure out why it didn't work and make something that does."

I recently had a project to reduce a certain call type coming into our call centers. We implemented a new system in one part of the company after doing some research into the call drivers, and did a follow-up a few months later. My follow-up showed nothing had changed. I don't have any problem saying that. It just means we need to look into which part of the plan failed, why, and how we can fix it. Either way we'll have a better understanding of the problem. I'd rather do that than fudge the follow-up, keep the problem, and pretend everything was fixed.


"My boss is comfortable saying "this didn't work, that's fine, let's figure out why it didn't work and make something that does.""

Taxpayers are much less forgiving about "failures" and tend to trot out pitchforks and torches at the drop of a hat. They're much more guided by perception than by actual facts. So for them, there's very little room for a distinction between utter failure, failure, almost failure, not quite a failure, maybe a failure, no failure...


I don't actually know the answer to this question, as I'm not a particularly politically savvy person, but are they really that much less forgiving?

Maybe I shouldn't say my boss is 'comfortable' with failure. He's just got the fortitude to deal with the realities of most situations and the track record to back it up.

Most of the managers and bosses I know of deal with projects in exactly the same way this 911 project looks to have been handled. They push, push, push and don't accept any hint of failure because they think others' perception of them would be unfavorable if they admitted some failure and tried to fix it. Of course, in this case it's the perception of management and upper management, not a voting public. I have to say, though, their opinions don't seem much less fickle at times than the voting public's.

Anecdotally, it doesn't seem like public works/government projects are immune to delays and push-backs. Whether there's a public backlash seems to depend on the type of project?

I'm really just spitballing about something I'm not well informed about at this point.

The main idea being that there's not a whole lot of forgiveness in the corporate environment I work in, but I stick with the boss I have because he's got the reputation and fortitude to own problems, when necessary, instead of throwing everyone under the bus.


Even worse, this is a small part, at least in dollars, of a $2 billion total upgrade of emergency services. And it's the "customer's" entry point into the rest of those systems.


I recently attended a lecture about developing shutdown software for a nuclear power plant. The development process involved three key stages: Formal Requirements Documents, a Software Design Document, and finally, Coding. Each step in the process is accompanied by formal verification and an audit that produces a Hazard Analysis Report. The code also goes through review and verification.

I wonder if Intergraph employed a testing plan quite as thorough.

Development cycle diagram: http://i.imgur.com/RaBSNHN.png (from the below PDF, page 4)

The entire paper: http://procon.bg/system/files/28.18_Lawford_Wassyng.pdf


I think that mil-spec or nuclear QC standards are probably overkill for civilian systems such as 911. Granted, there should be some high standards in place, but the systems are a) not secret/classified and b) not as high-liability. Yes, failure to handle a 911 call is a serious problem, but not on the level of a failed nuclear reactor shutdown in terms of cost, the number of people affected, and, as a last resort, the ability to fall back to a paper-and-pen backup system.


I agree that this process might be overkill for a civilian system, but it seems to me that an ongoing failure to respond efficiently to 911 calls (over the course of weeks, months, years) is quite worthy of a development process that reflects the life or death nature of the software's purpose. Regardless, hopefully Intergraph fixes their software quickly so that the operators aren't put under more stress than they need to be.



