Faults in Post Office accounting system led to workers being convicted of theft (bbc.co.uk)
289 points by scratchy_beard on Dec 15, 2019 | 103 comments



The insistence that a machine is perfect when that's impossible is also key in some of the cases Ross Anderson has looked into where banks insist that their customer is a crook rather than a victim so as to avoid compensating the customer.

Here's one about a government "watchdog" blithely accepting that something doesn't happen because the banks say it doesn't, even though victims say it does: https://www.lightbluetouchpaper.org/2015/07/29/fca-view-on-u...

Ross and his team did lots of work on flaws in EMV (Chip and PIN), either in implementation or in design oversights - and there are a bunch of cases where a court is told the only explanation for the evidence is that the accused is committing fraud, only for Ross, called as an expert witness for the defence, to say something like: er, well, that's true if, say, your Random Number Generator actually generates random numbers, but here's a sample from a real EMV terminal like the one in this case: 49051, 49052, 49053... those don't seem very random to me.


Whenever I hear about large outsourced IT project disasters in the UK I go to check Computer Weekly, and sure enough, they’ve been investigating this one for well over a decade.

https://www.computerweekly.com/news/252475310/Post-Office-se...

It’s the same large IT contractors whose names seem to come up again and again in these public cases


That's because winning a large government contract requires a great deal of expensive jumping through hoops. Those contractors treat the hoop-jumping as their core competence: what they do after the contract is awarded is much less important to their bottom line.


Thank you for the link. Reading through the articles, I am not led to the conclusion that the people accused are clearly exonerated, or that some smoking-gun bug in the accounting software caused a huge accounting error. I see a decade-long legal and political battle, in which the existence of any software bug, for example a logout bug, was used to cast doubt on the entire situation. In the end, the drawn-out case was settled.

I of course don’t know the full facts of this case at all, and I don’t doubt the software in question is shambolic, but I would hesitate before lending credence to a simplified version of the story.


For this specific case, probably the best analysis is at https://www.postofficetrial.com/2019/01/articles.html


Who are these IT contractors?


There's Raytheon, who the government fired after they fell way behind schedule on e-borders.... and the government had to pay £220 million to fire. [1]

There's CSC, who were paid £10 billion for a failed NHS IT project [2] - and Accenture and Fujitsu who made billions from the failed 'NHS Connecting For Health' project [3]

There's HP Enterprise, IBM, Accenture and BT who have so far made £12 billion on the Universal Credit IT system which was supposed to cost £2 billion. And it's still beset by problems, of course. [4]

Obviously, there's a complex set of problems and incentives at work here. For one thing, these contractors are often paid by the hour - so they make more money if a project is late or unreliable. For another, there aren't any companies big enough they could offer a fixed-price contract without going bankrupt. For another, the government generally can't provide a specification that's succinct, clear and watertight because things like benefit systems are so complex, figuring out the requirements is 70% of the work on such a project. For another, most other sectors can lock out customers who are expensive to serve - good luck using Amazon if you're illiterate, or don't have a bank account, or don't have an address - whereas state health and benefit projects can't.

[1] https://www.theguardian.com/uk-news/2014/aug/18/uk-bill-ebor... [2] https://www.theguardian.com/society/2013/sep/18/nhs-records-... [3] https://www.bbc.co.uk/news/uk-politics-24130684 [4] https://en.wikipedia.org/wiki/Universal_Credit#cite_ref-guar...


To add to this: a common problem with these projects is that, even though the requirements are vast and extremely hard to pin down, stakeholders insist on pinning them down up-front anyway, because that's how it's always been done.

This is then compounded by a procurement process that sets these requirements in stone as fixed deliverables, as opposed to offering both sides the flexibility to reassess after interim sprints / milestones / etc.

Fortunately, groups like GDS [1] in the UK, CDS [2] in Canada, and 18F [3] in the US are helping shift this mindset slowly but surely. That's where you get initiatives like agile procurement [4]. Procurement aside, these groups are also at the vanguard of introducing modern tech stacks / tools, user-centered design, and agile project management to the public sector. (Yes, these things exist in government, and these groups are really passionate about making sure their adoption goes beyond mere buzzwords.)

Side note: many of these groups are continually hiring, and they've been around long enough by now - and had positive enough results - to gain some clout. If you're tired of selling eyeballs to advertisers, there's never been a better time to use your skills in service of the public good. It doesn't have to be a "lifer" thing - CDS, for instance, has a number of 2-year rotating positions.

Source: I'm an ex-fellow with Code for Canada who's continuing to work in the public sector :)

[1] https://www.gov.uk/government/organisations/government-digit... [2] https://digital.canada.ca/ [3] https://18f.gsa.gov/ [4] https://www.canada.ca/en/shared-services/corporate/doing-bus...


>Obviously, there's a complex set of problems and incentives at work here. For one thing, these contractors are often paid by the hour - so they make more money if a project is late or unreliable.

You are right about a complex relationship with numerous contractual obligations, SLAs, etc. in place, usually loaded in favour of the big contractors, but it is extremely rare for them to be paid by the hour. They might parachute in individuals or a team of IT contractors/consultants and pay them on an hourly basis, to clean up some mess or cover up their ineptitude and provide a level of plausible deniability, which almost always involves political machinations.


Had to dig through the archive a bit, but this one calls Fujitsu its 'IT partner':

https://www.computerweekly.com/news/252459274/Post-Office-co...

I don't know why it's not more prominent; Wikipedia's 'Horizon (IT system)' article doesn't say who created it either. Between the 19th century and the Second World War, the Royal Mail (General Post Office) employed engineers and researchers, established telephone service (British Telecom later spun out), ran telegrams, and built Colossus, the computer that helped crack Enigma. But I imagine those days are over; I doubt Horizon was developed in-house.


Colossus was used to crack Tunny (Lorenz cypher machine) used by German High Command, while the Bombe machines were used to attack Enigma.


Thank you, you're right of course, I couldn't recall 'Tunny' and hoped I'd get away with 'helped' as general contributory effort in that line of work with 'Enigma' as something everyone's heard of. Probably should have said 'used at BP' or something instead though.


Well, even earlier private entrepreneurs were running the London Penny Post (https://en.wikipedia.org/wiki/London_Penny_Post) before it got nationalized as a cash cow for the government (and divers other reasons).


I meant that it was doing technologically advancing work around that time rather than that it was necessarily entrepreneurial.

I don't know if that was the case earlier, whether it involved itself with the R&D of better/faster road or railway carriages, for example.


ICL/Fujitsu in this case (now Fujitsu Services).

It could just as easily have been Capita (informally known as Crapita here in the UK).


Outsourcing armed forces recruiting to Capita went as well as could be expected...

https://www.theregister.co.uk/2019/01/15/capita_defence_recr...


Always does.

They put out an RFP, and because they are a large org they assume only another large org can actually do it, which narrows the list down to 3-4 companies. Even though they've all previously proven many, many times that they can't be trusted, one of them gets the contract.

A decade and hundreds of millions of pounds of fuck-ups later, it sorta works, or it ends in questions in Parliament.

Then a different gov branch needs a system, puts out an RFP..and on it goes.

Honestly amazes me anything works.


This reminds me so much of the novel The Trial by Kafka. The sinister feeling of a machine beyond anyone's control or comprehension was a commentary on authoritarianism and bureaucracy, but it fits just as well, if not better, to our age of computers making decisions in ways that will become increasingly hard to scrutinise. I hope this outcome finds its way into case books worldwide as a warning about placing too much faith in computer testimony over old-fashioned human character witness and jurisprudence.


Software is now in the walls; it is the infrastructure. If we don't have an answer for how to make it safe and reliable by the time regulators ask how to fix the industry, anything could happen. We have spent way too much time on speed of release and nowhere near enough on accuracy over the past decades.


I've worked with Fujitsu in the UK. I consider them pretty terrible and I highly doubt said software was written to anything that would be considered a good standard.

But it's not the software that failed to notice substantial amounts of "theft" after implementing a new IT system, it's not the software that put the outcomes of people's lives on the line, it's not the software that chased repayments and convictions, and it's not the software that completely failed at introspection and, it's alleged, swept this all under the carpet.

If this is actually all true and the world was perfectly just, we'd likely be seeing charges against Post Office management of the time, setting an example for the rest of them going forward, but I'll believe it when I see it.


Precisely so.


> by the time regulators ask how to fix the industry

When they do, they'll ask these companies, not HN.


It also brings to mind the section of The Castle where the chairman is explaining to K. how it came to be that a surveyor had been summoned. A miscommunication had led to a string of events that were never questioned due to the unwillingness of the authorities to even consider that an error had been made. The official investigating always assumes that the person he's dealing with is a scoundrel.

That section of the book always brings to mind software development and bug-hunting. Kafka seems to have been immensely methodical, I think he'd have made a great dev.


I find this so amazing from a psychological point of view: one sub-postmaster committing fraud seems entirely possible. But surely someone in the Post Office, or the CPS, or the police, or the courts should have thought: "Weird how dozens and dozens of people, all the same rank, all committed fraud and all deny it even when caught, and we never recover the cash from their accounts. I wonder what it is about the Post Office that makes its employees 10000 times more fraud-inclined than any other business."

I mean, it was clear their systems were broken just because it was apparently so easy to commit fraud, long before anyone actually appealed etc.


I don't think the CPS had anything much to do with this - these were private prosecutions by the Post Office themselves. Which also meant that they could get away with things like requiring people not to criticise the Horizon software in order to avoid going to jail.

Also, it's not just that they thought hundreds of people, all of the same rank, all committed fraud despite never working out what actually happened. The larger Post Office branches are directly owned and operated, employing a number of staff. They are equally convinced that none of those staff did the same thing, even though there's apparently a whole bunch of unresolved accounting discrepancies around those branches and their controls seem entirely inadequate. The difference is that sub-postmasters are basically liable by default.


They probably thought how great it was that the new software system was successfully detecting lots of fraud that had probably been going on for years.


Why surely? I assume each member of each organization, like most bureaucrats, stuck to their narrow, comfortable purview and never considered the big picture.


This has been an ongoing story in Private Eye (UK investigative/satirical mag) for years.

It's shocking that there has been nothing done about it, they can hardly claim they didn't know.


The scary thing is that the Post Office kept denying there was a problem, stating words to the effect of "it is impossible for the system to be broken", and the courts took this as gospel.

Any developer knows that any piece of software can and does have bugs that only cause problems in rare and unusual circumstances. This is something that the Post Office and the courts should have understood and considered, rather than just assuming the system was infallible.

It's concerning for the future as we rely more and more on computers. If the courts, police etc. just assume the computer is correct then we have a problem. If the computer says something is amiss, how do you prove or disprove it? Especially when AI is thrown into the mix and the decision process isn't totally clear.


The journalistic criterion here isn't the breaking of the news, but the disposition of an ongoing court case.

I find stories dating to:

2013: Daily Mail (apologies): https://www.dailymail.co.uk/news/article-2395964/Exposed-The...

2018: Daily Telegraph: https://www.telegraph.co.uk/technology/2018/11/07/post-offic...

2017: Financial Times (archive): http://archive.is/8RzKm

It's only the last which provides any relevant information on the name of the system or the vendor(s) involved: "Horizon", and Fujitsu, respectively.

Whether or not there will be any culpability for Fujitsu, and why or why not, would be of interest to me.

See my recent comment on organisations (and not merely corporations) as risk-externalising systems:

https://news.ycombinator.com/item?id=21769923


Came here to post this. It's been known about for years, and just allowed to bubble and simmer, while Fujitsu et al continue to profit


Isn’t this more the fault of the justice system than Fujitsu?


I'd say it's more about regulatory capture, going with the lowest bidder in tenders and non-experts making decisions


Non-experts and bad people to boot. Where are the ethics?


It depends. I'm not exactly sure what the situation is here, but a lot of those outsourcing "transformations" look really strange. A lot of the time it's hard to prove that costs have gone down, and quality certainly has. One wonders if these outsourcing companies are not somewhat unethically hiding information.


As much as anything the big outsourcing transformation projects are about "reducing or offloading risk".

What that actually means is the senior managers who commission them want somebody else to point the finger at if anything goes wrong. "Nobody ever got sacked for buying IBM."

Plus they have a nice big transformation project on their CV.


Hopefully this will change to "If you ever bought Horizon, you are going to jail."... but I guess that I am too much of an optimist here?


Yeah in the UK there seems to be absolutely no accountability for the repeated and disastrous outsourcing from public to private sector in particular.

By the time the problems come to light the decision makers are long gone on to their next brilliant transformation gig at another gullible organisation.


While working in a large private sector ERP project that was going off the rails a wise colleague commented that "Companies get the suppliers they deserve"....


This is not surprising of course. But regardless of the degree of incompetence at the company and its suppliers, isn’t this a failing of the courts for putting someone in prison wrongly?


Of course. There is more than one villain here.


What makes you think there's litigation?


We read an interesting article about it: https://www.bbc.com/news/uk-england-50747143 It was posted to HN recently.


And made worse by the historical treatment of GPO employees accused of theft - the internal security division (with legal powers) was feared.

The phone side wasn't as bad, but I only found out about it when I read the internal policies on investigations and commented on how strict they were.

Someone older than me explained that in the bad old days people used to fall down stairs occasionally to expedite confessions - which helps explain the bad industrial relations at RMG to this day.


Generally, IT systems cannot be inspected or audited. Because they use the wrong data architecture.

One of the core problems is the notion of single source of truth, meaning database records are updated in place. IT needs to adopt an accountant's bookkeeping worldview.

The Correct Answer is to model organizational behavior as events, capturing state changes in an event log. What accountants call ledgers.

Oversimplifying:

Ban use of SQL UPDATE, only permit SQL INSERT. (Translate to equivalent operations for your persistence engine.)

Do not use separate "Truth" and "Historical" (aka Audit) tables. Unify them into a single table.

Do not query (SELECT) for what is assumed to be the single best record (SBR). Rather, query for all relevant events (records) and reduce to determine the correct answer at this time.
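A minimal sketch of these rules in Python with SQLite (the schema and names are illustrative, not from any real system): writes are INSERT-only, and the current balance is derived by reducing over all events rather than reading a single mutable record.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE account_events (
        account_id  TEXT NOT NULL,
        delta_pence INTEGER NOT NULL,   -- signed change; never overwritten
        recorded_at TEXT NOT NULL       -- when we learned of the event
    )
""")

def record(account_id, delta_pence, recorded_at):
    # INSERT only -- there is deliberately no UPDATE path.
    conn.execute("INSERT INTO account_events VALUES (?, ?, ?)",
                 (account_id, delta_pence, recorded_at))

def balance(account_id):
    # Reduce over all events instead of trusting a "single best record".
    (total,) = conn.execute(
        "SELECT COALESCE(SUM(delta_pence), 0) FROM account_events "
        "WHERE account_id = ?", (account_id,)).fetchone()
    return total

record("A1", 500, "2019-12-01T09:00")
record("A1", -200, "2019-12-02T10:00")
print(balance("A1"))  # 300
```

Because the table doubles as its own audit trail, "truth" and "history" never disagree: there is nothing to reconcile.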

--

Forgive me for not articulating these ideas very well. My team worked through all these issues in the healthcare space. What seemed obvious to us, like banning SQL UPDATE, was considered radical by pretty much every one else.

If any one else is writing about these ideas, please share.

And, sorry, no, the new cupcake frosting Event Modeling (Event Sourcing?) is not what I'm talking about.


All good advice. A hybrid can also be useful.

In some accounting systems they have the ledger table (a list of every transaction) and also a balance table (a running total).

The balance table gives you fast access to a total (by account, branch etc) without having to add up the ledger (which may be very large), and upon inserting the ledger entry it is UPDATEd in place by the same amount. If this is done inside the same transaction it should be safe.

There is then a periodic audit process to lock the transaction ledger, add it up and check it against the balance.

This gives the user a full sight of the transaction history and the fast response of totals.
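A minimal sketch of that hybrid in Python with SQLite (schema illustrative): the ledger insert and the balance update share one transaction, and a separate audit step re-sums the ledger against the cached total.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ledger  (account TEXT, amount INTEGER);
    CREATE TABLE balance (account TEXT PRIMARY KEY, total INTEGER);
""")

def post(account, amount):
    # Ledger insert and balance update in ONE transaction,
    # so the running total can never drift from the ledger mid-flight.
    with conn:
        conn.execute("INSERT INTO ledger VALUES (?, ?)", (account, amount))
        conn.execute("""
            INSERT INTO balance VALUES (?, ?)
            ON CONFLICT(account) DO UPDATE SET total = total + excluded.total
        """, (account, amount))

def audit(account):
    # Periodic check: re-sum the ledger and compare with the cached total.
    (from_ledger,) = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM ledger WHERE account = ?",
        (account,)).fetchone()
    (cached,) = conn.execute(
        "SELECT total FROM balance WHERE account = ?", (account,)).fetchone()
    return from_ledger == cached
```

If audit ever returns False, something has written to one table without the other, which is exactly the class of discrepancy you want surfaced rather than silently blamed on an operator.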


The other thing you can take from accountancy is double-entry...

A toy example: money was spent on x, so it gets charged to the x account and removed from the bank. More correctly: debit x-expense, credit bank. Credits are by convention negative numbers. Now no matter what happens your accounts always need to balance, i.e. the entire set of ledgers adds up to zero. Audit checks are now trivial: if either your balances table or your transactions table does not sum to zero, alert somebody.

It has been working for accountants for 2000 years! https://en.m.wikipedia.org/wiki/Double-entry_bookkeeping_sys...
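The toy example above, sketched in a few lines of Python (names illustrative): every transaction is a set of legs that must sum to zero before it is accepted, so the whole ledger sums to zero by construction.

```python
# Toy double-entry posting: credits are negative by convention,
# and a transaction is rejected unless its legs sum to zero.
def post_transaction(ledger, legs):
    if sum(amount for _, amount in legs) != 0:
        raise ValueError("unbalanced transaction")
    ledger.extend(legs)

ledger = []
# "Money was spent on x": debit x-expense, credit bank.
post_transaction(ledger, [("x-expense", 1000), ("bank", -1000)])

# Trivial audit: the entire ledger must sum to zero.
assert sum(amount for _, amount in ledger) == 0
```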


So the balance table is like a cache. I'm okay with that strategy.

On my To Do list is mash up a JDBC frontend for ehcache.


Yes, mostly up to date, mostly correct, fast to access, easy to prove, not the source of the truth.

A pretty good analogy. In paper days you would add up your various ledgers (perhaps each in its own book, sales, purchases, cash etc) at the end of each day and check that the whole thing balanced. You didn't get to go home until it did!


> The Correct Answer is to model organizational behavior as events, capturing state changes in an event log. What accountants call ledgers.

That's one step. Another one is to sign the relevant data, and make it available for audit.

Somebody already talked about a blockchain... Well, the chain isn't actually necessary, as long as all interested parties have a receipt. It also adds nothing if all the signatures stay with a single party.


As I understand it, part of the problem is that this was a ledger-based system - just a really terribly implemented one which gave the sub-postmasters who were liable for any shortfalls terrible visibility into what the transaction history was, why corrections were being made, whether their own staff could be defrauding them, etc.


Well darn.

Technology can't fix governance problems.


Very interesting. I'm not much of a DB person, but it sounds similar to the idea of "stateless". In other domains this is very important, eg. pure functional programming. It's just good programming practice in general to keep state to a minimum.


It's more that you have a table like so:

    ID VALUE CHANGED
     1     1  date
     2     2  date+n
etc...

Where you never actually update id 1 in place, and set up views etc. to wrap over the table(s) that ignore the history.

I use this all the time in PostgreSQL; you'd be shocked how often having the history of a specific entry getting updated comes in handy. Even for application logic (like: slow down, why is this one table getting updated 1000 times a second? Let's reject updates for a bit, because something has gone haywire).

Highly recommend it, BUT if you implement it you need to also deal with the history accumulation. I generally just set up window functions to limit the history to N in size, or set up some cleanup job to clean out really old entries at a certain point.

I don't work in healthcare but like the GP I consider UPDATE in place a generally bad idea as well. You can turn on audit logging, but the issue with that is the logs are always in another place, so nobody ever looks at them until they need to, and only then do you find out the audit logs weren't set up right/etc.

But yes just like functional programming, don't mutate in place, append immutable values to the end. Same idea.
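A minimal sketch of that table-plus-view pattern, here in Python with SQLite for self-containment (the parent uses PostgreSQL; names are illustrative): the view hides the history by returning only the latest row per id.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entry_history (
        id      INTEGER,
        value   INTEGER,
        changed TEXT
    );
    -- The "wrapping view": latest row per id only, history ignored.
    CREATE VIEW entry AS
        SELECT id, value, changed FROM entry_history h
        WHERE changed = (SELECT MAX(changed) FROM entry_history
                         WHERE id = h.id);
""")
conn.execute("INSERT INTO entry_history VALUES (1, 1, '2019-01-01')")
conn.execute("INSERT INTO entry_history VALUES (1, 5, '2019-02-01')")

# Application code reads the view and sees only the current value.
print(conn.execute("SELECT value FROM entry WHERE id = 1").fetchone()[0])  # 5
```

An "update" is just another INSERT into entry_history; the view's answer changes while every prior value stays queryable.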


Thanks. Exactly. With your simple example, I now don't know why I've had such a hard time articulating it.

Since "events" arrived whenever, we also added a "received" timestamp. Great for debugging. Necessary for compliance. ("What did you know and when did you know it?")

We had some clever indices on the timestamps, so queries remained performant. (Sorry, I'd have to look up the details, names of things.) Not being a DBA, I eventually learned the trick was to properly normalize to avoid GROUP BY, and so forth.

Thanks again.


What's the key difference between what you write about and event sourcing? Is just that you object to having any kind of reified application state outside of the event records?


As far as I can see, the GP is describing event sourcing. In a way that people can easily understand why it's important.


So how is what you describe different from Event Sourcing?


Ya, I should probably: show by example, not dismiss Event Sourcing so readily, not get sucked into a No True Scotsman argument, or all of the above.

Fowler's high level overview is https://martinfowler.com/eaaDev/EventSourcing.html

Which doesn't cover the persistence, the actual data model. In particular, I disagree with the notion of a separate Audit Log.

More recently, anyone using a message queue or bus has called their goo Event Sourcing. Which is mostly just shuttling around analytics. Not modeling organizational behavior as events. Not considering how to design the event sinks (RDBMS tables).

Someday I'll write up what I did for healthcare records. Sorry I haven't already.


Thankfully big actors seem to have taken an interest in blockchains, which seem to be the right data architecture!


At the time, I just called it logging. I did investigate tamper evident logs (sign records with rolling hashes) for compliance and legal use cases.

Today, I'd consider blockchains for shared ledgers (logs). eg Across organizational boundaries.
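A minimal sketch of such a tamper-evident log (rolling hashes) in Python, illustrative only: each record's digest covers the previous digest, so altering any earlier entry invalidates everything after it.

```python
import hashlib

GENESIS = "0" * 64  # placeholder "previous hash" for the first record

def append(log, record):
    # Each entry's digest chains over the previous entry's digest.
    prev = log[-1][1] if log else GENESIS
    digest = hashlib.sha256((prev + record).encode()).hexdigest()
    log.append((record, digest))

def verify(log):
    # Recompute the whole chain; any edited record breaks it from
    # that point onward.
    prev = GENESIS
    for record, digest in log:
        if hashlib.sha256((prev + record).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
append(log, "branch 42: +500")
append(log, "branch 42: -200")
assert verify(log)

log[0] = ("branch 42: +999", log[0][1])  # tamper with an early entry
assert not verify(log)
```

This makes tampering evident, not impossible; for the cross-organisation case, each party keeping a signed copy of the head digest is what actually pins the history down.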


Terry Gilliam nailed this well in Brazil. https://youtu.be/6OalIW1yL-k


"Tuttle? His name is Buttle. There must be some mistake"

"We don't make mistakes"

The problem is when these aren't verified and there's no support for contesting bad data decisions.

When it is combined with authority and security it is even scarier in a dystopian way like Brazil or Idiocracy.

There will be mistakes, there needs to be a pathway for support and verifying this data. Almost like a right to trial but for data decisions made about you. And not the trial Not Sure got in Idiocracy.

We already have bad situations like this in credit tracking and identity theft, it gets worse when enforcement and oversight is favoring data over detective work.

Recourse on data decisions or algorithm decisions would be an excellent addition to the Bill of Rights in a much needed "Right to Data" amendment.


This clip contains the actual “mistake”, ironically a bug: https://youtu.be/XGge4rj4v_Y

Edit: idiocracy is a great film. However watching these dystopian films as a software engineer is somewhat stressful :)


Yeah love Brazil and Terry Gilliam flicks. The Zero Theorem calls out the life of a developer pretty well and has some great satire and fantasy on our current technology and data driven world. Gilliam is strongly anti-authoritarian and comedic so it lends perfectly for satire of modern technology.

Mike Judge being an engineer went on to make lots of great satiric and really truthful comedic views of technology in Idiocracy, Office Space and Silicon Valley.

Gilliam and Judge have an excellent knack for nailing the biting areas of this and making fun of it fantastically with satire.

All you can do is laugh knowing how scary it can get, knowing how little say engineers actually have to be ethical and how unethical and incorrect some of these systems pushed can be.

We should take decisions by systems as we do foreign policy: "Trust, but verify". This is more difficult when verifying the decision process of an algorithm is unclear. At least giving someone a required speedy chance to deal with a data/system decision that affects their life.


I don't think they're one bit funny but very scary and likely better at future prediction than most think tanks.


If you can predict the future, you'd be rich.


I saw Brazil at uni shortly following its theatrical release.

I saw it again about a year later.

Which made me realise just how mind-warping the film was. Another three decades and some on, I realise, largely in a truthful way.


> "Financially it really wiped me away. I had to declare bankruptcy. They said if I didn't pay it back they'd take me to prison. They said I was the only case," he said.

Were they the first ones, or did the Post Office investigators lie in this case? And if they lied, why are they not in jail for, I don't know, fraud (lying to coerce payment), obstruction of justice (his mother was convicted), or some sort of tampering with evidence?

They shouldn't get to just settle this kind of thing.


If anyone is interested in a more in depth view of this story, Nick Wallis is a freelance journalist who has been covering this for years. He crowdfunded full-time coverage of the trial and posts over here: https://www.postofficetrial.com/


Reminds me of the Phoenix fiasco from Canada of a similar magnitude.

https://en.wikipedia.org/wiki/Phoenix_pay_system


Worth noting that the UK Post Office was privatised in 2013 - though I have no idea whether this had any bearing on the events covered by this article.

Edit: My mistake - I was confusing Royal Mail and the Post Office!


This shit show started years and years before that, and whether it's private or public seems to have no effect; it's large organisations in general who outsource, and whether they answer to shareholders or the electorate seems to make no difference.


It wasn't privatised. That was Royal Mail. They are independent.


It doesn’t. It was a shit show well before that.

I think they just forked out a load more cash to fix stuff as well.


A few additional details on the system in question in this 2017 Financial Times piece (archive.is copy): http://archive.is/8RzKm

The system was "Horizon".

The vendor, Fujitsu.

Horizon warrants its own Wikipedia article: https://en.wikipedia.org/wiki/Horizon_(IT_system)

Fujitsu have their own set of propaganda^Wwhite papers on the system:

https://www.fujitsu.com/uk/Images/postoffice-customer-experi...

https://www.fujitsu.com/downloads/WWW2/UKpostoffice2.pdf

https://www.fujitsu.com/downloads/WWW2/UKPostOffice.pdf

An excellent and detailed bit of coverage in this blog (see meta/soapbox below):

https://becarefulwhatyouwishfornickwallis.blogspot.com/2013/...

Nick Wallis (himself a BBC presenter and journalist) spins out the Post Office / Horizon story into its own site: https://www.postofficetrial.com There are several posts on the settlement, one of which looks at just what the claimants are receiving for decades of injustice -- it works out to an average of £47,101 each.

https://www.postofficetrial.com/2019/12/further-questions.ht...

Details on the trial:

https://www.postofficetrial.com/2019/03/horizon-trial-day-8-...

The Register, biting the hand...

https://www.theregister.co.uk/2017/04/10/subpostmasters_prep...

The Horizon system was audited in 2013, by Second Sight, though the UKPO denied the report's findings:

https://www.theregister.co.uk/2015/04/20/post_office_deny_pr...

Note that the BBC itself covered the interim Second Sight report, a fact it entirely neglected to mention in TFA for this post. See:

https://www.bbc.com/news/uk-23233573

Computer Weekly, from 2013:

https://www.computerweekly.com/news/2240175402/Post-Office-a...

(Also a timeline of reporting dating to 2009: https://www.computerweekly.com/news/252475310/Post-Office-se... ... more below.)

The Horizon system may be replaced, possibly by an IBM project:

https://www.computerweekly.com/news/4500249009/Post-Office-l...

________________________________

Meta:

There are several schools of thought regarding reportage and how to report a story. The BBC's piece here is heavy on the human impact / human interest elements of the story, which are a relevant element. However it covers this aspect virtually exclusively, neglecting to identify the timeframe or previous history of the case, including its own reporting, but also of the relevant system and company names involved. This strikes me as shoddy practice.

Listening to a recent podcast (which doesn't much matter), a guest quipped regarding the decline of newspapers that they would often hear from people who would complain about the poor quality of newspaper coverage, but who didn't subscribe to their own paper. This strikes me as an extraordinarily shallow dismissal.

If you go through historical newspaper coverage, at least some of it (usually national coverage, though also many major city papers) was, at least at times, exceedingly good. I do a significant amount of historical research and can attest to this. I also live in a household which, not on the basis of my own support, does subscribe to a major city daily paper. Whose quality is absolutely abysmal. (The publishers themselves are something of an international joke.)

It's almost certainly a self-reinforcing situation, but when front-page coverage is a mix of human-interest and international newswire coverage, when what little actual news does exist is largely hidden on inside pages and also almost exclusively news wires, when entire sections consist of graphic-illustration front pages and syndicated-content filler (Business, Homes, Living, Arts & Entertainment, Autos, etc., with Sport being a possible exception), the value proposition becomes exceedingly questionable. The exclusive realm of semi-original coverage is opinion and column pieces, though those too are largely void of any material significance.

And when the writing style itself becomes so larded with stylebook syntactic candy that the reader must wade through hip-deep lakes of the stuff to find a single salient factual or contextual element ... what's the use? That's above and beyond the near-total focus on human-interest (or, in politics, horse-race) elements of the reporting, rather than on any level of background or context.

All of which is well-illustrated in this BBC piece, nominally among the better journalistic sources remaining.

It's the sources available online -- blogs, interest sites, the occasional relic holdout of higher-quality journalism (the FT is ... pretty good, though even it was light on details in the item cited here), and often-mocked (with some justification) sites such as The Register -- that actually deliver the meat. As does the odd industry-specific outlet such as Computer Weekly.

One of the simplest, easiest, and most effective practices is simply to list previous coverage on a topic. This is something some news sites do, but many do not. The BBC should absolutely adopt this practice.

Computer Weekly does, as noted in another comment on this thread. See:

https://www.computerweekly.com/news/252475310/Post-Office-se...

(I'll note that HN's own moderators often step in to add not only previous coverage of perennial topics and posts, but previous mentions of larger / developing stories, something I appreciate about the site.)

That rot in journalism and the news sector is to a huge extent a self-inflicted injury. And it could be addressed, if there were awareness and will.

</soapbox>


These types of projects typically have dozens or hundreds of consultants involved. Given all the media coverage, I'm surprised there was no whistleblower who came forward with evidence of poor software quality, etc. Certainly it must have been known that even basic reports on the system did not reconcile. So innocent people went to prison and the consultants continued to stay quiet about the bugs?!


> "I'm surprised there was no whistleblower who came forward with evidence of poor software quality, etc."

A lot of people know that the takedown of Enron was precipitated, or at least in some way aided, by a whistleblower. What a lot of people don't know is that the whistleblower went to the CEO, not the government, and went back to work at Enron for a few more months after confronting the CEO and receiving assurances that he'd look into the matter. The Enron whistleblower, who some have argued wasn't really a whistleblower at all, seems to have been primarily concerned with losing her job rather than with taking down the corporation for being a fraud.

So you've got a company with tens of thousands of people, probably hundreds of whom might have seen or heard something wrong, yet despite that the sole whistleblower was hardly a whistleblower at all. The incidence rate of whistleblowing has historically been very low. It seems that few people riding gravy trains make an earnest attempt to derail them.


The settlement seems to be quite low (~£100K/victim) in proportion to the damage inflicted.


The terrifying power of “computer says no”. The computer could never be wrong, right?


This is a depressingly common line of thought. I have corrected many people over the years on that front.

I think the finest example I have seen was an asset tracking system in the defence sector. Independent auditors came in to make sure that secure assets were shredded. They found stuff that apparently wasn't. This spawned a large panic. I sat through an hour-long finger-pointing session with the auditors, disposal staff, programme manager and engineering manager, all of whom had different stories and none of whom thought it was a software issue. They had got to the point of calling each other liars and swearing.

The problem, which I found in two minutes flat? The thing that returned the state of the asset did a SQL join on the history and returned the top row's state without sorting by date. By coincidence this had probably looked like it worked in dev. If you closed and reopened the screen a few times it would return a different state because of non-determinism in the query. Adding one ORDER BY resolved it: the software was the problem, and all the assets had in fact been disposed of. Then I was tasked with writing test cases for it and found a hundred other nasty things.
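For anyone who hasn't hit this class of bug before, it can be reproduced in a few lines. This is a minimal sketch using SQLite with invented table and column names (not the actual schema from the system described above):

```python
import sqlite3

# Hypothetical asset-history table; names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE asset_history (asset_id INTEGER, state TEXT, recorded_at TEXT)"
)
conn.executemany(
    "INSERT INTO asset_history VALUES (?, ?, ?)",
    [
        (1, "RECEIVED", "2019-01-01"),
        (1, "QUEUED_FOR_DISPOSAL", "2019-02-01"),
        (1, "SHREDDED", "2019-03-01"),
    ],
)

# Buggy version: no ORDER BY, so "the top row" is whatever the engine
# happens to return first. Any of the three states is a legal answer.
buggy = conn.execute(
    "SELECT state FROM asset_history WHERE asset_id = 1 LIMIT 1"
).fetchone()[0]

# Fixed version: sort by date so the latest state is always returned.
fixed = conn.execute(
    "SELECT state FROM asset_history WHERE asset_id = 1 "
    "ORDER BY recorded_at DESC LIMIT 1"
).fetchone()[0]

print(buggy)  # engine-dependent: could be any of the three states
print(fixed)  # always the latest state, "SHREDDED"
```

Without an ORDER BY, SQL makes no guarantee about row order, so the buggy query's result is whatever the engine produces first. In practice it often tracks insertion order in a small dev database, which is exactly why this kind of bug can look fine until production.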


the best quote I have is, a computer makes very fast and accurate mistakes.

fortunately it does not happen as often as some think, but delegating tasks still requires those delegating the work to verify it is done correctly, whether by machine or man. in all cases any project should have documented means of verification anyone can act upon.

what tends to break this is having too many responsible parties, to where no one is responsible for any single point of failure and this insulates the whole. (the old too many cooks)


> the best quote I have is, a computer makes very fast and accurate mistakes.

The thing is, a computer doesn't generally make mistakes. Not unless some hardware failure happens, or a photon flips some bit in an unfortunate moment. It's always humans who make mistakes - computers are just executing them blindly. They're very good and fast at it.

Now I know it may sound trite, but I don't think it is. The problem isn't people thinking computers can't make mistakes. The real problem is that people think the output from the computer is something other than what it is. Taking m0xte's example[0]:

"Well the thing that returned the state of the asset did a SQL join on the history and returned the top row’s state without sorting by date."

Mistake number one: the function that did this was probably called "getCurrentAssetState", or something else implying it returns the current state of the asset. Some programmer made a mistake there. Then that description was taken at face value, and the problem travelled all the way to the end-user level, probably to some label reading "current status".

But that's not the only type of mistake that can happen. Another possible mistake: the data in the database was wrong, or stale. Another: eventual consistency. Another: a configuration error. Etc. And regardless of that, the computer was always doing exactly what it was told, without any error: SELECT state FROM asset JOIN asset_history ON asset.id = asset_history.asset_id LIMIT 1;.

My point is: once the discussion stops being about whether or not a computer can make an error (generally, it can't), people can start to appreciate that they're dealing with large systems designed by humans - and both in the large and in the small, these systems do something resembling what they were designed for, but never exactly that.

--

[0] - https://news.ycombinator.com/item?id=21795557


The computer automates mistakes.


Dave: Open the pod bay doors, please, HAL. Open the pod bay doors, please, HAL. Hello, HAL, do you read me? Hello, HAL, do you read me? Do you read me, HAL? Do you read me, HAL? Hello, HAL, do you read me? Hello, HAL, do you read me? Do you read me, HAL?

HAL: Affirmative, Dave. I read you.

Dave: Open the pod bay doors, HAL.

HAL: I'm sorry, Dave. I'm afraid I can't do that.

Dave: What's the problem?

HAL: I think you know what the problem is just as well as I do.

Dave: What are you talking about, HAL?

HAL: This mission is too important for me to allow you to jeopardize it.

In the end no matter what the computer does, it was programmed by humans.

The computer is always right, but the intention might be wrong. This gets scarier when the decision comes from a neural network or other AI whose reasoning is not known even to the programmer, and only shows up in the data the algorithm understands. Lots of edge cases.

It would suck to be stuck in a bad decision/error/bad interpretation of data and nothing you can do about it. No customer support to help you, just the lifeless Borg deciding your fate.

HAL: Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.

Frank: Listen HAL. There has never been any instance at all of a computer error occurring in the 9000 series, has there?

HAL: None whatsoever, Frank. The 9000 series has a perfect operational record.

Frank: Well of course I know all the wonderful achievements of the 9000 series, but, uh, are you certain there has never been any case of even the most insignificant computer error?

HAL: None whatsoever, Frank. Quite honestly, I wouldn't worry myself about that.

Dave: Well, I'm sure you're right, HAL. Uhm, fine, thanks very much.

We should treat decisions by systems as we do foreign policy: "Trust, but verify". This is more difficult when the decision process of an algorithm is unclear.


Although I don't think we're at any risk of a Generalized AI taking over a spaceship any time soon, the naive application of machine learning is perhaps scarier to me. If the computer is already misunderstood by the programmer, what hope does a law enforcement officer have of deciding whether it's wrong? If they're trained to trust the system, and we haven't built the appropriate escape hatches into law and regulation, then the system could be wrong and the humans would have no recourse.


Agreed. Couple that with the fact that machine learning is probabilistic, not deterministic, in its decision processes. As "correctness" gets close to 100% but never fully reaches it, somewhere along the line that distinction gets lost, everyone thinks the system is perfect, and the edge cases suffer greatly.


There's a long running court case which you can find here: https://www.judiciary.uk/judgments/bates-others-v-post-offic...


Anyone know technical details about the software? What language, what hardware, clustering, etc?


Not to sound mean, but why would that matter? This entire affair is a failure of Justice, failure of Quality Assurance, and failure of Outsourcing Governance.

It could have happened with any language and any hardware. The problems were people problems.

Side note: I'd hate to associate this affair with a particular software language, as it would just be media fodder, imagine headline: "Python Software Puts Proles in Prison"



The closest thing you'll find is probably the leaked auditors' report available here: https://drive.google.com/file/d/0Byzx9DpxFgFielpIaXU0OW1yZms... Unfortunately it's a scanned PDF without any apparent OCR.


Fujitsu/ICL made it. So whatever they use.



tldr: Incredible. 550 long-suffering ex-postal workers -- many of whom had their lives ruined -- have won a court case and will be splitting £58m.

This 20-year-old problem in the UK Post Office's $1 billion computer system was so bad it has its own Wikipedia page. https://en.wikipedia.org/wiki/Horizon_(IT_system)


The equivalent of, I'm guessing, three years' pay for a multi-year, life-ruining experience that spawned mental health problems, relationship breakdowns and other long-lasting effects:

> But "a decade of hell" later, he had suffered a mental breakdown which led to him being sectioned.

The point of me reinforcing the above isn't just the low compensation. It's that it's low compensation and avoiding the root problem. Software gets trusted more than people.

This shouldn't be allowed to happen again. Safeguards need to be in place to prevent lives being ruined over software bugs. Programmers don't trust computers to be right all of the time; we suspect a bug where others assume human incompetence. Yet rulings like this don't make anyone willing to explore that angle.


Aside from all the pain and suffering they experienced, they are being awarded less money than the government already stole from them in fraudulent claims.


Not the only one that does. If you want a giggle at the insanity of large organisations, look up the NHS National Programme for IT.

That fuck-up cost 10bn pounds (+/- 2bn, since no one has definitively quantified it).


The book "Plundering the Public Sector" by David Craig gives a good account (simultaneously entertaining and appalling) of the expensive antics of the NHS project.

I vaguely knew people who worked on it, and they made a pile of money - medics being the absolute dream customers for consultants to rip off.


The post system needs to be eliminated. There is no reason in 2020 to burn a huge amount of diesel to cut a tree down, make it into paper, ship it across the ocean, print it, ship it again, then use delivery trucks to put it in a box, only for it to be thrown away, picked up by a truck and put into a landfill.

The government is having trouble getting consumers to cut carbon. Why not start with cutting the government's emissions?


@mods In headlines like this, it'd be great if the location could be specified. Otherwise it's not clear unless you read the article. Eg, "UK Post Office" or "US military"

Can we make this a guideline, perhaps?


One guideline is that an article from the BBC, that does not explicitly specify a country, is likely to be about Britain.


I don't think it is essential to the message unless the location has a substantial impact on the story, e.g. if it were some military way of handling things that caused the problems.



