
That trial wasn't a "statistically spurious event"; it was a flawed trial

> the reason the first trial came to an exaggerated impression seemed to be the number of patients who might not have fully progressed to SPMS

Stopping early can produce statistically spurious results, which is why the bar for doing so is so high. But that has to be balanced against the recognition that, if the interim results are correct, continuing the trial will lead to a significant number of avoidable deaths.
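
To make that concrete, here's a toy simulation (my own sketch in Python, nothing to do with the actual trial): a drug with no effect at all, "peeked at" with an unadjusted t-test at four interim points. Pure noise crosses the 5% threshold far more often than 5% of the time, which is why interim analyses have to spend their alpha so carefully.

    # Toy sketch (assumed numbers, not the actual trial): a drug with NO effect,
    # tested with an unadjusted t-test at four interim looks, stopping at the
    # first p < 0.05. Run many simulated "trials" and count false positives.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, alpha = 2000, 0.05
    looks = [50, 100, 150, 200]       # per-arm sample size at each interim look

    false_positives = 0
    for _ in range(n_sims):
        control = rng.normal(0, 1, looks[-1])
        treated = rng.normal(0, 1, looks[-1])   # same distribution: no true effect
        for n in looks:
            _, p = stats.ttest_ind(treated[:n], control[:n])
            if p < alpha:
                false_positives += 1
                break

    print(f"nominal alpha: {alpha}")
    print(f"false-positive rate with 4 unadjusted looks: {false_positives / n_sims:.3f}")
    # lands noticeably above 0.05 (around 0.12 for four equally spaced looks)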

And for what it's worth, given the cited explanation for why the first trial was flawed, it sure sounds like it would still have produced flawed results even if it had run to its conclusion.




Having worked in clinical trialling, I realised from what I saw that the final stats may be worth much less than they appear, because of mismanagement of the data.

I can give some hare-raising examples[0] but for obvious reasons... The one I can give: it was known that docs who prescribed this to multiple patients, and saw an apparent improvement they attributed to the new drug, would switch their patients on the old drug over to the new one and not inform us. Obviously they couldn't inform us or they'd invalidate the trial, and they knew it. And it was done entirely with the patients' best interests at heart. Docs care about their patients' lives.

That may have been rare and a flaw in the trial protocol, but much worse stuff was done via utter incompetence. And I mean including at the top of these giant drug companies. Run by idiots, really.

Wider lesson: just because you hand a process over to a third party does not mean it's going to be done right.

[0] tribute to the thread elsewhere


I think it's hilarious that in this one HN thread we have a complete word swap of terms. You said Hare when it should be Hair and they did the opposite.

https://news.ycombinator.com/item?id=20682918

It's supposed to be hair-raising because it's alarming and surprising etc. Nothing to do with taking care of hares :)


Switching extra patients to a new drug would make the trial more conservative though... so perhaps don’t worry so much.

In general it’s worth remembering that RCTs are estimating the effect of the intention to treat (the randomisation), not the treatment itself.
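
A toy illustration of why that makes the trial conservative (my own made-up numbers, not from any real trial): if a quarter of the control arm is quietly switched onto the new drug but still analysed in the arm they were randomised to, the measured arm difference shrinks toward zero.

    # Toy sketch (my own made-up numbers): 25% of the control arm is quietly
    # switched onto the new drug but still analysed as randomised (ITT).
    import numpy as np

    rng = np.random.default_rng(1)
    n_per_arm = 500
    true_effect = 0.5            # assumed benefit of the new drug, arbitrary units
    crossover_rate = 0.25        # assumed fraction of control patients switched

    treated = rng.normal(true_effect, 1, n_per_arm)
    switched = rng.random(n_per_arm) < crossover_rate
    control = np.where(switched,
                       rng.normal(true_effect, 1, n_per_arm),  # actually got the new drug
                       rng.normal(0.0, 1, n_per_arm))          # stayed on the old drug

    itt = treated.mean() - control.mean()                      # analyse as randomised
    per_protocol = treated.mean() - control[~switched].mean()  # drop the switchers

    print(f"true effect:           {true_effect:.2f}")
    print(f"ITT estimate:          {itt:.2f}")          # pulled toward zero by the crossover
    print(f"per-protocol estimate: {per_protocol:.2f}")

The ITT estimate shrinks toward zero roughly in proportion to the crossover rate, so undisclosed switching hides a real effect rather than inventing one. (The per-protocol number only looks clean here because the simulated switching is random; real switchers are a selected group, which is part of why ITT is the primary analysis.)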


> ...would make the trial more conservative though...

If the doc correctly divines that the drug is improving things, yes. But there is noise in the signal, so it may just be noise causing the few apparent improvements the doc sees. If so, the doc's action may be smothering a less-than-obvious signal.

There's too much at stake for that to be acceptable.
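
A quick sketch of how that smothering plays out (again, my own assumed numbers): with a weak true effect, even modest undisclosed switching of control patients onto the new drug noticeably cuts the chance the trial detects anything, whatever it was that prompted the doc to switch.

    # Toy sketch (assumed numbers): a weak true effect plus undisclosed switching
    # of control patients onto the new drug. The ITT comparison gets diluted and
    # the chance of detecting the effect at p < 0.05 drops off.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_per_arm, true_effect, n_sims = 200, 0.25, 1000

    for crossover_rate in (0.0, 0.2, 0.4):
        detected = 0
        for _ in range(n_sims):
            treated = rng.normal(true_effect, 1, n_per_arm)
            switched = rng.random(n_per_arm) < crossover_rate
            control = np.where(switched,
                               rng.normal(true_effect, 1, n_per_arm),
                               rng.normal(0.0, 1, n_per_arm))
            _, p = stats.ttest_ind(treated, control)
            detected += p < 0.05
        print(f"crossover {crossover_rate:.0%}: power ~ {detected / n_sims:.2f}")
    # power falls from roughly 0.7 with no switching to roughly 0.3 at 40% crossover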

> ...are estimating the effect of an intention to treat ( the randomisation) not the treatment itself.

I don't understand - the protocol applies the drug (aka the treatment) and the results are measured. I don't understand 'intention to treat'. What is the 'intention' here - some tech term I am not familiar with?



