
> When the DSMC met on Aug 5, 2021, it recommended that the TOGETHER trial stop randomly assigning patients to the fluvoxamine group, as this comparison had met the prespecified superiority criterion for the primary endpoint (prespecified superiority threshold 97·6%).

This... is not how it's supposed to work.




Actually, it is. This is an adaptive clinical trial, meaning it's part of the protocol to change when pre-set criteria are met. This is stated in the paper. There's a whole body of work on the ethics and statistics of adaptive clinical trials and when to stop them early if promising results are found.
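Roughly, the protocol commits up front to a schedule of interim looks and a stopping boundary (here, the 97.6% superiority threshold quoted above), and the data monitoring committee only checks whether that boundary has been crossed. Here's a toy sketch of that kind of rule, with made-up event rates, a made-up look schedule, and a flat Beta prior, not anything from the actual TOGETHER protocol:

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up event rates and interim-look schedule, purely illustrative.
    p_control, p_treatment = 0.16, 0.11
    threshold = 0.976                      # prespecified superiority threshold
    looks = [400, 800, 1200, 1600]         # patients per arm at each interim look

    control = rng.binomial(1, p_control, looks[-1])
    treatment = rng.binomial(1, p_treatment, looks[-1])

    for n in looks:
        # Flat Beta(1,1) prior -> Beta(1 + events, 1 + non-events) posterior per arm.
        post_c = rng.beta(1 + control[:n].sum(), 1 + n - control[:n].sum(), 100_000)
        post_t = rng.beta(1 + treatment[:n].sum(), 1 + n - treatment[:n].sum(), 100_000)
        p_superior = (post_t < post_c).mean()  # P(treatment event rate < control's)
        print(f"n per arm = {n}: P(superiority) = {p_superior:.3f}")
        if p_superior > threshold:
            print("stop: prespecified superiority criterion met")
            break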

https://en.wikipedia.org/wiki/Adaptive_clinical_trial


https://academic.oup.com/jnci/article/104/18/1347/924103

> He also cautioned against “play the winner” designs that increase the number of patients being randomized to arms that show the best results in an interim analysis. “That is most often used in early phase trials, like a Phase I or early Phase II trial where you’re really trying to identify the best dose level; such trials tolerate bias pretty well,” Sietsema said. “But you wouldn’t use a play-the-winner model in a Phase III trial because the potential for bias could lead to a wrong decision and the FDA would object.”

That sums up my concerns pretty well. This is being presented as a phase III clinical trial demonstrating safety and efficacy. For the level of effect they are testing for, I don't see how an adaptive trial is appropriate here.


Perhaps they were more interested in saving people's lives than waiting for the trial to conclude... I mean the downside is people are less depressed while on a ventilator.


> Perhaps they were more interested in saving people's lives than waiting for the trial to conclude...

The bit "waiting for the trial to conclude" actually means "determine if the drug actually has any effect or not".

No matter how much you are interested in saving people's lives, or any other moralistic fallacy that might pop up, you can only help them if you give them treatments that actually work. Cutting short a study is falling short of the most basic requirement to meet that goal.


The point is that the math showed it has an effect before the trial ended.

So now you are faced with a conundrum. You made sure that it has an effect to the degree you wanted. Now the math says that you can either get even surer by continuing the trial, or save 30% of people from hospitalisation.

Also, in the treatment group just one person died, while in the control group it was 12.

So what's it going to be? Getting a little bit more sure, or saving a few lives?


> The point is that the math showed it has an effect before the trial ended.

With a small enough sample, math can show that a coin flip gives tails 4 out of 5 times, especially if it's stopped too soon.
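You can simulate how easy it is to fool yourself that way. A quick sketch with a hypothetical peeking rule: flip a fair coin, look after every flip, and stop the moment it has come up tails at least 80% of the time.

    import random

    # Peek after every flip of a fair coin and stop the moment it "looks biased".
    trials, fooled = 10_000, 0
    for _ in range(trials):
        tails = 0
        for flip in range(1, 51):                  # up to 50 flips, peeking each time
            tails += random.random() < 0.5
            if flip >= 5 and tails / flip >= 0.8:  # "tails 4 out of 5 times"
                fooled += 1
                break
    print(f"fair coins that looked >=80% tails at some point: {fooled / trials:.1%}")

A sizeable fraction of perfectly fair coins will look biased at some point if you're allowed to stop whenever they do.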

Don't you agree that stopping an experiment once we get the result we were hoping for from the start raises doubts about the conclusion?


With a small enough sample, the math tells you exactly how little confidence you can have in a given result.

And for a group of 1500 people it tells you how much confidence you can have. And it was more confidence than they demanded before starting the trial.
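To put a rough number on it, here's a back-of-the-envelope two-proportion z-test with made-up counts (not the trial's actual figures) at about that total sample size:

    from math import sqrt, erf

    # Illustrative counts, not the trial's data: 750 patients per arm,
    # 80 events under placebo vs 55 under treatment.
    n1, x1 = 750, 80
    n2, x2 = 750, 55
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal approx.
    print(f"risk {p1:.3f} vs {p2:.3f}, z = {z:.2f}, p = {p_value:.4f}")

With counts like these the difference clears conventional significance; 1500 people is not a "small sample" in the coin-flip sense.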

> Don't you agree that stopping an experiment once we get the result we were hoping for from the start raises doubts about the conclusion?

Say you have a novel disease that's 99% fatal, and you test a treatment that results in 30% of people surviving it, and you've gathered enough data that the math tells you it isn't just a fluke.

Does stopping the trial, and giving the medicine to the half of the patients you were keeping as a control group, make people doubt the efficacy of the drug, or does it just keep you from being called a second Doctor Mengele?

Did they stop the fluvoxamine trial? Or did they just mention that the effect was so strong that they could have?


No. It's called p-hacking, and it's a sleazy way to get published.

https://youtu.be/42QuXLucH3Q


This is not p-hacking. p-hacking is when you don't pre-register your hypotheses and you hunt for ones that hit .05 after the fact.

There was a single hypothesis being tested here.
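For contrast, here's roughly what p-hacking looks like: run a pile of unregistered comparisons on pure noise and report whichever one happens to clear .05. A tiny, purely hypothetical simulation:

    import random

    # Twenty unregistered "hypotheses" tested on pure noise; under the null,
    # p-values are uniform on [0, 1], so something clears .05 surprisingly often.
    trials, hits = 10_000, 0
    for _ in range(trials):
        if any(random.random() < 0.05 for _ in range(20)):
            hits += 1
    print(f"runs where at least one of 20 null tests hit p < .05: {hits / trials:.0%}")

With 20 independent null tests you get at least one "hit" about 64% of the time (1 - 0.95^20). That's a different situation from a single pre-registered hypothesis with a prespecified stopping boundary.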


It was an adaptive clinical trial with multiple arms, so it wasn't a single hypothesis. Apparently this is considered legitimate in medical trials, but I have some reservations about the concept.


Also, this was >0.60, not around 0.05, if I read correctly.


This is not uncommon or wrong.



