This article is about a phase III RCT that the hospital managed to run without a major industry capital injection. That truly is a major achievement (I have been involved in a phase III RCT myself). It was published in the New England Journal of Medicine recently: https://www.nejm.org/doi/full/10.1056/NEJMoa2402604
This trial uses an existing drug in a potentially novel way (before surgery as opposed to after surgery). I don't think it really lives up to the original article title.
Argh, I'm so sorry, I linked to the wrong New England paper in my post above. (That is a different major achievement from the same institution, but the above was industry funded as others correctly pointed out).
The correct New England paper about this treatment is here:
This one is TIL therapy, where you basically take tumor-infiltrating lymphocytes from the patient, stimulate them ex vivo, and put them back.
The reason this is so impressive -- and highlighted by this article -- is that large phase III trials like this have become so complicated due to various technical, financial, logistical, ethical, and above all regulatory challenges that they are now mostly done by companies, or at least as joint ventures with companies (and often in jurisdictions with fewer of these issues -- certainly not in the EU, as this one was). It is very, very impressive to pull off something like this as an academic institution (at least in Europe). What's more, the funding came from KWF (the Dutch cancer foundation), which is a public charity that relies mainly on donations.
Interesting. I haven't seen much in this space since Lee Spector's "Push" more than 20 years ago (http://faculty.hampshire.edu/lspector/push.html). I did see a mention of Push in the FAQ, but it would be very interesting to compare the two in detail. If I understand correctly, Zyme programs are evolved at the bytecode level, whereas Push's stack architecture is designed to be evolvable directly at the syntactic level? A head-to-head comparison / benchmark would be super interesting.
The connection between SEMs and DAGs is really interesting. The underlying models are very similar but developed independently -- SEMs coming from the psychometrics tradition and popularized by Jöreskog, DAGs coming from Bayesian Networks and popularized by Pearl. There are deep connections between them -- we have done some work on that (e.g. https://osf.io/preprints/psyarxiv/2kqxr and https://arxiv.org/abs/2302.13220).
I made the first version of this back in 2010, when Pearl's work on causal inference started impacting epidemiology. A friend of mine was an epidemiologist, and she told me about an MS-DOS program she was using to do something with graphs (https://pubmed.ncbi.nlm.nih.gov/20010223/); she found it painfully slow and wondered if I could "make it more user-friendly".
I was doing my PhD in algorithms at the time and was intrigued when I started reading Greenland, Pearl, and Robins (https://pubmed.ncbi.nlm.nih.gov/9888278/) and then Pearl's "Causality". I soon found that it was not obvious at all how to speed up that MS-DOS program, and this led to a paper at UAI in 2011 (https://arxiv.org/abs/1202.3764). I made dagitty as a demonstration that the algorithms we developed in that paper could actually be used, and it took off from there -- it started with 10 users per day, growing to hundreds and then thousands as causal inference became more popular.
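The core primitive behind tools like this is d-separation: deciding whether two variables in a DAG are independent given a conditioning set. As a minimal illustration (my own sketch of the textbook ancestral-moral-graph criterion, not dagitty's actual implementation), here is a stdlib-only version:

```python
def d_separated(edges, x, y, z):
    """Check whether x and y are d-separated given set z in a DAG.

    Uses the ancestral-moral-graph criterion: x and y are d-separated
    given z iff they are disconnected in the moralized subgraph induced
    by the ancestors of {x, y} | z, after deleting z.
    z must not contain x or y.
    """
    parents = {}
    for u, v in edges:
        parents.setdefault(v, set()).add(u)
        parents.setdefault(u, set())

    # 1. Collect x, y, z and all of their ancestors.
    relevant = set()
    stack = [x, y, *z]
    while stack:
        n = stack.pop()
        if n not in relevant:
            relevant.add(n)
            stack.extend(parents.get(n, ()))

    # 2. Moralize: drop edge directions, and connect the co-parents
    #    of every node (so colliders are handled correctly).
    adj = {n: set() for n in relevant}
    for v in relevant:
        ps = parents.get(v, set()) & relevant
        for p in ps:
            adj[v].add(p)
            adj[p].add(v)
            for q in ps:
                if p != q:
                    adj[p].add(q)

    # 3. Delete the conditioning set and test connectivity.
    blocked = set(z)
    seen, stack = {x}, [x]
    while stack:
        n = stack.pop()
        for m in adj[n]:
            if m == y:
                return False  # undirected path found: not d-separated
            if m not in seen and m not in blocked:
                seen.add(m)
                stack.append(m)
    return True

# Toy DAG: A -> B <- C -> D, so B is a collider on the A...D path.
edges = [("A", "B"), ("C", "B"), ("C", "D")]
print(d_separated(edges, "A", "D", set()))   # True: the collider blocks the path
print(d_separated(edges, "A", "D", {"B"}))   # False: conditioning on B opens it
```

The UAI paper referenced above was about doing this kind of reachability-based reasoning efficiently at scale; this sketch just shows the semantics.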
It's now a bit dated, and I don't have as much time anymore to keep it as "fresh" as I would like. But I am still grateful for, and amazed at, how many people I got to know through it. Highlights included collaborating with Pearl himself on a solution manual for his book "Causal Inference in Statistics: A Primer" when it first came out, and the many e-mails I got out of the blue from users all over the world. Just last summer I stayed at the house of the author of one of the built-in examples in dagitty.
As these 14 years have flown by, I am now happy to play a small part in supporting the next generation of causal inference software. If you're interested in causal inference, be sure to check out pgmpy.org, a Python library for Bayesian networks that includes several causal inference functions (https://arxiv.org/abs/2304.08639). Ankur, its author, did his PhD with me and will soon defend his thesis!
Also, R users, be sure to check out ggdag, a great package by Malcolm Barrett that wraps dagitty functionality in a much nicer and tidyverse-compatible way.
Yes, I had the same issue. But the wording "there is a <5% probability that the outcome was the result of chance" is in fact problematic, since many readers will go on to conclude "hence a >95% probability that the outcome was not the result of chance". That makes it easier to misinterpret than the technical definition, P(observation at least as extreme as this | H_0).
In courses I typically use wordings like "If there were truly no association, then the probability of getting an observation at least as extreme as this one would be <5%".