Ecstasy for PTSD: Not Ready for Prime Time (sciencebasedmedicine.org)
34 points by tokenadult 1758 days ago | hide | past | web | 42 comments | favorite



Warning: people with PTSD are most likely on antidepressants such as SSRIs and MAOIs, which can interact with MDMA and potentially cause serotonin syndrome.

http://en.wikipedia.org/wiki/Serotonin_syndrome

It can take weeks to a month of withdrawal from an SSRI to prevent interactions.


Not necessarily. PTSD is a lot more complex than simple depression.

Doctors will try to fix the depression, but then something else will "arrive" and wreak havoc, whether that's anxiety, flashbacks or something else that's not "always there" when you're simply "just" depressed.

It's nearly impossible to treat PTSD as a whole with conventional medicine.


Yep, I'm not saying SSRIs are more effective, but there's a good possibility that someone with PTSD is already on one, and self-medication could be dangerous.

> The most commonly prescribed class of medications for PTSD (and the one approved by the U.S. Food and Drug Administration) are the selective serotonin reuptake inhibitor (SSRI) antidepressants.

http://psychcentral.com/lib/2006/an-overview-of-treatment-of...


I wonder why the focus is on MDMA? IMHO it's entirely possible that the same or better effects could be found from a related substance like MBDB, and some (again like MBDB) seem to be less toxic than MDMA.

The page also mentions that MDMA's reputation preceded it and may have had an impact on the study. Using a less well-known substance might have sidestepped this.


MDMA is what's popular in the recreational scene and so will draw attention over others; even Madonna not-so-subtly promotes it through her "MDNA" album and tour name, though most are none the wiser.


Oh snap, I thought she was punning on a different chemical (to wit: deoxyribonucleic acid)...


Wrong. It's because MDMA was successfully used in the 70s for psychotherapy before it was scheduled. Between uncontrolled clinical reports, the self-reports of users, and modern controlled studies, there is a wealth of evidence that MDMA can be used for psychotherapy with excellent results.


> Exit poll: 95% correctly guessed which group they were assigned to. [ie, knew they had taken XTC or not]

This pins the WTF meter. 1 out of the 20 could not figure it out?
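For a rough sense of how badly the blind failed, here is a back-of-the-envelope binomial tail calculation. It assumes, purely for illustration, that a working blind would leave each of the 20 participants guessing at roughly 50% accuracy (the true chance rate differs slightly given the 12/8 split):

```python
# How likely is it that 19 of 20 participants guess their group
# correctly if the blind actually worked?  Assume each participant
# guesses at chance (p = 0.5; an approximation, since the split was
# 12 active / 8 placebo rather than 50/50).
from math import comb

n, correct = 20, 19  # 95% of 20 participants guessed correctly

# P(X >= 19) for X ~ Binomial(20, 0.5)
tail = sum(comb(n, k) for k in range(correct, n + 1)) / 2**n
print(f"P(at least {correct}/{n} correct by chance) = {tail:.2e}")
# -> roughly 2e-05, i.e. about 1 in 50,000
```

Under that assumption, a result this lopsided is vanishingly unlikely to happen with an effective blind, which is the commenter's point.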

In related info: The controlled studies for viagra were a bit comical.


From what I understand this 95% figure is not that far off from what is obtained in anti-depressant trials, but somehow the poor blinding of anti-depressant trials is almost never mentioned as a serious methodological problem.

This is discussed in more detail in Irving Kirsch's The Emperor's New Drugs.


The issue here has to be that it's hard to mask MDMA - people are going to know they're on something even at therapeutic (rather than recreational) doses, are they not? Is that really poor blinding?

Antidepressants tend to be long lasting and slow acting (AFAICT) which MDMA is not.


True, antidepressants are much slower acting than MDMA, but you're still looking at 80-90% of people in antidepressant trials who can correctly guess which group they're in (due to side effects such as dry mouth, sleep problems, sexual dysfunction, etc.).

Regarding the issue of whether or not it is poor blinding, I guess it is a bit like the tree falling in the forest but nobody is around to hear it: if you conduct a placebo-controlled study, but all the participants can correctly guess which group they're in, is it really a placebo-controlled study?


Likely a rhetorical question on your part there, but it is an interesting conundrum. Because the drug works directly on the brain and folks' emotions, and because those effects are directly bound up with the therapeutic area under study, I can't help thinking a placebo would be really hard to do well here. Any other phenethylamine that subjectively mimics MDMA is going to be messing with the same systems.

I guess (as may have been suggested elsewhere) you could use a non-serotonergic stimulant of some sort to give people the feeling that they are on something, and rely on them not being able to tell what it is. You'd need to screen for anyone that had any recreational experience though.


Yeah, placebo-controlled studies of mind-altering chemicals are hard. I've done quite a lot of research on placebo, and would never expect to be able to blind effectively without using an active drug (low dose). Something like methylphenidate might work, as noted by the top comment, but even then you don't really have a control, as you really are just estimating the effects of one chemical against another (which tends to require far larger studies to demonstrate an effect).

Speed would probably work. One thing that interests me, though, is whether or not the patients were treated separately. Having been surrounded by people on MDMA while not partaking myself, at least in my anecdotal experience it can affect your state of mind, which could be one reason for the rather large placebo effects (given that the treatment is so obvious, one would expect placebo effects to be artificially lowered).


If you conduct a placebo-controlled study, but all the participants can correctly guess which group they're in, is it really a placebo-controlled study?

No, it's not, and the question of how many allegedly double-blind studies are compromised by this leakage deserves a lot more attention.


You just know one of them was sat there saying 'I am so high right now!' but hadn't been given anything.

Suggestion is a powerful thing to some folks.


Or simply "I'm feeling better". Not everyone has experienced being high. A contact high is also possible.


I'm always skeptical of using this stuff for PTSD since the comedown is so harsh. I would suspect that the period of serotonin depletion could actually be just as bad for PTSD as the MDMA "high" could be good. Though I guess it's also important to note that taking MDMA with your therapist in a quiet room is nothing like taking it in a loud, dark room with your friends.


Is the come-down bad though, if the doses given are sub-recreational and are followed by proper amounts of sleep?

Not to mention if you can ensure you're getting the right thing and not some dodgy multiply-substituted cathinone or piperazine!


I don't think the comedown is harsh for non-recreational doses. Also, you only do the therapy sessions during the high, so the serotonin depletion shouldn't matter much.

There is very little evidence that MDMA at therapeutic doses causes any more damage than all of the other legal prescription drugs like prozac.


Do you mean the comedown from pure MDMA?


Once again Science Based Medicine shows that it's just a propaganda outlet for big pharma:

"The study is characterized as blinded; but the blinding didn’t work, since 95% of subjects were able to tell which group they were in."

As the researchers themselves say, the purpose of the study was to test their methodology before doing larger trials, including whether or not their blinding protocols were effective. They are going to be using active placebos for phase III, which are much more effective for blinding. In fact many of the other studies they've done already use active placebos, either a lower dose of MDMA or else something like methylphenidate. The authors address this in the actual study and explain that they now have FDA approval to use a three-arm protocol in the future, but SBM is pretending that this is a methodological problem they found themselves.

"There is only a pilot study with scanty, preliminary, unconfirmed evidence."

There have been at least a dozen studies looking at MDMA as a treatment for PTSD, but again the author is pretending that this is the only one.

"Experiments using illegal drugs are always problematic, because researchers tend to be advocates for legalization and may be biased by their emotional investment."

Most of the MDMA studies use a triple blind methodology which is far more rigorous than the methodology that big pharma uses. What this means is that not only are the patients and the people administering the drug blinded (or will be going into phase III trials), but also the people analyzing the data were not the ones administering the drugs so they have no idea who was in each group.

Also, these are multi-site trials going on in various countries around the world, including ones where the legal status and cultural baggage surrounding MDMA are wildly different, for both researchers and the general population.

"The patients in the placebo group improved, too; so a large part of the benefit could be attributed to the psychotherapy."

As one would expect. However, they still had PTSD whereas the people who got the MDMA didn't. Additionally, studies have shown that the MDMA group continues to improve in the months and years after therapy much more than the control group.

"The addition of MDMA to psychotherapy might prove helpful in refractory cases of PTSD, but the preliminary results of this one small pilot study will need to be confirmed by larger studies in combat veterans before the treatment can be recommended to patients."

Agreed. Except that, again, there have been many studies of MDMA as a treatment for PTSD, not just this one.

"Unfortunately, the history of medicine is full of equally promising treatments that didn’t pan out. Probably the most typical course is this: a strongly positive pilot study is followed by larger studies with weaker positive results, then by studies with equivocal or negative results, then reports of serious side effects or other problems, and eventually by rejection of the treatment. I sincerely hope this one will prove an exception to that course, for the sake of suffering veterans with few remaining options. But I am not optimistic."

The reason why this has happened consistently in medicine is because the studies are being funded by for profit companies and they are using extremely biased methodology to make their drugs look much better than they actually are. Whereas MAPS is a non-profit, and their methodology is designed to be as neutral as possible. The research may well still not pan out as trial sizes increase and patients are more effectively double blinded, but there is definitely reason for optimism based on the results so far.

For what it's worth, there are numerous videos online where you can watch the researchers (and MAPS) talking about these studies:

http://www.maps.org/videos/source/video3.html (This video is with Michael Mithoefer, one of the researchers behind this study.)

http://vimeo.com/32062578 (Talk given by MAPS about the roadmap for getting MDMA approved.)

There are several more videos online here about both the mechanisms of action, the pharmacology, and the results of other trials:

http://www.maps.org/media/videos/


Yeah, maaaaaaaaan.

And weed cures all forms of cancer -- that's why it's illegal.

If Big Pharma wanted to push dangerous or ineffective drugs on the public, there are far easier and cheaper ways to go about it than coming up with elaborate double-blind, randomized, placebo-controlled testing protocols. See: Vioxx.

Experiments using illegal drugs are always problematic, because researchers tend to be advocates for legalization and may be biased by their emotional investment.

But MAPS, a pro-psychedelics organization, is neutral. Shyeah -- about as neutral as High Times.

This person is expressing sincere doubt -- probably because she's seen enough medical research to know that drugs that seem like miracle cures in pilot studies often pan out to be not so miraculous when tested in broader studies with larger cohorts -- and that's true whether the drug in question is politically controversial (like MDMA, LSD, or marijuana) or not.

I say this as one who supports drug legalization and wants to see further research conducted with MDMA. Until the results of larger studies come in, there's still room for doubt here.


>If Big Pharma wanted to push dangerous or ineffective drugs on the public, there are far easier and cheaper ways to go about it than coming up with elaborate double-blind, randomized, placebo-controlled testing protocols. See: Vioxx.

??? Vioxx was not at all cheap or easy to bring to market, nor was it ineffective. Rather, it was a case where a drug was tremendously beneficial to most users but deadly for a small subset. Even after its withdrawal from the market, US and Canadian government panels have voted in favor of bringing it back based on cost-benefit analysis!

http://en.wikipedia.org/wiki/Vioxx#Withdrawal

I don't think it's possible to have a worse example of the drug company behavior you mentioned. Why not point to the days of phony, deadly cures being pushed on a naive population? Vioxx had actual research and efficacy behind it, and may actually merit coming back on the market.


Merck had a sweetheart deal with Elsevier to publish a vanity journal which passed off marketing copy for Vioxx as "research".

Why not point to the days of phony, deadly cures being pushed on a naive population?

That's still going on, except these days the pushers of phony, deadly cures sound a lot like Alex3917. Hint: look for accusations of medical researchers being "shills of big pharma".

Vioxx gives us a handy, recent example of big pharma actually behaving badly, and they had to undermine science to do it. Just like the peddlers of unproven "alternative cures" -- and quite unlike, and to the dismay of, the medical research community.


If it's being recommended for re-introduction even knowing what actually happened, and it was beneficial to the health of the overwhelming majority of those who took it, I don't think that counts as an example of a case where you "needed" to corrupt journals to get it approved.


"If Big Pharma wanted to push dangerous or ineffective drugs on the public, there are far easier and cheaper ways to go about it than coming up with elaborate double-blind, randomized, placebo-controlled testing protocols."

That's demonstrably false. There are dozens of books on the topic, many of which I link to here:

http://news.ycombinator.com/item?id=4807436

"Until the results of larger studies come in, there's still room for doubt here."

No question there's room for doubt. My issue isn't with the fact that the author doubts that MDMA will be effective, which it may well not be, but rather that she is being intellectually dishonest in her arguments.


That's demonstrably false.

You haven't demonstrated it false; I can demonstrate it true.

she is being intellectually dishonest in her arguments.

Or she is just unaware of the scope of MDMA research conducted thus far. I doubt this is her specialty.


There have also been plenty of books showing how big pharma is deeply connected to regulators and health agencies which help them get meds through. More commonly they market existing meds for conditions they don't really help with (or have dangerous side effects) -- without significant consequences.

Fortunately, the economic disincentive is getting larger each year:

http://en.wikipedia.org/wiki/List_of_largest_pharmaceutical_...


The reason why this has happened consistently in medicine is because the studies are being funded by for profit companies and they are using extremely biased methodology to make their drugs look much better than they actually are. Whereas MAPS is a non-profit, and their methodology is designed to be as neutral as possible. The research may well still not pan out as trial sizes increase and patients are more effectively double blinded, but there is definitely reason for optimism based on the results so far.

If a for-profit company pushes a study through a first stage trial and then loses out at a later stage they lose much more money than if they had given up quickly. What's really necessary for most promising early stage treatments to not pan out is for genuinely good treatments to be rare, plus people being more likely to report impressive results than failures. Both of which I hope you'll admit are just as much a problem for not-for-profit researchers as for-profit ones.


Also, here are some more quotes on the influence of commercial funding, showing indirectly that the file drawer effect combined with the methodology produces results that are much more biased and of poorer quality than those of non-profit-funded studies.

"Studies repeatedly document bias in commercially sponsored research, but the medical journals seem powerless to control the scientific integrity of their own pages. In 2003, separate studies were published in JAMA and the *British Medical Journal* showing that the odds are 3.6 to 4 times greater that commercially sponsored studies will favor the sponsor's products than studies without commercial funding. And in August of 2003 a study published in JAMA found that among the highest-quality clinical trials, the odds that those with commercial sponsorship will recommend the new drug are 5.3 times greater than for studies funded by non-profit organizations. The authors noted that the lopsided results of commercially sponsored research may be 'due to biased interpretation of trial results.' They cautioned that readers should 'carefully evaluate whether conclusions in randomized trials are supported by data.' In other words, doctors are warned that even the best research published in the best journals *cannot* be taken at face value." Source: Overdosed America, p. 97, citing "Scope and Impact of Financial Conflicts of Interest in Biomedical Research: A Systematic Review" (http://www.ncbi.nlm.nih.gov/pubmed/12533125), "Pharmaceutical Industry Sponsorship and Research Outcome and Quality: A Systematic Review" (http://www.bmj.com/content/326/7400/1167.full), and "Association of Funding and Conclusions in Randomized Drug Trials" (http://jama.jamanetwork.com/article.aspx?articleid=197132)
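A figure like "odds 5.3 times greater" is easy to misread as "5.3 times more likely". A quick sketch of the odds-ratio arithmetic shows what it actually implies; the 50% baseline below is an assumption chosen for illustration, not a number taken from the quoted studies:

```python
# Illustrating what an odds ratio of 5.3 means, given an *assumed*
# baseline: if non-profit-funded trials recommend the new drug 50%
# of the time (odds of 1.0), what rate does the odds ratio imply
# for commercially sponsored trials?
def apply_odds_ratio(baseline_prob: float, odds_ratio: float) -> float:
    """Convert a baseline probability through an odds ratio."""
    odds = baseline_prob / (1 - baseline_prob)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

print(f"{apply_odds_ratio(0.5, 5.3):.0%}")  # -> 84%
```

So under that assumed baseline, the commercially sponsored trials would recommend the drug about 84% of the time rather than 50%.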

"In the case of calcium channel blockers like Norvasc, for instance, one survey of seventy articles about their safety found that 96 percent of authors who were supportive of the drugs had financial ties to the companies that made them, whereas only 37 percent of authors who were critical had such ties." Source: The Truth About Drug Companies, p. 107, citing "Conflict of Interest in the Debate over Calcium-Channel Antagonists" (http://www.nejm.org/doi/full/10.1056/NEJM199801083380206)

"A team of researchers at the Beth Israel Medical Center in New York have examined the outcome of clinical trials as a function of who had sponsored them. They found that approximately 75 per cent of drug-company studies showed favourable results for their own drugs, but only 25 per cent of them showed favourable results for the product of a competing company. In studies that are not sponsored by a drug company, the success rate is approximately 50 per cent." Source: The Emperor's New Drugs, p. 62, describing "… between drug company funding and outcomes of clinical psychiatric research" (http://journals.cambridge.org/action/displayAbstract;jsessio...)

See also:

http://jama.jamanetwork.com/article.aspx?articleid=202867

http://www.ncbi.nlm.nih.gov/pubmed/17550302


I'm certainly willing to believe that groups can be biased towards a drug that they're affiliated with; I just doubt that the effect is smaller if it's being developed by their university as opposed to their corporation. But I should note, you can also explain that data in terms of corporations being biased towards paying for studies on those drugs most likely to succeed. That is, it could be that for the best drugs, corporations paid for a larger share of the studies than for the worst ones. Now, I expect that if you did a study on a given drug, corporate-funded studies would still be disproportionately likely to find the drug good, but to a lesser extent than the paper you cite, since humans are biased creatures after all.


"Both of which I hope you'll admit are just as much a problem for not-for-profit researchers as for-profit ones."

I agree that genuinely good treatments are rare. However, the file drawer problem is definitely much worse for industry-sponsored studies than for, say, non-profit- or NIH-sponsored ones. MAPS has a tiny budget of less than a million dollars a year, and if they were to do a study and not publish the results they would get completely discredited. And for an NIH researcher, if they didn't publish their results then it's unlikely they'd ever get more grants in the future. I don't know of any specific stats about how the file drawer effect for NIH-sponsored studies compares to industry sponsored ones, as that would be the closest comparison. But I do have some quotes that address the issue in general, and explain why the problem is much worse with for-profit companies:

"The failure to publish unsuccessful trials presents a problem in many research areas. When a study has produced non-significant results, it is less likely to be submitted for publication; and, if it is submitted, it is less likely to be favourably reviewed or accepted for publication. But although publication bias affects all areas of research to some extent, it is particularly acute when it comes to drug trials. This is because most of the clinical trials evaluating medications are financially sponsored by the companies that produce and stand to profit from them. The companies own the data that come out of trials they sponsor, and they can choose how to present them to the public -- or to withhold them and not present them to the public at all. With widely prescribed medications, billions of dollars are at stake. In this case, it is not reviewers or journal editors who are impeding publication of negative findings. Rather it is the companies themselves that decide to withhold negative data from publication. [...]

Most studies showing negative results remain unpublished, and short of making official enquiries to government agencies, their data are unavailable to researchers, doctors and the public at large." Source: The Emperor's New Drugs, p. 39

"The fact that what gets published are the trials with positive results was most convincingly shown by a group of researchers at the Oregon Health and Science University, who followed up on our initial analysis of the FDA data by comparing the conclusions reached by the FDA with those reported by the drug companies in journal articles. Of 38 drug-company clinical trials that the FDA viewed as having positive results, all but one was published. In the same documents, the FDA described 36 trials as having negative or questionable results. Most of these negative trials were not published at all, and of the few that were published, most were described in the journal articles as showing positive results -- despite the fact that the FDA had concluded that they had not." Source: The Emperor's New Drugs, p. 67, describing the journal article "Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy" (http://www.nejm.org/doi/full/10.1056/NEJMsa065779)

"When application was made to the Swedish Drug Authority for approval of five new antidepressant drugs, 28 separate clinical trials evaluating the drugs' effectiveness had been published in medical journals. The results were overwhelmingly positive: Twenty-two studies showed that the new drugs were significantly more effective than a placebo, and only 6 showed no difference. In Sweden, drug applications must include *all* known studies -- published or not -- relevant to the new drug. When researchers from the Swedish Drug Authority went through the new drug applications for the five new antidepressants, they found that a total of 42 studies had been completed. It turned out that exactly half of these showed that the new antidepressants are more effective than placebos and half found that they are not. The 22 positive articles that had been published represented 19 of the positive studies (three were published twice). In contrast, only six of the 21 studies with negative or inconclusive findings had been published. Even the most conscientious doctor could know only the results of the studies that had been published and would reasonably conclude that the weight of the evidence about the new antidepressants was overwhelmingly positive.

The Swedish researchers commented that their finding that 40 percent of the studies that had been completed on these drugs remained unpublished (as independent studies, not pooled with others) was consistent with the findings of other such reviews. In their conclusion, they warned that 'for anyone who relies on published data alone to choose a specific drug, our results should be a cause for concern.... Any attempt to recommend a specific drug is likely to be based on biased evidence.' What else can a practicing physician rely on but the published data? In an understated way, these researchers were telling doctors that they could not trust the published scientific evidence about antidepressants to be complete and unbiased." Source: Overdosed America, p. 115
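The arithmetic in the Swedish example is easy to verify from the numbers given in the quote. A quick sketch of the true versus apparent positive rates:

```python
# Numbers from the Swedish Drug Authority review quoted above:
# 42 completed trials, exactly half positive; 22 positive articles
# published (19 distinct positive studies, 3 of them published twice);
# only 6 of the 21 negative/inconclusive studies published.
total, positive = 42, 21
pos_published_articles = 22   # includes 3 duplicate publications
pos_published_studies = 19
neg_published_studies = 6

true_positive_rate = positive / total
apparent_positive_rate = pos_published_articles / (
    pos_published_articles + neg_published_studies)
unpublished = total - (pos_published_studies + neg_published_studies)

print(f"true positive rate:       {true_positive_rate:.0%}")      # 50%
print(f"apparent positive rate:   {apparent_positive_rate:.0%}")  # 79%
print(f"studies left unpublished: {unpublished} of {total} "
      f"({unpublished / total:.0%})")                             # 17 of 42 (40%)
```

The last figure matches the researchers' own statement that 40 percent of completed studies remained unpublished, while a doctor reading only the journals would see a literature that is 79% positive about drugs that succeeded in only half of their trials.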

"Whereas many of the negative trials were not published at all, some of the positive trials were published many times, a practice known as 'salami slicing', and this was often done in ways that would make it difficult for reviewers to know that the studies were based on the same data. In some cases, the authors were different, and references to previous publication of the data were often missing. Sometimes there were minor differences in the data between one publication and another, as well as between the data as presented to regulatory agencies and the data as published. So a reviewer trying to summarize the data would be likely to count the positive data more than once.

Another trick was to publish only some of the data from a clinical trial, a manoeuvre that researchers call cherry-picking the data. Some clinical trials are conducted in more than one location. These are called multi-centre studies. Multi-centre studies make it easier to find sufficient patients to conduct the trial. They also make it easier to cherry-pick the data. For example, one multi-centre study of Prozac was presented to the FDA as showing a drug-placebo difference of three points on the Hamilton scale. When the data from this clinical trial was published, the difference was reported as 15 points -- a five-times increase in effectiveness. How was this magical augmentation of the benefits of Prozac accomplished? The full study was conducted on 245 patients. The published paper reported data from only 27 of these patients. In the published version, the data from the bulk of the patients were left out, making the drug seem much more effective than it really was." Source: The Emperor's New Drugs, p. 41

"Drug companies also publish 'pooled analyses' of the trials they have conducted. That is, they bundle together the results of different trials and analyse the drug-placebo difference across them. This is similar to the meta-analyses my colleagues and I have conducted, but with one important difference. Our meta-analyses, in common with most others reported in the scientific literature, are based on all the studies that we were able to find. In contrast, the drug companies pick and choose which studies they wish to include in their pooled analyses. For example, GlaxoSmithKline submitted 15 clinical trials of Seroxat to Swedish regulators. In addition to being published individually -- sometimes more than once -- studies with positive results were also included in six different pooled analyses. Most of the studies with negative results were, of course, not included in the pooled analyses.

There is yet another way in which pooled analyses can hide negative data. Rather than not publishing the negative data at all, the companies can bundle them together with data from positive trials, so that the overall result is positive. By so doing, they can truthfully claim that they have published the data from a negative trial, while hiding the fact that those data showed no difference between a drug and placebo. The article by the Swedish regulators showed that the data from about 20 per cent of trials were not published at all. The data from another 20 per cent of trials were bundled together with data from more successful trials, so that their negative results were hidden from view. Taken together, approximately 40 per cent of the data are kept out of sight. Practices like this [...] make it exceptionally difficult for reviewers to establish how effective the drugs really are." Source: The Emperor's New Drugs, p. 42


Well, you've assembled a huge pile of evidence there for the idea that publication bias exists, but I notice that none of the evidence you provide shows that publication bias is stronger for commercial drugs.


Could you post this as a comment on SBM? I think it'd add to the topic and would spark additional conversation.


Do you have links to some of the other studies? I wouldn't mind going through them as well.

Cheers.


You can probably find some of them linked to from maps.org under their MDMA research section, but I don't know all of them offhand. You can also just use Google Scholar or clinicaltrials.gov.

Personally though I think watching the lectures by the researchers (and the MAPS staff) is a better way to learn about this stuff than reading the actual research. Normally I prefer just reading the primary source, but when you have the actual researcher explaining each graph in their powerpoint then it's hard to beat that. (Albeit somewhat scattershot compared to reading the entire literature.)


Thank you! People seem to think that just because it is illegal, there was no reason the drug was ever invented. It used to be a legal drug used in psychotherapy before they decided to label it a schedule 1 "narcotic."


Who structured this study? They're doing it wrong ...


They're doing it right, this is just piss-poor journalism. See Alex3917's reply.


"Placebo controlled: 12 patients got the active drug, 8 got an inactive placebo. Exit poll: 95% correctly guessed which group they were assigned to."

No kidding


Thanks for the several comments that came in while I was busy with a work project. The author of the submitted commentary article, Harriet Hall, M.D., is a retired military physician. It is reasonable to assume that she has a personal bias toward finding effective treatments for military post-traumatic stress disorder (PTSD), and perhaps also a personal bias toward finding effective treatments for PTSD after rape.

Contrary to several comments posted while I was busy, the edited group blog Science-Based Medicine is about applying the methods and settled factual conclusions of science to the existing practice of "evidence-based medicine." As the group of authors put it, "Science-Based Medicine is dedicated to evaluating medical treatments and products of interest to the public in a scientific light, and promoting the highest standards and traditions of science in health care."

http://www.sciencebasedmedicine.org/index.php/about-science-...

There have been a variety of posts on the site over the years critical of large pharmaceutical companies, and there have been a variety of posts critical of various aspects of current medical practice. The tenor of the site is that everyone involved in treating human patients and guarding their health needs to be truth-seeking, considering prior probabilities and scientific fact when evaluating new treatments. The authors are not a propaganda outlet for any industry, and they have no financial conflicts of interest on the issues they write about.

Dr. Hall correctly points out that "Prolonged exposure therapy is a widely recognized therapy for PTSD. The rationale for using MDMA is to suppress anxiety and allow patients to talk about their traumatic experiences without feeling excessive fear and hyperarousal." Any effort to find a new treatment for PTSD, whether a prescribed drug or a new form of talk therapy, will have to be evaluated against what is already available to treat that tough disorder.

Any reader of Hacker News who hasn't seen the link below already should definitely check out LISP hacker (and Google director of research) Peter Norvig's article "Warning Signs in Experimental Design and Interpretation,"

http://norvig.com/experiment-design.html

which is a great checklist of what to look for in a report of a new research finding. Dr. Hall knows how to evaluate research reports in this framework, and she urges that people (like her) who hope that more effective treatments for PTSD are developed proceed carefully so that the new treatments are truly safe and effective.


With a name like "Science-Based Medicine" it just has to be true </sarcasm>



