Curry spice turmeric boosts memory by nearly 30%, eases depression, study finds (els-cdn.com)
726 points by bcaulfield 9 months ago | hide | past | web | favorite | 386 comments



From the Wikipedia page on Curcumin (section Research):

  In vitro, curcumin exhibits numerous interference properties which may lead to 
  misinterpretation of results.[3][6][20] Although curcumin has been assessed in 
  numerous laboratory and clinical studies, it has no medical uses established by 
  well-designed clinical research.[21] According to a 2017 review of over 120 
  studies, curcumin has not been successful in any clinical trial, leading the 
  authors to conclude that "curcumin is an unstable, reactive, non-bioavailable   
  compound and, therefore, a highly improbable lead".[3]
The study claims to have studied a bioavailable form of curcumin, Theracurmin®, but it uses a very restricted interpretation of "bioavailability":

  The use of adjuvants that block curcumin metabolism, or nanoparticles, liposomes, 
  phospholipid complexes, and other strategies have improved its bioavailability somewhat, 
  but only as defined as increased curcumin blood levels8,64–66 with minimum effects on   
  curcumin availability to the brain.
The study also notes its sample was small:

  The relatively small sample size in this study warrants caution in interpreting 
  our results and limits their generalizability. 
It just doesn't sound like their results are going to stand the test of time.


None of your criticisms from Wikipedia pertain to this publication. The paragraph essentially argues that a randomized, double-blind (DB), placebo-controlled (PC) clinical trial (RCT) is unlikely to show that curcumin is effective. Yet the published article is exactly that, and shows a statistically measurable impact on a clinically meaningful endpoint: a direct refutation of the paragraph.

The facts that curcumin has "interference properties" and is an "unstable, reactive, non-bioavailable compound" are subsumed by the study design.

  claims to have studied a bioavailable form of curcumin
This is not really relevant to the claim or conclusions. Could it be that the bioavailability matters? Maybe. Could it be that curcumin is a tricky molecule to study? Yes. But come what may, this is direct evidence (not proof) of a cause-effect relationship between this treatment and the outcome.

The only thing this affects is the chance that other forms of curcumin will be comparable (and it might have consequences for regulatory status with the FDA).

  The study also notes its sample was small
This is a very weak criticism when the study is positive (i.e. shows an effect). When the sample size (and statistical power) of a study is small, you need a BIG EFFECT SIZE for it to be positive. The fact that the trial was positive actually means it's more likely to be clinically important, because a big effect size means it protects memory in a dramatic way. This is like penicillin: in the pre-antibiotic era, how many cases of pneumonia did you need to treat in order to be satisfied that penicillin was effective? Not bloody many, because the effect size is so dramatic. Likewise for parachutes. Few are the therapies that fall into this category, but when you have an inkling that one does following a DB, PC RCT, you'd better pay attention.

So does it need to be replicated? Definitely. Does the sponsorship matter? Definitely; they could have cheated in some way. Until then, does it make sense to prescribe this to seniors, or encourage them to increase curcumin in their diet? Quite possibly, given the presumably low rate of adverse effects.

Bottom line: this study is not the end-all be-all of curcumin and memory, but there is legitimate early evidence that the emperor has clothes.

edits: mainly spelling, formatting


When a small sample size produces a big effect, it does not mean you can be more confident in the effect!

All things equal, you'd expect confidence intervals / credible regions to be very large.

Another way to say this: since the measurement error is large, you expect spurious findings that result from something like p-hacking to show large effects (they have to, in order to reach significance).
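To make the wide-interval point concrete, here is a back-of-the-envelope sketch (my own, not from the paper) of how wide a 95% interval on a standardized effect size is with roughly 20 subjects per arm, using the common sqrt(2/n) approximation for the standard error of Cohen's d:

```python
import math

def ci_width_for_effect_size(n_per_group, z=1.96):
    """Approximate 95% CI half-width for Cohen's d with two equal groups.
    The standard error of d is roughly sqrt(2 / n) when d is modest."""
    return z * math.sqrt(2 / n_per_group)

# With ~20 subjects per arm, the interval around an observed d = 0.68
# (the between-group effect reported for long-term retrieval) is huge:
half = ci_width_for_effect_size(20)
print(f"95% CI half-width on d: +/- {half:.2f}")
print(f"observed d = 0.68 -> CI roughly ({0.68 - half:.2f}, {0.68 + half:.2f})")
```

The interval stretches from "barely any effect" to "enormous effect", which is exactly the point: a single small trial pins the effect size down very loosely.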


What annoys me as someone not trained in statistics is that these discussions are always abstract, in-principle arguments about what proper statistics should look like.

Meanwhile, we have a real paper with real p-values, why not discuss the validity of that?

> SRT Consistent Long-Term Retrieval improved with curcumin (ES = 0.63, p = 0.002) but not with placebo (ES = 0.06, p = 0.8; between-group: ES = 0.68, p = 0.05). Curcumin also improved SRT Total (ES = 0.53, p = 0.002), visual memory (BVMT-R Recall: ES = 0.50, p = 0.01; BVMT-R Delay: ES = 0.51, p = 0.006), and attention (ES = 0.96, p < 0.0001) compared with placebo (ES = 0.28, p = 0.1; between-group: ES = 0.67, p = 0.04). FDDNP binding decreased significantly in the amygdala with curcumin (ES = −0.41, p = 0.04) compared with placebo (ES = 0.08, p = 0.6; between-group: ES = 0.48, p = 0.07). In the hypothalamus, FDDNP binding did not change with curcumin (ES = −0.30, p = 0.2), but increased with placebo (ES = 0.26, p = 0.05; between-group: ES = 0.55, p = 0.02).

I can see that there are a lot of p-values around 0.05, but that the supposed improvements have much lower p-values:

    (ES = 0.63, p = 0.002)  SRT Consistent Long-Term Retrieval
    (ES = 0.53, p = 0.002)  SRT Total 
    (ES = 0.50, p = 0.01)   BVMT-R Recall 
    (ES = 0.51, p = 0.006)  BVMT-R Delay 
    (ES = 0.96, p < 0.0001) attention 
What does this imply? Is it a good sign, or does it actually make it more fishy? pythonslange's comment[0] seems to suggest the latter, claiming that none of the other pre-registered tests are mentioned, without any explanation why.
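One way to sanity-check the list above is to apply a Bonferroni correction. The true number of tests the authors ran isn't stated in the abstract, so the m = 15 below is a guessed placeholder, not a figure from the paper:

```python
# Within-group p-values reported in the abstract quoted above.
reported = {
    "SRT Consistent Long-Term Retrieval": 0.002,
    "SRT Total": 0.002,
    "BVMT-R Recall": 0.01,
    "BVMT-R Delay": 0.006,
    "attention": 0.0001,  # reported as p < 0.0001; upper bound used here
}

# The paper describes an extensive test battery; the exact number of
# tests isn't given, so m = 15 is a guessed placeholder.
m = 15
threshold = 0.05 / m  # Bonferroni-corrected per-test alpha

survivors = sorted(name for name, p in reported.items() if p < threshold)
print(f"Bonferroni threshold: {threshold:.4f}")
print("survive correction:", survivors)
```

Under that (assumed) m, the two BVMT-R results would no longer clear the corrected threshold, which is why the number of tests actually run matters so much.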

(eyeing that "attention" one, if this does hold up under scrutiny, could I expect tumeric-based ADD medication somewhere in the future, without all the nasty side-effects of my current amphetamine-based options?)

[0] https://news.ycombinator.com/item?id=16229693


Here is the problem: If there is an actual large effect size, then you can detect this in small samples. However, the opposite is not true. In a small sample, spurious large effects are MORE likely, not less likely (outliers have a greater impact in a small sample). See: http://andrewgelman.com/2017/08/16/also-holding-back-progres...
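A quick simulation (mine, with made-up numbers: a true effect of d = 0.2 and 20 subjects per group) shows this exaggeration effect, sometimes called a Type M error: conditioning on significance, the observed effect looks several times larger than the truth.

```python
import random
import statistics

def simulate_type_m(true_d=0.2, n=20, sims=2000, seed=1):
    """Average observed |d| among 'significant' runs of a small two-group
    experiment whose true effect is only true_d."""
    random.seed(seed)
    kept = []
    for _ in range(sims):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(true_d, 1) for _ in range(n)]
        d = (statistics.mean(b) - statistics.mean(a)) / statistics.pstdev(a + b)
        z = d / (2 / n) ** 0.5       # crude z-test on the mean difference
        if abs(z) > 1.96:            # "significant" at alpha = 0.05
            kept.append(abs(d))
    return statistics.mean(kept)

avg = simulate_type_m()
print(f"true d = 0.2, but significant runs report |d| ~ {avg:.2f} on average")
```

So the small-but-significant studies that get published are precisely the ones whose effect sizes are most inflated.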


Thank you, that is an explanation that I can actually apply to this result.


I agree with the other commenter, and Gelman's blog is a good place to start. There are very clear, concrete statistical arguments, but they are difficult to summarize in the comments section of HN.

The gist is like this. Imagine that someone had people play 20 different slot machines. Then they went into a private room and looked at the results for each machine. After, they come out with the results of 5 of the machines and say, "look, our slot machines pay out at a higher than chance level!".

Do you believe them? I hope not. If only 4 machines had done well, maybe they would have shown only 4, or 3, etc.. They've effectively stacked the deck.

On the other hand, suppose someone said they were running an honest experiment with 20 slot machines. How many machines would you expect them to report on?
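The stacked-deck effect is easy to simulate. Below, all 20 machines are fair by construction, yet the best 5 still look like winners:

```python
import random

random.seed(42)

# 20 fair "machines": each pull wins with true probability 0.5; 100 pulls each.
rates = [sum(random.random() < 0.5 for _ in range(100)) / 100 for _ in range(20)]

honest = sum(rates) / len(rates)          # report everything
best5 = sorted(rates, reverse=True)[:5]   # report only the winners
cherry = sum(best5) / len(best5)

print(f"average over all 20 machines: {honest:.3f}")
print(f"average over the best 5:      {cherry:.3f}")
```

The honest average hovers around chance, while the cherry-picked subset sits above it purely by selection.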


I can follow your analogy, but it doesn't quite add up.

The researchers took a group of people and submitted each of them to an "extensive neuropsychological test battery".

That's not just testing 20 different slot machines, that's testing 20 different designs of slot machines.

(Also, each person does each test on their own, which is the equivalent of having them not only play twenty different models, but a unique production unit per person per model.)

In that light, the claim that all slot machines have a high pay-out chance is obviously suspicious, but would coming back and saying "these five designs have a higher than chance level of winning!" be an incorrect conclusion?

If each slot machine type is unique, no. But if one slot machine design is known to have a flaw, and it shares this flaw with another design, and that other design does not show the same increased performance, then things get really suspicious.

So the question becomes: do we know how strongly correlated the results of these tests typically are? If that is a lot (which I would expect to be true with at least some of these tests), the absence of the other tests is suspicious. If it is low, it might be less of an issue.


You hit on a lot of important points. The key here is that they are making a claim about memory, a psychological construct, that is being represented by some of their measures (batteries). But their batteries also purport to measure other psychological constructs, too.

To connect this with the slot machine analogy, it might be like if groups of slot machines had different colors, and they chose the color that yielded the best results.

> but would coming back and saying "these five designs have a higher than chance level of winning!" be an incorrect conclusion?

It would be if you didn't take into account that you analyzed 20 machines in your analysis.


The number of hypothesis tests is problematic (alpha inflation / familywise error rate) and the authors simply mention, "Another limitation was that we did not correct for multiple tests in the analyses as this was a pilot trial."

However, identifying their study as a pilot study does not excuse the lack of correction for multiple tests. These procedures are well known, so failing to use them maximizes the likelihood of ending up with false positives. The study thus presents the scenario with the highest likelihood of false positives, and seems to reflect the authors' desire to avoid failing to detect effects.

There is always a statistical trade off between reducing the numbers of failures to detect and reducing the number of false positives, so readers will be left to their own devices to decide if the authors' approaches and interpretations were justified.
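For reference, the familywise error rate the authors left uncorrected grows quickly with the number of tests, assuming independent tests each run at alpha = 0.05:

```python
alpha = 0.05  # per-test significance level, uncorrected

for m in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** m   # P(at least one false positive)
    print(f"{m:2d} tests -> chance of >= 1 false positive: {fwer:.0%}")
```

With a battery of a dozen or more uncorrected tests, finding *something* "significant" becomes more likely than not.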

The study's strength of argument is reduced by:

* the authors' self-description of the study as a pilot study with a small sample size,
* no corrections for multiple testing,
* use of a non-representative sample ("Only approximately 15% of the screened volunteers were included in the study, and our recruitment method yielded a sample of motivated, educated, physically healthy subjects concerned about age-related memory problems. The sample, therefore, was not representative of the general population."),
* PET brain imaging results self-described as "exploratory" (I take the word exploratory to mean "ourselves and others need not trust these results or interpretations" or at least "different results would not be unexpected").

What I find least appealing is the lack of a specific conflict of interest statement, even while there is an explanation of the authors' financial interests in the substance being studied and the company selling it. As noted in the article "Industry Sponsorship and Financial Conflict of Interest in the Reporting of Clinical Trials in Psychiatry" from the American Journal of Psychiatry ( https://ajp.psychiatryonline.org/doi/abs/10.1176/appi.ajp.16... ), "Author conflict of interest appears to be prevalent among psychiatric clinical trials and to be associated with a greater likelihood of reporting a drug to be superior to placebo." (Perlis et al., 2005), and this explains the primary conclusion of the current study: better than placebo.

We can't fault the inventors of the substance, holders of the patent, and those with financial interests in the company which sells the product for wanting to test their product and promote positive findings, but how objective are the investigators, and how rigorously are they trying to apply the notion of falsification to their own ideas?

They lose everything with falsification, and negative results would undermine the parent company's claims that, "Theracurmin® product is one of the most advanced and studied, highly bioavailable forms of curcumin in the marketplace." http://theravalues.com/english/ . Looking on the research page, there are only a small number of studies shown at http://theravalues.com/english/research-clinical-trials/ . All positive outcomes, of course.

All of the current supporting studies are listed at http://theravalues.com/english/literature-published-articles... so does anyone want to open up each of those studies and look at the sample sizes in each one?

In one of the supporting studies for Theracurmin® (https://www.nature.com/articles/srep39551), the sample size was six rats. In a second study (https://www.ncbi.nlm.nih.gov/pubmed/21603867) the sample size was 6 people.

What sample sizes do we see in other studies? How many other studies of this substance are equally as weak in terms of sample size? 6 rats here, 6 people there, 40 people here... not convincing. If anything, the strong marketing hype based on such studies makes me more wary and less trusting of the marketing and scientific claims.

Given the Theravalues clinical trials website promotes the product for "Progressing malignancies, Mild cognitive impairment (the study highlighted in the parent post), Heart failure / diastolic dysfunction, Cachectic condition, Osteoarthritis, Crohn’s disease, Prostate-Specific Antigen after surgery", I'm left with distrust.

Kudos to the efforts to begin product testing; this sort of research is time-consuming and expensive, but studies with sample sizes of 40 people do not support the company's marketing hype. If this study is positive enough for the authors to obtain more grant funding and run a much larger and better clinical study, then I would be interested to see what is found; happier still if authors with no financial or personal conflicts of interest ran the study.


I know you want this to be true given your stated dissatisfaction with the current state of attention-deficit medication, and I know that arguing about statistics seems pointless, but this is n = 40 (21 treatment, 19 control). No matter how good your p-values are or how many different tests you run, there's just nothing to take away from this right now.
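A rough power sketch backs up the n = 40 point. Using the standard normal approximation (z-values for two-sided alpha = 0.05 and 80% power; this is my calculation, not the authors'), the smallest effect this design could reliably detect is large:

```python
import math

def minimum_detectable_d(n1, n2, z_alpha=1.96, z_beta=0.84):
    """Smallest Cohen's d detectable by a two-sample comparison, via the
    normal approximation: z_alpha for two-sided alpha = 0.05, z_beta for
    80% power."""
    return (z_alpha + z_beta) * math.sqrt(1 / n1 + 1 / n2)

# 21 treated vs. 19 controls, as in the trial:
print(f"minimum detectable d at 80% power: {minimum_detectable_d(21, 19):.2f}")
```

In other words, with 40 subjects the study only has decent power for effects approaching d = 0.9, which is part of why the reported effect sizes look so dramatic.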

That is not to say it's worthless. If there are a number of repeated similar trials by different researchers, and this study is combined with those in a larger meta-study that continues to show significant results, then it could end up being a very important study to look back on.

But right now it's just 40 people eating refined turmeric and taking memory tests...


Yeah, I'm aware of my own bias. But maybe I phrased myself badly: it's not that I think that arguing about statistics is pointless, it's that this discussion often feels like it becomes disconnected from the paper at hand.

The replies to my complaint are nice counterexamples of that, though.


Most discussions about research findings are like that and HN is probably no exception. :-)

"The sample size is too small" or "they didn't control for [x] effect" are the sort of gut armchair comments people make offhand, knowing it probably applies to any study if you get challenged. I assume that's the sentiment you reacted to and I agree it can be frustrating.


This. Here is a relevant podcast featuring John Ioannidis (author of "Why Most Published Research Findings Are False"): http://www.econtalk.org/archives/2018/01/john_ioannidis.html


>> None of your criticisms from Wikipedia pertain to this publication. The paragraph essentially argues that a randomized, double-blind (DB), placebo-controlled (PC) clinical trial (RCT) is unlikely to show that curcumin is effective. Yet the published article is exactly that, and shows a statistically measurable impact on a clinically meaningful endpoint: a direct refutation of the paragraph.

I think, if there are solid reasons to believe that studies on curcumin will not turn up any useful results and then a study finds results it calls significant, the chances are that either that study is not designed very well, or its results are not significant, despite the claims of the authors.

Apologies in advance for the controversial example, but this reminds me of the controversy about Daryl Bem's parapsychology research a few years ago: the author claimed his study was methodologically unassailable, but of course he was making an absurd claim (essentially, that people can receive information from the future via ESP).

Edit: removed cruft.


>I think, if there are solid reasons to believe that studies on curcumin will not turn up any useful results and then a study finds results it calls significant, the chances are that either that study is not designed very well, or its results are not significant, despite the claims of the authors.

If there are indeed "solid reasons", then the rest is a tautology.

If there are not, then making such a conclusion is bias.


>Because curcumin's anti-inflammatory properties may protect the brain from neurodegeneration

My very first question was: please provide a possible physiological pathway for this. I'm so glad it was answered in the very first sentence. So many of these "Superfood does X" studies are just trials with 10 people over a month, with no explanation as to __how__ it could possibly happen. That significantly increases my skepticism whenever something like this comes up.

I also did not know turmeric had anti-inflammatory properties. I guess I have reading to do.

Also interesting that they used (what seems to be) a name-brand supplement instead of turmeric.


I am a neurologist. This is actually the exact opposite of what you should care about, which is "does it work to treat ___?"

In this trial, and 99% of other trials, there is no mechanism provided by the study, only a measurement of causality (which is the same thing as effectiveness).

The authors may think they know, based on other, prior, basic-science research (as they are postulating here), but it does not affect the conclusions of the trial whether they do or don't (or even if they are wrong).

Obviously it's intellectually satisfying to understand WHY, but it's not as important as "is it true?"


I am having trouble accepting your argument. Judea Pearl presents [0] the debate over whether smoking causes cancer as an illustration for why correlation is not sufficient. Two plausible, contradictory, models ('smoking causes cancer', 'no it doesn't, something else does') could not be settled by observing correlations. The debate was resolved through the addition of an explanatory mechanism, and a test of the mechanism's influence. Pearl, and others, have worked on the mathematical machinery necessary to examine this kind of causality.

[0] http://singapore.cs.ucla.edu/LECTURE/lecture_sec3.htm


None of this applies to a double-blind, placebo-controlled, randomized clinical trial (which is what this article is).

The reason it's the gold standard for medicine is that the only thing that differs in aggregate between the groups is the intervention. Therefore, it is the strongest possible evidence of causality we have, and does say something about causality (rather than correlation).

It's important to replicate, but the points made by Pearl are not relevant here, and the mechanism is moot when you have a causal link shown by an RCT.


Andrew Gelman has a nice blog post on the shortcomings of the double blind clinical trial:

http://andrewgelman.com/2018/01/08/benefits-limitations-rand...

To be honest, I think a randomised clinical trial is more of a starting point than an endpoint in terms of 'knowing' that an effect is genuine. The 'gold standard' moniker overdoes the authority of the RCT a bit.


Unfortunately, we don’t understand the exact mechanism of many (most?) available medications.

Correlation != causation, which is why you need to do randomized testing and try to account for other causative factors. But if you give a bunch of people a medication and they stop dying, you can pretty much say it is valuable and start giving it to people.


The issue that you're raising does not apply for randomized treatment.


This study established causality with a double-blind test, and the effect size was really big.


Obviously you don't understand HN.. everyone here is a "very smart software engineer" and knows far better than a simpleton like you with your decades of medical training and experience.


The neurologist wants what's best for his patients today, which is to know whether a treatment is likely to work or not.

Others may value the mechanism more, for very legitimate reasons. For instance, if you understand the mechanism, it leads to further research that could utilize a related mechanism, or perhaps reveal potential side effects that would otherwise be hard to identify.

You really need both. "Is it true?" is a great starting place and offers some treatments, but discovering the mechanism potentially gives a whole new class of treatments.


Completely true.

The problem is that efficacy studies (like I want) are not that exciting to report. As a result, the lay press reports only mechanism studies (because their readers like to be "wowed" and think about things, like the poster I originally replied to).

That wouldn't matter at all, but we all get sick. And when a person gets sick with this mentality that they need to understand the mechanism, they start overthinking the treatments that are available today, drawing all kinds of spurious conclusions, and making medical decisions that are unjustified at best and harmful at worst. And in my experience, the smarter the patient is, the more willing (and insistent) they are to take this approach (witness Steve Jobs). This is absolutely not the patient being "dumb" or stubborn: it's completely understandable given how they understand science. But it hurts them, which no physician likes to see.

Let's note how incredible it is that we are having this conversation among some incredibly expert scientists (albeit in other fields). It really shows how hard science can be to understand sometimes.


It's a matter of practicality too. A scientific experiment of any kind can only, in full honesty, provide a set of results given a set of conditions. No matter how well your theory may fit the existing data, fundamentally your data may simply be responding to some other mechanism you're not aware of or not measuring.

There is a huge amount of effort required to say why something works with certainty. Usually to get there requires many years of experience with a new surprising result. Which requires publishing lots of results before understanding the mechanism.

If you refuse to believe anything that isn't figured out from first principles is true... I mean... I have a condition and if someone told me a higher percentage of patients on a new drug did well than on the one I use, do I care if they can tell me exactly why? Not in the slightest. I will wait a year or two to see if side-effects crop up and ideally see the results confirmed, but I'm not going to not get a treatment likely to help just because they don't know how exactly how it works.

I think it's good practice to assume the results of a paper are accurate, even if you give no credence to the interpretation.


God forbid that anyone with a mathematical inclination begin to question study methodology...

https://mobile.nytimes.com/2008/04/08/science/08tier.html


Appeals to authority, such as those you've made, are worthless.

The motto of the Royal Society is quite literally "take nobody's word for it". If you wish to blindly swallow assertions made by people with academic titles, perhaps you should start by listening to those who advise against it.


An appeal to authority is perfectly reasonable if the person in question is actually an authority on the topic discussed. The logical fallacy you probably meant concerns appeals to authority in cases where the authority is in another domain.


No. An appeal to authority is always fallacious. Your heuristic may be useful, but it is just that - a heuristic, not an argument.


> No. An appeal to authority is always fallacious.

Says who? ;)

> Your heuristic may be useful, but it is just that - a heuristic, not an argument.

Indeed, but in all but the most trivial cases (stuff like Newtonian physics, or computer science, where most arguments are easily falsifiable), heuristics are all we have. Just look at the set of comments we are currently part of, e.g. many of the p-value posts, or the criticism of the small sample size. Those are valid arguments, but also wrong. But how can someone not trained in empirical science find out? Relying on an authority (relevant to the topic at hand) is a valid way to be wrong less often.

To put it pointedly: how do you know it's always fallacious? Have you actually proved it yourself, or is this from a textbook/the internet, i.e. from an authority? ;)


Yeah, if I'm going to be wrong I'd rather be wrong alongside a bunch of people who are way more qualified than me to guess. What's the p-value on trust in expert advice improving outcomes? For every case where "x was wrong" there are 1000 non-controversial cases where x was right.


> decades of medical training and experience

http://slatestarcodex.com/2013/12/17/statistical-literacy-am...

Ask your local doctor next time you visit to tell you what a P value means.

The medical literature is a tragicomedy of statistical incompetence and malpractice.

Specifically, this study has a lot of red flags: not pre-registered, so p-hacking can be assumed; industry-sponsored, which studies show greatly skews results; small numbers; unrealistically high effect sizes.


It was pre-registered. However, they did a battery of tests, and only report on a few. This is a method of cheating on the pre-registry process.


There are professional statisticians who believe the same as this HN cult.

http://andrewgelman.com/2017/01/30/no-guru-no-method-no-teac...


The problem with “is this true?” investigations is that there are tons of potential pitfalls: a combination of experimental limitations, poor academic incentives, and insufficient statistical and scientific training makes this very hard to do well. I think it’s bad science that will one day retrospectively be viewed as invalid scientific method.

I think the way to combat this is to avoid studies that solely attempt to demonstrate an effect, but to complement that part of an investigation with additional studies that demonstrate a plausible explanation. This reduces the risk of false positives and increases our confidence in the result.

The additional studies could take the form (for example) of deliberately intervening such that the effect should be manipulated by the intervention in a predictable and measurable way. Essentially, come at the hypothesis “from another angle”.


  there are tons of potential pitfalls and a combination of 
  experimental limitations
You provided some, which are at least partially addressed by the study design. (I don't think it's your point, but let me agree that "the scientist can cheat" is always true.)

  poor academic incentives,
This is addressed because the study is double-blind.

  insufficient statistical and scientific training
This is (partially) addressed by peer review and the availability of the data. You can't make this general criticism of the paper at hand: find the place where the design or the statistics used were in error.

  I think it’s bad science, that will one day 
  retrospectively be viewed as invalid scientific method.
That's a claim without any evidence to back it up.

  I think the way to combat this is to avoid studies 
  that solely attempt to demonstrate an effect, but to 
  complement that part of an investigation with additional 
  studies that demonstrate a plausible explanation. This 
  reduces the risk of false positives and increases our 
  confidence in the result.
If you have a double-blind, placebo-controlled, randomized clinical trial, then what can the mechanism possibly add? Is an effective medication less effective because we don't understand how it works? If you want an amazing example, read about the mechanism of action of general anesthesia: we have many ideas about how it works, but nobody really knows. Nobody gets caught up on this when their appendix ruptures. I hope, for your sake, that you will lower your standards in the event that you need surgery.

  The additional studies could take the form (for example) 
  of deliberately intervening such that the effect should 
  be manipulated by the intervention in a predictable and 
  measurable way. Essentially, come at the hypothesis “from 
  another angle”.
If you have a properly designed and executed randomized, double-blind, placebo-controlled trial, then you have measured the effect of the intervention, and only the intervention. You can repeat it in order to gain confidence, but you have to decide when the diminishing returns of "more confidence" are not worth your study dollar. And for curcumin, which is cheap and has few side effects... it's probably not that far away.

edit: fixed formatting


>> I think it’s bad science, that will one day retrospectively be viewed as invalid scientific method.

> that's a claim without any evidence to back it up.

You are correct. The irony is rich here.

I always welcome a good argument (whether in dispute or affirmation of an issue), but just saying "I think <XYZ>" doesn't make it so. Reading through this thread, all I see is the common HN default skepticism/negativity -- but more so: skepticism with less scientific rigor than the work being criticized.


I somewhat agree, but "is it true?" is all that science can answer.

A mechanism is really just a narrower example without irrelevant distractions or confounding variables.

Kind of like the difference between the bug reports "run this complex workload on this specific machine and it will probably crash after a few hours" and "run these three commands when the cache is cold and it crashes".


I find it odd that the sibling reply by fleitz (all that science can say is "is it false"), which is just a précis of Popperian philosophy of science, has been marked dead. Not a view that I share, but it seems uncontroversial.


Agreed, indeed my immediate response to that comment was "that's not how science works, it's falsificationist".

What is your view on science, do you think it can create - and verify - a true result? Whither axiomatic underpinning in such a view?


Well thanks for asking.

I think it's clear that science gets us truth, but that the mechanism for that complex social activity can't be reduced to a few axioms or principles; Feyerabend gives a detailed description [1] of how bad a scientist Galileo was (as judged by modern theories of what science is), yet he was right, and he advanced truth.

[1] Paul Feyerabend, Against Method (1975)


And thanks for responding ...

Heliocentrism isn't truth, it's opinion [that's my opinion, lol!!]. Einstein's relativity makes heliocentrism false by demanding that there is no aether and thus no special points in space.

Science provides our understanding of a best current description of reality - the whole point of scientific endeavour is to demonstrate we were wrong and that our understanding/description was flawed.

Moreover, it relies on maths, which is provably unable to verify its own consistency and completeness.

Furthermore, it's an axiomatic system that produces a probabilistic framework that gives us little to hang our conception of reality on [what I'm saying is that our perception is not particularly informed by physical reality; we have to subdue our instinctive perceptions to work with what amounts to our best, most consistent physics view of reality].

Science gives us as much truth as we can see in a well lit test-tube.


Actually GR makes both geocentrism and heliocentrism "true". One is free to pick whatever coordinate system is most convenient to solve a given problem.


Mathematical simplicity is insufficient for truth in my epistemology. Sounds very Platonic.



Quite funny, had to reflect on my own bias a bit. I still think it's important to be critical of purely observational or anecdotal results when tested evidence driven medicine does exist. For example, suggesting a cancer patient abstain from radiation therapy, offering the alternative of cayenne pepper smoothies.


I haven't seen cayenne pepper studies. But drinking baking soda, for example - https://www.ncbi.nlm.nih.gov/pubmed/19276390

significantly slowing cancer spread with for example 4 months survival result - less than 40% of controls and more than 80% of baking soda drinker mice https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2834485/figure/...

The study has all the statistical bells and whistles. And its result perfectly matches our understanding of the cancer spread process - local overproduction of acid from aerobic glycolysis of glucose (the dominant way cancer cells consume glucose) results in dissolution of the ECM (which is mostly collagen) - and that is the probable mechanism of action of the baking soda.


OK. But without knowing why, how can you be certain it's cause and not correlation? Especially in such a small trial?


isn't it because the control and treatment groups are randomised from the same pool?


It can still be correlation. It's possible you're just attributing something to an attribute you're tracking but that's just coincidence. Without a sense of why, you can't be certain about any conclusions.


in this case they seemed to expect an effect on the amyloid plaques which cause Alzheimer's and dementia (i.e. they bothered to do brain scans, not just cognitive tests)


As far as neurology and psychiatry go, if we're not asking "how?", it's because we're very far from comprehending. Our brains are exceedingly complicated. If two neurons can ride a bicycle, imagine the complexity arising in a system with billions and billions.

More generally, however, a plausible mechanism is a great aid in determining the potential value of further researching a correlation because as we all know, correlation != causation, but correlation + mechanism pretty much is causation, which means correlation + plausible mechanism is very interesting. It's not wrong to ask "how?" here, though it might not yield much given the nature of the field. Certainly, anti-inflammatory properties can plausibly have positive effects on the psyche, so that's a mechanism worth some looking into.


Can you comment on the practical improvements reported in this study?

For example, how meaningful is the 20.3 point difference in the Buschke SRT test? Is it noticeable in daily life, or just something detected in testing?


In your view, does this study fall short of answering the “is it true?” question?




For all the claims of bioavailability, I was surprised to see that Theracurmin(R) doesn't seem to contain piperine[1] or any related compounds (which are usually in curry powder courtesy of black pepper). The 90mg dosage also seemed low; I have been taking 500mg of 95% extract daily for a couple of years.

1: https://www.ncbi.nlm.nih.gov/pubmed/9619120


Would you say your supplement regime has helped you? Can you offer any insight into this?


Theracurmin's formulation is specifically designed to increase bioavailability without the use of bioperine.


Did some research and it seems to be somewhat more effective than curcumin+piperine:

Piperine study (PS): https://turmericaustralia.com.au/wp-content/uploads/2015/06/...

Theracurmin study (TS): https://www.jstage.jst.go.jp/article/bpb/36/11/36_b13-00150/...

In TS, table 3 shows the Cmax was 25.5ng after 30mg of Theracurmin.

In PS, table 2 shows the Cmax was 180ng after 2000mg of curcumin + 20mg of piperine (the table says 2g/kg, but after reading the Materials and Methods section I believe this is just a typo; 2g/kg would be a ton of curcumin).

So comparing the two, it seems that Theracurmin was absorbed about 10x more per mg consumed. Of course, the question is whether the absorption rate would hold if you consumed equal amounts of Theracurmin and curcumin + piperine, which we can't answer from these data, but it seems pretty promising.
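As a rough sanity check on that ratio, using the Cmax figures from the two tables above (this ignores dose nonlinearity and differences between the two study populations, so it's strictly back-of-the-envelope):

```python
# Cmax values as reported in the two studies cited above.
theracurmin_cmax_ng = 25.5   # after a 30 mg Theracurmin dose (TS, table 3)
theracurmin_dose_mg = 30

combo_cmax_ng = 180.0        # after 2000 mg curcumin + 20 mg piperine (PS, table 2)
combo_dose_mg = 2000 + 20

per_mg_theracurmin = theracurmin_cmax_ng / theracurmin_dose_mg  # ~0.85 ng per mg dosed
per_mg_combo = combo_cmax_ng / combo_dose_mg                    # ~0.09 ng per mg dosed

print(round(per_mg_theracurmin / per_mg_combo, 1))  # 9.5 - i.e. roughly the 10x above
```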


Herbalist Stephen Harrod Buhner advises against piperine because it tends to cause leaky gut.


How do you take it?


We eat a lot of turmeric. If you go to an Indian grocery store, of which there are plenty in the Seattle area, you can buy it in bulk very cheaply. We are vegetarians, so we put it in lots of veggie based dishes, especially stir fries. Frankly, I'm not much of a cook, and adding turmeric and pepper to my meals has led me to experiment with lots of spices, and generally improve the taste of the food I eat. Yes, there have been low points, but overall, it works for us.

We cook in a Kirkland Cookware set which I believe is stainless steel. It is still the same color it was when we bought it. Our blender, on the other hand, is now a depressing yellowish green. But it works the same regardless of its color.


I buy bulk 95% powder, pop a piperine tablet, and dump 3/4 tsp of powder in my mouth - eating something with it to get it down. I experimented with mixing it with various things, but it's hydrophobic and stains everything bright yellow, so the quicker the better. I was mostly interested in the anti-inflammatory properties, my family is full of arthritis, and between climbing and typing, I started noticing some in my hands once I hit 40. I do think it helps with that, and any improvements to memory or mood are a plus.


Instead of taking tablets, try adding it as an ingredient to your daily food. I think that will make it much easier to eat. Try cooking any Indian curry; turmeric is the default spice 90% of the time.


Curry is delicious, but it'd get pretty boring to eat it EVERY day.


I wasn't achieving my weight goals like I hoped so I threw my arms up and said "FUCK IT chicken and sweet potatoes for lunch from now on."

I've been eating curry-powdered baked chicken for lunch nearly every day for maybe 3 months straight, and somehow it's fine. Mostly I'm craving the massive amount of protein it gives me, so maybe my body is overriding any "boredom" with the food?


Your body gets used to what you are eating, and you actually start to crave that homemade stuff if you go on holiday. I've been eating chicken curry, rice and broccoli for the past 7 years. I only eat something else on cheat days twice a month and the occasional times I buy minced meat.

Not really that hard. It tastes great, and honestly, if you mind too much how the food tastes, your goals are somewhere else and/or you have depleted your taste buds. Even steel cut oats taste great with only a little added salt.


Overnight steel cut oats + flaxseed + chia seed + peanutbutter = my breakfast for the last 4 months.

Are we on the same diet?


I understand what you're saying but I have a hard time believing you eat the same thing every day without getting bored.


Huh, well, I do! Sometimes I might add either Cock Sauce, or Green Sauce, or Orange Sauce. Sorry that's so unspecific, that's literally how they're labeled in my work fridge, in otherwise unbranded bottles >.<


Yeah but... It's still a chicken curry every day...

Don't get me wrong, it's just my opinion. I would get bored, I can't talk about other people.


Which is why you change up the blend. For example I recently got sick of toasted black mustard seeds, so I stopped putting them in my beans. I might leave out clove, or add some allspice, etc. etc.

You've really got to change it up to keep on top of your spice game.


There's way more than one type of curry and more than one way to enjoy a spice. For turmeric, fry some pulled aubergine with turmeric, garlic and tomato, for example.


I use turmeric a lot on chicken even when not cooking Indian style food - eg when I make fajitas.


I put it in my daily smoothie


Eating a spoonful of powder sounds like a nightmare I sometimes have.


It's a small spoonful, I don't find it hard. You can certainly find things it will mix with (like yogurt, peanut butter, or honey) but it is really in another league when it comes to staining things, and I don't find that worth the trouble.


Try mixing it with honey and have it, it will be much better.


Or an arguably more wholesome version of the YouTube meme of eating Tide pods


In 2012 there was a similar meme called the Cinnamon challenge: https://en.wikipedia.org/wiki/Cinnamon_challenge


youtube dumbasses one-upping each other until they're literally eating poison

>meme


I used to have to do it every single morning for a bogus supplement pyramid scheme my guardian fell for.

It doesn't get any less disgusting even after a couple years.

They also sometimes made me drink hydrogen peroxide mixed with water, but that's another story.


This comment raises a lot of questions.


You get two questions, make 'em count.


Mix it in a glass of warm milk with honey.


95% what? curcumin or turmeric extract? I ask because I've ordered plant extracts on alibaba and did GC analysis. From the labeling it is pretty difficult to tell which you're getting.

One may be very high purity curcumin while the other is curcumin alongside all the other components extracted.

This, for instance:

https://purebulk.com/curcumin-turmeric-extract-powder/

seems like what you probably would find with a simple search, and there's not even a mention of the actual curcumin concentration.


For arthritis I would advise a simple and cheap solution: https://www.ncbi.nlm.nih.gov/pubmed/7889887


Ever try filling your own capsules?


I have - filling by hand (set up opened capsule halves, pick up larger half, scoop and pack powder into it, close it with other half, repeat) for something you take daily gets really old really fast.

If you know a better way I'm all ears, as I'd love to keep filling my own piracetam capsules (commercial caps are 800mg, way too strong for me) but hate actually doing it.


They make bulk capsule fillers that might help a little. It's still manual but not quite as fiddly. You can find them on amazon for ~$25


When I took Piracetam I used a Cap M Quik machine[1] to make batches of 50 capsules at a time. Took about 15-20 minutes to make a batch and I was pretty leisurely about it.

[1]https://www.cap-m-quik.com/


My favorite "tea" to make is just chopping up a little ginger, a little turmeric, and some sort of citrus zest, steep in boiling water for 10 mins. You can re-steep it several times also. I drink it every day because it's delicious, as well as calming and anti-inflammatory.


Yes! I too make that exact tea, it tastes of the earth and really chills me out! Recommended!


Every morning I make 1.5L of tea with one root of cut/chopped turmeric and a bit of cut/chopped ginger (for the taste); I also add a bit of ground pepper.


It makes chicken or turkey delicious. Fry a sliced onion for a while in a pan with some hot chilli, then add chicken or turkey chops and finally a spoonful of curcumin plus a bit of salt; keep cooking & mixing until meat is cooked and curcumin is completely mixed with the rest.



Looking through the comments, I'm really glad I posted the paper itself to HN, rather than the various writeups of it.


Turmeric has helped me quite a bit with my inflamed joints after I learned about it at a yoga ashram. Now I cook a lot with it. What I have noticed, though, is that supplements do nothing for me. I only feel a difference when it's fresh powder, or even better, ground from a root.

I have done several experiments where I stopped taking it. Every time my joints got worse and then better after resuming cooking with Turmeric.

No idea how it works but it seems to work.


To add to your experience, I have managed to treat several severe tooth/gum abscesses with fresh sliced turmeric root steeped for 4-12 weeks in Brazilian sugarcane rum (51 pinga). It kills the pain almost instantly, and after a few days of use the infection disappears. I now swish it in my mouth once a month or so as a preventative measure.

It's important to note that there are more than 10 varieties of turmeric, and they have markedly different morphologies.

Also: growing conditions, or perhaps some other factor, affect the depth of the yellow color of a crop, as well as its smell, which seem to be indicators of potency.

Lately, particularly since the recent judicial decision in India regarding the intellectual property rights of turmeric, which denied a patent to a western company, I've seen general attitudes toward 'curcumin' in western academia shift on its potential.

I don't expect any study that isolates the curcumin molecule (which is widely known to be non-bioavailable on its own) to be very effective.

Furthermore (conspiracy theory warning), considering the lack of scruples of the bio-based businesses (med, agri, gen, chem), I wouldn't be surprised to see medically inert or even toxic varieties of 'unprofitable' and/or 'competitor' plants flood the market in places where turmeric isn't grown among traditional small farmers. Just a thought, and it fits the Monsanto narrative in the US and India.


It's common among men with low testosterone to have aching joints. Have you checked your levels?

Also funny that some men on TRT take turmeric (with some black pepper) to lessen the amount of estrogen converted from testosterone.


Actually, I checked my levels a few years ago and they were in the lower range of normal. The doctor said it's fine, but I was wondering about that. My recovery time from injuries and workouts has also increased a lot. It's probably part of aging, but testosterone could play a role too.


You need to do a full blood analysis. I'm not a doctor, but I know that testosterone can be bound to SHBG (sex hormone-binding globulin), and then it's useless for the body. Your total amount of testosterone can still be high, in range and so on, but if a large % of it is bound to SHBG it's no different from having an under-range level. You'll still feel miserable, have a more negative view of everything, and have an increased risk of cardiac problems.

So check your amount of free testosterone and the % bound to SHBG. Also, levels mean nothing if the balance between testosterone and estradiol ("e2") is bad.


> It common among men with low testosterone to have aching joints.

Great... We can have either pain or hair loss... And don't even think about the prostate...

Intelligent design... Meh...


It's almost guaranteed that some of your experience is the placebo effect, but it's impossible to know how much.


As I wrote before I have tried a few times back and forth with consistent results. Maybe it's placebo but it works reliably for me. As long as it works I am happy.


The thing with placebo is that it is an effect. As you say, if it actually helps (and does no harm, or replaces something that helps much more...) - then placebo can be useful and beneficial. It can be difficult to prescribe however...


The thing about placebo is that you can't measure it on yourself - you can only find out with large enough double blind tests. But as long as it works for you - placebo or not - good for you!


I post this whenever HN brings up a comment about supplements -- I created a site to help people track the supplements they try and how they impact health and sleep.

HN and Quantified Self folks find it a niche site for recording correlations and habits for self-improvement.

https://betterself.io/ (Open sourced)


This is how science works. You start with an anomalous discovery. Publish it, write a grant to do more research.


More and more, I'm learning that inflammation is a huge problem in modern health. I'd be very interested in a list of anti-inflammatory foods, supplements and behaviors.


Beware. We don't know that the inflammation is not compensatory. Since we know its purpose IS to be compensatory, it seems more reasonable to default to it being compensatory, than the original problem; or better, to withhold our judgement.


Inflammation is a symptom of most auto-immune diseases. Usually the inflammation is caused by your own auto-immune system attacking your healthy cells.

Some examples: uveitis (eye inflammation), Crohn's (bowels), rheumatoid arthritis (joints). Usually these diseases come in pairs, so some people are just more susceptible to whatever causes them. Right now there is no cure, but in the last decades huge progress has been made in suppressing these diseases using biologics, though they have the side effect of suppressing your immune system, which may be dangerous.


I agree. "How" is a difficult question to answer, and involves basic research. "X does Y" studies need a hypothesis, a bunch of test subjects, and some statistical wrangling. Far easier than basic research. I'm now at a point to take all statistical studies with a pinch of salt. Molecular studies are more interesting and while micro and may miss the macro picture, are more promising.


Not to say it's not useful (a plausible method of action + weak evidence is more suggestive that further study is needed than weak evidence alone, and that understanding could lead to further developments), but the question of does this work is far more important, and can be answered without knowing the how.


I vaguely remember reading that the physiology of acetaminophen is still not completely understood. I doubt we will be lucky to figure this one out quickly. I also remember reading that the level of dementia in India is much lower than western nations and turmeric was thought to be part of the reason for that.


I'm from India. Whenever I get a throat infection, my grandmother gives me a spoonful of turmeric powder mixed with very hot milk, no sugar.

To date, it has not failed once to cure the inflammation/infection/pain. The infection is gone within a day or at most a couple of days.


> The infection is gone within a day or atmost a couple of days.

As they usually do on their own.


Yes. However for me, when not using the turmeric milk thing, it lasts up to a week. There was a clear reduction in the time it took for the infection to subside. Now, I am not saying this is proof that it's a magic medicine. Just an observation.

It could as well be a placebo effect.


Given what we’re learning about the impact of gut microbiome on CNS, immune system, and other systems I wonder if the GI effect is it, but with secondary effects related to activity of bacterial colonies.


Gut biome is insanely important, I've had digestive issues for over a year and it's been depressing that they couldn't find the cause.

I finally gave up waiting and decided to experiment myself. I started drinking a Yakult every morning and last thing at night; within two weeks things started... moving, and now 6 weeks in it's like it used to be: the bloating's gone, I'm regular as clockwork, energy levels are up and I generally feel better.

I still have other issues that are more serious but the medication handles those ok.

I started taking vitamin D a few weeks before that, and that seemed to help as well.


found this, 'Biochemical mechanism of modulation of human P-glycoprotein (ABCB1) by curcumin I, II, and III purified from Turmeric powder'

https://www.sciencedirect.com/science/article/pii/S000629520...


Small study, probably not preregistered (didn't find anything about it in the description). Likely Publication Bias or p-hacking. I'm starting to get interested when it's been independently replicated.


> Small study

Almost entirely irrelevant, provided it's properly powered--and based on the effect size and p-values, it is. Furthermore, for an 18 month trial, 40 people is larger than most.

> probably not preregistered

I mean, true, and we should definitely promote pre-reg, but, 1. so few are preregistered that I'm not sure it's prudent to disregard a study based on this and 2. it's an academic study, neither a clinical trial nor a public-health decree--not a typical study type that necessitates pre-registration.

> Likely Publication Bias or p-hacking.

What? You're arrived at this conclusion based on... the fact that it's "small" and you couldn't find preregistration about it?

> I'm starting to get interested when it's been independently replicated.

Yeah, if only there were another trial that showed a positive cognitive effect of bioavailable curcumin... https://www.ncbi.nlm.nih.gov/pubmed/25277322

I mean, the title of this post is by far the most wrong thing about this article, as...

1. it's not turmeric, it's curcumin, which is about 2% of turmeric by mass (so you'd need to eat ~4.5g of turmeric daily to match the study doses).

2. curcumin doesn't even have any reasonable bioavailability by itself, so they're using a bioavailable formulation.
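The arithmetic behind the ~4.5g figure, for anyone checking (assuming the ~2% mass fraction, which varies by turmeric variety):

```python
curcumin_dose_mg = 90       # daily curcumin dose used in the study
curcumin_fraction = 0.02    # curcumin is roughly 2% of turmeric by mass
turmeric_needed_g = curcumin_dose_mg / curcumin_fraction / 1000
print(turmeric_needed_g)  # 4.5 (grams of turmeric per day)
```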


Last paragraph:

The University of California, Los Angeles, owns a U.S. patent (6,274,119) entitled “Methods for Labeling ß-Amyloid Plaques and Neurofibrillary Tangles”, which has been licensed to TauMark, LLC. Drs. Small, Satyamurthy, Huang, and Barrio are among the inventors and have financial interest in TauMark, LLC. Dr. Small also reports having served as an advisor to and/or having received lecture fees from Allergan, Argentum, Axovant, Cogniciti, Forum Pharmaceuticals, Herbalife, Janssen, Lundbeck, Lilly, Novartis, Otsuka, and Pfizer. Dr. Heber reports receiving consulting fees from Herbalife, and the McCormick Science Institute. The manufacturer of Theracurmin, Theravalues Corporation, provided the Theracurmin and placebo for the trial, funds for laboratory testing of blood curcumin levels, and funds for Dr. Small's travel to the 2017 Alzheimer's Association International Conference for presentation of the findings.


If that were the gp's argument, I'd have no problem with it (heck--I'd agree with most of it).


>> Small study

>Almost entirely irrelevant, provided it's properly powered--and based on the effect size and p-values, it is.

Sorry to state this so strongly but I feel it's important to point out that this is misinformation of the most egregious kind.

There is no examination of statistical power in this article whatsoever so there is no basis for your claim that this study is properly powered.

Large effect sizes are not evidence of adequate statistical power. Quite the opposite is true. E.g. See this article: https://www.nature.com/articles/nrn3475/figures/5

'Effect inflation is worst for small, low-powered studies, which can only detect effects that happen to be large.'


  Sorry to state this so strongly but I feel it's 
  important to point out that this is misinformation of the 
  most egregious kind.
  
  There is no examination of statistical power in this article whatsoever so there is no basis for your 
  claim that this study is properly powered.

The power is only relevant when you fail to detect an effect. This is a positive study, so the power is simply not relevant to criticizing that particular study.

The nature article you reference talks about literature in aggregate, and the problem created by small studies in general, but it does not allow you to say anything about a single positive study. In fact, the place where those very authors draw their primary conclusion, they state:

  Inflated effect estimates make it difficult to determine an adequate sample size for replication 
  studies, increasing the probability of type II errors
Which means: when you try to replicate it, you might fail because you thought the real effect size was bigger than it is, based on your estimated effect size with small n.

That's well and good, but is a problem for the replicating authors, not for these.

An even more direct point related to the current article: it was preregistered, so you know the authors have not been farming well-designed but small curcumin studies until one was positive by chance and then publishing it. The act of pre-registration goes a very long way toward addressing the issues raised by Button et al.


> The power is only relevant when you fail to detect an effect.

This is simply not true. It is sometimes called the fallacy of "what does not kill my statistical significance makes it stronger”.

See http://andrewgelman.com/2017/08/17/just-google-despite-limit...


  The power is only relevant when you fail to detect an effect. 
This is absolutely false. Small, low-powered studies are more likely to produce false results than larger studies. This was well-documented by John Ioannidis over ten years ago, in work that should be required reading for all publishing scientists:

http://journals.plos.org/plosmedicine/article?id=10.1371/jou...


This is addressed by pre-trial registration. You can't have publication bias (his argument) when you register the study and see that this is the only one.


I know this is a dead thread: I'm answering because if you are a clinician you need to know that your understanding of the statistics is incorrect. This statistical effect has nothing to do with publication bias. If you want to continue the discussion please email me and I can explain and give copious references which will show you why.


Am I right in thinking that statistical power is the true positive rate, so a test that always rejected the null hypothesis (whether correctly or incorrectly) would have the highest possible statistical power?


No, statistical power as a concept is independent of the observed result of a test.

It is widely misunderstood though, so often people try to use an observed result to justify low power.

http://andrewgelman.com/2017/08/17/just-google-despite-limit...


Is the wikipedia page on statistical power plain wrong then?

"The power of a binary hypothesis test is the probability that the test correctly rejects the null hypothesis (H0) when a specific alternative hypothesis (H1) is true. The statistical power ranges from 0 to 1, and as statistical power increases, the probability of making a type 2 error decreases. For a type 2 error probability of β, the corresponding statistical power is 1-β. "

So according to that, if I always reject H0 whether rightly or wrongly, that means β=0 and power=1.
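Formally, yes: a rule that always rejects has β = 0 and thus "power" 1, but only because its false-positive rate α is also 1, which is why power is always quoted at a fixed α. The standard calculation can be sketched like this (a normal approximation for a two-sided two-sample test; real analyses use t-distributions and tools like G*Power or statsmodels, and the numbers below are illustrative, not from the paper):

```python
from math import sqrt
from statistics import NormalDist  # stdlib, Python 3.8+

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test.

    d:           assumed true effect size (Cohen's d)
    n_per_group: subjects per arm
    alpha:       false-positive rate under H0

    Power = P(reject H0 | H1 true). Note it is a function of the
    design parameters only, not of any observed result.
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    shift = d * sqrt(n_per_group / 2)  # mean of the z statistic under H1
    return (1 - z.cdf(z_crit - shift)) + z.cdf(-z_crit - shift)

# An assumed d = 0.6 effect with 20 subjects per arm is detected
# less than half the time...
print(round(two_sample_power(0.6, 20), 2))   # ~0.48
# ...while 100 per arm would detect it almost always:
print(round(two_sample_power(0.6, 100), 2))  # ~0.99
```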


it's not turmeric, it's curcumin, which is about 2% of turmeric by mass (so, you'd need to eat ~4.5g turmeric daily to match study doses)

Typical Indian cooking uses turmeric in pretty much everything. I wouldn't be surprised if daily consumption of turmeric is very close to the figures you mentioned.


Interestingly, one of the recent fads I've been seeing has been for "turmeric lattes" - maybe can reach that level even if you're not keen on Indian cuisine?


Apparently, eating more than a teaspoon of turmeric a day increases the risk of kidney stones, though, so don't overdo it.

I've been mixing it through my oatmeal, together with chilli pepper and black pepper, a bit of cardamom, and a teaspoon of real cinnamon (the low-coumarin kind). While it feels like it works, I know that the placebo effect is probably much stronger than any real effect these foods have. Still, a placebo effect is a real effect on my mood, so that's still a kind of health benefit, I guess. If nothing else, the spices help me wake up!

Tangent regarding health food fads: these days I mainly use nutritionfacts.org[0] to determine which of those are actually supported by the latest nutrition research. It is the only health food/diet website that I know of that directly cites nutrition research and continuously scours the latest papers for new findings - sources are always linked, and quotes are directly lifted from the papers with no modification (so less likely to suffer from "stronger or opposite of what the paper actually says"-shenanigans often seen elsewhere).

Some of the other things I tried out based on that website have too big of an effect to just be placebo: adding blue-berries to a meal really reduces sugar rushes/crashes in the hours after it[1]. Taking a table spoon of freshly broken flax seeds[2] does a lot to counter the rise in blood pressure due to my ADD medication (I measured it, plus I feel a lot less discomfort in my chest area and less jittery).

Last time I mentioned that website here, it was in a discussion of which diets are healthy. Ironically, it was the only comment in the discussion that attracted multiples downvotes without explanation, while everyone else was sharing their unsourced opinions.

[0] https://nutritionfacts.org/

[1] https://nutritionfacts.org/video/green-smoothies-what-does-t...

[2] https://nutritionfacts.org/video/flax-seeds-for-hypertension...


"turmeric lattes" :)

My kitchen medicine for a runny nose or sore throat is hot milk with turmeric. Works for me most of the time.


Also:

http://www.ijamhrjournal.org/article.asp?issn=2349-4220;year...

> Worldwide, esophageal cancer is the eighth most common cancer [...]. In India, it is the fourth most common cause of cancer-related deaths.

Granted it's probably due to alcohol and tobacco but curry probably won't save you from cancer.


This is spurious correlation at its best. Terrible science at its worst.


chewing tobacco is more common, at least among the uncles I watch.


What prevents researchers from running studies like these on the order of thousands of individuals, as opposed to 40 or 100?


A lot of things that basically boil down to budget, optimization and space available. If you have a drug that you believe can show a strong effect with an effect group of ~15 people, maybe you run the trial with 40 (e.g., 15 experimental, 15 control, plus wiggle room for drop-outs over the 18-month course and extra certainty).

Check the last few paragraphs--"The authors thank Shayna Greenberg, Dev Darshan Khalsa, and Anya Rosensteel for help in recruitment, data management, and study coordination; John Williams, University of California, Los Angeles, Nuclear Medicine Clinic, for performing the PET scans; and Vladimir Kepe, Cleveland Clinic, for developing the FDDNP-PET analysis procedures for proteinopathies in patients with neurodegenerative diseases." Add all that to the effort involved in the actual chemical extraction of the bioavailable curcumin analogue and it just doesn't make sense to add any more people than you need.


For the recruitment, data management, and study coordination piece, sounds like a great area to optimize with software. I'm sure big pharma will also happily shell out for something to help them manage their own clinical trials. Academia PIs can then get the same stuff at a lower rate.

Edit: Heck, you can even lease top grade medical space in all major metro areas and offer it as part of the package in a WeWork style arrangement. I'm sure they are tons of nuances to rolling something like this out, but it sounds like it can cut costs substantially.


If they're marketing their own drug (as implied by other comments - I don't know), you might also ask what stops them from running the study on 100 groups of 40 patients and picking the "best" for publication... (other than ethics, obviously)...


Cheaper to run it on 100 and select the best 40 to report in your study.


As others have pointed out, it's about money. If the compound is expected to be very profitable, then it makes sense to invest a lot of money in a large-scale study. On the other hand, if it is a natural compound that you can't really patent, the profitability will be much lower, and it doesn't make sense for a commercial entity to do a large-scale study. If the compound is important enough, a government entity will sometimes invest the money to study it.


Money.


Funding


Thank you for your informed comment.

I don't' work in the field so I find it hard to interpret these studies. What's your opinion about the size of improvements in practice?

For example, How meaningful is the 20.3 point difference in Buschke SRT test?

A Cohen's d (mean difference divided by SD) above 0.60 for the memory or attention measures seems good (73% of the people who take the dose score above the mean of the untreated group). But a difference that is detectable in a test doesn't tell me how big the effect would be in daily performance unless I'm familiar with the test.
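That 73% figure falls out of the standard normal CDF evaluated at d = 0.60. A minimal sketch, assuming normally distributed scores with equal SDs in both groups (the means and SD below are hypothetical, purely for illustration):

```python
from statistics import NormalDist

def cohens_d(mean_treat, mean_ctrl, pooled_sd):
    """Standardized mean difference between two groups."""
    return (mean_treat - mean_ctrl) / pooled_sd

# Hypothetical numbers: a 6-point gain on a scale with SD = 10
d = cohens_d(mean_treat=106.0, mean_ctrl=100.0, pooled_sd=10.0)

# Under normality, the fraction of the treated group expected to
# score above the control-group mean is the standard normal CDF at d
frac_above = NormalDist().cdf(d)
print(round(d, 2), round(frac_above, 2))  # 0.6 0.73
```

NormalDist().cdf(0.6) is about 0.726, which is where the roughly 73% comes from; by itself it says nothing about how noticeable the change is in daily life.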


It was registered as a clinical trial (https://clinicaltrials.gov/ct2/show/NCT01383161?term=NCT0138...). If you are concerned about p-hacking and replicability, what should worry you is that the outcome measures posted during registration are rather vague, i.e. ".. show less evidence of cognitive decline (as measured with neuropsychological assessments)"

At baseline, they administered an extensive neuropsychological test battery, including a bunch of subtests:

  -Trail Making Test A
  -WAIS-III Digit Symbol Substitution
  -WAIS-III Block Design Test
  -Rey-Osterrieth Complex Figure Test (copy)
  -Trail Making Test B
  -Stroop Interference
  -F.A.S.
  -Buschke-Fuld Selective Reminding Test [SRT]
  -Wechsler Memory Scale-3rd Edition [WMS-III] 
  -Verbal Paired Associations I
  -Benton Visual Retention Test
  -Buschke-Fuld SRT
  -Rey-Osterrieth Complex Figure Test [recall]
  -WMS-III Verbal Paired Associations II
  -Boston Naming Test
  -Animal Naming Test
The outcome measures they report are:

  -TMT-A
  -SRT
  -BVMT-R
I would be _very_ surprised if those were the only tests they performed at followup after an 18-month (expensive) clinical trial. If these measures had been specified as the only outcome measures of interest, no one would question the results. But when they were not registered, and when no other measures are reported, not even in the supplementary material, the reader is left to wonder why that is.


Excellent point. Outcome measures should have been more clearly defined in pre-registration. As it was done, p-hacking and selective reporting were still possible.


That's a good point. After having seen much of the brain training literature (especially its early studies), this wouldn't surprise me at all.


Great "credibility check". There should be a "nutrition facts table" for scientific papers quoted in articles: sample size, preregistration, replication, p-value.

At the very least, it should be standard practice to include that in the abstract.


Maybe along the lines of 'A statistical definition for reproducibility and replicability'? I think there are more attributes to be included, but it seems like a good place to start on a 'nutrition label' for papers.

[0] https://www.biorxiv.org/content/early/2016/07/29/066803

[1] https://cran.r-project.org/web/packages/scifigure/index.html


If every study on turmeric, tea, chocolate, and coffee were fully accurate, we'd all be immortal by now.


The problem you describe is a real one, but a bit different than the other criticism raised here so far.

Chocolate has the Mars Company and other Big Sugar companies behind it, pushing to justify chocolate as a "health food" in the public mind. It is very obvious that they are willing to distort the truth for better sales; people who can ease their conscience about eating unhealthy sweets buy more chocolate[0]. There might be some real effect from eating raw cocoa, but that does not really translate to your average chocolate bar with insane amounts of fat and sugar.

Tea/coffee is kind of similar, as is the "drink x glasses of water per day" advice, which is a distortion of science by bottled-water companies (fruits, vegetables, and other foods already contain a ton of water).

For this reason I also don't buy that salt is safe: there are very strong vested interests in the food industry that want salt to be (considered) safe to sell more food, whereas I cannot see a comparable bias on the "eat less salt"-side.

For this research, the situation seems to have some of the mentioned problems, but not to the same degree:

On the one hand, that turmeric has an in-vitro effect is pretty well established. The issue is that the normal form we put on our food is probably not absorbed in a dosage that has a significant effect. However, this was a trial with a bio-available form, in a high enough dosage, so that obvious flaw is out of the picture.

On the other hand, that bio-available form is a product, so the company behind it wants it to be effective. Still, Big Pharma is under more scrutiny than Big Sugar, so I have cautious hopes this might actually work out in the end (and it won't be sold in the form of calorie-rich sweets at least).

Admittedly, I'm biased here: my grandmother on my mother's side and my grandfather on my father's side both had Alzheimer's (or something very similar to it, at least). My parents are now in their late sixties; I would really like to see them stay healthy and happy for a few more decades.

[0] https://www.vox.com/science-and-health/2017/10/18/15995478/c...


Here's a study on salt you may enjoy:

https://www.ncbi.nlm.nih.gov/pubmed/29359681


I'm not a native speaker, what does the K stand for? Potassium? (as in "Kalium-salt")

Another argument in favour of eating potassium and less sodium, which is pretty well established:

https://nutritionfacts.org/video/lowering-our-sodium-to-pota...


Correct, K = potassium


Aren’t we? People are living longer than ever before. The issue is in determining effect size. I’m sure some amount of all those things you mentioned is “good” in some sense, but in what quantity and for how long?


People are "living longer" because child mortality has fallen dramatically.


Not only. Life expectancy has increased in every age group percentile. But the effect is obviously not due to tea, coffee and turmeric. Rather, it’s due, mainly, to (a) better general nutrition, (b) better hygiene, and (c) modern medicine, especially antibiotics and vaccines.


If, in making those statistics, one didn't account for that, I'd be very disappointed, but it doesn't look like that's the case. [1][2]

[1] http://life-span.healthgrove.com/l/51/50

[2] https://www.ncbi.nlm.nih.gov/books/NBK62587/


I'm doing my part!


Except that it was preregistered, as described in the article.

https://clinicaltrials.gov/ct2/show/NCT01383161?cond=NCT0138...

It's great to be skeptical, but it's really lame to be skeptical and lazy. It's a terrible combination that causes patients not to trust their doctors.


This is becoming an increasingly worrying trend: to debunk any analysis with a comment on "small sample size, biases" that somehow moves to the top of the comment pile.

Statistically, there is no such thing as a 'small sample size', in isolation. It is always related to the power and effect that the researchers want to work with.

Moreover, even if a study is statistically insignificant, it DOES NOT mean that the opposite conclusion is true. It might often merit some further examination of cultural and historical trends. In this case, for instance, turmeric has been used for medicinal properties in Asian cultures for centuries. Theories of evolution suggest that this wouldn't be the case (and it would've died a natural death), if there wasn't some merit.


>In this case, for instance, turmeric has been used for medicinal properties in Asian cultures for centuries. Theories of evolution suggest that this wouldn't be the case (and it would've died a natural death), if there wasn't some merit.

Lots of things that are completely useless have been used for thousands of years for their medicinal purposes. The evolution argument only holds if there were some negative effects here. Anything that doesn't do harm to the body can easily integrate into the huge body of knowledge called superstition. Eating it is tasty and doesn't hurt, there's nothing more needed to explain that Asian cultures have it as a medicinal herb.

If it for example was good for you but caused somewhat unpleasant side-effects, your evolutionary argument might have more merit. Anything that works as well as homeopathy (it doesn't), but doesn't harm either (like homeopathy), won't be sorted out.


> In this case, for instance, turmeric has been used for medicinal properties in Asian cultures for centuries. Theories of evolution suggest that this wouldn't be the case (and it would've died a natural death), if there wasn't some merit.

That is utter nonsense. To give just one counter-example, bloodletting was used for millennia for conditions where it’s completely ineffective (and, in fact, actively harmful: bloodletting has killed scores of people). In fact, you simply can’t apply an argument from evolution here. Evolution doesn’t optimise for “goodness”. It optimises something (and only over the long term). That “something” can often be positively detrimental to other outcomes, and even to general fitness (e.g. the laryngeal nerve).


Statistics makes basic assumptions that are often false. For example, most things don't fit a bell curve.

These assumptions matter more with small sample sizes, making small-sample results inherently less reliable. What's missing is not a p-value, but an estimate of the uncertainty around that p-value.


Moreover, even if a property is evolutionarily insignificant, it DOES NOT mean that it will be selected against.


It's possible, but I think (not that I'm an expert) that this is not the first piece of evidence to suggest this. A year or two ago I was speaking with a researcher who studies Alzheimer's, and he told me that in cultures where more curry and turmeric is consumed there is significantly less Alzheimer's. I'll try to find a source for this and update my comment.


Not only is that anecdotal, but even if it's true, it is correlative in the weakest sense. There are undoubtedly countless other features on which "cultures" (how does one define this concretely, btw?) with low Alzheimer's rates are also outliers.

Here is a great demonstration of how correlation does not imply causation: http://www.tylervigen.com/spurious-correlations.


Sure. But I'm saying that bit of information becomes interesting given the original post, which supports a similar idea. I'm not making an assertion of fact, simply bringing to mind another data point. Also, see here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2781139/ It would seem that this is a rather well-documented and often-researched thing, not just a one-off p-hack.

I believe the research I was thinking of was specifically this part of the overview above:

Various studies and research[9,10] results indicate a lower incidence and prevalence of AD in India. The prevalence of AD among adults aged 70-79 years in India is 4.4 times less than that of adults aged 70-79 years in the United States.[9] Researchers investigated the association between the curry consumption and cognitive level in 1010 Asians between 60 and 93 years of age. The study found that those who occasionally ate curry (less than once a month) and often (more than once a month) performed better on a standard test (MMSE) of cognitive function than those who ate curry never or rarely.[10]

Sources:

9. Pandav R, Belle SH, DeKosky ST. Apolipoprotein E polymorphism and Alzheimer's disease: The Indo-US cross-national dementia study. Arch Neurol. 2000;57:824–30. [PubMed]

10. Ng TP, Chiam PC, Lee T, Chua HC, Lim L, Kua EH. Curry consumption and cognitive function in the elderly. Am J Epidemiol. 2006;164:898–906. [PubMed]


Examine.com has a page for curcumin. They cite 5 sources with notable impacts on reducing depression, but they summarize with this:

"Curcumin seems to be somewhat more effective than placebo in reducing symptoms of depression. It may take 2-3 months to see any outcomes. Skepticism is warranted though, as the studies comparing curcumin to placebo were not well designed and produced effect sizes not too far apart, even though the differences were statistically significant."

https://examine.com/supplements/curcumin/#hem-depression


Results too good to be true, as well :-).


Agree that replication is important and that it's too early to believe fully, but when a study size is small and you still find a difference, it implies the effect size is large.

If you meet statistical significance, then diminishing n means increasing clinical significance.


I flipped 5 coins, 4 came up heads, so heads must be >>80% likelihood??
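The coin-flip intuition can be made exact: under a fair coin, 4 or more heads in 5 flips is unremarkable. A quick sketch using exact binomial probabilities:

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    """Exact probability of at least k successes in n Bernoulli(p) trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 4+ heads in 5 fair flips: 6 of the 32 equally likely outcomes
p_value = prob_at_least(4, 5)
print(p_value)  # 0.1875
```

A one-sided "p-value" of 0.1875 is nowhere near any conventional significance threshold, so 5 flips simply cannot distinguish a fair coin from an 80% coin.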


> but when a study size is small and you still find a difference, then it implies the effect size is large

That’s only true if it holds across multiple studies. Otherwise it might simply be random variability. By your argument, a study with n=4 that showed an effect would imply a huge effect size. No. It implies that the measured signal is highly variable, and that you got lucky on 1:3 odds. Check out regression towards the mean [1].

(Note that if you’ve got a good underlying model of the variability, n=4 can work. In fact, it’s routinely done in specific applications, such as transcriptomics. But even there it’s far from ideal, and it’s supported by a stringent error model.)

[1] https://en.wikipedia.org/wiki/Regression_toward_the_mean


What is the power & effect size?


Effect size is the amount of change that has been observed, in this case the ability to memorize things.

Significance is the probability of the result being coincidence. So "highly significant" only means "most probably not coincidence".

That means you can have a "highly significant" result where the actual result (= effect size) is tiny.

In study design, both are related: the higher the number of test persons, the easier it is to achieve significance. Therefore, proper design would be to first guesstimate (based on existing literature etc.) the effect size, then define the minimum number of test persons necessary.
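That last step can be sketched with the standard two-sample normal-approximation formula for sample size, n per group of roughly 2*((z_alpha/2 + z_beta)/d)^2. This is a rough textbook sketch, not the study's actual power calculation:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample comparison of means
    (normal approximation, equal group sizes and variances)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = nd.inv_cdf(power)           # desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Halving the guessed effect size roughly quadruples the required n
print(n_per_group(0.6))  # 44
print(n_per_group(0.3))  # 175
```

So a ~20-per-group study is only adequately powered if the researchers expected a fairly large effect (d well above 0.6) in the first place.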


Personal anecdote: I was born and brought up in South India, and my grandma used to make me drink turmeric + ground black pepper milk every night. It was standard practice to serve it with black pepper, even though I'm sure they didn't really know about bioavailability. Kids were forced to drink it when they had the flu.

Used to hate it as a kid, but of late I’ve been looking to replace one coffee/tea with turmeric milk.


> was born and brought up in south India and my grandma used to make me drink turmeric + ground black pepper milk every night

I grew up in a part of South India with a fairly high natural background radiation (thorium, mostly) and I did wonder if the practice boosted survival via accidental cancer protection from an alpha decay environment.

Natural selection is weird because the goal of taking turmeric might have nothing to do with the survival advantages.


Not sure if high natural background radiation really means more cancer.

I tried searching but I can't find a study where apartment buildings in Japan were inadvertently built using steel laced with radioactive elements.

People (around 10,000) lived in them for decades. Statistical analysis on them showed a lower incidence of cancer-related diseases.


> inadvertently built using steel laced with radioactive elements.

> Statistical analysis on them showed lower affinity for cancer

Radiation hormesis might be real (I hope so), but if it gets through drywall, it would be a different form of radioactivity from the one I grew up around.

Alpha radiation needs just a simple sheet of paper (or dead skin cells) to stop it, but it messes with DNA if you happen to eat or breathe the source.

That's the kind from Th-232. In comparison, bananas are also naturally radioactive, but the K-40 emits beta particles, which don't cause as much damage.

Alpha decay is what a cup of polonium tea would involve (from the infamous assassination).


Found the study - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2477708/

It wasn't Japan, but Taiwan.


Just curious: what's the amount of each ingredient that goes in one drink?


200ml of hot (not boiling) milk

1 tsp of ground turmeric powder

A pinch or two ground black pepper

If you’re not used to turmeric, I’d suggest starting with half tsp as it’s got quite a strong flavour (especially when not cooked with other spices)


Speaking as a European not used to very spicy food, it's not the turmeric I'm worried about, it's the black pepper. But a pinch or two seems fine.

I'll give it a try. If my mouth catches on fire, I know who to blame. :)


> 200ml of hot (not boiling) milk

If you're lactose intolerant, you can do this with yoghurt with a little water & salt instead.

Since this usually happens when I do have a cold, my laziness translates into a coffee cup of yoghurt + water + 1 minute in a microwave to zap all the bacteria out of it.


There is a school of thought that the worst thing to consume in case of a cold is dairy.


I think that's an old wives' tale. There's probably a better article somewhere, but I found this one: http://www.bbc.com/future/story/20170421-should-you-avoid-ic...


Turmeric is an Indian old wives' tale apparently.


About a half-teaspoon or so of turmeric, and about a teaspoon of black pepper. A teaspoon of black pepper might be a bit aggressive, so adjust it to suit your taste, but my mom seems to think lots of pepper is necessary to reduce coughing/phlegm when you're down with a cold.


Fun fact - turmeric was the trigger for India creating a patent-fighting engine around Indian traditional medicine (Ayurveda and Unani), including a gene bank and international legal force around biopiracy.

We now have something called Traditional Knowledge Digital Library (TKDL) - http://www.tkdl.res.in/tkdl/langdefault/common/Home.asp?GL=E...

http://www.mondaq.com/india/x/586384/Patent/Traditional+Know...


For those commenting on bioavailability, black pepper supposedly makes it more bioavailable:

https://examine.com/supplements/curcumin/

It's an interesting result, but they do note that the subpopulation is somewhat self-selected: "Only approximately 15% of the screened volunteers were included in the study, and our recruitment method yielded a sample of motivated, educated, physically healthy subjects concerned about age-related memory problems. The sample, therefore, was not representative of the general population. "


Black pepper increases bioavailability of everything by inhibiting your cytochrome p450 enzymes, which increases your vulnerability to toxins.


There’s a difference between bioavailability and half life. Metabolic enzyme inhibition increases half life due to decreased hepatic metabolism of certain drugs. Bioavailability is how much of a drug your body absorbs from the initial dose.

Also, black pepper inhibits CYP3A4, not 450.


Interestingly, it seems that curcuminoids themselves also inhibit p450, with less specificity.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2574793/

And p450 exists in the brain, which may relate to the OP suggestion that turmeric boosts memory, but doesn't give me confidence that it boosts it in a way that is absent of tradeoffs.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3223320/


There is a difference, but increasing either the bioavailability or half life of toxins is undesirable.

Also, CYP3A4 is one of the p450 enzymes. But after looking it up, piperine does seem to be specific to CYP3A4, which is better than inhibiting p450 in general.


Very interesting to see this posted. In Japan its ingredients are used in an anti-hangover drink; this YouTube video https://youtu.be/Ij_aJI5O9Rs goes somewhat deeper into it, and the poster tried a combination of it and pepper to strengthen the effect.


Speaking of Japan, I recently saw an interesting video about why Japanese people are so thin, compared to say the US:

https://www.youtube.com/watch?v=lr4MmmWQtZM


Yeah, that's from the same channel :)


Note: this was a relatively small sample (40 total, split in half for placebo/not), who were elderly people who already have "mild" memory problems, and given a concentrated turmeric derivative rather than just sprinkling it over food.

In summary: very cool, and I'd offer these curcumin supplements to anyone 50+ but sprinkling turmeric on your breakfast cereal ain't going to do much for high schoolers.


Actually the age range was 51-84, so middle plus old age, not all "elderly". Normal people flourish and are in their prime around 50.

Volunteers had "objective cognitive performance scores and clinical histories consistent with normal aging or MCI (i.e., mild neurocognitive disorder) and inconsistent with dementia (i.e., major neurocognitive disorder)." [emphasis added]


"Normal people flourish and are in their prime around 50."

This is false. The only advantage people in their fifties have is experience. Your body is past its peak physically and mentally. People flourishing in their fifties are by and large reaping the benefits of things done in their younger decades, not because they are peaking.

I'm not saying that someone that is 50 years old is decrepit, but ask nearly any academic and they'll tell you they were better when they were younger. Ask nearly any athlete (ultramarathoners being an exception) and they'll tell you they were better when they were younger.


>but ask nearly any academic and they'll tell you they were better when they were younger.

I don't think that's true at all outside of Mathematics.

>Ask nearly any athlete (ultramarathoners being an exception) and they'll tell you they were better when they were younger.

That's obvious.


>I don't think that's true at all outside of Mathematics.

Quite a few great math papers are written by people over 40, despite the flashy age restriction on the Fields Medal.


So you say you don't call people who are 50 "decrepit" -- but do you call people in their middle age or in their 50s "elderly"?


I didn't make that comment. However, I would say that people in their fifties have started to experience age-related decline in numerous areas (memory included). Showing a memory improvement in people 50-80 should be more accurately described as "restoring" their memory rather than "boosting" it, which implies that they now have supranormal memory.


Am in my late 50s, can confirm. Memory, concentration, attention, and a bunch of other faculties, both mental and physical, are trending disgustingly downward.

I know there are things I could do to fight it, but motivation is another one of those things going down. f--- it, f--- everything.


Pushing 46, can confirm. Literally just got that thing that screams "maybe you need bifocals not contacts" where nothing close quite comes into focus... basic response is "fuck it, fuck everything" like you say.


> Normal people flourish and are in their prime around 50

How do you mean? I don't think most people in their 50s would describe themselves as being "in their prime".

For example, ELO ratings of Chess players tend to decline in their 30s[1]. For NBA players, their "prime" is typically the late 20s, with performance declining in their 30s.

1. https://www.chess.com/blog/LionChessLtd/age-vs-elo---your-ba...


Note that I wrote around 50, not after 50. Middle age is usually defined as the 40 to 60 range (or more recently held to be slightly later, 45-65). That is the period when people flourish. That is certainly not when people are "elderly." Interestingly the Ancient Greeks used to report the time when one flourished rather than their date of birth.

Neither extreme athletic performance nor ELO rating is a measure of normal flourishing (I'm surprised one would bring up such abnormal elements; the lives of NBA players and elite chess players deviate very far from that of a normal human). I wonder how the kind of argument you offer, and that of the sibling "olympus" comment, feeds into reported ageism in SV and related tech companies.


Are you equating "flourishing" with "being in your prime"? Because I don't consider those to be the same.

Being in your prime is about peak potential. And sure, the age at which you have peak potential varies by activity. My early 20s is when I have the _opportunity_ to be the best sprinter I'll ever be. That's my prime. But say I first take up sprinting in my late 30s, and put in my best sprint time at the age of 40. I might be "flourishing" at that point, but in no way does it mean I'm in my prime at age 40. The 20 year old version of myself, with the similar conditions and preparation, is simply physically capable of sprinting faster.

Certainly, there are activities where your peak potential is greatest in middle age. I'm not disputing that (however, I'm highly skeptical that memory and physical ability are in their prime past age 40).

> I'm surprised one would bring up such abnormal elements; the lives of NBA players and elite chess players deviate very far from that of a normal human

I mean, where can I get performance statistics for "normal people"? If we're talking about maximizing potential, why not look at the ones putting in the most effort toward that goal?

Also, the linked ELO ratings were for 179,221 "registered FIDE players". They weren't just the top-100 or even top-1000 players. Yes, this is biased toward people who play chess - but where else can you get the data?

I'm curious what you think a "normal human" is? Do you think any of those 179,221 chess players are "abnormal"?


To be in one's prime means to be productive, rational, social, and in good health. Not past one's prime, not "elderly" -- which is the particular term I was responding to.

I don't understand your (and others') focus on peak extreme-athletic performance, or peak chess-playing ranking. Why is it problematic to assert that people in their middle age are normally healthy, fully functional, flourishing human beings?


I wonder about the ELO thing how much is age and how much is priorities changing (kids, families, the realisation that outside the top15 there isn't much of a living in playing vs teaching/writing which takes you away from teaching etc).

Anand just won the rapid chess world championship at 48, Korchnoi was top-100 into his 70's.

Not saying there isn't some decline just that it may not be as pronounced as the raw ELO figures would indicate.


> I don't think most people in their 50s would describe themselves as being "in their prime".

I am not going to be part of a professional sports franchise. But at my age (59) I know more than I ever did, and I have no serious physical problems.


"no serious physical problems" is far from "in your prime"... and you might know more things, but your ability to learn has significantly declined. That's just the nature of aging.


The measure of flourishing is not how fast one can dig holes, nor is it how quickly one can study new APIs.

Knowing useful and effective ways to act, and acting accordingly; understanding and undertaking long-term, purposeful courses of action; and fully integrating new knowledge with vast, existing knowledge -- are key to flourishing. This doesn't end at or decline with the start of middle age.


Being in your prime refers to mental and physical fitness, not happiness. It's easily measured by digging holes or studying books.

The context of this thread is fitness - 50 year olds are definitely less fit than their younger selves. It's part of the human condition.


If you think humans flourish only in terms of physical prowess or memory retrieval, you have very poor standards. To call someone who is middle-aged "elderly" does fit with your criteria, and with reports of ageism in tech companies.

I hope you're not involved in any way in hiring decisions, or in other activities that involve evaluating people's abilities, performance, achievements, or potential.


> If you think humans flourish only in terms of physical prowess or memory retrieval, you have very poor standards.

Nope, I just think you have poor reading comprehension - you are replying to a comment that said exactly the same thing. Flourishing (vibrant, active, enjoying life) != being in one's prime (peak performance).

Nothing wrong with that. My parents are 60+ and flourishing past their prime - their decades of experience are invaluable.


"your ability to learn has significantly declined"

no, it hasn't.


40? so nothing relevant? :(


Number of samples depends on the effect size. e.g. You only need n=1 to claim a dog can talk.


You realize that statistics is capable of using very small sample sizes right?


And you do realize that 40 is just a tremendously small sample size in comparison to a complex system like a human being?

I mean, yeah, you only need one dog who speaks, but apparently, when you read the other comments, there are enough papers saying the opposite of this one.


There are side effects to be aware of taking curcumin together with other medications:

"Medications for diabetes (Antidiabetes drugs) Interaction Rating: Moderate Be cautious with this combination. Talk with your health provider. Turmeric might decrease blood sugar in people with type 2 diabetes. Diabetes medications are also used to lower blood sugar. Taking turmeric along with diabetes medications might cause your blood sugar to go too low. Monitor your blood sugar closely. The dose of your diabetes medication might need to be changed.

Some medications used for diabetes include glimepiride (Amaryl), glyburide (DiaBeta, Glynase PresTab, Micronase), insulin, pioglitazone (Actos), rosiglitazone (Avandia), chlorpropamide (Diabinese), glipizide (Glucotrol), tolbutamide (Orinase), and others."

Turmeric might slow blood clotting. Taking turmeric along with medications that also slow clotting might increase the chances of bruising and bleeding.

Medications changed by the liver (Cytochrome P450 3A4 (CYP3A4) substrates) Interaction Rating: Moderate Be cautious with this combination. Talk with your health provider. Some medications are changed and broken down by the liver. Turmeric might decrease how quickly the liver breaks down some medications. Taking turmeric along with some medications that are broken down by the liver can increase the effects and side effects of some medications. Before taking turmeric talk to your healthcare provider if you take any medications that are changed by the liver.

Some medications that are changed by the liver include some calcium channel blockers (diltiazem, nicardipine, verapamil), chemotherapeutic agents (etoposide, paclitaxel, vinblastine, vincristine, vindesine), antifungals (ketoconazole, itraconazole), glucocorticoids, alfentanil (Alfenta), cisapride (Propulsid), fentanyl (Sublimaze), lidocaine (Xylocaine), losartan (Cozaar), fexofenadine (Allegra), midazolam (Versed), and others."

https://www.rxlist.com/turmeric/supplements.htm#Interactions


Turmeric has been used in India since ancient times. There are many references to it in Ayurveda, the Indian/Hindu medical science. Ayurveda believes in using food as medicine to not only prevent a large majority of human diseases, but also to cure several of them, all with almost zero side effects. Turmeric is just one such extremely powerful & healthy food item that Western science seems to be playing catch-up on. There are many others, like black pepper, honey, asafoetida (hing), coconut oil, copper utensils, cloves, cardamom, mustard seeds, neem (gold), milk, and clarified butter (ghee). So, expect more things to come out of the research of Western labs: knowledge that has been common to most grandmothers in India for ages. But the other aspect is that we are slowly losing all that, because Indians try to mimic the West in almost everything they do & nowadays do not trust our own sciences like Ayurveda, even at the risk of debilitating side effects.

Side fact: An American company tried to patent turmeric & almost got away with it. This had to be fought by the Indian government to prevent (or nullify) the patent application.


>There are many references to it in Ayurveda - the Indian/Hindu medical sciences. Ayurveda believes in using food as medicine to non only prevent a large majority of human diseases, but also to cure several of them — all with almost zero side effects.

Homeopathy also has zero side effects (apart from having zero effects). There is no medical wisdom in any of the old texts (from any culture) that is relevant in 2018. The vast majority is pseudo-scientific nonsense that should be rejected.

>Turmeric is just one such extremely powerful & healthy food item that Western science seems to be playing catch-up on. Many others include black pepper, honey, asafoetida (hing), coconut oil, copper utensils, cloves, cardamom, mustard seeds, neem (gold), milk, and clarified butter (ghee).

All of those are known to the entire world, both their exaggerated benefits and their limitations. Sorry to burst your bubble.

>But the other aspect is that we are slowly losing all that because Indians try to mimic the West in almost everything they do &

You are severely confused. Do you take coconut oil when you get cancer? Or do you take Turmeric? If you get liver disease do you use copper vessels? Nobody cares where the idea originated. When it comes to their health, people will go where there are results.

> nowadays do not trust our own sciences like Ayurveda etc, even at the risk of debilitating side effects.

That doesn't explain the success of pseudo-scientific products marketed under the "Patanjali" brand.

Also, there is no such thing as "Western science". It's a global science, and people from around the world are contributing to it. Go into any research lab in the "west". You'll find that a large portion of the people there are of Chinese and South Asian heritage, alongside the local ethnic majority.


Thank you for your bulls* antidote. I especially liked the part about western science vs global science.


> There is no medical wisdom in any of the old texts (from any culture) that are relevant in 2018.

Seriously? We can do better than that - if you're going to make a statement that extreme, you'll need to provide sources to back it up.


The more extreme statement is that those texts have any value. Let's start with that first.


You're making some very, very broad, blanket statements about vast swaths of human experience and recorded wisdom (hint: that usually doesn't work out so well - nuance is our friend!).

But, to play along, in attempt to affirm or deny this - let's start with what you mean by "those texts". Which texts, specifically, are you referring to, when you say that they contain absolutely zero applicable content to the care of modern humans? (Where I'm paraphrasing your statement: "There is no medical wisdom in any of the old texts (from any culture) that are relevant in 2018").

edit:

you know what? I'll help you out with a single counterexample to your statement, so that we can move on:

http://www.bbc.com/news/blogs-china-blog-34451386

https://en.wikipedia.org/wiki/Tu_Youyou#Malaria

To summarize, the plant Artemisia annua was recorded approximately 2,500 years ago in traditional Chinese medical herbal texts as an effective treatment for the disease known as malaria. The 2015 Nobel Prize in Physiology or Medicine was awarded to the chemist who used this specific information from these texts to aid in the discovery of a compound that is now the de facto treatment for this disease.

There are plenty more examples like this; I will end my confrontation of your current system of beliefs here. The rest of this kind of work is entirely up to you!


>You're making some very, very broad, blanket statements about vast swaths of human experience and recorded wisdom (hint: that usually doesn't work out so well - nuance is our friend!).

I don't know about you, but when I read about applying dung to my wounds, I kinda lose interest in the remaining 4000 pages, and also happily forgo the 'ward off evil spirits' bonus that it brings.

>Which texts, specifically, are you referring to, when you say that they contain absolutely zero applicable content to the care of modern humans? (Where I'm paraphrasing your statement: "There is no medical wisdom in any of the old texts (from any culture) that are relevant in 2018").

Your paraphrasing is out of context. I am speaking about medical knowledge. Our knowledge has improved, but our flaws still remain. In other non-medical contexts, a lot of things would still apply.

Back to your counter point.

>the plant Artemisia annua was recorded approximately 2,500 years ago in traditional Chinese medical herbal texts as an effective treatment for the disease known as malaria. The 2015 Nobel Prize in Physiology or Medicine was awarded to the chemist who used this specific information from these texts to aid in the discovery of a compound that is now the de facto treatment for this disease.

While that may be what is being reported, it is not even close to the truth. The texts don't say "here's how to treat malaria". They didn't even know what malaria actually is, or how it works, or anything about it. Traditional medicine applied a 'throw shit at the wall' approach (common in ancient times) to treating fevers, using hundreds of different herbs. None of these actually worked, because there was no knowledge of what was causing those fevers (parasite/virus/disease etc.). This is also why we don't treat malaria by giving someone a cup of herbal tea - it doesn't work.

What was a world record for a marathon a hundred years ago is barely the qualifying time for the Boston Marathon in 2018. So yeah, the Chinese, Egyptians, Indians, etc. all had their own world records. It's time to let them go. I prefer to take my medical science from the time when we know the most about the human body, not when we knew the least. You can make your own choice.


Knowledge is everywhere; science doesn't tell you to be narrow-minded. If you deny humankind's past experiences, you will just end up reinventing the wheel. It is always hard to find the reason behind anything, as it requires us to find the pattern first. Call them symptoms, in medical science. Humankind has been doing this since its inception. Ancient texts are just there to help you out. Remember that there were none of today's technological advancements, yet our ancestors still found reasons and solutions (cures, in medical science) to many problems. You do not need to really trust any of it, but what I am saying is that you cannot simply deny it either.


>When it comes to their health, people will go where there are results.

Isn't that exactly what you are disputing?

I think we have to allow for the possibility that some of the correlations identified over thousands of years may actually be real.

I would never follow any of that advice, because I agree that the scientific process we have today is vastly superior and supersedes traditional medicine.

"Try things and see what works" wasn't invented yesterday though. That's why people were successfully using citrus fruits against scurvy long before double blind trials became a thing.


Yeah 'some' of those 'correlations' might also involve sourcing crocodile dung [1]. Or maybe you prefer cutting open a vein and draining some blood when you feel sick? [2]

The qualifying time for the Boston Marathon today was a world record 100 years ago. So yeah, maybe these remedies were 'world records' in their time. But now we have advanced, and they're sub-par and oftentimes downright dangerous.

[1] http://www.crystalinks.com/egyptmedicine.html

[2] http://www.nytimes.com/1999/12/14/health/death-of-a-presiden...


But how do you actually know if these Ayurvedic practices work without the rigorous trial and error that "Western" science demands? Unless I'm mistaken, the Ayurvedic texts do not contain repeatable evidence of the efficacy of the treatments described within them, nor do they describe plausible explanations of how those treatments work.

Whatever truths are contained in Ayurvedic texts are accidental unless they were arrived at by some sort of experimentation, and they're mixed in with a lot of nonsense which "Western" science is attempting to sift through. Along with proving the potency of some Ayurvedic treatments, science is also DISproving that of many others, which means that it's not "catching up" with Ayurvedic so much as catching it, shining a bright light on it, and asking it much tougher questions than anybody thought to ask before.


I think you are quite mistaken. The ayurvedic or any of the sciences of ancient Indians used the same trial & error methodologies as modern science. Why should it be any different? A lot of the texts have already been lost, thanks to time and the British & Muslim rule in India.

I don't agree there are accidental truths. In India we have been following these for thousands of years. They were not exposed to the Western world, maybe because of religious prejudice, obscurity, etc., & also the belief that something of virtue is rarely possible in an Eastern civilization like India.

Which ones have been disproven? If any, Western sciences have been building on some of what was already discovered & documented. Also, I did not mean to insult anyone by saying that they are trying to "catch up". I feel at least some of what we are discovering nowadays are repeat discoveries of the Eastern / Indian / Vedic world, not because the Vedic people were smarter, but only because they have had a much, much longer civilizational presence than any other. Even this longevity of the Vedic civilization has not yet been "discovered" fully by the Western world.


> The ayurvedic or any of the sciences of ancient Indians used the same trial & error methodologies as modern science.

Then please show us the studies, and the reproductions. As you said, "Why should it be any different?", and verification via repeatably reproducing results is how science works.

> the British & Muslim rule in India.

That's a bit of an off-topic attack that serves no purpose in your argument. If you were trying to make the argument that the texts would have been preserved if India had been entirely under Hindu rule, well, that's an entirely different argument that does not validate an "Ayurveda works" claim. In short: "The dog ate my thesis" does not work.

> In India we have been following these for thousands of years.

As have been a lot of rituals and traditions that have been proven to hold no (or worse, negative) benefit. Age serves as a poor proxy for legitimacy.


The OP comment strikes me as having a rather jingoistic/nationalistic tone, so I sort of doubt its sincerity regarding question of efficacy of Ayurvedic practices.

Any culture that has been developing food habits via trial and error for centuries has some that are beneficial, some that are harmful, and others that are neither harmful nor beneficial.


> Any culture that has been developing food habits via trial and error for centuries has some that are beneficial, some that are harmful, and others that are neither harmful nor beneficial.

How is that different from science? That is great, & that is science. Maybe you do not recall the recent news about how radium paint was also science. All this is science, especially the Ayurvedic sciences. What else would it be?


It is different in important ways from the modern scientific method [1].

In particular, it omits (or has long since stopped):

- Developing testable (falsifiable) predictions

- Gathering data to test the predictions

- Refining, Altering, Expanding, or Rejecting Hypotheses

Modern science is actively doing all those things: constantly re-evaluating its own prior conclusions and testing against the most recent empirical data available.

Ayurveda, like many other traditional health practices, is an interesting source of hypotheses that can be tested by the modern scientific method - and we might learn a lot from doing just that - but it's by no means a fully verified system of knowledge.

[1] https://en.wikipedia.org/wiki/Scientific_method#/media/File:...


Other than the part where it has long since stopped, or highly slowed down, due to various reasons, why would the others be any different in Ayurveda? It is just medical science called by a different name. Now mankind's prejudice has made many shun it & discredit it again & again, while at the same time digesting it & copying it under a different name in order to credit someone else. There is much evidence for such copying, because not only are the good parts copied, but the mistakes in the ancient Vedic sciences are also copied by the people who want to discredit it, because they attribute this cultural development to Hindu pagan religion. This, while historically true, has worked to encourage mankind to shun it even now. The copying of the mistakes is what gives away this dishonesty.

> Ayurveda, like many other traditional health practices, is an interesting source of hypotheses that can be tested by the modern scientific method - and we might learn a lot from doing just that - but it's by no means a fully verified system of knowledge.

That is true of everything; that is how science works today, at least. Even scientific theories are tested again & again, just to prove them wrong. If they are proved wrong, then that is a great accomplishment, but how can they be proven wrong if they are not tested? Same with Ayurveda: it is just science by a different name.


The problem is it's hard to make money off of those things. There's strong evidence suggesting papaya seeds are remarkably good for purging parasites from the digestive system -- but why research it further when you can push Albendazole for $400/treatment?


You see a lot of turmeric and a ton of other spices and herbs in SE Asian cooking too. But I really wouldn’t say that these populations are particularly healthier.

The idea that modern medicine is ‘playing catch-up’ is laughable. For every home remedy they get right, they probably get many others wrong.

I agree drug companies are still fairly evil.


> But I really wouldn’t say that these populations are particularly healthier

How would you be able to tell?


By sampling them against other cultures, the same way we know Japanese people have lower rates of heart disease than Western Europeans/Americans.


That's not a useful avenue -- you have no control group. We can't compare people from India who eat a lot of curry to people from the US who don't: there are a million differences in diet, lifestyle, genetics, and environment that make it impossible to draw a conclusion about how it affects memory.

Your example is a good one actually: we know Japanese men are less prone to heart disease, but we don't conclusively know why. We can guess it's related to certain factors, e.g. fish consumption, but we can't know just by comparing the two cultures.


Japan has a very close relationship to Vedic sciences. This has been established since ancient times. Look at their gods for example, almost all are 1:1 Vedic/Hindu gods - Ganesh, Lakshmi, Saraswati, Kuber, Shiva, Yama etc.


> For every home remedy they get right, they probably get many others wrong.

Ayurveda & the other Vedic sciences are arguably as rigorous as today's sciences, if not more so. You are grossly mistaken if you think Ayurveda is a "home remedy".


"rigorous" in science, especially biomedical sciences, has a specific definition. Where is the double-blind testing? Where is the statistical significance? Where are the clinical trials?

Ayurveda likely has a lot to offer as a FOUNDATION to start doing investigative research; but this claim of "any vedic sciences are arguably as rigorous as today's sciences" is bunkum. Show me any statistics / clinical trial data (or its equivalent) in the vedas?

I will accept & agree with the idea / contention that a lot of the treatments in ayurveda are possibly based on strong observations & correlations. Which is a good place to start modern scientific research from.

But to claim that ayurveda as written in, for instance, the Atharva Veda, is rigorous science demonstrates to me, a rather stark, depressing & baffling ignorance of modern science.


It is clear that you have not even read the Vedas to understand their implications for the world, especially in science. Most of the statistics that you ask for have been lost due to their age, or were deliberately destroyed. But we have the results to prove that they existed. "Strong observations & correlations" is just another way of saying "trial & error".

Not just Ayurveda, but take any science, or even mathematics. The Vedic civilization was far ahead of its European counterparts. In fact, the ideas of heliocentrism, gravity, action=reaction, the universe & its age, the speed of light, the distances of the sun & moon from earth, the understanding of time travel, time zones, metallurgy, the so-called Pythagorean theorem, sudoku & its various forms, the numeral system, etc. all have Indian/Vedic origins, so why would you doubt the Vedic medical sciences so much? There is also substantial evidence that surgery developed in Vedic India far earlier than Hippocrates or any Western counterpart. I firmly believe that if not for the Muslim hordes who destroyed everything under the pretext of war, mankind would have been able to make better use of its ancestral knowledge.


Look, you obviously have an axe to grind. And you are conflating too many different things.

So I'll limit myself to the original point.

Science, particularly modern science, even with all its baggage and flaws and politics, is evidence based.

Evidence in this case means statistics and data. Those are not present, for many reasons (as an aside, I don't think there is any evidence of statistical trials being performed in the Vedas).

There is no work on medicine in the Vedas that satisfies these criteria. Ergo, it is not yet evidence-based science. It's observational. Now, if the govt., through AYUSH [AYUSH added in edit], actually invests in performing clinical trials etc. and publishing them in peer-reviewed journals, then your argument may hold water. You are conflating correlation with causation / evidence.


No axe to grind. What could I possibly have?

> Science, particularly modern science, even with all its baggage and flaws and politics, is evidence based.

Can be reworded as: "Science, particularly Vedic science, even with all its baggage and flaws and politics, is evidence based." The Vedas are not any different from any sciences that mankind honestly developed. Only, they go far beyond what we (aka modern science) are able to comprehend, about the consciousness of the universe etc. With time, that will also be rediscovered. Also, not to take anything away from modern science: many things have been discovered & invented that go far beyond the Vedic sciences. My problem is treating the Vedas as a purely religious book like the Bible & Quran and neglecting the great expanse of knowledge in them.


Now you are shifting the goalposts. This is what I am alluding to when I say, conflating multiple things.

Who, in this comment chain, is making any point about Vedas being purely religious? That is not apposite to this discussion.

The Vedas are a work of knowledge, in philosophy and in the natural sciences. There is a strong argument to be made that they capture large parts of the essence of scientific temperament, which is analysis, reflection & observation. But modern scientific research is not just scientific temperament.

Practicing science, modern science, also requires evidence & follows a hypothesis-driven model that makes testable & provable / disprovable predictions.

You seem to be eager to misinterpret OR re-interpret terms to suit your perspective / point of view. "Evidence" based has a specific meaning. This [1] is a reasonable definition of what evidence based medicine means TODAY.

The vedas provide observations. They record correlations (perhaps). That's not evidence!

Lots of work has been lost; that's not unique to the Indian subcontinent. The library of Alexandria was lost. Many of Plato's works are lost. What does that have to do with modern science apart from regret & grief at what was lost?

[1] https://en.wikipedia.org/wiki/Evidence-based_medicine


I think you've misunderstood evidence-based medicine as the end-all, be-all of scientific analysis, restricted only to modern science. It is quite natural & largely common sense. Agreed, it is the gold standard for today, but it may not be so in the future; something else will come along & upend it too, as always happens with scientific development. Remember radium paint: that was also part of evidence-based medicine. By that account, Ayurvedic science has had fewer mishaps than science as practiced in modern times. Science is always about trial & error; my point is that it was no different in the times of Ayurveda. This is as opposed to the biblical or quranic word of god, which is not the same thing at all.

Also, it is fair to say that Indian ancient texts have suffered significantly more destruction than others due to religious persecution and genocides against a peaceful civilization. The genocides are especially important because most information was transferred through word of mouth.


You do realize AYUSH ministry funds Unani Tibb, too? Just saying.


I am not a historian.

They may have been ahead of the Europeans in the past. But that’s irrelevant now. They are not ahead of modern civilisation.

And perhaps there’s some lost knowledge, but these alternative treatments as practiced now have never been shown to be effective.


You can sell papaya seeds for $200/treatment.


Only until people find out they're papaya seeds and just go buy a papaya for $3.


That papaya might not be safe though.


Why wouldn't a papaya be safe? Everything I've seen says that the seeds are always edible.


Ironic comment: drug companies tell us we can't buy their products through other countries (at much lower cost) because of "safety".


In fact, they have been used to adulterate stocks of black pepper for a long time.


Just like that?? Papaya seeds are good, but the papaya itself is not safe? That does not make sense any way you look at it.


Ayurvedic medicine is far from infallible. Take this for instance, a selection from a reasonably popular Ayurvedic text (https://www.amazon.in/Bhojan-Chikitsa-1-Ganesh-Narayan-Chauh...) that is aimed at a general audience and purports to explain the health benefits of various common foods (my translation follows):

> कैंसर: कैंसर में पहले तीन दिन रोगी को उपवास कराएं, फिर अंगूर सेवन कराना आरम्भ करें। कभी-कभी एनिमा लगाएं। एक दिन में दो किलो से अधिक अंगूर न खिलाएं। कुछ दिन पश्चात छाछ पीने को दी जा सकती है। अन्य कोई चीज़ खाने को न दें। इससे लाभ धीरे-धीरे महीनों में होता है। इसकी पुल्टिस घावों पर लगा सकते हैं। इस रोग की चिकित्सा में कभी-कभी अंगूर का रस लेने से पेट-दर्द मलद्वार पर जलन होती है। इससे डरना नहीं चाहिए। दर्द कुछ दिनों में ठीक हो जाता है। दर्द होने पर सेक कर सकते हैं।

> Cancer: for cancer, have the patient fast for three days, then begin their intake of grapes. Perform enemas intermittently. Don't feed the patient more than two kilos of grapes in a day. After a few days, buttermilk may be given for drinking. Do not provide anything else to eat. In the next few months, an improvement will slowly be noted. A poultice may be applied to their wounds. In the course of the treatment of this disease, the intake of grape juice will occasionally cause stomachaches and a burning sensation on the anus. This is no cause for concern. The pain will subside in a few days. Compresses may be given to alleviate the pain.

You could argue that such obvious quackery has also been peddled by Western doctors who are trained in the scientific method. That is true, but at least these doctors' manuscripts wouldn't be accepted by any reputable journal. Admittedly I'm not sure how the Ayurvedic medical community organizes itself in India, but I'm willing to bet it doesn't place so much emphasis on the scientific method, and you also imply this in your comment.

A less extreme example of what's found in the book, a remedy for diabetes using bitter gourd juice:

> मधुमेह: रोगी को १५ ग्राम करेले का रस सौ ग्राम पानी में मिलाकर नित्य तीन बार करीब तीन महीने पिलाना चाहिए। खाने में भी करेले की सब्ज़ी लें।

> Diabetes: Have the patient drink 15 grams of bitter gourd juice in 100 grams of water 3 times daily continuously for three months. They should also eat cooked bitter gourd.

I think you'd agree that the remedy above isn't as good a treatment for diabetes as an insulin regimen--the one prescribed, by the way, by Western medicine.

In summary, there's no doubt that traditional systems of medicine like Ayurveda or Chinese traditional medicine have discovered legitimate remedies, but it seems like they generally prescribe a large number of false positive treatments. And if you also believe that the scientific method is the best epistemological apparatus that humans have found so far, then you should be skeptical of these remedies until they have been clinically demonstrated to be effective.


That's how Steve Jobs went: he forwent real treatment for a holistic one. The math is pretty evident there. http://osxdaily.com/2011/10/20/steve-jobs-refused-early-canc...

Before you say a word about your cancer-treating poop juice, remember you're taking chances with someone else's life.


I did not claim it to be infallible; nobody does. It is a science, on equal footing with any other science. It followed the same rigorous trial & error procedures that are required of all sciences. And science does get things wrong. Ayurveda roughly translates to the knowledge / science of health. Just because it is from India & follows Hindu traditions does not disqualify it from anything.


I'm going to chime in to the growing chorus of HNers trying to explain to you why you're wrong.

There's no such thing as "any other science." There is science as developed and practiced by qualified researchers all over the world - including India - and it's all one and the same science, which is a set of protocols for rigorously checking the relevance of what you're seeing, and the huge body of useful human knowledge that these methods have brought forth. And then there's everything else, which encompasses "folk wisdom," quackery and outright fraud. If Ayurveda and its kin provided comprehensive documentation of rigorously controlled, published and peer-reviewed studies, it would be science; as it doesn't, it isn't.


Agreed that there is no "any other science". But human (Western) prejudices have made Vedic sciences (as in science originating from India) the lesser science, which is what I am trying to clarify. The reasons for this prejudice are many, but there is a religious angle, one that the Church has pushed for so long that people have forgotten any other angle. There is also the angle of Islamic destruction, which has ensured that most texts & documentation were destroyed under the pretext of war & iconoclasm. Please see my other comments.


> But human (Western) prejudices have made Vedic sciences (as in science originating from India) the lesser science, which is what I am trying to clarify.

What, exactly, are you talking about here? It'd be better if you could cite examples.


So let me get this straight, if I have a Muslim last name I can't claim any attachment or heritage to Ayurveda even if my family uses it and we've been in the subcontinent for 500 years?


Not sure why / how you got that idea. I frankly do not get the point you're trying to make.


There are three kinds of logic systems: four-cornered (chatuskoti), three-cornered (trikoti), and two-cornered (binary logic). The entire Western civilization is built on top of binary logic. Only one god; America has friends (Five Eyes) and enemies, but no neutral parties (there are occasional cat's paws). With their binary logic, Westerners can't even explain zero (mathematically). One reason is that science is not a global thing; it is the creation of a given culture (in the current case, Western culture). Because the main Western philosophy is a binary one, by their binary logic they think there is only one science. And since the West has controlled the world for half a millennium, and still controls it, they teach us that there is only one global/universal science. For example, if you change Euclid's main assumptions you get an entirely different geometry. Some comments say that Ayurveda is a joke or useless; the fact is, I know (and probably the OP knows) that it can cure serious illnesses like cancers, diabetes, etc. Trying to understand Ayurvedic methods using modern Western apothecary teaching is absolutely useless.


Ayurvedic theory (5 elements etc...) is useless compared to modern science, so much so that I dare say it is wrong...

Does it have predictive (and thus real explanatory) power? Imagine a test (other similar tests can be imagined):

Give some powder to a Western chemist/biologist and to an Ayurvedic practitioner to examine its "elements". You are not allowed to feed it to any human or animal. Decide if it is poisonous! Who will be more successful?

Ayurveda knows nothing about modern chemistry or biology (real elements, cells, ...); its theory part is rubbish (from the perspective of natural science, not from the social science perspective (history, philosophy...), of course).

You may trust its experimental results, but that is not science in itself, just a tradition...


How do you know Hindi?


In short, State Department money. Namely:

http://clscholarship.org/

https://startalk.umd.edu/public/about
