Hacker News
Odorant diffuser improves memory and neural functioning in older adults (nih.gov)
38 points by nopinsight 11 months ago | hide | past | favorite | 25 comments



Fantastic. Now we're going to see more of those toxic aerosol air freshener machines everywhere. They use petroleum solvents as a carrier and sensitize people, making them prone to allergies.

Those fan-propelled Air Wick oil droplets are also bad. They are toxic, and due to their oily nature, they soak into every porous surface. You know how cat piss is impossible to get out? It's the same with these.

We're not allowed to know what the ingredients are, either. It's a trade secret labelled as "fragrance". Could be leftover chemicals from Deepwater Horizon for all you know.


I can't stand these air fresheners; they make my nose and throat itch...


> Funding: This work was supported by Procter and Gamble

Not to bash the researchers or the quality of their research, but I'm unsure whether a negative result would have been compatible with their continued funding. I'm sure P&G would like to use this research to sell their diffusers, after all.


These are always hard to read, and I want to mention it because we're critical in areas like this (medicine and food), but when it comes to stuff like ML no one brings it up (pretty much every ML paper you see is supported by, or comes directly from, a large tech company).

Usually, if the results are positive, they're not exactly made up. Negative results just don't get reported. You should always maintain skepticism, though, and be critical. Those are good qualities, and I don't want to discourage them in any way.

(Sorry, little ranty below)

I say "not exactly" because every researcher embellishes their work a bit. I'm not going to directly criticize the researchers here (well, I will), because this is more a product of the reviewing process (i.e. other researchers), who are often quick to reject. The bigger issue is that reviewers see themselves as filters (i.e. adversarial to the reviewed work) rather than as editors or consultants (i.e. on the same team as the reviewed work). I'm not sure if this is psychological or incentive-driven (both, for sure), but it's probably something we should be discussing, because of the feedback loop it creates: an environment where it's impossible to distinguish an ad from science. Which isn't good for science, and especially not for people trying to read science (i.e. why science communicators are often shit).

I'm also a big believer that we should be publishing negative results. Fuck, man, most research is failing. You find out how to do something by failing to do it a hundred or a thousand different ways. Why aren't we encouraging this? I thought papers were just about us scientists communicating with one another...


Well, if you read the paper you'll see that the introduction cites a lot of other research that is not supported by P&G. Not that you shouldn't be skeptical; just pointing out that they didn't pull the idea for this study out of their ass. Also, the statistical significance and magnitude of effect are quite mind-blowing.


This scientific study brought to you by Glade's new oil scent diffusers. Buy one at your local Wal-Mart or Target today!

This is where conspiracy theories are born, and why they are so hard to fight.


These disclosures are a relatively new phenomenon, and sure, they can feed conspiracy theories but they can also be used to fight them.


This study managed to finance 38 participants' worth of MRI time and (likely significant) monetary compensation for all 132 participants. Yet it wastes no more than a couple of sentences on how it threw out 109 participants' worth of data on cognitive tests, resulting in a dropout rate of 83% for the cognitive side of things, and still considers its findings in this area valid.

(section 2.5 - Impact of COVID-19 pandemic)

The discussion similarly does not waste the reader's time on possible validity issues or confounders.

I can only speculate that more scientific rigour would have been applied had it been grant money being thrown around.


For anyone else who didn't RTFA, here's the direct quote. Seems reasonable to me...

----------------------

2.5. Impact of COVID-19 pandemic

Due to the COVID-19 pandemic, the UCI campus was closed in April 2020, and remained closed until the Fall of 2020. In addition, many participants did not feel comfortable entering the campus due to COVID-19 concerns even after the campus was officially open. As a result, participants who would have completed their 6-months of participation after April 2020 were either not able to return or chose not to return to campus for their second assessment. During the campus shutdown, contact was maintained with the participants who were impacted, and they were encouraged to continue their sensory enrichment, however, compliance was variable. When it became clear that the campus was going to remain closed for an extended period, we developed methods to remotely conduct the cognitive assessments using videoconferencing (Zoom app). When the campus re-opened and research participants were allowed back onto campus, participants who had received MRI scans at baseline received their second MRI scan.

The data set used for the cognitive assessment analysis was reduced due to a number of possible confounding issues including the different conditions present for the cognitive assessment testing at baseline (in office) and that given remotely (in their home using videoconferencing), the possible sensitivity of that testing to the immediate physical environment during the assessment, as well as the variable timing both between the date of the baseline assessment and the date of final assessment, and the date of their final assessment and the date they discontinued their sensory enrichment. Accordingly, in our data analysis for cognitive assessment, we only included individuals who had completed their 6-months of participation prior to the UCI shutdown (a total of 11 controls and 12 enriched). For the MRI analysis, we included everyone who returned to campus for their follow-up MRI despite the difference in time (range: 6–17 months; a total of 23 controls and 20 enriched).


Unfortunately, reasonable has nothing to do with it.

Though I must admit my first comment was rather sarcastic and not very informative. Let me elaborate on why the high dropout rate is so damning.

Let's use a marble analogy, because they are common in statistics. You gather 132 marbles that vary in size in a way that is representative of the marble population as a whole. You then randomly assign them to either an intervention (n=68) or control (n=64) group. This random assignment is already an intervention in its own right, but with these numbers you could still say the groups are pretty much comparable.

Now, you run your experiment. Your hypothesis is that the marbles in your intervention group will grow in size, and not the control. But you don't know that, and we should assume the null hypothesis until disproven.

It's time to measure your marbles. It would make sense to measure them all, but you don't do that. You take a sample (n=12) from the intervention group and another sample (n=11) from the control, and measure those samples. The rest are dropouts.

The reasons that caused these marbles to drop out are irrelevant. I'm not suggesting the researchers are paid shills and consciously selected marbles to suit their narrative. For all we know all the other marbles were stolen, or are invalid for other causes outside the researchers' control.

The problem is that this act of subsampling has wrecked the experimental design either way. The null hypothesis suggests that both groups were not only initially equal, but also remained equal regardless of intervention. So you must ask yourself the following: if I took such a small sample from each group before I had even performed the intervention, what are the odds that one sample would be significantly bigger than the other by pure chance?

I'm not doing the maths, and it would heavily rely on the imaginary distribution of marble sizes in this analogy anyway. But I dare say that one sample would be significantly bigger than the other often enough, and half of those times it would be the intervention group. So before even considering the effects of your own intervention, you're already fighting a losing battle. Because of the subsampling, the null hypothesis has grown stronger and can now explain even a very large difference between groups.
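The subsampling point can actually be sketched with a quick simulation (numpy; group sizes are from the study, but the normal "marble size" distribution is an arbitrary assumption for illustration). Under the null, both groups are drawn from the same distribution, so any gap between group means is pure chance, and we can see how much bigger that chance gap gets when you only measure the small subsample:

```python
import numpy as np

rng = np.random.default_rng(0)

def abs_mean_gap(n_a, n_b, trials=100_000):
    """Simulate the absolute gap between two group means under the null:
    both groups drawn from the SAME standard normal distribution."""
    a = rng.normal(size=(trials, n_a)).mean(axis=1)
    b = rng.normal(size=(trials, n_b)).mean(axis=1)
    return np.abs(a - b)

full = abs_mean_gap(68, 64)  # all 132 marbles measured
sub = abs_mean_gap(12, 11)   # only the retained subsample measured

print(f"typical chance gap, full groups: {full.mean():.3f}")
print(f"typical chance gap, subsample:   {sub.mean():.3f}")
```

Since the standard error of a mean difference scales as sqrt(1/n_a + 1/n_b), the subsample's typical chance gap comes out roughly 2.4 times larger than the full groups' gap: a raw difference that would be striking at n=132 is unremarkable at n=23.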

So what do you do as a conscientious researcher when faced with the hardships of campus shutdowns and participants who did their cognitive tests over Zoom instead of in person? My recommendation would be to run your statistical tests on everything and publish it anyway. Acknowledge that garbage in = garbage out; there were many confounders, so you can't draw any conclusions. Sometimes pandemics get in the way of science.

You could even publish the results for all participants side by side with the restrictive subsample. If they both point to the same result, perhaps you even have grounds for some kind of valid conclusion! Makes you wonder why they didn't just do that, huh.

What you definitely shouldn't do is only publish your statistical tests on the subgroup and draw conclusions based on those alone. And then not acknowledge how the massive group of participants that you _chose_ to leave out could have affected the results, or how the choice to leave them out affects the null hypothesis.

Hope this helps!

(I'm not a researcher but I work in academia and have on occasion assisted in experimental design.)


Oh yes, I do agree that the sample size is problematic, but that's honestly not uncommon in medical research. Sample sizes are always crazy small, and people draw wildly overreaching conclusions from them given that. But it's also not like you can form a hypothesis and then get 50k people to participate in a study. You gotta show there's something to the idea first. That's what these kinds of papers are about.

And as for the dropout rate, I'm less concerned with that than with 1) the bias introduced by the people who chose to drop out vs. those who didn't, and 2) the overall sample size being smaller. Had the people who dropped out been uniformly distributed, and had the sample size still been large, it wouldn't be of any concern. The statistical error you're making here is actually the same as the Monty Hall one: basically, ignore the previous group and pretend they don't exist. Though there probably is a bias introduced by the dropout (which isn't present in the Monty Hall problem), there are so many other effects going on here that it's hard to say. (And medical students and human phys students aren't known for having the strongest stats skills. To be fair, it's a fucking tough field, and they've got a lot of extra factors that don't make it any easier.)
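Since the Monty Hall problem comes up here, the classic result it refers to (switching doors wins 2/3 of the time) is easy to verify with a minimal simulation; this is just a sketch of that standard puzzle, not anything from the paper:

```python
import random

def monty_hall_win_rate(switch, trials=100_000):
    """Simulate the Monty Hall game: a car behind one of three doors,
    the host always opens a goat door the player didn't pick."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens some door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"switch: {monty_hall_win_rate(True):.3f}")   # ~0.667
print(f"stay:   {monty_hall_win_rate(False):.3f}")  # ~0.333
```

The analogy to dropout only goes so far, as noted above: the host's choice in Monty Hall is governed by a known rule, whereas participant dropout can be correlated with the outcome being measured.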

But again, let's think of these works not as proofs of effects but as proofs of concept. Even if they didn't have any dropout, I'd say the same thing here. Evidence is good, and more people makes the evidence stronger, but you gotta keep pushing for stronger and stronger evidence; university research papers in human studies are just never strong evidence.

I am a researcher. I do think it's good that you're being critical, but it's often too easy to be overly critical, and that's a mistake a lot of junior people make. There are ALWAYS mistakes and ALWAYS things to poke holes in. But that's not the point of publishing. The point of publishing is to communicate with your peers, and in that respect, being overly critical is a hindrance to scientific progress. Context matters. This actually ties into my other comment in the thread, fwiw. Just make sure to see things for what they are (it's why I basically ignore articles that talk about papers now; they do the same thing, but usually vastly exaggerate the conclusions).


On the other hand, I'm allergic to most fragrances (itchy eyes, runny nose, stuffed-up sinuses). I have to hold my breath while almost running through the perfume department on the ground floor of a department store. I cannot walk down certain aisles of my grocery store (cleaning supplies, detergents, soaps). I dread staying at higher-end hotels because they spray their perfume everywhere. The smell of laundry detergents and fabric softeners makes me sick. Same with most dishwashing soaps. The deodorizers in Uber/Lyft cars can be terrible. The fragrance in the Denver airport bathrooms is so overpowering, I have to hold my breath while I'm in there, and then gasp through my mouth after running out.

I don't remember fragrances being so terrible when I was younger. But maybe it's another case of "it was better in the old days".


> I don't remember fragrances being so terrible when I was younger.

Developing allergies to perfumes / strongly scented things is pretty common as people age.

Companies are also much more likely to try to incorporate particular scents into their 'brand', so more places like hotels and brand-name stores are using more scents nowadays.


This sounds like total horseshit, but the effect size is enormous: a 226% improvement on the Rey Auditory Verbal Learning Test.


The old cliche about stopping to smell the roses might have some validity.


Do you think they faked the data? Or something else?


> Male and female older adults (N = 43), age 60–85, were enrolled in the study and randomly assigned to an Olfactory Enriched or Control group. Individuals in the enriched group were exposed to 7 different odorants a week, one per night, for 2 h, using an odorant diffuser

> A statistically significant 226% improvement was observed in the enriched group compared to the control group on the Rey Auditory Verbal Learning Test and improved functioning was observed in the left uncinate fasciculus, as assessed by mean diffusivity

> 7 essential oil odorants (rose, orange, eucalyptus, lemon, peppermint, rosemary, and lavender; from The Essential Oil Company, Portland, OR)


I really wish I could take results like this at face value like I once did, but given the ongoing reproducibility crisis, I am very skeptical.

Olfactory sensation decreases dramatically with age, so even if I bracket the above concern, there's just not a lot of prima facie credibility here. At best, it looks like a placebo effect.

Can we all just stop popularizing, awarding tenure, and making public policy on the basis of results which haven't been reproduced yet? I.e., wait until there really is a scientific conclusion?


Hah, nice. A long while back I was interested in building a diffuser system that would rotate between different oils throughout the week. The idea at the time was that each new scent would act as a memory anchor of sorts to reflect back upon. That way when I smell the smell in the future, I'd be flooded with memories of the days that smell was present. Never did it, but it was a fun thought experiment.


> That way when I smell the smell in the future, I'd be flooded with memories of the days that smell was present. Never did it, but it was a fun thought experiment.

Pretty sure there's some good marketing copy in there somewhere. Chap's Memory-Enhancing Odorizers: Smell the Future (TM)

If there are enough distinct and not-unpleasant odors, and sensory bandwidth works out, you could have your smell of the week, plus maybe a smell that ramps up over the month before a few special days. Your partner's birthday might smell like something; Thanksgiving would smell like pumpkin spice (clearly), etc. It's Beginning to Smell a Lot Like Christmas (TM)


Thanks for the laugh :)


It's probably not worth the risk of negative effects on your respiratory system. Your lungs aren't great at clearing out tiny droplets of oil.


Any study on potential long term side effects of the odorants? Perhaps the target age group doesn't have a long term.


As a lifelong (congenital) anosmic, I often think about the practical consequences of lacking that sense. But the idea that a whole therapeutic avenue is cut off to me is, if I'm thinking about this correctly, a new one.


This makes me think of observations re: functioning in adults with hearing loss (tl;dr hearing aids help mitigate cognitive decline).

Maybe it's simply that remaining stimulated is critical to sustaining cognitive resilience, such that odors - much like any other sensory stimulant - help keep your mind on its toes?

Fun to think about



