There were 4 experimental groups with different parameters of flicker-based "entrainment", but there was no control group where they didn't flash lights in people's faces at all! As far as I can tell, the results could equally well indicate that flickery lights discombobulate the brain, harming learning, unless done at a precise frequency.
Welcome to high-end academic neuroscience. It's a place where banal but completely rational explanations are to be ignored in favour of hyperbole and media hype. The study will have been designed to only allow for publishable results. Anyone who understands the hardware involved will certainly question how this "entrainment" was achieved.
Supporting your claim, from the paper: "Participants (n = 10) were excluded due to experimental (e.g. using incorrect response keys, self-reported intolerance of the visual flicker)"
>As far as I can tell, the results could equally well indicate that flickery lights discombobulate the brain, harming learning, unless done at a precise frequency.
You mean like the 50-60 Hz AC mains frequency of fluorescent light bulbs? I wonder if one day we will look back on that as the next lead pipes.
Years ago, when I was much younger, I worked with o-scopes on a regular basis. I remember being quite surprised one day when I put my thumb on a probe tip and happened to have my other hand somewhat near the power cord. You can quite easily see the 60 Hz sine wave get stronger as the hand approaches the power cord. The signal drops off with the square of the distance or something like that, but it might be another unanticipated avenue of influence.
Also you might want to google "RF hearing" and see who was very interested in this area of research :D
I'm curious, and no expert here, but if what you say is correct (I haven't checked the paper), isn't this the sort of thing peer review is supposed to screen for?
If there really was no control, why would peer review allow it, and why wouldn't that be mentioned in the article about it?
There is a group they label "control", but they simply get fed flicker at frequencies they predict to be non-optimal.
They did measure differences, so the experiment does have power (unlike an experiment with no control group at all). It's a valid experiment, on its own terms. But without a 0-flicker control group we can't absolutely say whether it improves learning over the 0-flicker baseline we all experience in daily life, only that some flicker frequencies/phases are better than others (and by a wide margin). This is still an interesting result! There may be other results I'm not aware of that argue against my discombobulation theory. I'm sure the experimenters have a deeper grasp of the field than I do.
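To spell out the ambiguity with a toy illustration (numbers entirely invented, not from the paper): two very different worlds produce the same optimal-vs-off-frequency difference, and only a no-flicker arm would tell them apart.

    # Toy illustration (invented numbers): the optimal-vs-off-frequency comparison
    # looks identical in both worlds; only a no-flicker baseline separates them.
    no_flicker_baseline = 70.0  # hypothetical recall score with no flicker at all

    worlds = {
        "A: optimal flicker boosts learning": {"optimal": 80.0, "off_freq": 70.0},
        "B: all flicker disrupts learning, optimal just disrupts least": {"optimal": 70.0, "off_freq": 60.0},
    }

    for name, scores in worlds.items():
        between_conditions = scores["optimal"] - scores["off_freq"]
        vs_no_flicker = scores["optimal"] - no_flicker_baseline
        print(f"World {name}: optimal minus off-frequency = {between_conditions:+.0f}, "
              f"optimal minus no-flicker = {vs_no_flicker:+.0f}")

Both worlds print the same +10 between conditions; only the comparison to the no-flicker baseline distinguishes them.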
It's still an interesting effect, even if it isn't useful. The reviewers were presumably evaluating the paper on its merits as a neuroscience paper, not its suitability as an educational treatment.
The reason the article hypes the results so much nonetheless is that it's a press release by the university where the researchers work.
“Passing” peer review is a bit of a misnomer. Peer review doesn’t prevent you from publishing anything. It just prevents you from publishing obviously flawed or obviously fraudulent experimental results in this particular journal.
You can always shop around to different journals, use disclaimers, get the name of a prominent peer on your paper, or find a journal with no reviewers.
If you really want to publish, you will.
Disclaimer: I haven’t read the actual experimental paper. I don’t claim the results are flawed. Lack of a control doesn’t mean you are flawed. You could be using baseline results from another study as a comparison.
>Lack of a control doesn’t mean you are flawed. You could be using baseline results from another study as a comparison.
No you can't! This is naughty! There are too many uncontrolled variables between studies. You need to run your own control group, every time, to have any experimental power.
To be sure, it's common to cheat in the way that you describe, for budgetary reasons. That doesn't make it good science.
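Here's a toy simulation of why (all numbers invented): a treatment that does nothing looks like an effect when compared against a control group borrowed from a study with a slightly different baseline, but not when compared against a control run under the same conditions.

    # Toy simulation (invented numbers): an inert "treatment" vs a borrowed control
    # and vs a control collected in the same study.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 500
    study_offset = 5.0  # hypothetical between-study shift (different cohort, equipment, protocol)

    treated = rng.normal(100 + study_offset, 10, n)       # our study; the treatment itself does nothing
    own_control = rng.normal(100 + study_offset, 10, n)   # control run under the same conditions
    borrowed_control = rng.normal(100, 10, n)             # baseline taken from a different study

    # The borrowed-control comparison typically reports a tiny p-value even though
    # the treatment is inert; the within-study comparison typically does not.
    print("vs borrowed control:", stats.ttest_ind(treated, borrowed_control).pvalue)
    print("vs own control:     ", stats.ttest_ind(treated, own_control).pvalue)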
Such nonsense. I wish we had a cutoff for effect size just like we have a cutoff for p-value.
The effect size here is so incredibly small that this is a negative result. It shows that this doesn't matter at all to anyone, if it's even real given how marginal it is. These researchers should be ashamed for putting out such junk and selling it as something big.
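For a sense of why a p-value cutoff alone isn't enough, here's a toy demo (fabricated data, nothing to do with the paper's numbers): with enough participants, a practically meaningless effect still clears p < 0.05 easily.

    # Toy demo (fabricated data): a trivially small true effect becomes
    # "statistically significant" once the sample is large enough.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 100_000
    control = rng.normal(50.0, 10.0, n)
    treated = rng.normal(50.2, 10.0, n)   # true difference of 0.2 points against an SD of 10

    pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
    cohens_d = (treated.mean() - control.mean()) / pooled_sd

    print(f"p-value   = {stats.ttest_ind(treated, control).pvalue:.2g}")  # comfortably below 0.05 in a typical run
    print(f"Cohen's d = {cohens_d:.3f}")                                  # ~0.02: negligible in practical terms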
Been using brain.fm, which claims to do something similar with its cerebrally-engineered audio. As an anecdata point, I can report feeling much more focused and engaged when it is playing.
Unless you're wearing an EEG device, it's probably different from what's being studied here. Audio entrainment without feedback has been known for 40+ years and hasn't shown meaningful benefits AFAIK.