Unless they are claiming the process either:
1) detects whether the brain models Newtonian physics correctly vs. incorrectly, or
2) distinguishes accurate knowledge from inaccurate knowledge.
I can't imagine this rich an assertion would be the actual claim, lol, but the title does vaguely imply it.
Source: I work in a very similar fMRI lab.
Update: on reading the paper, it seems the goal of the experiment was to distinguish engineers from novices, and it conflates 'knowledge' with 'understanding'. The claim that the discrimination is based on the subjects' understanding of the issues therefore appears to be editorial overreach.
The problem is that such patterns aren't any guarantee of understanding. Just "feeling confident" seems like an easily correlated confound: most people who learn a subject well enough to do well on a test will also "feel confident", but certain kinds of people feel confident without that actual learning (see the toy sketch below).
If most people on the battlefield who are covered in their own blood will soon die, that doesn't mean gathering a gallon of a person's own blood through several donations and then throwing it at them in a Gettysburg museum will kill them.
Would learning a key concept thoroughly but incorrectly definitely show up?
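To make the confound concern concrete, here's a toy simulation (every name and number is invented for illustration, nothing here is from the paper): a classifier trained on a "confidence-like" signal that merely correlates with group membership will still separate engineers from novices well above chance, even though nothing it sees measures understanding.

    # Toy simulation: a classifier separates "engineers" from "novices"
    # using a feature that only tracks confidence, not understanding.
    # All values are made up for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 200
    is_engineer = rng.integers(0, 2, size=n)      # ground-truth group label

    # "Confidence" signal: higher on average for engineers, but says nothing
    # about whether their physical intuitions are actually correct.
    confidence = 0.8 * is_engineer + rng.normal(0, 0.5, size=n)

    # Pure noise channels standing in for the rest of the activation pattern.
    noise = rng.normal(0, 1, size=(n, 10))
    X = np.column_stack([confidence, noise])

    acc = cross_val_score(LogisticRegression(), X, is_engineer, cv=5).mean()
    print(f"cross-validated accuracy: {acc:.2f}")  # well above chance

The point of the sketch is just that good discrimination accuracy on its own doesn't tell you which latent variable the pattern is tracking.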
Could you use this for crime investigation or prediction?
- see how familiar the suspect is with the crime scene photos
- see how familiar they are with certain criminal acts or concepts
- see how each member of a lineup does with these tests
The really interesting cases would be the outliers who literally think differently yet perform the same or better. Even if they are just, say, brain-damage cases where the usual region isn't available and some other area picks up the slack.
The real solution is to build a robust egalitarian culture, not to clutch pearls every time something potentially harmful and potentially useful is created.
> But we live in a concrete society, [and] with concrete social and historical circumstances and political realities in this society, it is perfectly obvious that when something like a computer is invented, then it is going to be adopted for military purposes. It follows from the concrete realities in which we live, it does not follow from pure logic. But we're not living in an abstract society, we're living in the society in which we in fact live.
-- Joseph Weizenbaum, http://tech.mit.edu/V105/N16/weisen.16n.html
That's not pearl clutching, that's being serious. Ignoring it for long enough will make it impossible to build a robust egalitarian culture, because the potential for oppression, and the conditioning of being subject to automatic processes, of confusing "measurements" with things and "labels" with people, will have been amplified enormously, not to mention the sheer capability for mass manipulation and control, with rather little to show for it on the other side of the scale.
I saw a group of cosplayers wearing these back then. Someone mentioned the name of one person in the group, and their ears went up. Watching someone play a video game showed them attentive while playing a level, and relaxed while the next level was loading. So, in a limited way, it worked.
(Problems were 1) cost, about US$200, 2) fragile mechanical design, 3) too heavy, 4) short battery life, 5) ears too big. Someone ought to try this again.)
The Muse (https://choosemuse.com/) is much more of a consumer-friendly device, and it was super easy to get working at a hackathon years ago, but it looks like they suspended access to their SDK.
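For what it's worth, even without the official SDK, one workaround people use at hackathons is streaming over LSL. A minimal sketch, assuming a community bridge such as muselsl is already broadcasting an EEG stream on the local network (the sample count and threshold here are just illustrative):

    # Minimal sketch: read EEG samples from an LSL stream (e.g. one published
    # by a bridge like muselsl) and print a crude per-channel average.
    # Assumes `pip install pylsl numpy` and an EEG stream already running.
    import numpy as np
    from pylsl import StreamInlet, resolve_byprop

    streams = resolve_byprop('type', 'EEG', timeout=10)
    if not streams:
        raise RuntimeError("No EEG stream found -- is the bridge running?")

    inlet = StreamInlet(streams[0])
    samples = []
    for _ in range(256):                      # roughly 1 s at 256 Hz
        sample, _ts = inlet.pull_sample(timeout=2.0)
        if sample is not None:
            samples.append(sample)

    data = np.array(samples)                  # shape: (n_samples, n_channels)
    print("mean signal per channel:", data.mean(axis=0))

Not a substitute for the SDK's processed band-power values, but enough raw signal to prototype with.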
I also don't really see the advantage of this kind of examination. One still needs to develop tests that trigger the students' thinking on the concepts; it seems we merely trade one potential form of error on the part of graders (checking students' answers) for another (misinterpretation of results, bugs in the algorithm, or unanticipated changes in the model).
Just because certain practices are historically prior does not mean they're necessarily worse. Certainty is a slippery beast. The results here still depend entirely on the assumptions of the developers of this ML algorithm, their notions of what it means to "understand" something, etc.
Our era’s obsession with throwing technology at literally everything will bite us in the bum at some point (if it hasn’t already).