Brain activity can be used to measure how well you understand a concept (neurosciencenews.com)
155 points by prostoalex 9 days ago | 42 comments

I don't see where they accounted for those who incorrectly believe they understand the topic vs those who actually understand.

Unless they are claiming the process either:

1) detects when the brain models Newtonian physics correctly versus incorrectly, or

2) distinguishes accurate knowledge from inaccurate. I can't imagine so bold a claim is what they're actually making, but the title does vaguely imply it.

With these kinds of fMRI studies, there's usually a written assessment before or after the scan to verify your understanding of the concepts. They're making claim 1, that you can detect how learning happens. There's also usually a baseline scan before the class or whatever education method takes place.

Source: I work in a very similar fMRI lab.

I have a related, but different, question about what the classifier is discriminating on. The article quotes Kraemer as saying “in the study, we found that when engineering students looked at images of real-world structures, the students would automatically apply their engineering knowledge, and would see the differences between structures such as whether it was a cantilever, truss or vertical load.” This makes me wonder whether the algorithm is discriminating on knowledge of engineering terminology, rather than what the subjects understand of the physics involved. I am curious as to how the non-engineer subjects who got the answer right compare to the engineers.

Update: on reading the paper, it seems that the goal of the experiment was to distinguish between engineers and novices, and it conflates 'knowledge' with 'understanding', so the claims that the discrimination is on the basis of the subjects' understanding of the issues appear to be editorial overreach.

Sure, I believe you can rigorously find brain patterns that are correlated with understanding. The obvious "correlation is not causation" caveat applies, however.

The problem is that such patterns aren't any guarantee of understanding. Just "feeling confident" seems like such an easily correlated thing. Most people who learn a subject well enough to do well on a test might "feel confident", but certain kinds of people might just feel confident without the actual learning.

But they're testing the participants to see if they actually understand. Re-read the GP.

They're testing understanding for the learning set, but applying a correlation to estimate understanding on the test set.
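A minimal sketch of that train/test distinction, using made-up two-number "activation patterns" instead of real fMRI data (all labels and values here are hypothetical, purely for illustration): a classifier's centroids are fit on subjects whose understanding was verified, and a new subject is then labeled only by similarity to those patterns.

```python
import math

def centroid(vectors):
    """Mean vector of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid(x, centroids):
    """Return the label whose centroid is closest to x (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Learning set: toy "activation patterns" from subjects whose
# understanding was verified by a written assessment.
train = {
    "understands": [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]],
    "novice":      [[0.1, 0.9], [0.2, 0.8], [0.15, 0.85]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}

# Test set: a new subject is scored purely by pattern similarity,
# with no independent check of actual understanding.
new_subject = [0.7, 0.3]
print(nearest_centroid(new_subject, centroids))  # -> understands
```

The point of the sketch: nothing in the second step verifies understanding; it only measures resemblance to patterns that happened to co-occur with understanding in the learning set.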

What is scientific causation but correlation that holds to an acceptable number of sigma?

Causation also implies temporal sequence in some way, and reproducibility. A subtle but crucial difference.

If most people on a battlefield who are covered in their own blood will soon die, that doesn't mean that gathering a gallon of a person's own blood through several donations and then throwing it at them at a Gettysburg museum will kill them.

As Feynman discussed in "The Character of Physical Law", there is more to it than that, but I think one can say that the empirical evidence underpinning all science can be characterized as correlation that passes various statistical tests.

Causal studies need to control for confounding variables, for one. There are all sorts of spurious non causal correlations with low p values.

Also, it is a matter of state of mind and mood. You might not believe you understand the concept because you see the bigger picture, or you might understand it as the test intends but not believe that you do. We all have an image of ourselves, and it gets in the way.

fMRI wrecked by impostor syndrome and Dunning-Kruger.

I wonder two things...

Would learning a key concept well but incorrectly definitely show up?

Could you use this for crime investigation or prediction?

- see how familiar the suspect is with the crime scene photos

- see how familiar they are with certain criminal acts or concepts

- see how each member of a lineup does with these tests

I would imagine you can get false positives from _thinking_ you understand a concept - you can do plenty of mental processing without ever really having any thoughts on the subject that are technically correct.

I give it five years before this is involved in technical interviews.

And 20 years after that, it gets applied to neuroscientists, with the result that the whole thing is shown to be complete nonsense. Wouldn't be the first time something like that happened.

"Hello, please lie in our comfortable fMRI scanner"

The true value of this likely /isn't/ in its ability to directly access the subject's understanding - we can tell that already - but in /how/ the thinking process works. Correlations only tell you consistency, not correctness.

The really interesting cases would be the outliers who literally think differently yet have the same or better performance - even if they're just, say, brain-damage cases where the damage isn't in the same location, so some other region picks up the slack.

Well hello dystopian world, we meet again. On the plus side, kids won't have to take long boring exams anymore. On the down side, everything else about discrimination...

Remember Timmy we have to wear our helmets in school. Thank you young man.

Reminds me of this article that was here a while back: https://supchina.com/2019/04/05/chinese-parents-want-student...

Discrimination doesn't have to be bad. It can be used to direct resources where they will do the most good, for instance.

The real solution is to build a robust egalitarian culture, not to clutch pearls every time something potentially harmful and potentially useful is created.

> It is true that a computer, for example, can be used for good or evil. It is true that a helicopter can be used as a gunship and it can also be used to rescue people from a mountain pass. And if the question arises of how a specific device is going to be used, in what I call an abstract ideal society, then one might very well say one cannot know.

> But we live in a concrete society, [and] with concrete social and historical circumstances and political realities in this society, it is perfectly obvious that when something like a computer is invented, then the way it is going to be adopted will be for military purposes. It follows from the concrete realities in which we live, it does not follow from pure logic. But we're not living in an abstract society, we're living in the society in which we in fact live.

-- Joseph Weizenbaum, http://tech.mit.edu/V105/N16/weisen.16n.html

That's not pearl clutching, that's being serious. Ignoring it for long enough will make it impossible to build a robust egalitarian culture, because the potential for oppression, and the conditioning of being subject to automatic processes, of confusing "measurements" with things and "labels" with people, will have been amplified enormously, not to mention the sheer capabilities of mass manipulation and control, with rather little to show for it on the other side of the scale.

"Goodmorning kids, Please wear your helmets"

Whenever I read one of these articles/headlines, I wonder how/where can I get a device to measure brain activity. Some sort of fitbit for the brain (ideally with no connection to 'the cloud', of course).

The cutest device for this came from Neurowear in 2011 - necomimi cat ears connected to an EEG sensor. The system could pick up basic brain states - relaxed, attentive, startled - and moved the ears accordingly.

I saw a group of cosplayers wearing these back then. Someone mentioned the name of one person in the group, and their ears went up. Watching someone play a video game showed them attentive while playing a level, and relaxed while the next level was loading. So, in a limited way, it worked.

(Problems were 1) cost, about US$200, 2) fragile mechanical design, 3) too heavy, 4) short battery life, and 5) ears too big. Someone ought to try this again.)

OpenBCI sells kits. I personally won't buy until they get up to something like 64/128 channels, but they have decent 16/32-channel kits right now. They even started offering all-in-ones, so you don't have to mess around with picking the wrong set of components to get up and running.

Interesting, thanks for the tip. Have you tried one of those and decided not to buy? The 'open' in their name is encouraging. How far are we from training some sort of assistant that executes commands based on brain activity patterns? E.g. instead of shouting '<assistant-name>, turn off the music', just think it and have it executed.

I've used OpenBCI before, it's pretty good - noise filtering was the hardest part, which isn't really that bad of a problem to have. Here's a paper showing how to read eye movements with it: https://sci-hub.tw/10.1109/ICORR.2017.8009392
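For flavor, here is the simplest possible noise-reduction idea, a sliding-window moving average, on a synthetic signal (a slow "rhythm" plus fast alternating noise). This is only an illustration with made-up numbers; real EEG pipelines typically use band-pass and 50/60 Hz notch filters instead.

```python
import math

def moving_average(signal, window):
    """Crude low-pass smoother: mean over a sliding window.
    Not a substitute for proper band-pass/notch filtering."""
    if window < 1 or window > len(signal):
        raise ValueError("window must be between 1 and len(signal)")
    return [sum(signal[i:i + window]) / window
            for i in range(len(signal) - window + 1)]

# Slow sinusoidal "brain rhythm" plus fast alternating +/-0.5 noise.
raw = [math.sin(2 * math.pi * i / 50) + (0.5 if i % 2 == 0 else -0.5)
       for i in range(200)]

# A window of 4 samples cancels the alternating noise exactly here,
# leaving a smoothed version of the underlying sinusoid.
smoothed = moving_average(raw, 4)
```

Even this toy shows why filtering is "the hardest part" in the sense of mattering most: the raw trace swings well outside the underlying signal's range, while the smoothed one stays inside it.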

The Muse (https://choosemuse.com/) is much more of a consumer-friendly device, and it was super easy to get working at a hackathon years ago, but it looks like they suspended access to their SDK.

No, don't buy anything from Muse. Those devices are awful, and the firm works (worked?) hard to reduce the visibility of the problems, for example in their developer forums, instead of working to address them. I owned multiple device versions from the Kickstarter forward and they were all universally finicky and shit. The one redeeming feature was the (patented) sensor pads, which can work better than more "traditional" sensor pads. But they dry out and get gross pretty quickly, and are easily damaged, and it felt like the Muse people really wanted, deep down, to be in the business of supplying people with fresh new sensor pads like -- badum-psh! -- some kind of subscription. Interestingly, I also used one at a maker event, and it definitely seemed like they had picked the cream of the crop in order to market them better. But the production units were terrible!

Can you confirm that you are bashing the right company? There is nothing on a Muse that I can imagine fits the description of a sensor pad that could wear out. I have two devices and they work fine -- but EEG is itself a major challenge, and dry electrodes even more so.

Oh jeez what the hell, you're right. I was thinking about Emotiv! Well it'd be nice if I could unfuck this situation, as I also got a Muse through Indiegogo and even though I gave it to a friend some years ago, it did what it said on the tin.

The muse has a solid metal strip. Are you thinking of a different company?

Yes, you're totally right, and I am totally wrong. I was thinking about my Emotiv headsets and not the Muse!

Yeah, I've written off using Emotiv for anything because they keep all your data, can do what they want with it, and rent it back to you. Muse, OpenBCI and OpenEEG all the way.

It appears the multiple-choice test worked better (a greater difference in scores). Which I suppose isn't surprising, since it isn't limited to the yes/no-style questions that the fMRI used.

It's measuring, in some sense, how the concept is written in the brain. That's not the same as understanding: brain != you. And it's not limited to the concept itself. For example, it doesn't take into account how you "feel" about the concept. People are not computing machines per se. People are very emotional, for one thing, and even without any research everybody knows that your current mood and state of mind have a large influence. Two people with the same understanding could fire different brain patterns, perhaps.

Me on CORS: _________

That's going to be my next interview question...

What is the evidence this approach would transfer to new concepts, new test materials or non-Dartmouth students?

This can be used to A/B test online courses to perfection.
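"To perfection" is a stretch, but the underlying A/B comparison is standard. A sketch with entirely hypothetical numbers, comparing the pass rates of two course variants via a two-proportion z-test:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two rates, e.g. the
    share of students passing a comprehension check per variant."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical course variants: A passes 180/300, B passes 150/300.
z = two_proportion_z(180, 300, 150, 300)
print(round(z, 2))  # -> 2.46, i.e. |z| > 1.96, significant at the 5% level
```

Swapping the pass/fail outcome for an fMRI-derived "understanding" score wouldn't change the statistics, only the measurement, which is where all the caveats upthread live.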

Seems like a very broad conclusion for such a narrow study. I’d question whether it is as successful when it comes to concepts that are more fluid, such as those found in the humanities.

I also don’t really see the advantage of this kind of examination. One still needs to develop tests to trigger the students’ thinking on the concepts—it seems we merely trade one potential form of error on the part of graders (checking students’ answers) for another (misinterpretation of results, bugs in the algorithm, or unanticipated changes in the model).

Just because certain practices are historically prior does not mean they’re necessarily worse. Certainty is a slippery beast. The results here still depend entirely on the assumptions of the developers of this ML algorithm, their notions of what it means to “understand something” etc.

Our era’s obsession with throwing technology at literally everything will bite us in the bum at some point (if it hasn’t already).

