
Measuring actual learning versus feeling of learning - prostoalex
https://www.pnas.org/content/116/39/19251
======
Rainymood
This stuff is super important and it's good that it's being researched
seriously. The money shot is right here in the abstract:

>We find that students in the active classroom learn more, but they feel like
they learn less.

>We show that this negative correlation is caused in part by the increased
cognitive effort required during active learning.

In other words, "real" learning is hard; it takes work, and it's not fun. The
problem is that "the algorithms" (whatever recommendation ML algorithm YouTube
has) work so damned well that we are sucked into this endless shallow junk-food
dopamine loop of videos that give us _just_ the right amount of dopamine to
click on the next one.

If you're struggling, you're learning.

~~~
ghaff
The "learning should be fun" assumption gets baked into an awful lot of
discussions around Ed Tech and related topics. It's not that learning should
always be painful/boring/etc., but I'm not sure that coming at education from
an angle where, if it's not enjoyable, there must be something wrong is the
right one.

~~~
grawprog
I don't know, I actually do find learning fun. Even when it gets hard and
frustrating. Almost especially when it gets hard and frustrating. It feels
good when something finally clicks. Though I find it's not until some time
later that I realize this 'thing' I used to struggle with is now easy to
understand.

For me it's the same reason I like video games. Learning something new,
especially when I can somehow connect it to previous knowledge, gives me the
same feeling as finally completing an especially frustrating section of a game
I have to do over and over before I get it.

------
danieltillett
When I was a professor (natural sciences) I decided to play around with
different learning systems to see what worked best. I discovered the obvious:
learning depth is negatively correlated with students' evaluation of their
lecturer during the course, and positively correlated a year later (students
hate learning while they do it, but end up loving the subject so much that
they choose your next course the following year because they learnt so much).

The more useful thing I discovered was that getting the students to read the
lecture notes before a lecture boosted the class average by 10 percentage
points. Ten minutes spent reading the lecture notes before the lecture and
answering a few simple online multiple-choice questions about it was
surprisingly effective at improving learning. My assumption is that it cut
down on the novelty-overload effect, where the students' brains shut down
mid-lecture, so they were able to get more out of the lectures.

~~~
dictatorsunion
Apparently, the reading technique is used throughout AWS to make meetings more
productive.

[https://news.ycombinator.com/item?id=20980025&ref=hvper.com&...](https://news.ycombinator.com/item?id=20980025&ref=hvper.com&utm_source=hvper.com&utm_medium=website)

> AWS purportedly puts design documents forward in the form of six-pagers.
> They start meetings with a 20 minute silent reading session. It's like the
> book club from hell.

>> Not just AWS- that's an Amazon-wide technique. And it's freaking amazing.
You should try it.

~~~
daxfohl
I half wonder if, in a lecture format, a silent 60-minute reading session
would be more productive. The lecturer is available to answer questions, but
other than that the textbook is the guide.

------
emilga
Robert Bjork [0] talks a lot about this, in particular in [1], and also in his
hour-long lecture "How we learn vs how we think we learn" [2].

It's important to distinguish between "performance" (how well you're doing
right now) vs "learning" (how well you do after some time delay).

Compare blocked practice with interleaved practice. Suppose you're practicing
calculating the area of different geometric figures of types A, B, C, etc.

Blocked practice means you do problems in the order of: AAAABBBBCCCC, etc.

Interleaved practice means you mix them up: ACCBAABACA, etc.

Blocked practice increases your performance (how well you're doing right now)
because problems of the same type cluster together, i.e. you're able to
"cache" the right formula and just plug in the numbers. But this doesn't help
learning, because you're not practicing recognizing what features of the
figure should prompt you to retrieve which formula from memory.

Interleaved practice reduces your performance, because more cognitive effort
is required to retrieve the right formula, you might get it wrong, etc. But it
improves learning, because you're training yourself to recognize which figure
requires which formula.

So "desirable difficulties" can be introduced (of which interleaving is one)
to increase learning at the cost of reducing performance.
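The two practice orderings described above can be sketched in a few lines of Python. This is only a toy illustration of the schedules themselves, not anything from the paper; the figure types and the shuffle seed are made up:

```python
import random

PROBLEM_TYPES = "ABC"  # hypothetical figure types (e.g. triangle, circle, square)

def blocked_schedule(types, reps):
    """Blocked practice: all problems of one type before moving on (AAAABBBB...)."""
    return [t for t in types for _ in range(reps)]

def interleaved_schedule(types, reps, seed=42):
    """Interleaved practice: the same problems shuffled, so each one forces
    you to re-identify the figure type and retrieve the right formula."""
    schedule = blocked_schedule(types, reps)
    random.Random(seed).shuffle(schedule)
    return schedule

print("".join(blocked_schedule(PROBLEM_TYPES, 4)))      # AAAABBBBCCCC
print("".join(interleaved_schedule(PROBLEM_TYPES, 4)))  # mixed order, varies with seed
```

Both schedules contain exactly the same problems; only the ordering (and hence the retrieval difficulty) differs.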

[0]
[https://bjorklab.psych.ucla.edu/research/](https://bjorklab.psych.ucla.edu/research/)
[1] [https://youtu.be/gtmMMR7SJKw](https://youtu.be/gtmMMR7SJKw) [2]
[https://www.youtube.com/watch?v=oxZzoVp5jmI](https://www.youtube.com/watch?v=oxZzoVp5jmI)

------
jimmyvalmer
I learned everything doing problem sets, and was completely lost during
lectures. For every gifted undergrad who can integrate the lecturer's material
in real time, there are dozens if not hundreds of us normal folk who need to
work things out at our own pace. With the advent of AI feedback systems,
hopefully the Western European notion of the lecture becomes a thing of the
past.

~~~
ghaff
Non-interactive big-room lectures can work but, in my experience, mostly with
really top-notch lecturers, especially for topics that are a bit less
technical or, at least, not wholly math-centric.

One big issue, I think, is that the big lecture hall environment is something
of an anachronism. There is something of a forcing function but, for the most
part, with a modicum of production work, the video version can actually be
more useful than the live one. (At conferences these days, which I mostly
attend for the "hallway track" anyway, I find I usually prefer to watch the
streaming feed rather than be crammed in a room with 10,000 of my closest
friends.)

~~~
friendlybus
The school model is a bit fascist. The youtube or video model is an
interesting way for academics to propagate their ideas because only the people
who actually want to be there, show up. The ability to get Q&A from a
community of broad but like-minded people in their interest for the topic
makes for an incredibly focused experience. Like being engrossed in a campfire
horror story rather than being forced to listen to paint dry.

The big problem with video, which makes me still value text-based tutorials
from yesteryear (the 2000s), is that it's very hard to search for a particular
sentence or string of information in a video. The varied pacing and delivery
of information between speakers can get very frustrating when you have to
break your workflow to get to that exact moment in a video where the guy
repeats what you want to hear, in between ums and ahhs and his third time
explaining a very basic concept.

Text still allows the reader to be in control of the pace & is the most easily
searchable for the right information, which is vital for maintaining focus &
flow at work. Video needs to catch up to that; hopefully automatic
transcription can save the day.

~~~
ghaff
People who create video and audio content should definitely consider getting
transcriptions made. Automated transcriptions are still pretty mediocre but I
get all my podcasts transcribed by humans. It's valuable for a lot of reasons:
SEO, listeners who prefer to consume text, and myself for reusing material in
other forms.

~~~
friendlybus
If getting your podcasts transcribed means you run a podcast, which one is it?

~~~
ghaff
Innovate @Open is my relatively new one.

------
mncharity
Note the intervention to prime a desired response to difficulty.

With active learning, it's clear that describing the approach early on,
explaining its effectiveness, and warning that it often doesn't feel effective
improves student approval and, IIRC, outcomes.

With 'belonging interventions', priming at-risk freshmen with a narrative of
"everyone here struggles; you get help, work hard, and succeed" inoculates
against the narrative of "I'm struggling; they made a mistake admitting me; I
clearly don't belong here", flipping a feedback loop of (not) forming academic
and emotional support networks, (not) seeking help, and (not) succeeding.

One fun corner of medical reform involves medical interventions that are cheap
and easy, with little downside, large upside, and no disagreement that they
should happen. And they still often don't. Promptly getting aspirin to
patients presenting to the ER with heart pain is one example I recall. It's
been a long-term process-improvement struggle that the profession has been
pursuing for decades now.

One reflection of the state of education process improvement is that this is
barely even a conversation yet. It's perhaps something personalized,
semi-automated teaching can help with. Everyone gets an intervention 'punch
list'.

For example, checking and rechecking that _everyone_, but especially students
from no-previous-college families, has an "education is something I create for
myself with help" model instead of an "education is something the teacher does
to me" model, seems like something we might actually do, to great effect.

------
wangii
It's a very interesting research topic, but I'm disappointed by the paper
(after a very quick look).

One immediate question I had after reading the abstract is how they would
measure learning outcomes. Without looking into the testing material, I'd
argue that if the tests were not carefully designed, they could produce
whatever result anyone likes by adjusting the depth/width ratio of the given
subject. Maybe the testing materials are so well regarded in the field that no
justification is necessary, but by leaving the possibility unaddressed for
readers, the paper feels less trustworthy to me.

Back to the explanation of the result: it's entirely possible that the
learners (and the authors) are unable to draw a distinction between
information and performance. I'd assume traditional teaching exposes students
to much more information (anecdotes, connections to other fields), but
'active' learning could give better results for building performance in a
narrower domain.

I'll read the paper carefully sometime later, but for now it's not one I'd
recommend. (Feel free to downvote if I got anything wrong.)

~~~
smogcutter
The paper covers this:

“The instructors did not see the TOLs [post-class test of learning], which
were prepared independently by another author. The author of the TOLs did not
have access to the course materials or lecture slides and wrote the tests
based only on a list of detailed learning objectives for each topic.”

------
purplezooey
The article keeps ranting about active learning, but only describes one form
of it (physics demonstrations and "interactive quizzes"). The problem here is
the complete lack of active-learning tools in STEM and the garbage that passes
for them in a vacuum. (No pun intended.)

------
mncharity
One possible atypicality is that Harvard freshmen have had unusually
"excellent" pre-college teachers: teachers clearly presenting well-organized
information in an easily accessible manner. So they are less likely than
students with poorer teachers to view understanding as something one creates
for oneself by wrestling with the material. The wrestling of active learning
is then interpreted as a teaching failure. And the class overlaps with and
emphasizes the rude awakening that their pre-college learning strategies are
no longer sufficient.

------
scranglis
This has been known for some time, and is the reason we've built brilliant.org
the way we have.

I’m sorry for the self-promotion, but it is extremely relevant in this case.

------
daxfohl
The charts look weird. The intra-group student variances on the tests of
learning were so small in all cases. That doesn't seem right.

------
bread_juice
That's fantastic! As a lecturer at a university, I am very grateful for this
research. It changes my perspective.

------
gojomo
This is also reminiscent of the Dunning-Kruger effect. Here, those who've
truly learned more are also more reserved in their self-evaluation, while
those given a more comforting but transient "gloss" of instruction
overestimate their learning.

