
On Bloom's two sigma problem: A systematic review - bschne
https://nintil.com/bloom-sigma/
======
wbillingsley
This article seems to have the common failing that it never really gets past
page 2 of Bloom's paper.

Bloom's paper is mostly cited in the intelligent tutoring systems world for
its title and first page (the identification of the problem, which was
summarising two of his students' results), but the majority of the paper is
spent analysing other interventions, such as improving instructional
materials, home environment and peer group, enhancing cues and participation,
etc., to see how close they can get to matching tutoring.

The bigger issue with trying to use the numbers from Bloom's paper today is
that "effect size" assumes the control groups are similar. Bloom's studies
were in US high schools in the early 1980s. Most ITS papers are on university
students or interventions that are also supported with extra staff time and
training. They're often on different subjects, with different levels of
support in the base class, differently motivated students, different home
environments, different teaching styles, etc, but we expect the results
between studies to be comparable. I doubt even modern US high schools are like
US high schools of the 1970s and 1980s. The control group for Bloom's studies
no longer exists.

To give an extreme example of this problem - that control groups in education
are not at all similar - when I was doing my PhD on AI in education, I was
doing it at a university that gives "supervisions" (small group tutoring) to
all its students. So if I were to have tried an "effect size" study on
anything I built, my _control group_ would have been equivalent to Bloom's
_intervention group_ (in that they _did_ receive small group tutoring) not his
control group.

When I was doing my PhD, I had a little informal tea-time quip which was that
the flip side of effect size analysis is everything control groups "don't do"
if they're to be comparable. Bloom finds classroom morale has a 0.6 sigma
effect, so should we take it that the base is that our studies should take
place in classes with bad morale? Assigning homework has a 0.3 sigma effect,
so presumably control groups don't do that (let alone mark it because that has
a 0.8 sigma effect)?

------
vajrabum
The 2-sigma challenge refers to Bloom's research finding that one-on-one
tutoring using mastery learning led to a 2-sigma improvement in student
performance. The article is in part a discussion of whether this conclusion is
justified. I love this quote from the article, perhaps because it suggests
that things could get a _lot_ better at our schools without miraculous or
infeasibly expensive interventions, by focusing on reliable smaller effects.

 _The 2-sigma challenge (or 1-sigma claim) is misleading out of context and
potentially damaging to educational research both within and outside of the
mastery learning tradition, as it may lead researchers to belittle true,
replicable, and generalizable achievement effects in the more realistic range
of 20-50% of an individual-level standard deviation. For example, an
educational intervention that produced a reliable gain of .33 each year could,
if applied to lower class schools, wipe out the typical achievement gap
between lower and middle-class children in 3 years—no small accomplishment.
Yet the claims for huge effects made by Bloom and others could lead
researchers who find effect sizes of "only" .33 to question the value of their
methods._

------
smogcutter
A quote in the Alfie Kohn article linked in TFA is, although anecdotal,
particularly damning of DI:

> Reporters for the New York Times and Education Week visited Direct
> Instruction (DI) classrooms — in North Carolina and Texas, respectively —
> and coincidentally published their accounts in the same month, June 1998.
> The Education Week reporter found that sixth-grade students, successfully
> trained to do well on the main standardized test used in Texas, couldn’t
> explain what was going on in the book they were reading or even what the
> title meant. Apparently, she concluded, “mastering reading skills does not
> guarantee comprehension.” The Times reporter had been told by the for-profit
> company running a DI-style school that all of their kindergartners had been
> trained to read. “All you have to remember” as a teacher, he was told, “is
> that you can’t go off the script.” But when the reporter showed the children
> “something basic they’d never seen,” they couldn’t make heads or tails of
> it. A regimented drill-and-skill approach had trained them to “read” only
> what had been on the teachers’ script.[7]

------
chrismorgan
I keep getting distracted by the bad line wrapping in links, where it just
breaks mid-word with no hyphen or anything.

The offending CSS rule is

    main .about p > a,
    main .post p > a {
      word-break: break-all;
    }

This should just be removed, it’s unambiguously wrong. I’d have said it should
be using `overflow-wrap: break-word` instead, but the containing paragraph
having `word-break: break-word` makes that unnecessary.

The word-break and overflow-wrap properties are fairly nuanced, and I honestly
suspect that _most_ times I’ve seen them used they’ve been used incorrectly.
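To make the distinction concrete, here's a minimal sketch (the class names are purely illustrative):

    /* word-break: break-all permits a line break between any two
       characters, so ordinary words in prose get split mid-word. */
    .any-break { word-break: break-all; }

    /* overflow-wrap: break-word only breaks inside a word when it
       would otherwise overflow its container (e.g. a long URL). */
    .overflow-only { overflow-wrap: break-word; }

The first rule mangles normal text; the second only kicks in as a last resort when a single unbreakable token is wider than its box.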

Another change that should be made to the page’s styles is reverting to
left-alignment inside tables, and probably turning off hyphenation there as
well. Full justification with hyphenation works well enough on regular body
text (it’s a stylistic choice, shall we say), but inside tables with narrow
columns it’s destructive. I’d add this:

    main .about table,
    main .post table {
      text-align: left;
      hyphens: none;
    }

------
taeric
Finding little evidence of learning transfer feels like a massive blow to many
methodologies.

Of course, I think I'm just bemoaning that my industry seems so opposed to
situational training.

------
jimhefferon
So, is there software that facilitates Mastery Learning, say, under Linux? I'm
not sure I understand what is involved.

~~~
AtlasBarfed
It seems to be based on avoiding cargo cult learning, so you need to learn
everything from "first principles" (hi N-GATE!).

So Linux would probably require computer organization and operating system
theory mastery, and from thence you get to Linux's specific implementation(s).

Learning is clearing a fairly complex DAG, and it would be nice to have a good
understanding of theoretical mastery prerequisites and the next concepts a
mastery step enables, at a good granularity, but basically Wikipedia links
between articles seem to be as good an (open source) map as I've seen.
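The "clearing a DAG" framing can be sketched with a topological sort over a prerequisite graph; the concept names and structure here are purely illustrative, not taken from any real curriculum:

```python
from graphlib import TopologicalSorter

# Hypothetical prerequisite graph: each concept maps to the set of
# concepts that must be mastered before it.
prereqs = {
    "linux internals": {"operating systems", "computer organization"},
    "operating systems": {"computer organization"},
    "computer organization": set(),
}

# static_order() yields a valid study order: every concept appears
# only after all of its prerequisites have appeared.
study_order = list(TopologicalSorter(prereqs).static_order())
print(study_order)
# → ['computer organization', 'operating systems', 'linux internals']
```

A mastery-learning tutor would gate each node on demonstrated mastery of its predecessors before letting the student advance.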

------
hamolton
This might have gotten more attention here if the link text didn't contain
educational research jargon. Seems like a serious literature review, though,
at a glance.

~~~
gumby
Plenty of HN posts contain domain-specific jargon from physics, mathematics,
chemistry, biology, and various computing subdisciplines. I don’t see why
educational jargon in a paper intended for practitioners in that field should
be any different.

~~~
artir
Nintil.com author here. I'm no education researcher, but I think the post is
relatively jargon-free, other than the jargon I deliberately introduce after
explaining what it is, I think.

