> Stan J. Liebowitz is the Ashbel Smith Professor of Economics at the Colloquium for the Advancement of Free Enterprise Education in the Jindal School of Management at the University of Texas at Dallas.
> Matthew L. Kelly is a research fellow at the Colloquium for the Advancement of Free Enterprise Education in the Jindal School of Management at the University of Texas at Dallas.
Here's another piece by them on net neutrality: https://www.insidesources.com/state-governments-drop-net-neu...
Here's the site for the "Colloquium for the Advancement of Free Enterprise Education": https://jindal.utdallas.edu/centers-of-excellence/capri/cafe...
Here's a press release about the founding of this entity, which includes some information on where the funding for it comes from: https://jindal.utdallas.edu/news/new-program-at-the-jindal-s...
> a local donor — a UT Dallas graduate who wishes to remain anonymous
From the article:
> This is starkly illustrated by comparing Texas and Iowa. According to U.S. News and World Report, Texas, which ranks 33rd, is far surpassed in educational quality by Iowa, which ranks eighth.
> Think about that: White students do better in Texas than in Iowa. Black students do better in Texas. Hispanic students do better in Texas. Asian students do better in Texas. Given these facts, it is absurd for U.S. News to rank Iowa higher than Texas in terms of educational performance.
So... an alum of the University of Texas was so angry that Iowa was ranked above Texas in education that they partnered up with the Koch brothers to fund an $840K institute at UT Dallas that would publish a study with an alternate way of doing the rankings that would put Texas above Iowa?
That's some next-level rivalry right there.
The fact that they are members of some brand-new academic unit that claims to be "for the Advancement of Free Enterprise Education" suggests there's a non-zero chance they knew the conclusion about public spending on education before they started.
I raised a small set of facts without making any claim/appeal/accusation of anything sensational or conspiratorial.
If researchers at the Colloquium for the Advancement of Drink Your Damn Milk publish a study concluding you should drink your damn milk, it's reasonable to weight their conclusions accordingly until you see how others with more domain knowledge assess them.
-- Mark Twain
Reason Foundation is a 501(c)(3) nonprofit organization supported by donations and sales of its publications. Its largest donors are the David H. Koch Charitable Foundation ($1,522,212) and the Sarah Scaife Foundation ($2,016,000), according to disclosures. Other major donors are Donors Trust and Donors Capital Fund, which in turn do not reveal their donors. The Reason Foundation is part of the libertarian Atlas Network, the State Policy Network, and ALEC.
Gotta hand it to them, pretty brilliant to call anti-regulatory extremism, antipathy to public institutions, and outright rejection of structural inequality "reason." Guess I am unreasonable.
Not that anyone so far has managed to point at anything particularly nefarious that the Kochs do, other than being rich and active.
If one speaks of THE Adolf Hitler, does it demand a replay of the historical record?
The race adjusted performance is interesting and useful but they're basically using it as a proxy for wealth. Why can't they just use some metric that directly represents wealth?
If Massachusetts schools are efficient with money, then the system nationwide is far more screwed up than anyone is willing to admit.
That state rankings depend in part on money spent is abhorrent. Spending should be tracked for reasons that should be obvious, but not as part of a composite score meant to represent results.
The bit about unions lines up with my anecdotal experience. I worked with the education department in college, and they did not hold teachers unions in high regard. If your unions are so bad that college professors at a state school in a blue state gripe about how they hold back progress, then I think it's fair to say your union is pretty bad.
I have no idea whether this is true or not
The Hispanic kids grow up in mostly uniformly Hispanic environments, so they have less need to learn English, while the latter grow up in more diverse environments. If hardly any neighborhood kids and classmates speak your native language, you need to learn English to communicate with them. If you're Hispanic, though, chances are you're growing up in a majority-Hispanic neighborhood and going to a majority-Hispanic school, so you can do mostly fine with only basic English skills.
The same thing is true about your parents, so poor English proficiency of kids is not caused by poor English proficiency of parents, but rather both are caused by the same factor.
I'm a middle aged white male and I've been confronted by neighborhood watch and by the police while walking in other neighborhoods.
If I had attacked someone who confronted me, as Zimmerman claims Trayvon Martin did, it could easily have ended with a gunshot.
I agree that a big part of #blacklivesmatter is claiming that black folks face unique problems, but in order to make that claim they ignore the cases in which white people have the same experiences.
As a white person from an impoverished family who grew up in a mixed race inner city neighborhood, I see a lot of wealthy white people who claim my life experiences don't happen to white people. I see Hollywood and the news media making the same claims. I see BLM making the same claims.
That's a distorted perspective of life in America that ignores the existence and life experiences of a lot of people.
No, it's doing worse than ignoring them. It's actively trying to silence them.
They're perfectly valid, until they claim that no white people have similar experiences. A lot of privileged people really seem to imagine that all white people are as privileged as they are.
And when sociologists lump the population of Beverly Hills and Silicon Valley together with white people born in the ghetto, the resulting statistics mask the experiences of white people who aren't as fortunate.
Specifically in response to that link, I believe the overpolicing of America is a serious problem, and the war on drugs is a war on poor Americans. I know it because I've seen it. I know too many people who've gone to jail or prison for drug crimes. But I won't call it "The New Jim Crow" because I've seen it affect too many white people as well.
More white people than black people are killed by police. More white people are jailed for drug crimes. The per capita rates are lower among "all white people" but the problems aren't evenly distributed; for certain subsets of white people the per capita rates are equally high.
The lives of those white people matter as much as the lives of the people BLM cares about.
Regardless, Rayiner was making a pretty straightforward point, and you've derailed it with a tendentious harangue about Black Lives Matter. Not here, please?
Second point: Rayiner introduced Trayvon Martin and BLM to the discussion, not me. If your objection is to derailing the discussion with an unrelated topic, you should be making it to Rayiner.
> The per capita rates are lower among "all white people"
If you're trying to limit the discussion to the argument that black people are less privileged than white people overall, you're doing EXACTLY what I've repeatedly said: ignoring the experiences of less privileged white people who aren't any better off simply because different white people are more privileged; trying to exclude them and their experiences from the discussion.
You sure about that?
According to the Washington Post, roughly 50% of people shot by police are white, 25% are black.
A fact that doesn't help poor white people at all. Their lives aren't any better just because you're privileged.
There is much research (once you sort through the partisan fluff from research institutions with obvious bias) supporting the positive impact of revenue per student on academic achievement on standardized tests. It's actually funny this article popped up; I legit just read a paper from 2015 called "A Cost-Benefit Analysis for Per-Student Expenditures and Academic Achievement" this morning. Weird. The authors found that "There was a significant correlation between revenues available per student and ACT scores as one outcome measure of achievement." And just to drive that point home, they replicated the findings from a 2002 study, further solidifying that sentiment.
And that's just the most recent one I've read. I'm sure there are more recent. And that is definitely just one in the series of research related to per-pupil expenditures.
Also, I'm afraid this piece serves no purpose other than to be self-congratulatory to the 'lower taxes at all costs' group and right-to-work proponents. Why I say this: research pieces probably shouldn't include snide comments like
"high-tax, high-spending progressive utopias."
Maybe that's just me? Am I off base?
edit: also, the comment section on that article is awful. Just awful.
What? Why should you count expenditure as a positive? The only thing that ought to matter is achievement. If you hold achievement constant, but spend more, that is objectively worse. Which is the whole point of the article.
> There is much research (once you sort through the partisan fluff from research institutions with obvious bias) supporting the positive impact of revenue per student on academic achievement on standardized tests. It's actually funny this article popped up; I legit just read a paper from 2015 called "A Cost-Benefit Analysis for Per-Student Expenditures and Academic Achievement" this morning. Weird. The authors found that "There was a significant correlation between revenues available per student and ACT scores as one outcome measure of achievement." And just to drive that point home, they replicated the findings from a 2002 study, further solidifying that sentiment.
Yes, but did they adjust for the confounding factors the authors of this article point out? I'll bet they didn't.
Because achievement is hard to measure.
> If you hold achievement constant, but spend more, that is objectively worse.
Which again depends on what you mean by achievement.
Consider, e.g., Lakeside's teletype terminal in the 1960s. If computer access didn't increase aggregate ACT scores (which it probably didn't...), was it then "objectively a waste of money"?
Having grown up in a post-NCLB school system, I'm extremely skeptical of standardized tests as a measure of educational achievement. We're a couple decades into confusing the map for the territory on this one. It's Lockhart's Lament on steroids, and in every subject.
Funding levels are at least positively correlated with paying people well, which -- in every field I've worked in at least -- is typically correlated with higher quality employees.
Except that they already are measuring achievement. If you count expenditure as a positive, you assume your conclusion that spending is good.
> Which again depends on what you mean by achievement.
No, it does not. It is always worse to spend more for the same level of achievement. If your argument is that achievement is mismeasured, that is a completely separate issue.
They are measuring performance on standardized tests.
If by "achievement" you mean "does well on multiple choice exams", then sure.
I don't trust standardized tests as a metric of achievement. Even without metric gaming they are pretty awful. And we've been solidly in systematic metric-gaming land for at least the past two decades.
The entire K12 system is designed around increasing these test scores.
> If you count expenditure as a positive, you assume your conclusion that spending is good.
No, you assume there's a hidden function that you can't measure or explain but which is representative of reality and also positively correlates funding levels with performance.
I know it's a radical proposal in STEM communities, but sometimes measuring things causes a lot more problems than it solves.
Sometimes yes, sometimes no. But spending is never a good measurement of achievement.
> No. My assumption is that measuring achievement is inherently impossible in something as multi-faceted and large as universal education.
You think it is impossible to measure educational performance at all? Surely you don't believe something that silly. You think it is measured imperfectly. And I agree with you. But I'm not sure what your point is here. Certainly imperfect measurements don't mean we should just measure random, unrelated variables like spending. I also don't see how it means we shouldn't try to measure, and improve our measurement standards. And it certainly doesn't mean that the conclusion of this article is in any way incorrect (which is not that the measurements are good, but that the way other people were using these measurements was even less good than using them correctly).
Nobody said it was.
> You think it is impossible to measure educational performance at all? Surely you don't believe something that silly.
Obviously not. That's a ridiculous strawman and I think you know it.
> But i'm not sure what your point is here.
My point is that the availability of resources IS a useful metric.
> Certainly imperfect measurements don't mean we should just measure random, unrelated variables like spending.
How is availability of resources unrelated?
The idea that the level of resources available is irrelevant strikes me as fairly insane. You're really saying that you'd have the same quality teachers at $40k/yr as at $90k/yr or $500k/yr? CLEARLY resources affect outcomes.
It's not the whole story, but it is a useful feature. To claim it's completely irrelevant because "oh look multiple choice exams that everyone has been systematically gaming for 20+ years" seems absolutely insane to me.
> I also don't see how it means we shouldn't try to measure, and improve our measurement standards.
> And it certainly doesn't mean, that the conclusion of this article is in any way incorrect (which is not that the measurements are good, but that the way other people were using these measurements were even less good than using them correctly).
I think the fundamental thesis of the article -- that we can scientifically manage a system of universal education -- is fundamentally and impossibly flawed. Especially given the current state of assessments and the incentive structures built around them.
> We fixed two serious problems common to traditional rankings. First, we removed factors that do not measure K–12 student performance or teaching effectiveness, such as spending per student (intentions to raise performance are not the same as raising performance)
Otherwise, it's just weighing one questionable feature more than another questionable feature.
Test scores do NOT provide a holistic picture of the quality of a school system. Neither do funding levels. BOTH of those things can be increased while not improving or even harming actual educational achievement. They are FEATURES, and reasonable models can weigh them in different ways.
The assertion that funding levels might be positively correlated with (unmeasurable -- you've conceded this point!) true outcomes is not unreasonable.
The assessments the authors are criticizing actually have it completely right. Both test scores and funding levels are PROXIES for the actual quality of education that's happening. They are features. It's unclear how to weight these features against one another, but the conjecture that either of those features might be correlative or even causative is totally reasonable.
The authors, however, go off the deep end and completely mistake one particular map (and a damn bad one at that) for the territory.
If the authors' assertion made any sense, then we could replace all teachers with DRL.
No it's not unreasonable, as a hypothesis. But if you wanted to, you know, test that hypothesis, guess what you'd need: performance measurement. Which means that the only way you can prove that spending matters is by measuring its impact on performance, which is already being measured by the other variables. So, there is zero reason to believe that incorporating spending as an additional variable can improve your assessment of quality.
Once we assert that those tests are not measuring performance, the claim that performance is "already being measured" is just wrong and we're back to square one.
So in the face of a known insufficiency of objective measures, and an identified correlation between a common statistic and the metrics we do have, it is legitimate to use that correlating factor to help identify better-performing school systems until better measurements can be objectively identified.
But one must acknowledge at the same time that this is not ideal, and much more effort should be made to understand the true nature of education beyond simple test or economic results, i.e., in my opinion, a better understanding of philosophy and the nature of the human being, so that we can understand what truly embodies a good education.
As a tangent, may all multiple-choice tests die an eternal death. They have only one redeeming quality: they are easier to grade. And even that metric is dead for many subjects in today's world. One of their less redeemable qualities is that they hide ignorance and are just too easy. All you need is a minimal bullshit radar to get to a 50-50 chance of being right most of the time (so if you know 70%, you will probably score 85%). For another good chunk of questions, that same radar is enough for 100% accuracy due to the ridiculous alternative choices. Hell, I got bored on my PSAT and got a Mensa invitation for my score because I (intuitively) picked up on a pattern in the answers and filled it in for the last 30-40% of the test.
How do you figure that? The correlating metric only explains the variance it correlates with. But we're already measuring that variance directly.
>> To add to this, they need not be wrong (and really should not be to make the argument), just significantly incomplete. From what I gathered from this thread, this was already agreed upon by everyone.
>>> Unless those tests are NOT measuring performance in a meaningful way.
>>>> Otherwise, it's just weighing one questionable feature more than another questionable feature.
>>>>> I think the fundamental thesis of the article -- that we can scientifically manage a system of universal education -- is fundamentally and impossibly flawed. Especially given the current state of assessments and the incentive structures built around them. (or something)
And so on...
So there's a general agreement that the current tests (and/or the social system they're embedded in) are extremely flawed, and at least a contest over whether good-enough tests even exist.
Which is to say, for the fifth(?) time, the basic problem is that standardized tests are not doing even a remotely reasonable job at "measuring that variance directly". So pretending like they are legitimate metrics (and/or any more legitimate than other fairly broken metrics like funding levels) for measuring overall educational attainment is at best flawed and at worst -- I assert very likely! -- the very source of the whole problem with K12 in this country.
On that note, I would like to close out this thread by imploring you to read the essay I linked to in my first post: https://www.maa.org/external_archive/devlin/LockhartsLament....
Now, in terms of getting a bead on the true signal, do we care about the spending variable? The only reason we have to believe it has any relationship to the true signal is that it correlates with the measured signal! If we didn't know the measured signal directly, that might be useful. But we do.
What I am saying is this: the performance tests are the best we have. There is no additive value of this 'spending' variable, as far as anyone I'm aware of can tell. If you have information to the contrary, I'm curious to hear it, though.
Now, you may, perhaps even rightly, say that the performance tests we have are so bad that they're useless. Well, I'm not going to take a position on that. The position that I'm taking is that adding 'spending' to your model does not minimize your loss function here, and there is no reason to believe that it would.
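The statistical claim being made here can be illustrated with a toy simulation. All data below is invented, and the setup bakes in exactly the assumption this commenter is making (and that others in the thread dispute): spending relates to the unobservable "true quality" only through the measured score. Under that assumption, adding spending to a model that already includes the measured score leaves held-out error essentially unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Invented data: "measured" is the observed test score, "true_quality" is the
# unobservable thing it tracks, and "spending" relates to true_quality ONLY
# through its correlation with the measured score.
measured = rng.normal(0.0, 1.0, n)
true_quality = measured + rng.normal(0.0, 0.5, n)
spending = 0.8 * measured + rng.normal(0.0, 0.6, n)

def holdout_mse(X_train, y_train, X_test, y_test):
    """Fit ordinary least squares on one half, score on the other half."""
    coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    resid = y_test - X_test @ coef
    return float(np.mean(resid ** 2))

half = n // 2
ones = np.ones((n, 1))
X1 = np.hstack([ones, measured[:, None]])                     # measured only
X2 = np.hstack([ones, measured[:, None], spending[:, None]])  # + spending

mse_without = holdout_mse(X1[:half], true_quality[:half], X1[half:], true_quality[half:])
mse_with = holdout_mse(X2[:half], true_quality[:half], X2[half:], true_quality[half:])
# Under this assumption, the extra 'spending' column barely moves held-out error.
```

Of course, if spending carries signal about quality that the tests miss (the opposing view upthread), the conditional-independence assumption fails and the conclusion no longer follows.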
Providing such evidence was literally the main claim made in the top-level comment on this now very long thread...
Also, the fact that spending correlates well with other test-based performance metrics (see: the top-level comment on this now very long thread). And that the state rankings that result from the NAEP-based ranking are wildly inconsistent with rankings constructed from other tests (e.g., state exams, college entrance exam scores).
You provided evidence that spending has additive value in predicting outcomes? Please, cite it again.
> Also, the fact that spending correlates well with other test-based performance metrics (see: the top-level comment on this now very long thread).
You want to use, as evidence to support the view that spending provides novel information over and above performance evaluation, that it correlates with performance evaluation?
If public money is invested, it should be done based on ROI. If there is no ROI, that’s welfare, and welfare should go to those with the greatest need.
The problem we're currently facing -- on full display in this article -- is the widespread delusion that confuses "ROI" with "performance on one extraordinarily poorly designed multiple-choice test from the '70s".
1) Wages weren't being measured, spending was. Not necessarily the same thing.
2) Teacher wages may be relevant as a moral issue, but they are not the same thing as a measure of the quality of a school from an academic perspective, which is the point of this article.
The "high-tax, high-spending progressive utopias" remark was mirroring the Paul Krugman op-ed that was mentioned in the introduction to the article.
The article's methodology looks at academic achievement directly, so there is no need to use spending as a proxy.
Per my reading of it, they used only one measure, the NAEP. The GP mentions a paper that used the ACT as a guide. At the least, we should resolve the difference between the NAEP and the ACT, if there is any. But, like the SAT or college acceptance rates, those are just a few measures. Achievement is not easy to define in this context, let alone measure. All the test scores are just proxies for 'achievement' as a general term. Should we measure household income ten years out too? College graduation rate, number of pregnancies, marathon runners? It's all just a proxy in the end for trying to determine, in granular detail, whether education is worth spending tax money on.
What are our goals?
The second one can be taught, though.
That's the stuff we want them to know. We want people who know more math and vocabulary to score higher. That's not 'gaming.' That's just the basic concept of testing.
That's what I'm talking about; they don't need to actually know how to do the stuff that's on the ACT, because it's multiple choice and there's tons of tricks you can use instead. That's why it's "gaming" -- you don't need to know the subject.
No, it doesn't. It looks at performance on one specific test. Which could very easily be uncorrelated or even negatively correlated with actual "academic achievement".
Both funding levels and test scores are features you might include in a model that predicts the overall quality of education a student will receive.
The authors start from the premise that they're trying to find a function that minimizes costs while maximizing test scores. The publications they are criticizing start from the premise that you want to find the best school for your kid.
As an aside, twenty years ago I would believe that test scores are a fairly decent metric. Today, I'm inclined to believe the opposite. Some of the worst schools I've volunteered in did relatively well on standardized tests all things considered. But the education was abysmal; they knew they couldn't teach the material, and they needed to get enough "proficient"s, so they started teaching test-taking instead.
Conversely, schools that would get those "proficient"s by default had the breathing room to step back and e.g. read some of Euclid's Elements in the Geometry course, or incorporate some extra-curricular mathematical programming and basic proof writing into the algebra sequence, or fund a robust arts program.
Those things probably didn't boost their standardized test scores, since standardized tests don't test for those things, but it absolutely made a night-and-day difference in "academic achievement".
So, yet again, to beat a dead horse, NAEP != "academic achievement".
But it's even worse than that. The best metric, from a purely predictive perspective, for whether a school provides a good education, is the level at which the local community taxes itself to pay for education. Wealthy suburbs tend to have excellent schools. The authors of the article even, confusingly, concede this point.
If I were building a model to tell you where your kids will get a good education -- which is exactly what "U.S. News, Education Week, WalletHub" are doing -- I would absolutely include local taxation, bond issue pass rates, etc. in that model. And would probably weight them at least as heavily as ACT or NAEP or state standardized test scores. It's an eerily good proxy for everything from teacher quality to parental involvement. I suspect that if money were on the line, the authors would do the same.
Maybe the only serious take-away from this article is that consumer advice shouldn't be mistaken for policy-making advice. WalletHub might be able to tell you how to navigate the current system in a way that maximizes your kids' academic attainment, but it's not necessarily going to provide you with great policy-making advice. Which... duh.
That’s a great metric for predicting how rich and white your student population is. If you want to equate “rich and white students” with “quality” you’re welcome to do so.
Hence the last paragraph of my parent post, btw.
What do you think would happen to the regression if per-student expense was both an independent and a dependent variable?
I don't know what variables the authors are talking about when they say "we ran multiple regression analyses on our data, which included several other variables", so I hesitate to make a claim on what their outcomes could be. There are so many things they could be looking at; it's almost impossible to know what they're really evaluating without being able to look at the data.
My experience as a teacher makes it hard to understand how it couldn't be directly correlated. Of course well-paid, well-benefited teachers are going to work harder, but there's much more to school funding than teacher pay. Quality of the school building, newer textbooks -- all of that comes into play with per-pupil funding. It's not hard to imagine that schools with higher per-pupil funding will have better resources and environments conducive to learning.
Also, they were assessing efficiency, and since the cost component of efficiency (performance/cost) is measured as per-pupil spending, we wouldn't get very useful results if performance = f(per-pupil spending).
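The degenerate case hinted at here can be seen in a quick simulation (all numbers invented): even when performance is generated completely independently of spending, an efficiency ratio of performance/spending correlates strongly and negatively with spending, purely mechanically:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Invented data: performance is drawn INDEPENDENTLY of spending, so any
# relationship between the efficiency ratio and spending below is mechanical.
spending = rng.uniform(8_000, 20_000, n)   # hypothetical per-pupil dollars
performance = rng.normal(250, 10, n)       # hypothetical NAEP-like scale scores

efficiency = performance / spending        # "performance per dollar"

# Strongly negative, even though performance carries no information
# about spending at all.
corr = float(np.corrcoef(efficiency, spending)[0, 1])
```

This is why regressing an efficiency measure on the very spending figure in its denominator tells you almost nothing about whether spending helps.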
Standardized tests are already included in the ranking.
If you're choosing a state to live in to give the best public education for your children, then the per-student expense really isn't important as you aren't directly paying.
If you're a politician rating states based on the efficiency of education spending, then their ranking is more relevant.
I spent the first 14 years of my life in Las Cruces, New Mexico (the state's second biggest city), and the schools I went to were not very good. I did high school in Maryland, and the school system was a lot better (I initially struggled because I was so far behind everyone else; a friend of mine who moved to North Carolina for high school told me he had the same problem). In the authors' ranking list, they place Maine at #48 and New Mexico at #41. I find this very hard to believe. There are so many problems that New Mexico has that Maine doesn't seem to have (gang problems, drug problems, at least at the magnitude that I saw in New Mexico). Unless Maine has some pretty bad schools, this just doesn't add up. US News lists Maine at #6; a ranking drop of 42 spots seems really significant.
Off Topic: I went to Las Cruces a few years ago when my alma mater played New Mexico State. The scenery was breathtaking.
It's certainly possible the rest of the state has better schools, but I never heard anything about them while I lived there.
That is the INTENT, sure. But does it work? If you have someone who graduated from school A and someone who failed to graduate from school B in an entirely different area, can you draw any meaningful conclusions? Particularly knowing that school A has no particular requirements for graduation, while school B does?
I'm far more likely to complain about "But when we disaggregate student performance scores by racial categories (white, black, Hispanic, and Asian), the rankings change dramatically." I wouldn't want to go too far with that before ensuring that I wasn't trying to curb swimming deaths by keeping Nick Cage out of movies: http://tylervigen.com/view_correlation?id=359
Removing graduation rates is definitely troublesome. Any school system could then improve its numbers by encouraging the worst students to drop out. Problem students both take more resources to educate and produce worse scores. That's bad for efficiency, but it is not obviously bad for society.
I am also troubled that they are using measures of performance in grades 4 and 8, but not something like SAT scores. I understand that SAT scores could be hard to get. But a lot of what people care about in an education system is how well prepared you are for college, and not how far along you were in grade 8.
The first statement is true. The piece would be a bit stronger if the researchers noted that by publishing in Reason, they're at the very least giving the appearance of potential confirmation bias. Your second statement makes an unsupported assumption though.
RE: SAT scores - not all students take the SAT, some students take it multiple times, the SAT is taken at different points in time per student, and it's a test that is so influential in college admissions that success on it is strongly correlated with wealth, as scores are best improved by tutoring on test taking strategy, not increasing general knowledge. I agree that grade 4 and 8 are probably not optimal to get a complete picture of education quality, but there is no reward to the student for doing well, so they're less likely to have spent substantial time investing in beating the test.
I'm the founder of PolarisList (https://www.polarislist.com), a high school ranking based on the number of students sent to Harvard, Princeton, and MIT.
This prompted us to take a stab at generating state rankings based on our own dataset, and we came up with the following list. We calculated it by taking the number of students in a state who matriculated to the aforementioned colleges from public high schools, divided by the estimated 2017 population:
2 New Jersey
4 New York
8 New Hampshire
12 Rhode Island
24 North Dakota
25 South Dakota
32 West Virginia
34 North Carolina
35 New Mexico
47 South Carolina
48 Washington DC
A couple of things stand out to me:
- All 3 rankings have Massachusetts and New Jersey within the top 10
- All 3 rankings have Oklahoma and Louisiana in the bottom 10
- Our numbers differ from the other datasets the most on:
- New York (#4 on our ranking, #30 on Liebowitz/Kelly, #31 on US News)
- California (#9 on our ranking, #34 on Liebowitz/Kelly, #44 on US News)
- Alaska (#11 on our ranking, #42 on Liebowitz/Kelly, #46 on US News)
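The per-capita methodology described above is straightforward to sketch. The counts and populations below are invented illustrative numbers, not PolarisList data:

```python
# Sketch of the per-capita ranking described above: matriculation counts to
# the three colleges divided by state population, then sorted. The counts
# and populations here are hypothetical, for illustration only.

matriculants = {"New Jersey": 120, "New York": 230, "South Carolina": 8}
population_2017 = {"New Jersey": 9_000_000, "New York": 19_800_000,
                   "South Carolina": 5_000_000}

rates = {state: matriculants[state] / population_2017[state]
         for state in matriculants}

# Sort descending by per-capita rate to produce the ranking.
ranking = sorted(rates, key=rates.get, reverse=True)
print(ranking)  # ['New Jersey', 'New York', 'South Carolina']
```

Note that dividing by total state population (rather than by public-school enrollment or graduating-class size) is itself a methodological choice that would shift the rankings for states with unusual age distributions.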
School A - full of underachieving kids that are pushed to get avg grades
School B - full of avg kids getting above avg grades
School C - full of smart kids getting top grades
School D - full of smart kids getting nearly top grades, but soft skills, nice personalities and VIP friends
School E - Mix of kids getting relevant grades
Which is the best school for each kid type? They're impossible to rank. Many schools with top grades are only there because they kick out failing kids, not because they're better at teaching. Many schools are rated good because they can improve grades, but you wouldn't want to send a smart kid there. Many private schools sell themselves on creating well rounded kids, not necessarily good grades.
For the point of ranking schools, there are statistical methods to measure student growth, instead of achievement, such as SGP (student growth percentile) and VAM (value-added modeling), which kind of address part of your question.
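The core idea behind SGP can be shown in a toy sketch: a student's growth percentile is their rank among peers who started from the same prior score. Real SGP implementations use quantile regression over several prior years of scores; this simplified version matches on an exact prior score only:

```python
# Simplified sketch of a student growth percentile (SGP): rank a student's
# current score against peers who had the same prior score. Real SGP uses
# quantile regression over multiple prior years; this is a toy illustration.
from bisect import bisect_left

def growth_percentile(prior, current, cohort):
    """cohort: list of (prior_score, current_score) pairs."""
    peers = sorted(c for p, c in cohort if p == prior)
    if not peers:
        return None
    # Percentile = share of comparable peers scoring below this student.
    below = bisect_left(peers, current)
    return 100 * below / len(peers)

cohort = [(50, 55), (50, 60), (50, 65), (50, 70), (60, 80)]
print(growth_percentile(50, 65, cohort))  # 50.0 -- outgrew 2 of 4 peers
```

The appeal for ranking purposes is that a school full of low-scoring students can still show high growth percentiles, which measures teaching effect rather than intake selection.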
Also, based on my own anecdotal experience of standardized testing in grade school, these tests rarely include questions about "controversial" topics in history or scientific ideas that are opposed by Christian fundamentalism. I suspect these are major areas where southern districts would struggle, given their disproportionate implementation of cough alternative textbooks.
Overall, this smells of a common trend in American institutions where there's an ideological interest at play. An issue is raised, and it's decided that we need more data to decide. The data collection (in this case standardized testing) is designed in such a way that it produces results that skew toward the ideological position. Then the bad data is used to justify the ideological position. Law enforcement works the same way. Actually now that I'm thinking about it so do our elections.
> There are a lot of aspects of education that standardized testing can't (or won't) measure: access to extra-curricular programs, funding for art/music departments, technical education, etc.
> these tests rarely include questions about "controversial" topics in history or scientific ideas that are opposed by Christian fundamentalism.
These are legitimate criticisms of standardized testing, but not excuses for ignoring its results if you don't have better data available.
> Then the bad data is used to justify the ideological position. Law enforcement works the same way.
Maybe instead of using "bad data", like how many crimes are solved, on law enforcement, we should just ask how much is spent on it - the more the better?
Was that efficient? I suppose. Was it a desirable education? Not really.
If race is a mediator for income levels and in turn funding level shouldn't the researchers here include the difference in funding levels between income levels into their model to determine the true performance per spent dollar?
I might be wrong as I'm no expert in the field but I think these type of problems are what causal models try to address. I've really enjoyed "The book of why" by Judea Pearl on the topic. It got me interested in learning more about causality.
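One tool from that book is the backdoor adjustment: to estimate the effect of funding on scores while income confounds both, average the funding-conditional scores over the income strata weighted by each stratum's population share. A toy sketch with invented numbers:

```python
# Toy sketch of Pearl's backdoor adjustment: estimate the effect of funding
# on test scores, adjusting for income level as a confounder.
# All shares and scores below are invented for illustration.

# P(income level) in the population.
p_income = {"low": 0.4, "high": 0.6}

# Mean score given (funding level, income level) -- hypothetical values.
mean_score = {
    ("high_funding", "low"): 60, ("high_funding", "high"): 80,
    ("low_funding", "low"): 55, ("low_funding", "high"): 78,
}

def adjusted_mean(funding):
    # Backdoor formula: average over income strata weighted by P(income).
    return sum(mean_score[(funding, z)] * p_income[z] for z in p_income)

effect = adjusted_mean("high_funding") - adjusted_mean("low_funding")
print(round(effect, 2))  # adjusted funding effect in score points
```

The naive comparison (ignoring income) would mix in the fact that high-income districts tend to have both higher funding and higher scores; the adjustment isolates the within-stratum differences.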
I am struggling to imagine which category is the flip category. If Iowa is not doing better for white, Black, or Hispanic students, which racial category is so large as to flip the average?
50% Hispanic who average 25
50% white who average 29
State average: 27
10% Hispanic who average 24
90% white who average 28
State average: 27.6
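The arithmetic above, written out as code: each subgroup scores lower in the second state, yet its overall average is higher because the population mix differs. The shares and scores are the illustrative numbers from the comment:

```python
# Simpson's paradox with the numbers above: every subgroup does worse in
# state B, but B's overall average is higher because its mix differs.

def state_average(groups):
    """groups: list of (share_of_population, mean_score)."""
    return sum(share * score for share, score in groups)

state_a = [(0.5, 25), (0.5, 29)]   # 50% Hispanic @ 25, 50% white @ 29
state_b = [(0.1, 24), (0.9, 28)]   # 10% Hispanic @ 24, 90% white @ 28

print(state_average(state_a))           # 27.0
print(round(state_average(state_b), 1)) # 27.6
```

This answers the "which category flips the average" question: no category does; it is the weighting by subgroup size that flips it.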
It has an 84% white population, not noticeably different from the UK's makeup at 87%.
Iowa has 88% white (https://en.m.wikipedia.org/wiki/Iowa), or maybe 91%; it's not too clear what they mean by the difference between "white" and "white, not Hispanic".
So those are some pretty small numbers to play with: somehow the "whites do better, and Iowa has more whites" argument has only got 4-7% to work with.
I would like to see the actual figures, but it smells fishy to me.
Second, instead of looking at overall racial makeup of a state, you should be looking at the racial makeup of public schools. For Texas it’s been 52% Hispanic and 28% white. The former percentage is steadily rising year over year, while the latter is steadily falling.
Which makes Simpson's paradox quite possible. So Texas does not have a racial makeup like the UK's. (I have only spent a little time in Houston; mostly I found it populated by cars.)
Besides, race in big-gap states is an unwitting proxy for income, so disaggregate by household per-capita income (relative to local cost of living) if you're going to disaggregate at all. That methodology would work well in all states and expose the role of economic anxiety and the quality of a student's home life in their educational attainment.
This, of course, makes sense. Students from disadvantaged backgrounds face unique challenges. Under the traditional methodology, states can do a crappy job of educating these students so long as they have few of them. States that have large populations of disadvantaged students, by contrast, are penalized in the rankings even if they do a relatively better job educating those students.
Texas doesn't do badly at educating Hispanic students -- it does a superior job at that. It has low overall performance because Hispanic students have terrible performance, and Texas, even though its Hispanic students are better than average, has a very large number of them.
This kind of thing is generally known as Simpson's Paradox. https://en.wikipedia.org/wiki/Simpson%27s_paradox
Cohorts are useful because they capture a group with some shared attribute, and racial cohorts are particularly useful because they're overwhelmingly consistent through time and datasource for a particular person. The disaggregation into racial cohorts in a ranking like this would be a useful indicator of how the outcomes of a particular person would change solely by moving states. But it's not an effective indicator of evaluating a state taxpayer's contribution to the quality of students coming out of the state's schools -- students who presumably perpetuate the results of the quality of their education in their life, producing a myriad of poorly-understood impacts on the state as they do so. Both of them are worthwhile questions to answer, but they're very different studies. But one calling another flawed on the basis of this is jarring.
The opposite is true: the disaggregated ranking is the only good measure of "a state taxpayer's contribution" because the taxpayer has little to do with e.g. the fact that Hispanic students face additional challenges because they're from an ESL household.
What you're really saying is that the composite metric is a better measure of the average "quality of students coming out of the state's schools," which reduces to the assertion that it's better to have a school system full of privileged white kids than minority kids who face challenges. I don't think we should be ranking school systems that way.
The first methodology benefits from disaggregation by racial cohorts in that the reader can optimize given their particular circumstances, but the magnitude of the contributory factors to that success is opaque and still averaged statewide. Most obviously, this includes funding, where funding received by particular students -- in particular, funding sourced from parents of students in the same cohort -- is not separated out. It shouldn't have to be, as schooling is a public good, but if it isn't, then the metric doesn't accurately reflect the impact of the socioeconomic condition and the proportions of the cohorts in the state; it splits out the results solely to avoid having to consider races other than one's own.
The latter question is typically what is being studied, although pop science coverage often leaves the reader free to interpret it as the former.
"How well does the average student do?" vs "If you raise a family there, how well will your children do?"
For the former question, you don't want to control for racial mixture. For the latter question, you do. Most parents should care more about the latter question than the former.
Poor white students do better academically than well off black students. So splitting by race is more informative when discussing academic performance.
I find it hard to believe that, on average, rich black kids are doing worse than poor white kids.