It is just one other indicator of the flaws of the free market approach to everything in the US. Whether it is health care or education, treating it as a product or service sold to consumers creates perverse incentives.
The market has become a religion in the US. It is taken as dogma that it is always the most efficient way to do things, when it is often stunningly inefficient. I've used health care in the US a number of times, and it is quite grotesque how they treat it as a sales transaction, dishing out as many pills and treatments as they can, almost like a car salesman trying to get you to buy add-ons.
My US doctor insisted on giving me antibiotics even though I had the common cold and protested that antibiotics do not work on a viral infection. Whatever your issue is, they attempt to give you the most expensive treatment even if something cheaper would do. And there is an unhealthy obsession with unnecessary frills. Anything to get the customer to keep coming back.
I quite liked American university, but it was extremely lax. They gave students endless opportunities to fix their grades. Again, anything to make the customers happy. Grades were rather pointless, as it seemed only A's mattered.
Calling the American healthcare system "free-market" is disingenuous at worst and hilarious at best. It's probably the most regulated and convoluted sector of the US economy. In contrast, the pet healthcare system in the US is much less regulated. As a result, getting care for my pets is so much easier to navigate (you can actually get price quotes!) and many times cheaper than human healthcare, despite the similar medical procedures and training required of doctors.
It's funny (and sad) how difficult it is for people with real problems to get doctors to prescribe them something like Xanax. My friend has a dog that does terribly on car trips, and the vet gave her a huge fucking bottle of Xanax bars. The same Xanax people use. A shit load of it.
A better example might be dental care, which has many of the same nice properties as veterinary medicine, but is harder for the haters to dismiss.
People get elective dental procedures all the time (braces) that are highly affordable and conveniently delivered, without getting pushed into bankruptcy.
Okay this is the worst argument for an unregulated healthcare system I have ever heard.
1) How much money goes into pet medicine research vs. human?
2) What happens when a pet is sick and the cost of care exceeds what the owner can afford?
You want a health care system where you use 15-20 year old medicine with no research, and most bills > $1000 lead to euthanasia? Sure that would be a cheap healthcare system...
You want a health care system where you use 15-20 year old medicine with no research
What? Animal health is a huge industry. Zoetis was spun off of Pfizer and is making money hand over fist. Lots of new animal drugs are approved each year.
You have packed a large number of biases in a single line:
> You want a health care system where you use 15-20 year old medicine with no research, and most bills > $1000 lead to euthanasia? Sure that would be a cheap healthcare system...
15-20 year old technology (and I hope you agree modern medicine is, broadly speaking, a form of technology) is not old; it is robust and time-tested. Today, building business-critical operations in Fortune 500 companies on top of Linux is a no-brainer, but not so much in 1996; and whoever told you otherwise back then was both unprofessional and self-deluded with evangelist zeal.
Your claim that veterinary medicine has "no research" does not hold water either. Sure, there might be less money being spent on researching animal health care than human health care, but this does not mean that the science behind veterinary medicine is any less valid. If anything, it's probably more valid because there are so many fewer emotional hot-button issues and underhanded political agendas lurking around it.
About the $1k bills, there are two ways to see it. You claim that it is a bad thing that most >$1k procedures end up with patient death. I, on the other hand, would like to point out that this is proof that it is possible for a whole class of modern health care professionals to charge <$1k for procedures that solve most non-life-threatening conditions and remain gainfully in business. Note this argument is valid regardless of your position on human euthanasia.
Finally, your use of "cheap" healthcare system is a double entendre. You are conflating "affordable" with "stingy" in that single word. As shown above, the first does not necessarily imply the second.
Animal drugs are the same as human drugs, plus all the human drugs which fail fairly late in the development pipeline. Animal physiology re: absorption, toxicity, metabolism etc. is pretty well studied because most mammals are used for drug testing/as models of human disease.
Calling the American healthcare system "free-market" is disingenuous at worst and hilarious at best.
You're right - on the literal level. BUT parent said: "It is just one other indicator of the flaws of the free market approach to everything in the US" - and if their statement is interpreted as implying - "the flaws in what's called the free market approach" then the rest of their statement stands.
The health care system is a good example of the profit-driven, marketized-through-regulation, financial-speculation driven system.
And the thing is that no amount of praise of the literal free market is going to change these market-regulated-industry-fusions into free market. I mean, it's a standard approach for functionaries to praise the free market then jump to whatever gawd-awful financialization serves those functionaries' backers.
I mean, "just let them die" is usually an option for seriously ill pets. Generally society frowns on this fashion of treating humans. So "everyone" agrees some compromise is necessary in human health care - but naturally one then wakes to find the latest compromise is a cleverly constructed worst-of-all-possible-world except for X many large interests (education, healthcare, prisons etc).
The problem with that comparison is that often we euthanize pets when the expense is no longer justified. We don't do that with humans. As long as human life is on the line, it will never be truly "free-market". Everyone will pay beyond their means if it means they can stay alive.
Before Obamacare, 1 out of every 2 dollars spent on health care in the US was spent by the government. It is hardly a free market. The education business has heavy government financial involvement, too.
On the other side, check the software business. It's a powerful engine for growth and wealth in this country, despite having driven prices for software literally to zero. And the software business in the US is the most free market industry in the country - no licenses, permits, subsidies, tariffs, regulations, etc.
The issue with health care and free markets is that a fully free health care market would not sit morally well with a lot of people, and some sectors of health care lack the ingredients that make free markets work.
EG: in a free market, if someone has a healthcare emergency, yet cannot afford to pay for the service required, the free market would say -- dump this person in the street. Not exactly a morally pleasing choice. Free markets assume a rational and informed consumer; a person having a medical emergency is not always going to be a rational and informed consumer.
That's not to say there isn't plenty of scope for a competitive marketplace in healthcare (there is, particularly for less critical care and wellness etc.), but certain portions really don't fall in scope, in my opinion.
The problem with the US healthcare market is that it kind of pretends to be this "free market healthcare" system, while in reality being a half-baked socialized medical system built in bits and pieces over time. Some people are covered by the government (Medicare and Medicaid). The government relies on companies to provide much of the remainder (with tax incentives), but heavily regulates the type of health care that can be offered (eg HMOs). Since not everyone is employed by a large company, this leaves many gaps where people have no de facto health coverage at all. Except for emergency rooms, which (thanks to some notorious emergency street dumping instances in the 1980s) will accept anyone these days.
It's a mess. "Obamacare" merely plugged a couple of gaps. Much greater reform is needed, IMHO.
But there does seem to be a chorus of people who, with their strong belief in free markets, believe that the US healthcare system is a free market one (it's not) and consequently believe that the US healthcare system is the best in the world (anyone who has traveled knows how laughable this statement is). It's hard to get meaningful reform as long as this belief persists.
> EG: in a free market, if someone has a healthcare emergency, yet cannot afford to pay for the service required, the free market would say -- dump this person in the street.
(In practice: some private hospitals refused to accept poor uninsured patients, so these patients would be shuffled around in an attempt to find a hospital that would accept them. Sometimes patients would die before a hospital was found.)
Healthcare is not and cannot ever be a free market. There is a massive asymmetry because everyone will need it at some random point not of their choosing and when they do need it the consequences of not having it can be death.
Healthcare is not a fungible good. If you have aggressive cancer, you can't just switch to a psychiatrist because he/she is a "cheaper good" than your oncologist. Delaying the purchase of your heart surgery may mean death.
"Excuse me sir, I know I might be having a heart attack, but can you please provide me a quote for this ambulance ride? I'd like to do some due diligence." Said exactly no one.
Government subsidy (and occasional gov't supported monopolies like Bell Labs) is responsible for the software industry. This is very well-documented. Public takes the biggest risks, then rewards are privatized. Hence the economic importance of programs like DARPA. http://marianamazzucato.com/the-entrepreneurial-state/
How much has the Internet done for your business that a smaller, proprietary network wouldn't have?
Metcalfe's Law only kicks into high gear when everyone is capable of joining the same network. If there are three or four proprietary networks with limited and expensive communications between them, growth is hobbled.
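A rough back-of-envelope illustrates the point (a toy calculation of my own, assuming network value really does scale as n^2, which is itself only an approximation):

    # Toy model: Metcalfe's Law values a network of n users at ~n^2.
    # Compare one open network of N users against k walled-off networks
    # of N/k users each, ignoring any bridges between them.
    def metcalfe_value(n):
        return n ** 2

    N = 1_000_000
    for k in (1, 2, 4):
        total = k * metcalfe_value(N // k)
        print(f"{k} network(s): total value ~ {total:.2e}")
    # 1 network(s): total value ~ 1.00e+12
    # 2 network(s): total value ~ 5.00e+11  (half the value)
    # 4 network(s): total value ~ 2.50e+11  (a quarter)

Splitting one network into k fragments divides the total n^2 value by k, which is the sense in which growth is hobbled.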
Or, to take another tack, how much has access to clean water done for you? How much has access to a house that isn't burgled done for you?
Just because you personally never received a check doesn't mean the government hasn't given you benefit.
> How much has the Internet done for your business that a smaller, proprietary network wouldn't have?
Why do you think other networks wouldn't have grown global?
> If there are three or four proprietary networks with limited and expensive communications between them, growth is hobbled.
I agree, and there was tremendous pressure to interconnect them. The "inter" part of internet is the result of that, it's why it's called the internet rather than Arpanet. Numerous packet switching networks were developed in the 1960s, not just Arpanet.
> Why do you think other networks wouldn't have grown global?
Because privately-run networks have to turn a profit by the end of the fiscal year, if not sooner. It's too easy for a company to engage in profit-seeking behavior which harms long-term growth, like demanding fees to interoperate with it. The core Internet protocol suite, OTOH, is completely specified by RFCs which are free to read and implement.
FidoNet was and is an interesting system, but it relies on a system which is a public utility and/or publicly subsidized in most of the world: the telephone network. Dial-up networks like FidoNet only work if there's enough phone service to bother with.
There are SBA loans for small businesses. They benefit non-software companies too - your local McDonald's is probably "government-subsidized" in this sense. The local McDonald's is going to have an easier time getting said loan, but the program is open to any business.
Local and federal governments need developers to write software and will use contractors, but that's not really a subsidy.
The NSF will occasionally fund research that may or may not use software. For example, I see that the Sage math software has a few grants listed:
I don't know that any of this really supports the grandparent's comment, though. McDonald's is closer to being government-subsidized than anything in the software industry.
SBA loans exist, but they are not specific to any industry. I'd be interested if you could show a connection between SBA loans and the incredible growth of the software industry.
I used DEC systems, then PC systems. I tried SCO Unix, but it was terrible and abandoned it.
Consider also that DEC's 1967 TOPS-10 operating system led to RT-11, to CP/M, to MS-DOS, to NT, to Windows 10. I'm not aware of government subsidy having any role in that.
Just because you can't trace a government dollar directly to Windows 10 doesn't negate the (correct) point that government subsidies directly resulted in the internet and software industry we have today.
What significant subsidies did Microsoft/Google/Apple/Oracle/etc. get?
Also, Edison invented the vacuum tube upon which it is all based. I read a biography of him, and he never got a subsidy. Shouldn't he get all the credit?
Yahoo and AltaVista were there before Google. To say we wouldn't have search engines without the NSF is not believable. The PageRank paper did not require an NSF grant.
> Ampex was developing a giant database for the CIA, and Ellison was assigned to the project, along with Bob Miner and Ed Oates. The CIA's codename for the project was Oracle. The project was something of a disaster, but it led eventually to the formation of Oracle Corporation and its current hugely successful range of databases. And in the same way that Apple based its Macintosh range of personal computers on original research done at Xerox's Palo Alto Research Centre (Parc), Oracle's hugely successful "relational databases" were inspired by an idea developed by IBM.
Or do you mean we don't need copyright protection, limited liability and bankruptcy protection allowing entrepreneurs to take risks, laws protecting investors and corporations, public education system, and so forth?
Why would Edison want all the credit? Newton said he only saw as far as he did by standing on the shoulders of giants. We are all inheriting the accumulated wealth and knowledge of all the people who came before us. Why not the people around immediately before us who built the society we live in?
Are you arguing that the free market is incapable of inventing networking? What about FidoNet, BIX, CompuServe, Prodigy, MCImail, Ethernet, RBBS, etc.? Even I, before I'd heard of DARPA, invented a network (it sucked, but still). The first thought anyone with 2 computers had was to connect them together - i.e. a network.
Software is essentially a luxury item though. I know that we engineers like to think we're the gods of the universe, but nobody needs software to survive. People survived just fine without software for literally all of human history.
It's never free enough.
And if it were, it was the customers' fault.
And if the customers behaved flawlessly and it still did not work,
there would be traitors sabotaging the perfect system.
In a closed worldview, accepting critique is not even possible anymore. Anything brought up is not evaluated but instantly deflected. There is no "let's measure this" and compare (let's call it the free market approach to ideas ;), just offense and defense. Thanks for proving the parent OP right.
When you define "free market" as "absence of government regulation" then you can get whatever result you want, because you can build something that looks like a government inside it, not call it a government, and have it do the thing you need a government to do. And then if you get one of those things which is doing the bad things governments do, you do call it a government and it isn't a free market anymore.
The lesson from this is not "free markets are bad," it's "simplistic ideologies are useless."
We know that private actors are bad at some things, like funding basic research and building roads and utilities. We also know that governments are bad at some things, like making consumer products. So let the government build the roads and let GE make the washing machines.
The worst alternative is to have the government do half of something. This is the US healthcare system. Single payer has problems, but it works. The complete absence of government in healthcare would have different problems but would also work. What we have doesn't work. It's a corrupt mechanism for funneling public money into drug companies.
Here's the test for whether the government should do something: Does it make sense for the government to pay for this and then give it away for absolutely no money whatsoever to every citizen of the country? If not, the government shouldn't do it. Notice that the answer for healthcare is then not obvious, but the thing we have is not a valid answer.
The point is not that making health care more free market would necessarily improve it. The point is that using the US healthcare system to beat up on free markets is fallacious, because the US healthcare system is not a free market in the first place.
Cards on the table, I do think that more freedom in the market would be good for the entire system, from top to bottom. But I can find myself in total agreement with those who think it should be more socialized, in that what we have now is a really crazy, chaotic system that seems to manage to combine the worst of both worlds. As a result, it's every bit as valid to claim that socialized health care is a disaster because the US health care system proves it doesn't work... which is to say, an entirely invalid and fallacious claim.
I used to read about countries where it was a common practice to pay bribe money to be given a job, and think how awful that was.
Now I live in a country that buries its youth in debt that can't be discharged through bankruptcy, to pay for an increasingly ineffective education, to get a job almost anyone could do, that lists higher education as a requirement when it really doesn't have to.
If you zoom out to the big picture, it's almost as if employment in the US is becoming a pay-to-play system.
>Now I live in a country that buries its youth in debt that can't be discharged through bankruptcy, to pay for an increasingly ineffective education, to get a job almost anyone could do, that lists higher education as a requirement when it really doesn't have to.
You need to screen for at least a two-year degree to get employees who are functionally literate for both reading and writing of moderate complexity.
There's a real steep capability slope in terms of education and information worker qualification. Most high school graduates or GED folks without specific education lack a good broad understanding AND lack vocational training. A previous employer had a specific mentorship program that had really good results with at-risk youth, but that required substantial investments in time and $.
For your first bit, do you mean things like fathers paying some shop owner some money to give their sons a chance at a job over some other kids that went through a hiring process, or do you mean things (still going on) like politicians backing certain regulations that just so happen to mightily benefit the corporation they become a highly paid member of when they lose their reelection or retire?
Pay-to-play, spending money to make money, has usually been the rule even if you're born rich. The trick is always on how to spend less money than other people at your level to make the same or more -- your required piece of paper, while still more expensive than it used to be, isn't unavoidably crippling-debt expensive, and so with a little forethought you can get in the game with a lot less expense than some of your peers.
If basic income can somehow work economically we might turn into a free-to-play system, I don't know if that would actually be better overall though.
>> I used to read about countries where it was a common practice to pay bribe money to be given a job
> For your first bit, do you mean things like fathers paying some shop owner some money to give their sons a chance at a job over some other kids that went through a hiring process, or do you mean things (still going on) like politicians backing certain regulations that just so happen to mightily benefit the corporation they become a highly paid member of when they lose their reelection or retire?
I assume he's talking about neither of those, but rather the common practice of buying a job for yourself. For example, you might go to a highly-placed English functionary with a big bag of cash and purchase a post as customs inspector for a port somewhere in the Empire. Historically this was more the norm than the exception in colonial Europe.
(Fun fact: the customs inspector position was generally a lot more expensive than notionally better jobs like "governor" (also for sale). That's because the customs inspector was ideally placed to take bribes himself.)
> What caused the most recent bout of grade inflation? Some point to the rise in tuition—and educational debt...
One of the largest reasons for skyrocketing tuition is the US government's guaranteed student loans. And those loans are very difficult to discharge by declaring bankruptcy, which makes the loans less risky to the schools. This is the farthest thing from free market capitalism.
Grade inflation is a very good example of why the institution offering the education should not be in charge of testing if the education was effective.
The institution offering the education must be able to test whether its education is effective; education doesn't work well without tight feedback loops. Neither teachers nor students are drones whose behavior can be measured by some perfect third-party process or human(s).
That said what passes for education at most places works best when you do start treating teachers and students as drones. The logical conclusion of that is something like Khan Academy with spaced repetition, and the savings in time and money worldwide would be tremendous if that system were to be embraced by the US department of education. What passes for education is definitely worthwhile for some things, but it's not really in the traditional spirit of education.
Of course the education institutions need to be able to get feedback on their teaching if they are to be effective (the tighter the loop the better); they should just not be in charge of determining whether it is effective.
Yes, and we also have grade inflation - we used to mark on an A-E scale but now we have an A* too. BBC article, with interesting graphs: http://www.bbc.co.uk/news/education-11012369
The only flaw with free market economies is assuming the US actually has one. The regulation of commerce is so extreme in some areas as to border on fascism (don't equate the term with Nazism): government dictating prices for goods and services without risking anything of its own, politicians setting rates and guarantees that are not driven by market need.
Grade inflation exists because of this unwillingness to offend people, and telling people they are not the best is offensive. It is also part of why colleges like affirmative action: they get candidates who are not used to being at the top of their class and who are more willing to accept lower grades (go look it up, it does exist and has been studied).
The US health care system only got into the condition it is in because of over-regulation. Preventing insurance from crossing state lines pretty much doomed it.
If it were a free market, you would see prices advertised. This is how you know it's not a free market. This is true in a lot of markets, not just healthcare.
My US doctor insisted on giving me antibiotics even though I had the common cold and protested that antibiotics do not work on a viral infection.
Your doctor doesn't make any money when they prescribe something. They make money off the office visit. There was absolutely zero financial gain for the doc prescribing you antibiotics.
"...the free market approach to everything in the US"...It's almost as if you believe there is a free market in education or healthcare or...almost anything in the US.
The author is depressingly right. The people this is ironically going to hurt most: The really smart/hard-working students who attend public universities.
I used to be one such student. I attended a large public university that had a great deal of "intellectual diversity." The majority of the students were average, in terms of their drive/motivation/intellect. However, there was also a numerically large minority that was just as talented and driven, as the students whom you would find in the Ivy Leagues.
Which is why companies like Google and Facebook still showed up to recruit at our college, every single year. Which is why elite post-grad programs like Stanford's recruited a lot of our graduates, every year. They wouldn't be interested in the average student at our college, but they were certainly interested in recruiting the top 10-20% of our student base.
With grade inflation though, if everyone's grades are bunched up together in the 3.8+ range, it becomes extremely hard to distinguish between the top students and the average students. And one unlucky bad grade is all it takes for a top student to suddenly seem like an average student. In such a world, without any reliable way to distinguish between the top students and the average students, companies like Google would not even bother showing up to recruit at public universities like ours, and would instead restrict themselves to selective universities like Stanford.
One of the most important roles of the university system, is to serve as engines for social mobility. "Even if you're born into an average family and attend an average public university, if you work really hard and get good grades, you can still get a post-graduate degree from an elite university, and get recruited into elite jobs." Grade inflation, or getting rid of grades entirely, may lead to equality of outcome within a university, but it's also going to severely weaken one of the most important mechanisms for social mobility.
I find this graph outlining the frequency of different grades from 1949-2009 to be enlightening [1]. Around the 1940s, only 15% of grades were "A"; in the last few years, that figure rose to close to 45%.
The source data is coming from research compiled by Rojstaczer and Healy [2].
How can you differentiate between grade inflation and knowledge inflation? Perhaps the increase in grades is because the difficulty of the material doesn't rise as fast as our ability to teach it and students' ability to learn?
I'm inclined to agree about knowledge inflation (in the sense explained in the following paragraphs), but I don't think it has to do with the perceived difficulty of subjects changing, or necessarily with instructors' inability to teach at a level of rigor that might pull the average down (by increasing perceived difficulty).
Regarding "knowledge inflation", I'm not sure that we see an inflation in knowledge per se (at least in a way that affects grade inflation in college courses), but I think what we do see is an increase in 1) quantitative methods, and 2) cross-talk between fields. This influences the subject matter people in a variety of fields are expected to know (e.g., the amount of math, comp. sci., stats, etc...) , and thus, I suppose, inflates the expected knowledge in this sense.
Anecdata: I teach calculus 1, and guess how many math majors are enrolled? Zero. It's mostly bio, and other sciencey fields. This affects how I teach (and grade!) calculus. I'm willing to bet other courses -- like intro programming, intro stats, etc... -- experience something similar. Is this an explanatory factor for grade inflation? I don't know. There's obviously a huge number of things to consider.
At least in my case, it seems that the distribution of GPAs becoming more left-skewed and closely grouped has made it much easier to distinguish myself. Companies recognize that GPA is becoming a less useful metric and are quick to consider anyone who presents them with anything else to consider. I've had very little trouble finding work or getting interviews because I've mainly focused on personal projects and actual experience rather than school. Anyone could, in a month, do more to distinguish themselves from the pack by completing a personal project than by hunkering down and raising their GPA by a few points.
I think the culture of the institution plays a big role, too.
I had my ass lovingly handed to me by the good professors of the University of Illinois Urbana-Champaign. They were more than happy to hand out Bs and Cs even with "a good effort" and wouldn't bat an eye at giving a D or F to someone who didn't do the homework / didn't do well on tests / didn't show up to class.
I always got the sense that big public schools have more of a "your education is your responsibility" ethic about them, encouraged by the class sizes (large -- hundreds of students -- especially in lower-level classes) and general "machine like" feel. Whereas private schools, with their smaller class sizes, huge endowments, etc. felt more like the student's education is the professor's responsibility, and a failing grade for the student was as much a reflection of the instructor's ability to teach, as the student's work/study habits, or academic performance.
In any case, I'd really love to see how much grade inflation affects schools like the Big Ten vs. the Ivy League. I suspect it's much less of a problem in big anonymous public schools, but that's just a hunch.
"wouldn't bat at eye at giving a D or F to someone who didn't do the homework / didn't do well on tests / didn't show up to class"
Are there schools where this isn't the case? I sure hope not. When I was in college nearly every class's grade was determined at least 50% by test scores. Then 10-15% homework, and the rest projects.
At my university (UCSB '12), grades were close to meaningless. I got a BSc in Math and graduated with a 2.8 (or lower?) because I loaded up on upper division math courses. I had friends who got 3.9's because they got a BA in communications or biz. econ. So clearly GPA varies heavily with degree, not to mention institution.
There were some professors who straight up said they always failed the bottom X percent of the class. So if you knew the material and "passed," you still got an F. Or the professor whose final was literally "memorize this two-page proof and regurgitate it verbatim." Or the <ethnic/gender> studies professor who would fail your papers for disagreeing with them.
There should just be Pass / Fail. Enough with this GPA circus.
I agree about moving to pass/fail, if the bar to pass is moved up a bit. I went to a school with relatively low grade inflation, and I would say I didn't really have a good understanding of a class unless I got a B or higher, barring some mathematical oddities (e.g. 50% of the grade determined by one test).
I suppose it depends on the professor's philosophy of what a test should be as well. If the test is intended to make students stretch their knowledge then it might be OK to make a C. That would mean you understand what was covered in class but couldn't make the next intuitive leap on the test. However, most tests I took weren't like that. Anyone who understood the material should have been able to make at least a B.
UCSB '12 here as well. I got a BSEE, and I found the contrast between grades in lower division and upper division classes to be quite stark. In the lower division classes you could get away with anything - for example, people cheated on their online physics and math homework, so everyone did well. The classes were too large, so the TAs graded everything, and it's much easier to convince a TA to give you more credit for something than it is a professor. The class size makes professors resort to things like iClickers for participation points and the online homework I just mentioned. It degrades the quality of education while also creating a system with more loopholes that smart people can exploit, since everyone is optimizing for grades in the end.
In upper division, the classes got considerably smaller and considerably harder. Grades meant more, and the system was not easily played. When you have a class of 20 you can't screw around anymore. You can't hide behind your TA. You get what you put in.
That's not to say I didn't have funky grade experiences in upper div too - I had one class series that actually had a serious grade deflation problem. The average on our second-quarter final was 18.5%. In the end, if you tried hard and did well in the lab, your grade was basically a participation score. Now, I learned an enormous amount in that class because it was so hard. It was demoralizing at times because everyone thought they were going to fail (I vividly remember contemplating my future during that final, wondering if I would ever pass that class). So in that case, grades didn't really matter.
Personally I think the solution comes with class size. Grades become inflated because as more people are managed by a professor and their TAs, the less careful they have to be while handing them out. There's too many exceptions and they can't spend all their time tracking down who is being truthful when they say their dog ate their homework or their roommates keep them up all night because they're drunk. It's much the same problem as large corporations have with employee performance scores - as the body you are judging grows, your criteria become much more generalized and the judges much more disconnected from those being judged.
I think it's more than that. A college is worth $X because its graduates go on to Do Great Things. Now suppose there's only 10 colleges in the world. 9 inflate grades to a 3.8 average and 1 keeps it at 3.0. Who is going to get hired easier?
I think grade inflation is twofold: 1) students gaming the system and chasing grades rather than education, and 2) the system gaming against other participants.
I couldn't agree more with most of your points but I think that pass/fail isn't the right approach.
Taking an aggressive schedule in order to get your "I spent four years doing what I was told" paper and GTFO ASAP hurts your grades
Working a job hurts your grades
Extracurriculars hurt your grades.
What I see is a simple duty cycle problem. Students have a finite productive capacity (not sleeping isn't a semester-term solution). If they take a schedule that requires more than a certain % duty cycle, then they take a GPA hit. If you could account for the amount of productivity required by a class, you could take duty cycle into account when calculating GPA. For example, 120 credits in 4yr is a lot easier than 120 credits in 3yr. Five 1xx or 2xx gen-eds in one semester is a ton easier than four 3xx or 4xx classes in a STEM discipline. Having some way to account for the extra effort required to run wide open for a semester, or close to wide open for 3yr, would go a long way.
People playing the game (college) on easy mode (BA in business, no job, is a member of a frat with a really good test bank for his major, etc.) get the same GPA as people playing on hard mode (engineering degree with a minor in something else, academic club involvement, works a job on the side, etc.), and there's no way to tell them apart when it's all said and done.
If there was a way to score classes (and professors) based on typical grades then profs could grade how they want and students could take what they want and not be punished. You could do something like track the average grade for a class/prof combination and track average grades for students. You could eliminate the effect of one prof who teaches one of many sections of one required class as well as the effect of a bunch or really good/bad students enrolling in a class on one particular semester. You could also eliminate the effect of taking a ton of really hard classes or unknowingly enrolling in a section taught by a really tough prof.
Basically if you applied dynamic curves to classes at the university level then you can account for easy/hard classes, profs and schedules. There's got to be some catch but I can't think of one that couldn't be worked around with effort since it doesn't have to be perfect, just better than nothing. You don't need to make getting bad grades in four hard classes have an equivalent GPA to getting good grades in four easy classes, just making some way to partially account for the difference would help. Obviously this wouldn't be in the financial interest of colleges but that's a different issue.
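A minimal sketch of what such a dynamic curve could look like (my own illustration of the idea, not an established method; all names and numbers are made up):

    # Hypothetical "dynamic curve": express each grade relative to the
    # mean grade of its class/professor section, so an A in a section
    # that averages C+ counts for more than an A in one that averages A-.
    from statistics import mean

    def adjusted_gpa(student_grades, section_means):
        # student_grades: {(course, prof): student's grade on a 4.0 scale}
        # section_means:  {(course, prof): mean grade of that section}
        deltas = [g - section_means[sec] for sec, g in student_grades.items()]
        return mean(deltas)  # 0.0 = exactly average; positive = above the curve

    grades = {("Calc3", "Smith"): 3.3, ("Thermo", "Jones"): 3.7}
    means  = {("Calc3", "Smith"): 2.6, ("Thermo", "Jones"): 3.5}
    print(adjusted_gpa(grades, means))  # 0.45, well above the local curve

This ignores the schedule-load (duty cycle) part, but per-section normalization alone would already separate easy-mode from hard-mode transcripts.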
Sure, there's probably a more effective way of differentiating students, but at the end of the day, you'll just have another system that can be gamed somehow. P/F is the simplest approach, and sometimes worse is better.
Don't knock the pass/fail thing until you've tried it :)
I attended another UC that offered students the option to take classes pass/fail. I used it for gen-ed requirements so I could focus my attention on getting good grades in the classes for my major. It was really nice not to have to stress over a test in a non-core subject knowing that it wouldn't hurt my GPA. And, in retrospect, I think I actually remember more of the material from those classes than the classes I took for a grade. My guess is that stress is counter-productive to actually learning.
There is something to be said for pass-drop, with a high standard for passing.
Anything less than a pre-inflation "A" is stricken from your record. Any "B+" work (or worse) won't affect you. Retake until you succeed.
Only the advanced (mostly final year) classes are required. Take the beginner ones if you feel you need them, but they never count for anything and there isn't even a record that you took them.
If you want grades to express students' strength relative to each other, a quota of say 35% A's per course might be appropriate.
On the other hand, if grades are meant to show how students do on a more objective and time-invariant set of criteria, maybe grade inflation is OK. Maybe it means that students are getting generally smarter, or more efficient at learning. IQ has been rising over time as well, as has efficiency in many workplaces. Perhaps grade inflation, at least partially, reflects this as well?
I am not saying that this is the case. Rather I would like to throw a hypothesis out there, as it is one that is not often expressed. I would love to hear arguments for and against the above ideas.
I use grades to express both relative strength and objective performance. Here's how I do it:
1. The individual grade a student receives on each component of a course is a reflection of their mastery. These grades are not force-ranked or subject to a curve. Therefore, the student may draw conclusions from each grade about opportunities to improve.
2. The overall grade a student receives for the course is force-ranked. I strive for a distribution of no more than 25% A's, and no more than 50% B's. This grade is therefore a reflection of a student's relative class standing, and not necessarily their actual mastery.
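For concreteness, here is a sketch of how that force ranking might be computed (my own illustration of the scheme described above, with invented scores, not the actual procedure used):

    # Force-ranked overall grades: cap A's at 25% of the class and B's
    # at the next 50%; everyone below that gets a C.
    def force_rank(scores, a_cap=0.25, b_cap=0.50):
        # scores: {student: numeric course total}
        ranked = sorted(scores, key=scores.get, reverse=True)
        n = len(ranked)
        out = {}
        for i, student in enumerate(ranked):
            frac = (i + 1) / n  # fraction of the class at or above this rank
            if frac <= a_cap:
                out[student] = "A"
            elif frac <= a_cap + b_cap:
                out[student] = "B"
            else:
                out[student] = "C"
        return out

    print(force_rank({"ann": 93, "bob": 88, "cam": 81, "dee": 75}))
    # {'ann': 'A', 'bob': 'B', 'cam': 'B', 'dee': 'C'}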
>Others argue that student bodies have simply gotten smarter, but by a variety of metrics, this is not the case: standardized test scores and graduate literacy have not improved as grades have risen.
Disclaimer: I don't think this explains grade inflation, but it might be a partial reason. I'm somewhat playing devil's advocate here.
Percentage of A's given at Yale is increasing? Could be grade inflation. Could also be that they're picking the top 0.01% and smaller of students from a larger and larger total population.
Example math:
If 50% of students were worthy of A's when they were able to recruit the top 1000 students out of let's say 100,000 (the top 1%)...
If the population doubles relative to the number of students they admit, they can recruit the top 1000 students out of 200,000 (the top 0.5%), all of which would have been more likely to be "A" grade students.
Now everybody is getting closer to 100% A's as long as there's been no change in grading standards.
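To make the effect concrete, here is a toy model (my own, assuming normally distributed "A-worthiness" and using Python's stdlib NormalDist; all numbers are invented):

    # Ability ~ N(0, 1); students above a fixed threshold are "A-worthy".
    # The school admits the top 1000 as the applicant population grows.
    from statistics import NormalDist

    d = NormalDist()
    a_threshold = d.inv_cdf(1 - 0.005)  # say the top 0.5% of people are A-worthy

    for pool in (100_000, 200_000, 400_000):
        admit_frac = 1000 / pool
        cutoff = d.inv_cdf(1 - admit_frac)  # admission bar for the top 1000
        share = (1 - d.cdf(max(a_threshold, cutoff))) / admit_frac
        print(f"pool {pool}: ~{share:.0%} of admits are A-worthy")
    # pool 100000: ~50% of admits are A-worthy
    # pool 200000: ~100% of admits are A-worthy
    # pool 400000: ~100% of admits are A-worthy

With fixed grading standards, growing the applicant pool alone pushes the share of A-worthy admits toward 100%.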
That works at Yale, but you'd actually expect this type of effect to deflate grades overall as higher and higher percentages of the country are going to college.
That included a lovely hidden assumption that the population has some static or perhaps declining intelligence/academic motivation. Given the increase in student resources, as well as a rising trend in standardized test scores (SAT, ACT, etc.), I'm inclined to dismiss the hypothesis.
Building on that, the resources available to students are increasing. Miss a lecture or have a gap in understanding, and the material is but one click away in a multitude of formats, so students can supplement as needed. The school itself is providing TA hours and tutoring sessions on top of that to help catch students falling through the cracks (the cynic says getting them to pass to milk them for more money, but it is to some degree symbiotic).
The number of well qualified and motivated students attending college has never been higher and it always baffles me when people suggest the universities should respond by jacking up standards to preserve some nonsensical ~academic elite~.
Grades are both a poor pedagogical tool and a poor measuring stick. They’re too coarse, too removed from the immediate learning feedback loop, too inconsistent from course to course, and too ambiguous in meaning (should you get the best grade for working hardest? for understanding the material best? for the best writing or mathematical reasoning skills?). At best, they provide a tremendous distraction for everyone; at worst, they convince some students who are performing well they can slack off, make students at the bottom give up hope and stop trying, cause incredible anxiety for students in the middle, and also create huge headaches and heartbreaks for instructors. Simultaneously, they give parents the misleading impression they know what is happening at school, and thereby discourage parents from getting more directly involved in their children’s education.
The main reason grades were adopted and persist is that they’re relatively easy to apply, aren’t that complicated to reason about heuristically (even if this is often misleading), and they scale across an institution / between institutions.
In my opinion, grades should be entirely abolished for students up through middle school, and replaced with written reports from the instructor. They aren’t really appropriate later, either, but high school grades seem to be one of the only available tools for guiding the college admissions process and I don’t presently have any better equally scalable idea. Colleges should of course do whatever they want, after all, the students are adults at that point.
Then again, I feel rather the same way about institutional schooling in general.
> and replaced with written reports from the instructor.
I find it hard to believe that American middle schools don't produce written reports on their pupils.
In the UK we have a system of graded exams at the end of middle school (age 16), but the old 'A' level system used to guide university admissions is losing ground to the International Baccalaureate (IB) qualification. The school my children attend is switching over entirely to IB for high school students from this year. I like it. The IB is a much broader qualification than A levels and includes project assessments and an extended essay as well as examinations. It is of course graded, but it looks like it provides a much more complete assessment of a pupil's abilities than e.g. a B+ in Maths. I'm a fan.
I went through my higher education basically without grades. Almost all courses were project-based and done in teams. If the team decided you were a non-performer, there would be a discussion with a teacher.
This freed up a lot of time for the teachers, so general knowledge was tested by teachers in personal meetings, where you had to explain to him/her what you had learned. If your explanation wasn't good enough, the teacher would explain things again so you would understand. So it's not just a test, but a more personal way for a teacher to feel if you really understood the subject.
A system like this only works when everyone loves what he or she is learning. 40% of the students didn't finish the first year. After the first year almost everyone made it.
Only if you think you should leave introverted people alone in the corner of the classroom, forever. Working in teams forces introverted people to learn ways to cope with their natural attitude, and it teaches the other team members how to deal with their introverted counterparts. If you want your team to deliver the best products, you had better learn about each other's strengths and weaknesses. In the first year you learn a lot about team dynamics.
I actually like the concept of group based work much more in a grade-less environment. When grades are involved it over-burdens the good students because they have to pick up the slack for students who don't care, or risk getting a lower grade.
I met teachers whose grading philosophy was that the results have to form a normal distribution across the pool of students. If we adopt that, then grades denote relative quality, not absolute quality (which is what pretty much everyone assumes). The author may be right in her implication of faulty grading, but the evolution of grade concentrations over the span of many student generations does not necessarily denote an existing issue. Education got better over the years; maybe the results come from increased learning and teaching efficiency. Was the same test administered both now and 60 years ago, with the respective grading results compared, before the conclusion was drawn?
And finally, I don't want to criticize the author, but the idea of a "world without grades" is just silly. Grades are the result of a measurement, and measurement in itself is valuable everywhere, including in education. The author could have made a more limited claim, by only denouncing the current grading practice performed at the same institutions that do the teaching.
I can't imagine anyone who has gone through school and thought that grades were somehow an "absolute" measure of quality. It's quite clear that grades are relative to both the standard of your fellow students and the current academic standards for your school. The whole idea of a "grade" is to sort the relative quality of various things or people, and that is what they are supposed to do. The problem comes when people start to THINK they are an absolute measure of worth, and thus believe that they are entitled to an "A" because they worked hard. Never mind that everyone else worked hard too, and some of them performed better.
A lot of the "stress" of grades seems to be bucketed into a general claim of ego-shattering "I'm different and moreover lesser than my better peers!" effect. I think it would be bad if we got rid of grades based on that philosophy of coddling. The more significant cause of stress I think is simply that grades are tied to things beyond the student's particular ranking in a class. The article mentions that the possible beginning of the inflation trend was the simple relation of your grades to the likelihood of being shipped off to Vietnam. Since then it's only gotten worse, and I can think of several other things grades now tie in to. If your grades are bad enough, you might lose your funding that allows you to go to school in the first place, and you need that for the pretty much unrelated problem of tuition being ridiculously high. (I don't really think the students-as-customers mentality that has developed also comes along with a I've-paid-now-give-me-an-A mentality.) I would find the threat of losing a scholarship a lot more stressful than the fact that some people in my class are smarter and/or work harder than me, and knowing a few people with such scholarships I was amazed by their apparent composure come finals season. If your grades are bad enough, certain companies won't look at you when you apply. This has become less of a deal in tech (though one place I interviewed at did ask for my GPA) but is still a common filter. If your grades are bad enough, graduate programs won't have you even with a strongly worded letter of recommendation from one of your professors.
The article proposes some sort of top-down solution (from the government's department of education?) is needed. Maybe that can work, I'm skeptical. The approach is at odds with keeping rankings private to the school and unconnected to external things.
Why not use percentile instead? It's fair. It's clearly the best way to compare students within a class or within a school. (I've heard India has been doing this for years.)
If you then insist on normalizing the percentiles across schools and eras, have the students also take a standardized final exam (like the GRE, but specific to each course). You don't have to test every course, just the ones that teach fundamentals. This should be more than sufficient to make schools (and percentiles) comparable. It is comparable to the British first/second/third-class system.
It'd also do wonders for pressuring schools to up their game. It'd be mighty embarrassing if a tony private school scored substantially lower on their standard tests than a moo U. It could also validate MOOCs and other alternative forms of instruction.
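A minimal sketch of the within-course percentile idea (my own illustration; the scores are made up):

    # Report each student's standing as the share of classmates they
    # outscored, instead of a letter grade. Ties share a percentile.
    from bisect import bisect_left

    def percentiles(scores):
        # scores: {student: raw score}
        vals = sorted(scores.values())
        n = len(vals)
        return {s: 100 * bisect_left(vals, v) / n for s, v in scores.items()}

    print(percentiles({"ann": 91, "bob": 78, "cam": 78, "dee": 62}))
    # {'ann': 75.0, 'bob': 25.0, 'cam': 25.0, 'dee': 0.0}

Making those percentiles comparable across schools is then exactly the job of the standardized per-course exam.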
Grade inflation is a serious problem for excellent but poor students and their social mobility. It used to be that one could distinguish oneself with grades. Not so anymore. One of the most significant remaining factors: alma mater brand name.
Grade inflation has been known about for a long time and is really bad for students. According to my teachers, when they graduated (40 years ago) the best student of my college scored 14/20. When I graduated, the top student scored 18/20. The real problem is that public institutions hire based on college scores, so younger grads have an unfair advantage.
To solve grade inflation, a bell curve would work: the best tests would grade A+, the next ones would grade A, and so on, independently of the raw scores themselves. You would always have the same relative amount of each grade along the years.
When I first started teaching (university level), trying to come up with what I thought would be accurate grades would keep me awake most nights.
Asking fellow faculty revealed that they too had wrestled with the same problem, they developed a "feel" for what was right, and that they had simply accepted it as a pain point.
I refused to accept that. One semester I told my students at the outset that they were competing against their classmates. I was not setting upper and lower bounds for grades; their classmates were.
I used the full range of grades (A+ to F), and when I tabulated the grade distribution, the results freaked me out: it was an almost perfect bell curve. I decided to use the method again the following semester. Same results. I did it again the following semester. Same results again. I then permanently adopted the method.
Other results included the "Why didn't I get an 'A' on Project/Paper X" drama during office hours dropping off to nothing;
Students were much more comfortable knowing they were competing against their classmates, rather than trying to "figure out" their professor;
My Teacher Evaluation scores didn't change across my changing grading methods;
My stress level went _way_ down, and allowed me to better concentrate on creating and delivering content.
I wonder if this could be related to more classes being taught by adjuncts. Tenured professors don't have to care what students think of them. I'm surprised this question was not raised by the article.
In the UK we introduced the A* grade as a "solution" to this problem; first at GCSE level (end of compulsory high school, age 15-16) and then at A level (pre-university, age 17-18).
Even that just postpones the problem, as grade inflation continues. I expect we'll see A** before long, and then eventually grade sheets will look like eBay reviews, and students will just get a star count.
Given the tendency of clustering of grades there is no scale which cannot be compressed. A1, A2, A3 could become A.000001, A.000002... and thus still mean nothing useful.
Some of the most elite private high schools in America do not issue grades. Instead, universities have to rely on standardized test scores and recommendation letters.
I love the US, but only as a tourist destination. There is no policy in the US that I can think of that I prefer over the way things are organized in my own country. I don't care if I pay 60% taxes. I'm well off, as well as my temporarily unemployed neighbours, there are no beggars in the streets, the roads are paved nicely and I can visit a doctor without him bankrupting me.
To be fair, as someone who actually lives in the US, I can say I am well within the middle class that enjoys these blessings. As with any place in any country, it's going to have its underdeveloped parts.
I see it as highly unlikely in the first place, but let's feed the senseless question: I am financially secure enough to hold out until my 'unluckiness' wears off.
If someone is between jobs they can (usually) use COBRA to keep the same coverage they had at their last job by paying the full cost (the part the employer paid & the part the employee paid).
Since employers often pay more than half the cost, this is often expensive and also only lasts for a certain period of time (18 to 36 months, depending on the situation).
They could also buy coverage on their state's healthcare exchange at whatever the market rate is - depending on the state this may or may not be affordable.
If someone can't afford coverage they might qualify for Medicaid.
If someone is uninsured then they are liable for 100% of the bill.
It's not subsidized; you just pay a group rate. Basically, COBRA let you maintain coverage by paying the full out-of-pocket cost. That got to be over $500/month for a young, healthy, single person getting solid coverage, and it was not actually cheaper than what was on the market. It was not even tax-deductible like employee health coverage.
I am not trying to prove anything, I'm just asking (I'm not from the US).
If that's the case, why are so many people worried about losing healthcare coverage after leaving their jobs (to work on a personal project, for example)? Is it different if you leave voluntarily vs. being sacked?
I don't know if the terms of your departure affect COBRA coverage or not, but Medicaid is means tested so its coverage is based on income and financial measures, not the reasons why you are in the financial position you are (e.g. leaving v. being terminated).
Except in the states that refused the Affordable Care Act Medicaid expansion, unemployed people qualify for free medical care via Medicaid, assuming they did not keep their old health care through COBRA or something.
This is a bit off for most people that might actually be affected by the unemployment->insurance loss->need to get insurance asap problem.
About 30-40% of non-Medicare America lives in non-expansion states, more if you include long-term undocumented residents (not just seasonal workers).
Medicaid is not available to 'unemployed' people in most states in any sort of automatic way, so if you were employed in most middle class jobs, you won't qualify without serious tax shifting around the eligibility year enrollment cycle.
If you make under 133% of the federal poverty level in non-expansion states, you don't qualify for the exchanges.
Each non-expansion state's metrics for Medicaid qualification are slightly different. Unless you are under-18, a pregnant mother, or have ESRD, you probably don't qualify.
Also, to a different point above, COBRA is not 'discounted'. It's temporary, but generally just as expensive as private insurance.
The Kaiser Family Foundation is a great place for digestible data on US health care.
> If you make under 133% of the federal poverty level in non-expansion states, you don't qualify for the exchanges.
You can buy insurance on the exchanges, you just don't get subsidies. That does make it a largely moot point, though, since the last time I checked, premiums for the low-end plans ran around $600/month unsubsidized, obviously out of reach for anyone who would have otherwise qualified for Medicaid.
> you won't qualify without serious tax shifting around the eligibility year enrollment cycle.
I could be mistaken, but I've helped people with this process personally and we didn't run into any problems in that regard. Lose your job, you're making $0/month, you qualify, and there were no tax issues we had to deal with. (I can't say, of course, that we did it all correctly, but nobody has come back and punished her for what we did.)
> Also, to a different point above, COBRA is not 'discounted'. It's temporary, but generally just as expensive as private insurance.
I am fully aware, but a hypothetical middle to upper-middle class person who lost their job would have that option available to them unless they had no savings.
They also live in gated communities with armed guards, and have to put up with theft and battery, carjackings, etc.
I genuinely do not understand how you can tolerate living like that - YouTube channels telling people how to stop their house from being robbed ("It works! My house was the only one in the whole neighbourhood that wasn't robbed! Woo!")
I've been to the states for three weeks, driving from cheap motel to cheap motel.
As you say, most of the US is just fine. But even in my three weeks I saw lots of homeless people at bus stops and intersections. I heard distant gunshots from the (very) cheap motel in LA, where police sirens were so common that I mistakenly thought we were really close to a police station. I saw so many people who were dangerously obese that I can't blame the people who repeat the "FAT MURICANS" meme.
Some major roads were riddled with potholes and looked nothing like what we see in movies supposedly set in LA. Let's just say the reality was exactly the opposite of the image I had beforehand (mostly instilled by Hollywood movies).
I was in Italy for two weeks and saw more homeless people and beggars than I see on a regular basis in the US. I also saw huge camps of people living out of tents on the Tiber in Rome, and several camps of gypsies living out of cars and RVs. It's all relative.
Among the tourists there were just as many fat Germans and Brits as fat Americans. The Italians were mostly slim.
The cobblestone streets in Rome and Florence had plenty of holes.
You obviously, for some reason, don't like Americans. The biggest difference I saw was that there was almost no opportunity in Italy: young locals with college degrees bartending or waiting tables for 8-9 euros an hour because they can't get a job (the same thing happens in America, but the locals I spoke to had desirable degrees).
I've been to Costa Rica, Nicaragua, Mexico and Panama and heard sirens and gunfire in most of those places, despite guns being illegal.
Been to London and Scotland and heard ambulances at night in the big cities.
It's all relative, but the one thing I've seen in America is more opportunity.
> I'm well off, and so are my temporarily unemployed neighbours; there are no beggars in the streets, the roads are nicely paved, and I can visit a doctor without him bankrupting me.
...is pretty well entirely true for me - the temporarily unemployed seem to do OK, there are no homeless people, the roads are very well paved, and the doctor is relatively cheap, while...
> They also live in gated communities with armed guards, and have to put up with theft and battery, carjackings, etc.
Sounds like some alien civilization. I've never even seen a gated community, and I have never experienced theft or battery, nor known anyone who has, and certainly not a carjacking. About ten years ago there was a carjacking a few miles down the road from my parents' place.
I understand that in some urban areas there are places like this, but I hesitate to put that too strongly since I've never seen them in the cities I've been to. It is certainly not representative of the country as a whole.
The article you posted is from 2004 and refers to data from "recent years", meaning probably from around the year 2000 or earlier, a timeframe in which Mexican immigration was at or near its peak. Stats from 2009 have nothing to do with it.
Also, try to be civil in responses; the attitude is not needed, especially when I am struggling to see how my post had any preconceived notions. I asked a question and made a guess as to what the answer was, and you're acting as if I somehow stated my guess as a fact, or as if it was inappropriately biased.
That's Mexico specifically. Immigration to the US across the board increases the population; I don't know whether immigration from Mexico plus Central America increases or decreases it. And you seem to have conveniently ignored
> The article, at a glance, doesn't really seem to give information on the data that lead to a "shorter" conclusion.
and
> Are American families with no hispanic genetics also getting shorter?
The only two things I've said in this thread were that a citation was needed (which doesn't mean I disagree or don't believe you, just that a citation was needed), and that when someone asked a question about Hispanics your response was specifically about Mexicans. And in that comment I literally said the phrase "I don't know."
Meanwhile you've acted as if your facts need no citations and as if I am somehow "picking ideas from thin air", when I've only asked questions about things other people have said. But, taking your own advice, I did "some research."
From the first search result when Googling "are Hispanic Americans getting taller:"[0]
> The reason [Americans are not getting any taller], explains Leonard, is most Americans now face few nutritional or health-related stresses in their youth. People grow most as infants and then as adolescents, and most Americans have avoided disease and eaten enough meat and milk in their youth to reach their genetic height potentials.
> "There is large heterogeneity in ethic compositions," he says. "When you look at Asian-Americans, Hispanic Americans — those are the pockets where increases in height are still happening."
So that source supports your claim that Hispanic Americans are still getting taller, but not that "White Americans are getting shorter."
[ed: Sorry if that comes off as mean; I meant it jokingly.]
Also, the comment was "Does it account for the fact that the US receives a ton of immigration from Mexico and". The "and" means both statements would have to be true, but as I posted, the first is not just off in magnitude (a little vs. a lot) but actually negative.
For some context: the average white American male is approximately 5 feet 10 inches tall. Compare that to other countries. Now, does that suggest they're hitting a genetic limit?