"We believe the purchasing of online and blended courses is not driven by concerns about pedagogy, but by an effort to restructure the U.S. university system in general ..."
I agree with him completely here: a large part of this discussion is not about pedagogy but is financially driven. However, unlike him, I think this is a good thing at a time when many universities are battling high costs and student loans have reached record levels.
The problem with his main argument (and with similar arguments about teacher ratings, etc.) is that it pits the best possible case against the proposed idea in trying to refute it: Isn't having an excellent prof/teacher in flesh and blood who cares about your education better than some canned video? Well, obviously it is, but the point is that a lot of professors/teachers are not like that; they, like everything else, fall along a distribution of excellence.
Rephrase the argument instead as: Isn't having lecture videos of good faculty preferable to having an underpaid, uncaring professor/teacher, or to canceling that course because the university cannot afford it? This is the question that Prof. Hadreas needs to answer, I think.
I think a better strategy would be to figure out why, exactly, universities are so strapped for cash (I've worked in higher education, it isn't a simple issue) and target those issues instead.
This is not to say that pedagogical advances can't be made, but as you pointed out, these advances, to the extent possible, should not be influenced by budget constraints lest we end up with cheaper, but proportionally worse, education.
Why do you believe that worse but cheaper education wouldn't provide a better value to students?
This concern is partially motivated by my background. I used to work for a university doing academic technology-related stuff. In my opinion, many of the technologies that were being acquired didn't contribute much to the quality of the education offered (beyond looking sexy when prospective students came in for tours). However, the tech was often extremely expensive and, in fact, required additional fees in many cases.
That doesn't mean that we shouldn't attack the problem you mention, though.
So I'm not surprised that the pilot program had great pass rates, as pilot programs are usually staffed by eager people with amazing support networks who are interested in putting in the time to make it work. It'll be more interesting if SJSU does ramp up their experiments to include professors or lecturers who are just fulfilling their service requirements, and to see how student performance under those faculty compares with the prior model.
1. Superstar class - Excellent researcher and excellent teacher - A good MOOC is not a threat but an opportunity for you to bring even more understanding to your students. Or the superstar hosts his own MOOC.
2. I'd rather do research class - Excellent researcher, but not excellent teacher - A good MOOC is not a threat but an opportunity for you to spend less time teaching, while you can still focus on teaching about your own research as a plus. Or only teach MS and PhD-level courses, where the market in courses would be too specialised for MOOCs.
3. I'd rather teach class - Not excellent researcher, but excellent teacher - A good MOOC is not a threat but an opportunity for you to get extra and world-class material to your students, while leveraging your teaching skills for that personal touch.
4. I'd rather just do nothing class - Not an excellent researcher, and not an excellent teacher either - A good MOOC is a big threat to your business model. You won't cut it research-wise, so you're in the C-league of universities, and the MOOC will make your failings as a teacher a lot more obvious. A good class is just a few mouse clicks away for students, and why pay if you can get it for free?
Of course there is a signalling effect at work as well. But I find it hard to imagine you would get much signalling from a school filled with class-4 teachers. Both you and your future employers know what universities and colleges are class-4.
The future that Peter J. Hadreas deplores is the one where class-4 disappears and "cheap" education is met by class-3 educators. It's a background fight IMHO. And one where students are the winners. The best students still get to go to the best universities. But all the other students at least get lectured by the best teachers, and get taught by enthusiastic teachers instead of bored B-level researchers.
Personally, I think the Udacity "continuous Socratic method" approach is better than traditional lectures because it forces constant thought and engagement.
...like a textbook, which obsoleted teachers over a century ago.
I think the answer to both is probably "yes," but certainly not in the quantity they exist today. I wouldn't want to be an employee at a teaching college over the next 10 years, much less a professor. The higher ed bubble is about to pop (between new technology, skyrocketing costs, and reduced perceived value, it's going to get hit on every side), and it's going to be ugly when it does.
All too often the argument is framed as "replace professors" etc, but it doesn't have to be so binary.
I found lectures mostly a waste of time. What I found was that the lecturers were simply presenting the latest chapter from the prescribed text book. So, I ended up reading the chapter the night before and just used the lecture as confirmation of what I thought I had learned by reading. Very occasionally I used the lecture to ask some question, but that was rare. By Year 2, I barely went to lectures and simply studied at home, handed in the required work, then did the exams.
Now, I fully accept that what I did does not suit all, in fact I am happy to be considered a minority case, but, and here is the point I'm waffling towards, I did have to question the true value of actual lecturers if all they did was present chapters of a text book. A video on-line might well have been better.
Another thing I ended up questioning was the length of the degree course. It could have been very easily done in 2 years, if not less.
Small class (less than 30-40 students): not much lecturing happening, maybe a little bit of PowerPoint but mostly interactive discussion with the professor on the readings
Large class (could be several hundred students): twice-weekly lecture by professor on high-level concepts with a little Q&A at the end, then separate twice-weekly small discussion section on the readings with a grad student TA.
The model of combining a mass lecture section with small-group discussion sections can scale to MOOCs; that's why they have discussion forums, and people are also forming small study groups on Facebook, Google+, and locally IRL to discuss the materials. It just requires a little more motivation on the part of the students to form these small groups themselves instead of having them predefined.
But a straight up lecture? I'd rather read a book. And really, even with good lecturers, most of my real learning comes from applying the knowledge somehow: writing computer programs, writing essays, etc.
Thinking about it, I think inspirational teachers are more vital in schools. Idea being, once inspired, the kids will drive forwards themselves.
Even with a great teacher, the speed at which the material is presented is never going to match each person's learning flow. Too slow for things they get instantly, too fast for concepts they have trouble with, and everyone is different.
And even debate in a small class is inefficient. Contrast a debate in a 20-person class with a debate in the HN comment threads. The class will sometimes be focused on a thread I'm not interested in. Sometimes I will want to chime in, but others are holding the floor. Time is a scarce resource and 20 students are contesting it. On HN, we can all read and comment simultaneously. No downtime.
All it takes is writing "Massive Open Online Course (MOOC)" the first time it was used.
The obvious downside is the lack of interaction with the lecturing professor.
This was another point I hadn't thought of:
"The thought of the exact same social justice course being taught in various philosophy departments across the country is downright scary - something out of a dystopian novel"
MOOCs are great for low cost and continuing education. I'm doing one right now, but it feels very lacking compared to my college experience, and I keep wondering how it's integrated in "real" courses.
Yes, a single social justice course would be bad. Consolidating the thousands of Algebra courses might be a good thing. It's really a strawman argument, as MOOCs actually expand the number of courses available. You could potentially choose between multiple versions by the same professor, not to mention other professors. Small schools with only one course offering of Obscure 101 would have alternate options. Everyone getting taught one thing is hardly the problem.
Which is why every professor writes their own books, right? It'd be scary if lots of people across the country were learning from the same textbooks.
Will a MOOC instructor answer my emails, take a phone call, or meet with me in person?
Will a MOOC instructor help me network with potential employers and internship sponsors?
Will a MOOC instructor be my mentor and help me navigate an increasingly difficult job market?
Will a MOOC instructor connect me to other like-minded students and professors?
Will a MOOC instructor act as an advisor for any interest groups or clubs at my school?
Will a MOOC instructor know who I am?
I, for one, have gotten way more than my money's worth from Coursera classes. They have been amazingly educational and terribly useful.
Personally, I'd say I got my time's worth. Time is worth a lot, and I've put a lot into the classes, and I feel it has all been more than worthwhile.
The article kind of bugged me simply because the professors didn't admit any self-interest. Everything was posed in terms of altruistic concern for their students' well-being. I kind of doubt that's the case. While they may have some concern for the students, I'm inclined to believe their motivations are driven primarily by self-interest (which is totally normal but less admirable).
Together we trudge on towards a future requiring a basic minimum income.
(Funny story: I have a professor that was in the Center of Mass group for NASA, where they painstakingly calculated the center of mass of each object and part of the shuttle, then computed the center of mass for the whole shuttle. Of course, a computer can do that instantly now.)
I honestly do feel bad for the people that lose their job when new tech comes around, but why should I be forced to subsidize an outdated way of life? Plus, it isn't like new tech kills jobs overnight; the writing is on the wall, and in the case of driverless cars and MOOCs, the transition will probably take at least a decade.
History has shown that improvements in technology increase the standard of living across the board for all classes. A minimum basic income will hurt us in this instance because people will not have an incentive to have jobs that actually produce value to others.
Transportation advances caused great economic difficulties from 1870-1890.
What happens if the hits keep coming as technological innovation accelerates and new mass employment industries are not created?
> I honestly do feel bad for the people that lose their job when new tech comes around, but why should I be forced to subsidize an outdated way of life?
I would say a basic minimum income would be investing in defense against violent revolutions. At some point people will just revolt if they are starved or even lack upward mobility.
> Plus, it isn't like new tech kills jobs overnight; the writing is on the wall, and in the case of driverless cars and MOOCs, the transition will probably take at least a decade.
There are long ramp up times for retraining and educating people. And I don't think education is going to be enough.
I think the most radical and unpalatable truth is that a huge segment of humans are just not going to be intelligent enough to provide value in the coming robot and machine intelligence run economy.
Our brains and memories could be so much greater, and we should engineer ourselves with all deliberate speed.
Humans have to upgrade themselves before they are obsolete.
I would not invest too much time into speculative futurology at this point.
I mean, if somebody brings data, we can talk about those. Without that data, I don't think anyone can predict correctly whether we will end up in
a) a dystopia with most people not being able to find work and starving or
b) a utopia where the cost of life is so low that anyone can live fully without even working.
But this is a pretty big assumption. I'm not saying that you can't put a price on anything--you can. But that doesn't mean that the price is accurate, in terms of the value acquired, which is what some other people are trying to point out. The market assumes you can rationally determine the real value of things, in money terms, and compare all things equivalently. But the future utility of something, let alone knowledge, is impossible to measure with any hope of accuracy.
If everybody measures everything with money based on some expectation of value--itself measured in terms of the return on investment of existing money/time--and most people are wrong, then it might have very bad long-term consequences for individuals and society.
Lots of things are lost when we replace one way of doing things with another way, and usually the dollar value of those things is never considered, but is externalized and conveniently forgotten. Same is true for the way we treat the value of natural resources (air, water, wildlife, minerals) as zero until exploited. It's a pragmatic necessity, enforced by the limits of our measuring tools, but that doesn't mean that nothing is lost.
So, when you say "standards of living have increased", what measures are being used? Past the essentials (food/water/housing/clothing/security), we get into very grey areas. Knowledge and ways of life are changing, and maybe the loss is more costly than people are willing to admit in the rush for material prosperity.
Enter the subjective theory of value. Market theories do not necessarily try to reduce everything to a dollar amount. Human choice and action are the only true deciders of value, and those decisions are expressed through marginal utility evaluations.
> So, when you say "standards of living have increased", what measures are being used? Past the essentials (food/water/housing/clothing/security), we get into very grey areas. Knowledge and ways of life are changing, and maybe the loss is more costly than people are willing to admit in the rush for material prosperity.
I understand your sentiment; we are using certain indicators as proxies for measuring standards of living, but maybe those indicators are the wrong ones. To that I reply: it is up to the individual to decide on how to become happy. The fact is, the amount of capital in the world has increased enough so that the common man can much more easily decide what makes himself happy. You are projecting your opinions of what is necessary and what is superfluous in your assessment of how people use their capital when you say a "rush for material prosperity" is a bad thing.
It is easy to criticize wealth creation systems (capitalism) and measures of value, but if you can't express alternative explanations or systems then there is nothing to really talk about.
The inventor of the steam shovel was accused of causing permanent unemployment with the argument "each steam shovel will replace 100 men with shovels."
He replied "or 1000 men with teaspoons."
These "mass unemployment events" aren't anything new, and they historically haven't caused long-lasting unemployment.
I think this line of thinking is unfortunately sanguine in light of the West's youth unemployment levels and the uncharted territory we are heading into with robotics.
The beauty of the basic minimum income is that it leads to a more efficient economy. "1000 men with teaspoons" thinking is a great atrocity; I don't know what else besides a minimum income will stop politicians from pandering to their constituents in this way.
No, what they cause is long-lasting shifts in the distribution of the rewards of production to more heavily favor capital than labor, rather than causing long-lasting unemployment.
As long as people have some use in productivity, and need to work to eat (limiting price elasticity of labor supply on the low end), technology won't produce long-lasting unemployment, just, under ceteris paribus assumptions, long-lasting reductions in the ratio between the market-clearing price for the average unit of labor and the value of the total output of the economy.
Technological change can be classified as labor-augmenting, capital-augmenting and labor-neutral change. You seem to be claiming that all technological change is capital-augmenting.
There's been quite a bit of research in this area, and your claim is overwhelmingly refuted... in fact, Daron Acemoglu has multiple research projects purporting to show that most technological change is labor-augmenting.
A summary is available at http://economics.mit.edu/files/967
No, I'm discussing the effect of capital-augmenting technological change, and specifically why I would expect depressed wages rather than long-lasting unemployment to be the principal durable effect of such change.
I'm not making any claim about the distribution of technological change among labour-augmenting, capital-augmenting, and labor-neutral categories.
Or mass starvation / die-off of the surplus human population.
To the profs who think lecturing to a class of >15 students is teaching them in a way that is different from what the students can get through video, I say you're fooling yourselves. And you know this to be true.
What's beautiful about where this is headed is a) it's inevitable, and b) it will lead to a better education experience. How can it not be of benefit if the 100-level mundane subject matter is handled via video (with perhaps a teacher being available for questions and tutoring), and have the higher level courses be of smaller class size with mostly in-person teaching? We could probably prune some of the teaching staff and still deliver a better experience.
Before, I was mostly thinking that online education is good for things with well-established and fairly uniform curricula (e.g. many math, science, and engineering courses). But it really struck me that the class in question was called "Justice". To think that we'd just have a few notable professors writing the curriculum for such a broad and subjective course is disturbing.
Reduction of education to simple learning of facts and information will drive the world into the ground. It has certainly already begun, as have the results. What we need is more generalized, humanist, liberal arts and sciences education, not less. I swear to you if I have to work with one more one-track single-minded robot programmer I'm going to start cursing.
For Pete's sake, get some breadth and human contact—the only way to a true education. I want to see more of this type of response to MOOCs, and frankly, I wouldn't care if the entire concept were forcibly rejected from society. That would be a start.
Why are only top universities producing MOOCs anyway? Does San Jose lack any renowned classes, or just the will to MOOCify them?
I personally think that MOOCs should supplement the lectures and professors should focus on directing discussions in class, solving problems, directing projects, making sure no one is left behind, providing his own view and experience.
MOOCs are two things: a curriculum and a means of teaching. If either of those is subpar, you can evaluate that and/or make a case for why it is detrimental. Aside from that, these are just tools and pieces of content. They're not an existential threat to professors any more than WordPress is an existential threat to web developers, and certainly not a threat to content publishers.
This last sentence frames the discussion perfectly, and I agree with its sentiment but not the unnecessarily gloomy warning to other professors. A MOOC is a tool in the toolbox and can be used or abused. The sentiment is to not abuse this tool. The warning is to not make poor tools, but it is stated in an underhanded way that could derail a very healthy discussion. MOOCs have unfathomable potential as a tool for educating people in a much larger circle than rich students attending a "high quality" university, and that context should not be covered up when it does not favor your rhetoric.
So I am glad to hear his warnings and hope others heed them. But I also hope people continue to improve this tool in the educational tool-box. There, I fixed the last sentence of the open letter.
"Professors who care about education should strive to produce products that enhance education for students in public universities and students everywhere."
These San Jose professors are simply trying to fight the inevitable disruption of the current university system, which is broken.
Sounds about right. Salaries are the number 1 expense, so it's a great place to cut.
That sentence makes no sense.
UC Berkeley produces edX content, and it's a public university.