Of course I don't think you need AI for this. You just need something that is slightly interactive so you can modify the learning path to be more efficient. You could even do it with a book. That is the real win of adopting technology: that it can change your mindset. Presumably that is also why the mindset sometimes comes before the technology, with things like sci-fi. Unfortunately I think in many markets people are focusing on just adopting the technology, but not the mindset. So you don't get all the benefits, or you even make things worse. You get fancy proofs of concept that don't do much at scale.
I think the bigger problem is the overall influence that "traditional learning" has on the mindset of the learner. In both the USA and China, you can see what every young person's mindset is focused on - studying for that big college entrance exam. Everything is simply a means to achieving a good score on that exam.
Perhaps if we allowed kids to move away from that idea of the "big college entrance exam" or similar we could dive into the deeper stuff like...what do YOU want to be when you get older, what problems do YOU want to try to solve and so on. If someone wants to work on video games then start pushing them down that path as soon as possible...get them to learn coding, media design, computer graphics and so on.
China does not care what you want, and neither does the market economy. They want you to be what they need.
There are two ways to sort it out: first teach people how to learn, then teach them how a (small) variety of in-demand professions work.
Only then, specialization. The current Chinese model is to identify talent early, then force specialization, and to reject people even at advanced stages, leaving them with nothing.
Western model used to be more general even a century back, but is going now more the same way.
People are passed along without even a working knowledge of estimation and arithmetic, and with no idea about basic politics and more.
Oh yeah, I completely agree. An ideal education system needs to be something that is more open to everyone and able to teach people at different levels and for different purposes. Have courses for vocational skills and courses for more academic/theoretical type learning. The factory model of education is obsolete.
It's not the old factory model at all. It's the model taken from competitive sports.
It is superior in employing talent but leaves many broken people ultimately resulting in social costs.
Short name: rat race.
Old factory model provided identical education, relatively broad, to everyone (zero help if you got stuck) - and provided single vocational education on top. (Including research vocations.)
Yet more ancient one started with a trade education based on propinquity; rather than scouting and filtering.
> In both the USA and China, you can see what every young person's mindset is focused on - studying for that big college entrance exam. Everything is simply a means to achieving a good score on that exam.
This is less true in the USA than any other country I know of. The "college entrance exam" is not nearly as decisive or as prepared for as in other countries and most of the curriculum has nothing to do with it.
In the USA, we do not even use the same college entrance exam in all higher ed systems. Different state systems use different exams, and then so does each private college and university. There are several, and which you take (or you may take several) depends where you are applying.
It's also not one exam, in the sense that it gets held multiple times a year and retaking it (if you're willing to pay for it) is possible and a lot of people do it and submit their best scores.
The SAT & ACT are also mostly limited to the humanities and math (and science for the ACT), with an optional writing section. China requires Math, Chinese, a foreign language, and any three of Physics, Chemistry, Biology, Geography, Politics, and History. With the SAT all those elective subjects are separate exams and most people do not take them.
I think another big factor is that past a certain point university prestige doesn't really matter in the US. In East Asia a lot of stock is put into reputation of university that you get into, but the well of decent American universities that get you good job prospects is much deeper.
The learning path is a guess. It's a good guess made by experts but still a guess. This approach allows the learning path to be refined using the actual experiences of the learners.
I think that is more true for something like an elite school. They could potentially refine their qualified guesses to make better decisions. But there is also a huge long tail of institutions that probably don't make guesses at all.
I want to say there is a misconception about what technology does for education, since current MOOCs mainly focus on distribution rather than on what distribution enables, which is investing in what you distribute. But realistically that is probably more about incentives. They just do the easiest thing and effectively end up giving everyone "reality TV" rather than "Game of Thrones".
What this kind of technology in education also enables is distributing the learning path, the planning or even part of the teaching. You can have hundreds of people spending large resources designing the systems and materials since it can be used by a lot of people, and also can be distributed to where it makes the most difference.
Of course it all comes back to the mindset and the reasons you do these things. The whole thing could of course also end up being a mess, or even abusive. But that is the challenge with technology.
One problem with AI in education is that it promotes magical thinking. That is, if you rationally understand what the AI is doing, it seems awfully simple. And then the disappointment comes: if it is simple and understandable, it isn't really AI.
That's why you see a race for obscurity -- lots of jargon to describe the complexity of the system.
In reality, good educational software is about good design, good goal alignment, good assessments, good sequencing, good intervention material -- it's all based on classic instructional design, not a superior algorithm. Has the software been set up to support data driven continuous improvement? That is more important than an algorithm.
I'm going to disagree here. I have enough understanding of AI to have built a deep learning framework from first principles. That deep learning framework makes a pile of money for Amazon to this day.
That I know how it works because I wrote the thing doesn't take away from the miracle that it makes that kind of money for doing complicated but not all that complicated stuff.
When I look at generative adversarial networks and the recent attention-based language models I am similarly amazed at what they can do.
But I am also 100% aware of their limitations and I think the hype that they are near-term AGI technology is the real magical thinking.
I was in China a few months ago. My general impression is that parents are crazy generous toward their children's education. There are whole department store buildings devoted to extracurricular classes for all different ages - and they ain't cheap at all.
My impression of Squirrel AI is that it's really just using more advanced analytics for education, which is good in concept; but really they are selling parents the fear that "if you don't buy our services, your children WILL be left behind."
I'm guessing "instructional design" is referring to how you model the problem domain being learned so that it's amenable to use with adaptive learning systems?
Most research in this area is drawn from the domain of “Intelligent Tutoring Systems” (ITSs), thoroughly described in Woolf’s book “Building Intelligent Interactive Tutors” [WOOLF2009].
I'd say the classic Intelligent Tutoring System construction was pioneered by ADVISOR [BWB2000]. It's a two-agent architecture that first trains a student model on real-world data, then uses the student model to train a pedagogical agent using a more resource-intensive algorithm. Most modern ITSs make use of Reinforcement Learning (RL) to train the pedagogical agent.
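To make the two-agent idea concrete, here's a minimal sketch: fit (here: hand-pick) a toy student simulator, then train a pedagogical agent against it with plain tabular Q-learning. This is not ADVISOR's actual algorithm or environment - the mastery dynamics, action set, and all numbers are illustrative assumptions - it just shows the "simulated student as RL environment" pattern.

```python
import random

random.seed(0)

# Agent 1: a toy simulated student. In a real ITS this model would be
# fitted to logged learner data; here "drill" just raises mastery.
ACTIONS = ["drill", "test"]
N_BINS = 11  # mastery discretized to 0.0, 0.1, ..., 1.0

def step(mastery, action):
    """'drill' raises mastery at a small time cost; 'test' ends the
    episode with reward 1 if the student passes (prob = mastery)."""
    if action == "drill":
        return min(1.0, mastery + 0.1), -0.05, False
    passed = random.random() < mastery
    return mastery, (1.0 if passed else 0.0), True

def bin_of(m):
    return round(m * 10)

# Agent 2: a pedagogical agent trained by Q-learning against the simulator.
Q = {(s, a): 0.0 for s in range(N_BINS) for a in ACTIONS}
alpha, gamma, eps = 0.2, 0.95, 0.1

for episode in range(5000):
    m, done = 0.2, False
    while not done:
        s = bin_of(m)
        if random.random() < eps:
            a = random.choice(ACTIONS)  # explore
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])  # exploit
        m2, r, done = step(m, a)
        target = r if done else r + gamma * max(Q[(bin_of(m2), x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        m = m2

# The learned policy should drill at low mastery and test at high mastery.
print("at mastery 0.2:", max(ACTIONS, key=lambda a: Q[(bin_of(0.2), a)]))
print("at mastery 1.0:", max(ACTIONS, key=lambda a: Q[(bin_of(1.0), a)]))
```

The design choice that matters here is the split: once the student model exists, you can run the expensive RL loop millions of times against the simulator instead of experimenting on real students.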
One of the main techniques for structuring the problem domain is Bayesian Knowledge Tracing (BKT) [CA1995], where you model the domain being learned as a Bayesian skills network; i.e. a 4-tuple of probabilities (init, learning, guess, slip), updated in a Bayesian fashion. An excellent survey work in this area is given in [BGTPF2010].
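The BKT update itself is short enough to sketch. This is a minimal single-skill version of the standard 4-parameter formulation (init, learn, guess, slip); the parameter values and the response sequence below are illustrative, not fitted to any dataset.

```python
def bkt_update(p_known, correct, p_learn, p_guess, p_slip):
    """One BKT step: Bayesian posterior over 'skill is known' given the
    observed response, then the learning transition."""
    if correct:
        # P(known | correct): known students answer correctly unless they slip;
        # unknown students answer correctly only by guessing.
        num = p_known * (1 - p_slip)
        denom = num + (1 - p_known) * p_guess
    else:
        # P(known | incorrect)
        num = p_known * p_slip
        denom = num + (1 - p_known) * (1 - p_guess)
    posterior = num / denom
    # Transition: the student may learn the skill between opportunities.
    return posterior + (1 - posterior) * p_learn

# Illustrative run over one skill: p starts at p_init and is updated
# after each practice opportunity (True = correct response).
p = 0.2  # p_init
for obs in [True, True, False, True]:
    p = bkt_update(p, obs, p_learn=0.15, p_guess=0.25, p_slip=0.1)
print(f"P(skill mastered) = {p:.3f}")
```

Note how the guess and slip parameters keep a single response from moving the estimate too far in either direction; that is what makes the mastery estimate robust to lucky guesses and careless errors.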
If you're interested in the RL part of the problem, and how it can work with this setup you might want to look into partially observable Markov decision process — POMDPs. Emma Brunskill has done some nice work in this area.
[WOOLF2009]: Woolf, Beverly Park. 2009. Building Intelligent Interactive Tutors: Student-Centered Strategies for Revolutionizing e-Learning. Amsterdam; Boston: Morgan Kaufmann Publishers/Elsevier.
[BWB2000]: Beck, Joseph, Beverly Park Woolf, and Carole R. Beal. "ADVISOR: A machine learning architecture for intelligent tutor construction." AAAI/IAAI 2000 (2000): 552-557.
[CA1995]: Corbett, Albert T., and John R. Anderson. "Knowledge tracing: Modeling the acquisition of procedural knowledge." User modeling and user-adapted interaction 4, no. 4 (1994): 253-278.
[BGTPF2010]: Brunskill, Emma, Sunil Garg, Clint Tseng, Joyojeet Pal, and Leah Findlater. "Evaluating an adaptive multi-user educational tool for low-resource environments." In Proceedings of the IEEE/ACM International Conference on Information and Communication Technologies and Development, pp. 13-16. 2010.
But not much closer, if content, rather than delivery, is the pressing bottleneck.
My experience has been that, of the few first-tier astronomy graduate students who aren't mistaken about the color of the Sun, as many learned it from seminar coverage of common misconceptions in astronomy education as learned it through their own atypically extensive and successful astronomy education. Current content is really, really bad. A "Primer" will require much better.
AI doesn't help much if your large textbook publisher's science education content is written by inexpensive liberal arts majors with no science background, consulting with "scientists". If one instead uses science graduate students, double majoring in education, with misconception lists in hand, you might get that Sun color right. Maybe. And interviewing researchers about their own research areas is even better. But...
Say you're writing a children's picture book about atoms. What magnitude of domain expertise support do you need? How about a small room full of MIT physics professors? Is that enough?
Can you see a bare atomic nucleus with your naked eye? Why not? Did you just now say too small, rather than beyond-violet "color"? That much is easy. But what if you wish to discover that a couple of atypical atomic nuclei can be seen with your room-lit naked eye, can be made to brightly fluoresce visibly (a multi-step spin-isomer decay, with one step visible)? So you can include a photo, of a glowing green dot in a vacuum vessel, in your picture book, to reinforce that nuclei are real physical objects. My experience has been that a room of MIT professors, most with some nuclear physics background, is likely an insufficient gathering of expertise. Unless it's a lunch for a visiting professor, whose focus is nucleus simulation, who self-describes their focus as a (with emphasis) "small" subfield, and who perhaps thinks all this obvious. Reflect on that: A room full of MIT physics professors is insufficient domain expertise to write an excellent children's picture book about atoms. How might we then go about writing a Primer, if it takes such an awesome gathering of expertise?
Before Primers can be airdropped, they need to be written. And I suggest we're not yet even at the point of recognizing and acknowledging the primary challenge there, let alone scoping, funding, and addressing it. It's not pedagogy and ed tech that's blocking the road to a Primer. It's the sciences. And their incentives and funding.
It's something I'd love to be working on, but finding people to work with has been a challenge.
Interesting idea and I completely agree. I have delved into my ed tech projects and reached more or less the same conclusion - content is a bottleneck. Now I don't know if your overall critique is correct but I think the major thing we can agree on is this: we need high-quality source material that is preferably free. I think the solution is something similar to wikibooks (which I personally think is a failed project at this point) but it needs to solve the incentive problem...to encourage people to contribute similar to how people contribute to wikipedia. Writing an article is easy for a few people to do....but an entire book or curriculum for an entire course...now that's a different story. I do think there is a solution to it which is mainly just a hybrid approach of paying people to write the high-level draft and then allowing a community of people to edit and revise it. If that doesn't work then we simply need a funding mechanism to write open source books.
Once we have these open source books then it becomes significantly easier to build additional tools on top of that source material - exams, practice questions, lecture videos, study notes etc.
Most of the ed tech players I see are simply developing tools that are trying to fit into a broken system. Simply put - the current system needs to be scrapped and replaced with something better, built with the internet in mind and built to be a major disruptor of GLOBAL education.
I am currently working on a side project trying to bring it to MVP stage. This is the idea I have in mind: build a new platform that rebuilds education for the 21st century, starting with writing open source textbooks. At the beginning they don't need to be perfect; they can be revised as the platform grows. Then build a bunch of innovative self-study tools using the latest technology to make learning efficient, easy and fun. In terms of monetization: simple advertising and premium services (tutoring, live-streamed classes with teachers etc.), and of course a simple credentialing system with the possibility of proctored exams to add validity. The long-term vision would be this: a Type 1 civilization university - all major subjects would have a course with a free textbook and an amazing set of tools to self-study it (or affordable options to get tutoring/view live-streamed lectures with real teachers). Add in localization tools to translate all of these courses into the top 20+ languages. Each course would have a rigorously protected exam for students to earn an employment-worthy credential. Eliminate all admission requirements and allow everyone to attend for free.
> Writing an article is easy for a few people to do
Writing a bad article is easy. And pervasive in science education.
I'm suggesting there's a level of excellence, at which even a small fragment of an article, for any audience, takes a surprisingly large gathering of expertise. So large as to be implausible with current social/technical/incentive infrastructure. And I'm speculating that it's an interesting level of excellence, with benefits that might justify the investment. Or at least the discussion of the possibility.
> write the high-level draft and then allowing a community of people to edit and revise it
I'm unsure how to describe why I don't buy this.
Imagine a draft newspaper article, which says "Foos do qux". The fact-checker objects, "But some foos don't do qux". The reporter "fixes" the draft with "Most foos do qux". The newspaper fact-checker says "yes, that's great", and it goes to press. Then someone who actually understands foos, points out to a colleague, that the entire focus on qux was misguided, confusing, and engenders misconceptions, and that indeed, the very concepts of foo and qux are badly flawed. "It's news media - what do you expect?" the colleague responds.
So there's a notion that you can wordsmith your way out of getting something badly wrong. You've likely seen some process where two parties have profoundly different concepts, but instead of discussing concepts, are engaged in text tweaking.
But what if, in excellent content, even high-level organization is sensitive to expertise-intensive details?
"Ok, for the foundational concept, we have proposals for atoms, a different definition of atoms, molecules with atoms as a degenerate case, atoms in molecules, nuclei, electron energy, electron clouds, configuration spaces, trajectory spaces, ... . Let's explore the correctness, accessibility and fruitfulness of each foundation." What if you can't even write a draft title, let alone structure a presentation, until after a massive collaborative process among educators, research scientists, and creative catalysts?
> the incentive problem...to encourage people to contribute
An MIT project to create cell-biology VR content approached the need for expertise by pulling in researchers for interviews about their areas. But there was a recurring difficulty... getting the researchers to leave. Such was their enthusiasm.
So I offer the hopeful possibility, that a project with just-the-right sweet-spot shape, might accomplish things that seem impossibly difficult, when seen less clearly.
> open source books
I'm sympathetic to OER. But in the context of transformative improvement...
Chemistry education research describes chemistry education content using adjectives like "incoherent", and as leaving both teachers and students steeped in misconceptions. It's not clear to me that a reasonably scaled OER effort can move that needle.
Years ago, NSF almost decided to create a national science education wiki, analogous in scale to wikipedia. That might have had critical mass for transformative change. But they didn't.
This article could be interesting, given the depth and breadth that one could explore given the topic. Unfortunately, it quickly turns into fluff marketing for the Chinese company they focus on. Is Technology Review accepting money for articles?
> The 13-year-old decided to give it a try. By the end of the semester, his test scores had risen from 50% to 62.5%.
If you make a high percentage on a standardised test the point of education and use software to drive those numbers up, negative externalities are going to come back at you in a few decades in crazy ways... And the article does note this: "Earlier this month, the government also unveiled a set of guidelines to focus more on physical, moral, and artistic education". How much that's going to be gamed as well is going to be interesting, I guess.
anecdote, if you will.. in a major University setting, for natural sciences, a mixed-level study group is listening to a presentation on a model of a forested ecosystem using remote sensing and "AI", though the emphasis was on ML with certain inputs. An American with a liberal-arts background and good CS training asks, "In this model, how can we find the limits of the validity of these assumptions? The real world is more complex than what is being modeled, so how can we describe that and find 'blind spots' in the work here?" Meanwhile, a serious student who may have been born in China asks, "How can the model results be cross-checked to eliminate human bias in the result interpretation?"
Now this same exchange could have happened between any two students with the basic alignments of "objective science" versus "natural sciences", but it did seem telling of a certain pure-science tilt on the part of the student from China. To push that further, one could say that the "objective science" angle lacked a certain "intellectual humility" in the inquiry, with an emphasis on the correctness of the machine results, and an assumption that better math will produce "winning" output. No real evidence, but that was an impression at that moment.
I think it has more to do with the fact that China's strength at the moment is being utilitarian. Their working theory is to increase capacity. That is why they are building a lot of high-speed rail, cities and power plants. While in the West we talk about tweaking cars to be self-driving, or interest rates in the mortgage market, because we don't really have that urgency to do things at scale tomorrow.
Out of interest, what do you mean by the difference between "objective science" and "natural sciences" in this context? Is it the "formal science" vs "natural science" distinction (i.e. pretty much maths vs everything else)?
Interesting article but I think this is really a broader topic: not just how AI can transform tutoring/education but also how AI will impact all areas of work, how human productivity and job satisfaction can be increased, and how we will deal with “not enough work for people to do.”
Society has always made sure that most people remain in the rat race.
During agrarian times, it was back-breaking farm work. During the industrial revolution, it was sweatshops. Once offices became computerised, it became office drudge work. The industrial revolution came with promises of leisure, but they did not materialise (see "In Praise of Idleness" by Bertrand Russell [1]).
Incomes of the majority, and the debt due to the consumerist needs of the majority, are kept in balance so that no one has the leisure time to fulfil their true human potential, and you need to work three decades just to subsist.
I somehow tend to think that the coming AI revolution, if it does come at all, will leave things as they are, just in a new form - the more things change, the more they remain the same. [2]
I find that conversations about leisure time throughout history are often mischaracterized, or missing valuable insights, in order to support someone's perspective of "wanting more control of their time".
If we were to look at working hours in a few European countries, it becomes clear that weekly work hours have diminished from 65-70 hrs in the 1870s to 35-40 hrs in the 2000s [1]. This, coupled with technology that helps decrease household chores, improve healthcare and cut the time needed to acquire consumer goods (the list goes on), makes it clear that we live in unprecedented times in terms of leisure.
It is also interesting to look at hunter-gatherer societies, which are estimated to have worked from 2.8 to 7.6 hours per day. Did that leisure time lead to 'true human potential' for those societies? If not, what makes us believe it will be different today? Is it possible that more leisure time will just lead to more boredom, screen swiping, etc.?
If we were to assume that [1] data will continue to move in the same direction, then we are likely to have further gains in 'leisure time' led by a new technological evolution.
Or we could get free space to experiment at large scale if scarcity is eliminated or vastly reduced. AI won't provide that, of course; we need other technologies.
Leisure is not the ultimate end. I'd personally prefer a society with some productive work instead, and for the disadvantaged, finding what they fit into.
I recommend reading "Accelerando" by Charles Stross which is very imaginative if not in-depth enough... Or potentially the "Culture" series. People there don't just laze forever.
The "not enough work for people" is mostly a myth from what I've seen. The reality is rote work will be automated but higher-level type jobs will increase. Also the idea of "basic income" will probably come indirectly - not from a state funded program - but simply from cost of goods dropping so low that buying the basics to live will be so cheap that you don't need much money to begin with.
> simply from cost of goods dropping so low that buying the basics to live will be so cheap that you don't need much money to begin with
Basic income gives you not just money, but time. If you receive basic income, you can work less. If people live in a culture where everyone must remain in full-time employment until age 65+ and the vacation time they are allotted is meagre (e.g. the North American situation), then it is cold comfort if the money they make allows them to buy the basics more cheaply.
Secondly, while the basics of food or clothing have become cheaper in developed countries (though often only if you are comfortable sacrificing some quality of the product), housing prices are soaring. While a basic income, too, would have a problematic interaction with housing prices, housing costs would remain an issue even in the alternative future you posit.
I don't have the numbers. You would need to go to the history books to find them, but you can see the trend in action to a certain extent for certain occupations. A prime example is farming. In aggregate, total employment in farming has declined significantly. The people who are left are doing higher-level work: not tilling the field with a donkey and plow, but driving a tractor or doing soil analysis, etc. The field laborers moved to cities and now do light manufacturing or similar jobs with maybe a few weeks of training.
There may be plenty of unrealistic prerequisites before the machine revolution can start distributing its economic surplus to everyone, and give people not enough work to do, but--
"...while in communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic." --Marx, The German Ideology
Just replace 'society' with 'centralized AI'. To keep busy, maybe the reality if you don't need to create real value anymore is that life becomes a half-empty series of paint nights, cooking classes, and other such role-play.
> maybe the reality if you don't need to create real value anymore is that life becomes a half-empty series of paint nights, cooking classes, and other such role-play.
More likely is that life for many people becomes a lot more time spent staring at one’s phone. Even in places where people have a decent amount of leisure time, a lot of those old community events and social clubs are struggling now.
I wonder about looking backwards in history for examples of how people dealt with 'not enough work'.
For example, I believe that ballet got started in part so that Louis XIV could require his aristocrats to observe the proper etiquette that he'd made up, so that he could better control them.
Point being - they had enough food, shelter, and luxuries supplied to them that they didn't need to work.
I don't think people are fearing not having to work, but inequality as a result. If you don't have redistribution your part is only as big as your work or your ownership. Since we are quoting, let's go with Stephen Hawking:
"If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality."
Of course everything old is new, so you could read for example Hannah Arendt. Our societies have mostly evolved out of how to distribute profits from automation. The reason automation is now the boogeyman is because we have changed paths by not thinking about that.
I foresee that when more people get AI-educated, the nation will be more likely to have free speech and a real legal system, because fewer people will be easily brainwashed. So this is good news.
The criticisms quoted in this article are rather poor. First the contention that AI is better for rote tasks. Not true -- this is an old and outdated view. Also the pooh-poohing of mere knowledge. You know, our democracies could benefit from more people understanding statistics and/or chemistry. If you understand first year physics + chemistry + biology + math + econ + psych, you get a pretty useful model of how the world works. If you really get those subjects, you're able to make all kinds of inferences and guesses that turn out to be right surprisingly often.
As a game developer, I have a lot of exposure to the art-school types, and it's amazing how poor their grasp is of how anything works, other than human emotions and human-centric narrative.
The aim of school isn’t to make you remember any given fact or theorem. It’s to make you build a mental index of such facts and theorems that resembles the shape of the space itself, such that you can 1. understand the “positioning” or relevance of a new idea in that space; and 2. know where you’d search, and what questions you’d ask, to look up any fact or theorem you hear about, or kinda-sorta remember exists.
The main problem with public schooling is that the system designers (learning-target criteria setters, curriculum designers, textbook publishers, etc.) don’t properly communicate the fact that surveying the field to build a mental map of it, is the goal of primary (and most secondary) education. So you get teachers, parents, and students all worrying about test scores that show that you don’t understand a particular idea, when the curriculum was never designed to communicate particular ideas, but only the structures containing them.
I think that's giving the system designers too much credit, and in any case I think it's dismissive of factual and conceptual knowledge, which are key to being able to ask questions about topics.
Agree with you there. Also, my experience is atypical because I'm going back to these subjects after 25 years out of school. It's interesting to me that many people will revisit favourite novels, or religious texts, but we don't re-read textbooks. I'm getting whole new layers of meaning from the same old first-year stuff. AI / agents could help schedule revisiting topics to achieve deeper understanding.
If you think there's a lot to get from re-visiting textbooks, try re-visiting the actual sources. For example, Blaise Pascal's vivid accounts of his experiments in fluid dynamics (and his battles with the status quo who believed nature abhors a vacuum) will have you on the edge of your seat like a thriller novel--AND you'll come away with a much better understanding of fluid dynamics than you could achieve in a classroom. Or take Charles Darwin, you'd think "The Origin of Species" would be pretty much sucked dry, but nope, it has vast swathes of low-hanging fruit practically jumping out of the pages, begging for someone like you to make the connections and churn out interesting research papers like you're drawing water from a faucet.
The AI/self-study type platforms could possibly solve this problem since it would allow the learner to progress at their own pace. I agree that "deeper level study" is huge part of getting the bigger picture.
AI self-study can certainly get you to the equivalent of a Master's degree, and much faster for anyone who is motivated. Unfortunately, I would never recommend it to anyone.
The problem is that even after years of working in the field you will likely be underpaid, and it won't be possible to find jobs in the field without a direct connection. Another problem is that if you want to go for a PhD to learn beyond that, you will need to sit through years and years of useless classes that you have essentially already taken, just to check some boxes. I don't think anyone self-motivated to learn things will sit through six years of useless classes to check some boxes; or at the very least it will be terribly depressing, wasting so much life.
At least that is my experience. The open materials for CS and AI are great though.
I believe that within our lifetime we will see a direct replacement of the traditional model you are thinking of (only "traditional" universities offering reputable degrees). The new model I see emerging is more focused on certification-based credentials, where you have to prove your competency in a rigorous evaluation by a reputable testing organization. If someone has obtained specialized knowledge, it should not be an issue whether they got it from online sources or traditional university sources.