What Students Do (And Don't Do) in Khan Academy (mrmeyer.com)
58 points by ColinWright on Dec 14, 2014 | 50 comments



I've noticed that most critiques of KA set up a straw man of what KA is trying to accomplish and then attack that, almost always without offering a better alternative.

The fact is KA is a great tool for learning many subjects, and it will only continue to improve. The biggest takeaways should be:

- spaced repetition of subject matter is extremely useful for long-term memorization (a toy scheduler sketch follows this list)
- bite-sized, repeatable lessons allow students to digest more and "rewind" over parts of lectures they may have missed
- "gamification" and small rewards are useful for getting students to keep pursuing a subject
- students shouldn't move on to more advanced subjects until they've mastered a particular foundational skill
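For what it's worth, the first point is easy to make concrete. Here's a toy spaced-repetition scheduler in Python; this is purely my own illustration of the idea (the class and the doubling rule are made up, not KA's actual algorithm):

    # Toy spaced-repetition scheduler -- an illustration only, not KA's algorithm.
    # The review gap grows when the student answers correctly and resets when
    # they don't, so solid material is reviewed rarely and shaky material often.
    from datetime import date, timedelta

    class ReviewItem:
        def __init__(self, skill):
            self.skill = skill
            self.interval_days = 1  # start by reviewing again the next day
            self.due = date.today() + timedelta(days=1)

        def record_review(self, correct):
            # Double the gap on success, reset it on failure (a crude SM-2-like rule).
            self.interval_days = self.interval_days * 2 if correct else 1
            self.due = date.today() + timedelta(days=self.interval_days)

    item = ReviewItem("one-step linear equations")
    item.record_review(correct=True)   # next review in 2 days
    item.record_review(correct=True)   # then 4 days, and so on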

No teaching tool or method is going to be perfect, but I think KA gives parents, educators and most importantly students a really useful tool for self-paced learning that traditional methods can't easily offer.


(I'm one of many behind Khan Academy)

One thing that can't be said enough — which we internalize deeply at KA but is often confused outside our office — is that KA intends to be a resource for education, not a complete solution.

It's a resource for students who wouldn't otherwise have access to this content and for those having trouble with whatever education they're currently getting. And it's a resource for teachers who want a tool to help make sure students cover core skills at their own pace — so they can spend their class time getting students to work together, using those skills in all sorts of creative ways, and generally adding the highest value teachers can add.

I mostly find Dan's advice (and the article's comments) constructive. But taking a step back, it makes me proud that an educational resource covering this amount of content is free, open to this sort of critique, and constantly improving itself. See the comment from Justin Helps, one of our content creators, on Dan's post.


Just curious, why would KA not want to be a complete solution for education?

I can understand the political incentives not to openly state such an ambitious goal - the educational establishment is massively powerful, and doesn't want an upstart replacing them. But is there a reason why, in principle, KA wouldn't want to be a complete solution to all of education?


+1 to your point about what it gives parents.

Khan Academy is a powerful force for good in education, especially in instances where a parent wants to help his or her child with homework, but does not possess the educational foundation necessary to do so.

Additionally, when I was a middle school teacher, my students would always have questions about topics we wouldn't have time to get to in class. Khan Academy and TED were invaluable resources to point them towards to encourage their curiosity.


It should be noted that there are huge vested interests who want to tear down Khan academy and similar sites.


Which "huge vested interests" are you referring to specifically? I can understand that if the Khan academy were to really take off, it'd shake quite a lot of educational institutions. Everything from universities, state-funded schools, teachers unions, the works. And I don't think a lot of them would like that, as I it would appear to the individuals running/manning them that their jobs are on the line.

The reason I say state-funded schools wouldn't like this is that they work on a false premise: supposedly, education is not free or cheap enough on its own, and government funding is required to make it work for the masses. Take that away, and public/state schools become glorified daycare so that parents can stay productive instead of having to babysit their offspring.

One addition: I've just set up a recurring donation to Khan Academy. One small step toward proving that society can function without state coercion.


Khan Academy's stated mission is "providing a free world-class education for anyone anywhere." They claim to have content aligned to every standard of the Common Core. This is exactly how Meyer represents their position. I don't see how that is a straw man.

It sounds like your argument is that KA doesn't actually claim these things and that it is not reasonable to evaluate it on those grounds. I don't think that finding that Khan Academy doesn't do what it claims in these regards qualifies as attacking it, unless critique = attack. Further, finding KA lacking in the types of questions it asks students doesn't require offering a better alternative in order for the critique to be valid.

The rest of your comment is probably not that controversial, but these aspects of KA are not what Meyer was evaluating. Special pleading doesn't give Khan Academy credit for the claims they make. I'm sure they will continue to improve; as the 800 lb gorilla in the room, they have plenty of attention from people who can offer suggestions. The Common Core requires students to think more deeply about the mathematics than most of the state standards that came before it, and the types of questions that KA asks its students are too superficial.


Please, for the love of maths, do not "align" with the Common Core standards. It would be the saddest, most disheartening thing imaginable* for Khan Academy to become a feeder to the American standardized testing system.

The author writes: "If one of Khan Academy’s goals is to prepare students for success in Common Core mathematics, they’re emphasizing the wrong set of skills."

I would like nothing better than for the academy to publicly state that this is NOT a goal of theirs. Being good at math and passing math tests are orthogonal* concepts. In my experience, teaching to the test is the fastest way to kill the love of any subject because, at the end of the day, the only answer you can give to the questions "Why is this important?" or "Why should we do it this way?" is "Because the test says so." It's death to independent thought.

I don't know who his advisor is, but please kill this dissertation before it gets out into the world and he uses it for self-promotion on Good Morning America.

*intentional hyperbole


(I work at Khan Academy.)

We have no interest in preparing students for tests just for the sake of it, but if you look at the Common Core standards you'll see that they are well thought out (really!) and designing a curriculum around them is a very sensible thing to do. Of course, we won't restrict ourselves to the material covered by the Common Core if/when we feel there are gaps.

We're interested in helping students learn to solve problems, not in teaching them how to take tests.


Khan Academy is obviously a useful tool for learning, and of course it's part of a child's education, not all of it. The point I think Dan is making is that if you 1. say you're aligned to the Common Core, and 2. provide mastery statistics that suggest a level of comprehension or depth of knowledge tied to those standards, then those statistics should be good predictors of a child's performance on questions aligned to the standards but outside the KA system (e.g. a Smarter Balanced assessment).

That means until that ideal is reached, KA could say "Mastery at X depth of knowledge, but note that for this standard Y is required," or allow teachers to write free-response questions, tag them by standard, type, depth of knowledge, etc., and grade them by hand in the KA system. By including hand-graded questions (graded by a student's own teacher) with an adequate tagging system, KA's mastery statistics might be significantly more accurate.
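To make the tagging idea concrete, here's a rough sketch of what such a hand-graded question record could look like. The field names and the depth-of-knowledge scale (Webb's common four levels) are my own assumptions for illustration, not an actual KA schema:

    # Illustrative sketch of a teacher-authored, hand-graded question record.
    # Field names are hypothetical; the DOK levels follow Webb's usual 1-4 scale.
    from dataclasses import dataclass
    from enum import IntEnum
    from typing import Optional

    class DepthOfKnowledge(IntEnum):
        RECALL = 1
        SKILL_CONCEPT = 2
        STRATEGIC_THINKING = 3
        EXTENDED_THINKING = 4

    @dataclass
    class FreeResponseQuestion:
        prompt: str
        standard: str                        # e.g. "CCSS.MATH.CONTENT.4.MD.A.1"
        question_type: str                   # e.g. "free response"
        dok: DepthOfKnowledge
        teacher_grade: Optional[int] = None  # filled in by the student's own teacher

    q = FreeResponseQuestion(
        prompt="Explain how you know 3 km is longer than 2,500 m.",
        standard="CCSS.MATH.CONTENT.4.MD.A.1",
        question_type="free response",
        dok=DepthOfKnowledge.STRATEGIC_THINKING,
    )

With records like that, mastery statistics could be reported per standard and per depth-of-knowledge level rather than as a single number.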


Consider a well designed test of an important curriculum. This means the topics are important for $REASONS, and the test is highly correlated with knowledge of the topics. If a student asks "why is this important", you can reply "$REASONS". And you can "teach to the test" secure in the knowledge that you are teaching useful materials.

Now, if you believe the curriculum is poorly designed or the test is poorly correlated with it, I encourage you to make that argument. But explaining why "Know relative sizes of measurement units within one system of units including km, m, cm..." should not be part of the curriculum is a lot harder than making emotional pleas about "love of the subject".

(I've skimmed parts of the common core on a few occasions. It looks pretty good to me. http://www.corestandards.org/Math/Content/4/MD/ )


You make a strong logical argument, but it relies critically on the tests being really great. Have you examined the sample questions as well as the standards? And even if the tests are great (they are not), do you have any idea how well this will play out in real classrooms? I ask because, as a veteran teacher, my guess is that CCSS will be great for the top two deciles and a disaster for the rest. The worst part is that no one knows, because this top-to-bottom overhaul has not been tested or proven anywhere. That is just malpractice, on an enormous scale.


I can't speak for OP, although I basically agree with you and OP simultaneously.

However, look at it as an optimization problem. CC testing standards and format may be ideal testing tools for a 49-minute 5th period after lunch, 5 days a week, in a 40-person classroom, performed in perfect lockstep by all 40 students, where the only failure mode is repeating the entire class or developing a hatred of the topic, a.k.a. the standard public school experience.

However, what does "alignment" WRT KA mean when the educational experience has completely changed? You can watch a video 50 times until you get it, jump back and forth, all that. I'd propose the "best" testing technique for that environment probably looks a lot like KA and not so much like CC, because you need instant feedback and simple questions. It's not quite as good as personal 1-on-1 tutoring, but it's better than sitting in class; more importantly, the experience is inherently different from sitting in class...

The proposal in the linked article seems to be that KA epically fails, solely in "alignment", because it doesn't test like CC, which is trivial to test, prove, and analyze statistically. Factually it looks like a good study. Maybe it fails the relevance test because "alignment" is fuzzy to me (maybe it has a special jargon meaning among CC teachers?). I think MAYBE OP and I are concerned that KA aligns pretty well to a decent curriculum (CC) WRT topics, order of topics, and presentation standards, and I hope KA doesn't ruin the KA assessment portion to better match CC assessments solely to get a checkbox somewhere.

Maybe we can abstract KA out of the discussion. I'm tolerably good at math and have tutored people. If you define alignment with CC as slavishly following their assessment technique at all times, even though I'm 1-on-1 in an ongoing human conversation, such that everything I say must statistically match the CC assessment percentages in style, tone, and technique, that's ridiculous. If you define alignment as my talking about sets of linear equations with an infinite solution set at the same general time, in the same order, and in the same way the CC talks about it, that's OK, even if I assess a tutored student the "wrong way" by asking the wrong type of questions (which may very well be the right question for that kid at that moment while also being, on average, the wrong way to teach all American kids in a 40-person classroom).

It seems reasonably non-controversial that the CC curriculum and assessment are a local maximum, on society-wide averages, for teaching math. It seems reasonably non-controversial that KA's assessment tools are a local maximum for computerized delivery of help to kids who need extra help. I (and possibly OP) don't think it reasonable to trash-talk KA because the local maximum of effectiveness for society as an average whole in a classroom setting doesn't necessarily match KA's local maximum of effectiveness for kids who need help at home. The danger is screwing up KA enough to make it useless for the kids who need it after school, just to achieve a marketing checkmark.


So from what little I've seen, KA is reasonably well aligned with the Common Core already. Most decent curricula are. Also, I think you are misunderstanding what CC is. CC is just a list of topics students should know; there is no "CC testing standards and format".

If KA doesn't align with CC, it's because there is some particular topic that KA forgot about. E.g. maybe common core requires students to "Solve problems involving addition and subtraction of fractions by using information presented in line plots", but Khan forgot about that one.

I also don't think it would be harmful if KA made sure it covered the basic topics of AIEEE or the IIT-JEE either (important Indian standardized exams). A lot of people want to pass those exams and it would be a marginal amount of extra work for KA to help them.


"there is no "CC testing standards and format"."

I see two. One is called PARCC, one is called Smarter Balanced. And they want to use them to close schools and fire teachers, so look for everyone to teach to whatever sample tests they can get their hands on.


PARCC and SBAC are indeed tests intended to measure students against the Common Core State Standards. They are not from, or a part of, the CCSSI. People definitely do want to misuse these tests to punish schools and teachers, but none of this invalidates the Common Core. And while people will teach to the sample tests, that's on the people who want to use those tests to destroy schools, not on the Common Core itself.


Ah, but if you check out the Common Core page they go on and on about student performance being comparable between states, which is why the two standardized tests in effect are inseparable from CCSS. btw, I know what you meant by the rhetorical "people...do not want", but the problem is precisely that "accountability" is the other dog whistle never far from any CCSS discussion.

I am afraid your healthy view of CCSS is not widely shared, and the discussion would be much better if CC was offered purely as PD. Teachers love anything that works.


If a curriculum, like KA, is "aligned" to a set of standards, like the Common Core standards, it means that the standards accurately describe what the curriculum teaches. Often, as is the case with the Common Core State Standards, the standards do not address how the items of a particular content standard are taught, preferring to leave such things up to the professional judgement of the curriculum designers. So the Common Core doesn't mandate a "style, tone, or technique." And defining "alignment" as talking about a topic at the same time as other teachers, in a certain order, is perfectly reasonable. You can even assess a student on that topic any way you want. But don't forget that the ways we assess students also teach them what we care about and what "doing mathematics" means.

One thing that is a big change from most states' previous standards is that the Common Core includes 8 Mathematical Practice standards: ways that we want students to think about and do mathematics. These are things like: don't give up when solving problems, know and use the right mathematical tools for the right job, be precise, make and critique arguments, and pay attention to patterns. As with everything we learn, students learn to engage in these behaviors by practicing them, and they get practice by engaging in tasks and interactions that require them to draw conclusions, cite evidence, and apply concepts to solve non-routine problems. In a traditional classroom, these standards of mathematical practice should be woven into all of the lessons that we teach, and we should assess students on their learning of these standards as well as the content standards.

While the Common Core might facilitate an expansion of standardized testing by giving a common set of expectations to assess, many people erroneously conflate the standards with the assessment of the standards. This is understandable; Meyer does not seem to spend any time drawing the distinction. The CCSS doesn't mandate or recommend any particular test. And while it makes sense to speak of assessment of the Common Core, it doesn't make sense to speak of the Common Core assessment.

Having said that, any tool that claims to assess the Common Core should also assess these mathematical practices. One such tool is the Smarter Balanced Assessment Consortium's test. Meyer claims that KA doesn't prepare students to be successful on the SBAC test, because students spend most of their interactions with Khan Academy doing exercises and problems that do not require the depth that the Common Core calls for. This assertion seems supported by his data.

You may be right that KA represents the current local maximum for computerized delivery of mathematics, but Meyer is evaluating KA as a curriculum, not as a resource for kids who need extra help. In this role, Meyer's data indicate that KA's interactions with students fall short of the depth of knowledge required by other assessments of the Common Core. His hypothesis that KA focuses on these types of questions because they are easier to score by computer seems pretty reasonable to me. I don't think it is trash-talking KA to evaluate the product it delivers against the claims they make for that product.

Finally, suppose KA found a great way to have students engage in higher order thinking skills: maybe require students to explain their thinking or make conjectures or solve multi-step problems that require them to apply concepts in non-standard ways. I don't think this would make it useless for kids who use it for extra help, do you?


I interpreted the article as making exactly the opposite of your point: because Khan Academy is limited to automated grading, they can only ask standardized-test style questions: solve an equation, select a multiple-choice answer, etc. Meanwhile, the Common Core standards emphasize argumentation (i.e. proofs), formulating equations, and other high-level skills that are harder to test directly but are much more important to actually getting a deep understanding.

Common Core is not a set of tests; it's just a curriculum that sets some basic standards for what students ought to learn. It's reasonable to be concerned about standardized testing as a means for assessing whether those standards are being met, but it doesn't sound like you actually have any objection to the Common Core curriculum.


This response shows a willful ignorance of the stated goals of Common Core, especially with relation to the Common Core math curriculum. Common Core's primary pedagogical shift in math is to remove the historical focus on math as a set of mindless procedures, and instead teach children math as a broader, more holistic approach to logical & numerical reasoning. Common Core teaches children to solve problems many different ways, make arguments about why both tactics work, etc. This is about as far as one can be from the traditional "teaching to the test" downsides.


Only the worst teachers failed to motivate math procedures. Granted, the curve may have been skewed toward the worst. :) But CCSS does not suggest a little professional development to get teachers to explain the "why" behind the procedures; it wants kids to invest a ton of time in proving procedures multiple ways before then, well, going ahead and mastering the procedure.

CCSS is untested (unless you count its imposition on 40+ states a test) so we'll have to wait and see if it moves the US up the meaningless PISA test rankings.


What do you oppose in the Common Core? Sure, they might make it easier to develop widely used standardized tests, but the assessment of the Common Core is not the Common Core. Teaching students to be successful with Common Core mathematics doesn't necessarily mean teaching them to pass tests. But the Common Core does ask students to engage with math at a higher level of thinking than did most of the state standards that preceded it. Common Core is actually pretty well thought out--at least the K-8 portion is.

Designing a curriculum around a set of standards does not have to be "teaching to the test" and a cursory reading of the CCSS makes it clear that the answer to questions like "Why is this important?" or "Why should we do it this way?" should never be "Because the test says so." Students should "construct viable arguments and critique the reasoning of others." How does this kill independent thought?


Agreed with your sentiments, but Khan is financed by Bill "Common Core" Gates and in May partnered with the College Board (hard core CCSS folks) so look for KA to get closer to CCSS, not further. Money talks and CCSS, as bad as it is pedagogically, has a surface plausibility that fools folks who have never taught. In this sense it is like the New Math of the Sixties, which tried to improve math learning by starting at the most fundamental concepts (eg, set theory in the first grade) when in fact those concepts were worked out over centuries by the brightest minds working on math. They confused the logical starting point with the pedagogical starting point.

The good news is that these catastrophes are dumped even faster than they come into ascendance and CCSS is confirming that trend. We just need to remember this the next time academic geniuses decide we are doing it all wrong and -- instead of offering a little professional development -- decide to offer all new standards, textbooks, and standardized exams.

I too have a soft spot in my heart for bold strokes, I just want them to come from folks who know what they are doing. The CCSS authors did not.

ps. I do not think the thesis says what you think it says.


I'm wondering what your pedagogical objections are to the CCSS math standards. From my reading of the CCSS and supporting documents, it sure seems like the designers were research-informed. What did they get wrong, and what would you do differently? It really isn't a rehashing of New Math. They think very carefully about the pedagogical starting point, in fact.

Please don't assume that people who think the CCSS are on balance a good thing are people who are inexperienced in the classroom. There is room to disagree about aspects of the CCSS, but the discussion will be more productive if our assertions are supported by evidence, rather than name-calling.


(a) my analogy with the New Math was not meant to equate the two, only their conceptual cart before procedural horse qualities (which would also be my pedagogical concern); (b) don't get me wrong: I agree with the "deeper understanding" goal of CCSS, and it was how I taught. The concerns I expressed were with the whole program's implementation, (un)testing, and effectiveness with less than exceptional students -- more of a systems thing than pedagogy. In fact, CCSS would have been great as a professional development "surge", if you will. All this testing, book-rewriting, and saber-rattling over accountability -- well, the proponents get an "F" for change implementation.


When you are talking about the "program's implementation," are you talking about how the CCSS were developed? The authors were informed by the best existing state standards, teacher feedback, and public input, and they aimed for the standards to be research and evidence-based more than any other standards document I've read. Every change of standards must face the charge of "untested", though I think the authors of the Common Core mitigate this by basing their work on research and work that is tested.

I am unfamiliar with any large-scale failures of the CCSS with struggling students, but I have seen strugglers excel with an approach that emphasizes understanding. Of course, anecdotes are not data, and I'm open to hearing how the Common Core fails our weakest students. I don't think it is perfect, but I do think it is better than any of the alternatives I've taught with.

If we're being honest, in many places in the US, the emphasis on misusing standardized test scores to shut down schools and fire teachers was well under way long before the CCSS. And to be clear, I don't think this makes any sense. But I don't think it's reasonable to pin this on the Common Core, it's just that it provides an enemy that people from around the country can hate on.


By "implementation" I mean everything that followed the authoring of CC. (But I do have a big problem with the few authors and absence of meaningful feedback from educators.) The thing is untested, mandated, used as a threat, and more untested. I also think the absence of feedback from real teachers was an act of arrogance and ignorance: teachers are actually pretty good at what they do, and the best teachers could have totally schooled the CC authors. They never really asked. CC reminds me of what Yeltsin said about Communism: "It was a beautiful dream."


> good at math and passing math tests are orthogonal concepts.

What good is being good at math yet being unable to answer any questions about it? Would you get on board an airliner with a pilot who flunked the flight certification tests but assures you that he's really good at flying anyway?


There are different ways to be "good" at math. I have a few friends who are very good at contest type math (e.g. well-ranked on the Putnam) and I also have friends doing math PhDs at top schools. They are mostly not the same people. The contest winners are very quick; they know a lot of tricks and general problem-solving strategies and can think very efficiently about how to apply them and come to a clever solution. The researchers are not necessarily as quick, but they're more patient, willing to spend days or weeks focusing on a single problem and learning/developing the machinery necessary to solve it, and they have more of a pure intellectual curiosity about math -- they approach it as a source of new and interesting questions, rather than just a set of techniques to be learned or problems to be solved. Sure they all did well on the usual standardized tests (SAT, etc), but if you were going to use a test like the Putnam to identify who you wanted to hire as a mathematician, you'd make a lot of poor choices.

That said I basically agree with you that it's not reasonable to call someone "good" at math if they're unable to do simple calculations (with the caveat that lots of great mathematicians screw up simple calculations on a regular basis -- but they're still at least usually better than the average person). But there are certainly people who memorize enough formulas to pass the test, yet never acquire or don't retain any conceptual understanding. In that sense, passing a test is a necessary but definitely not sufficient condition for being good at math.


Reading this article reinforced a perception I walked away with at an event where parents were complaining about common core related topics.

The professional educator dialect of English is not very approachable for the layman. To many people it sounds like somebody throwing out $10 words to make the arguments sound more intelligent.


> The professional educator dialect of English is not very approachable for the layman.

I think Dan's blog is largely pitched at other professional educators. All professions develop their own technical vocabulary.

As an analogy, I recently enjoyed an article about the implementation of the v8 garbage collector by Jay Conrod: http://jayconrod.com/posts/55/a-tour-of-v8-garbage-collectio...

Here's a sample paragraph:

"Distinguishing pointers and data on the heap is the first problem any garbage collector needs to solve. The GC needs to follow pointers in order to discover live objects. Most garbage collection algorithms can migrate objects from one part of memory to another (to reduce fragmentation and increase locality), so we also need to be able to rewrite pointers without disturbing plain old data."

I think the article is an excellent piece of technical writing, but it would also be reasonable to say "the professional compiler engineer dialect of English is not very approachable to the layman."

That said, good communicators need to be able to change their presentation based on the audience, and it's too bad that the people at the meeting you mention couldn't find a common vocabulary.

If you want to see something by the author of this post that is pitched at the layman, he made a very nice TedX presentation a few years ago:

http://www.ted.com/talks/dan_meyer_math_curriculum_makeover?...


One sophistry problem: theoretically we will "ALL" take and pass Algebra, so the teacher does not have the luxury of specialized jargon; the teacher MUST be able to present it to the entire general public. A general education teacher is expected to present effectively to the general public as the core of their job; it's not a nice-to-have or a wouldn't-it-be-interesting-if-it-were-possible.

On the other hand, only maybe 1% of the population has to suffer through garbage collector theory in CS classes, and of that 1%, maybe only 0.0001% ever implement a GC "in the real world" and can understand that post. There just aren't that many GC programmers, and there are a lot of humans in the general public. In that case jargon is perfectly acceptable, even expected. Also, the core of a programmer's job is rarely making public presentations.


I didn't mean that teachers get a pass on opaque vocabulary when they're teaching students. I meant that, just like every other profession, they have a technical vocabulary that they use to discuss their own profession with other teachers.


Agreed. I've found that a lot of education officials hide behind that jargon, that's all. Totally get the domain specific jargon. I've seen my wife glaze in about 30s when I start muttering about tech stuff!


> "the professional compiler engineer dialect of English is not very approachable to the layman."

We compiler guys do that on purpose in order to keep our salaries high!


Every field has its own jargon. It's shorthand for commonly understood ideas that you don't want to waste a paragraph reiterating in common English. Just consider how computer programming talk looks to an outsider.


Khan Academy isn't perfect. But it improves every year. I like it for what it is, and have high hopes for what it might become.


Can Khan Academy ever become something other than Khan Academy?

That's a profound question for me.


The author writes, in the interesting blog post submitted here, "We should wonder why Khan Academy emphasizes this particular work. I have no inside knowledge of Khan Academy’s operations or vision." Actually, here on Hacker News we have had public discussions blessed with direct participation by Khan Academy developers, so I think I have some sense of what Khan Academy intends to do as it continues to develop online lessons. I've even had some off-site email interaction with Khan Academy developers after threads here in which I have "met" them while we discuss our common interest in mathematics education. I certainly hope that Khan Academy staff will join the discussion of this post.

The author also writes, "I find it more likely that Khan Academy’s exercise set draws an accurate map of the strengths and weaknesses of education technology in 2014. Khan Academy asks students to solve and calculate so frequently, not because those are the mathematical actions mathematicians and math teachers value most, but because those problems are easy to assign with a computer in 2014." I think this statement is correct. The Khan Academy developers have devoted a lot of thought to how to computerize problems rather than exercises, and they are still working on building new problem types for the mathematics lessons. I'm actually quite impressed with some of the problem types that show up in more advanced mathematics lessons on Khan Academy that go beyond the eighth grade level implied by current United States curriculum standards. But the author's point is well taken that the Common Core, and good mathematics curricula around the world, expect reasoning and explanation from pupils in mathematics lessons already at elementary school age. That currently is indeed difficult to automate.

The author's comments on development of Smarter Balanced Assessment (SBC) as a new platform of online mathematics instruction is of course especially interesting. I will be curious to see how this program continues to develop, and I would be glad to hear from developers of that program in this thread too.

(Disclosure: I am a teacher of voluntary-participation prealgebra mathematics classes to advanced pupils of third-, fourth-, and fifth-grade age, using the Art of Problem Solving prealgebra textbook by Rusczyk, Patrick, and Boppana as a core text with many supplementary materials from around the world. I do not view Khan Academy as a competitor to my classes, but rather as a helpful supplement for many mathematics learners around the world. My daughter, who is of a younger age than eighth-grade age, is currently doing many of the same Khan Academy lessons as those just reviewed by the blog post author. I recommend Khan Academy to my friends, and it will probably stay in the mix for the mathematics studies of my three minor children for quite a few years yet.)


(I work at Khan Academy.)

> The Khan Academy developers have devoted a lot of thought to how to computerize problems rather than exercises, and they are still working on building new problem types for the mathematics lessons. I'm actually quite impressed with some of the problem types that show up in more advanced mathematics lessons on Khan Academy that go beyond the eighth grade level implied by current United States curriculum standards.

Thank you.

> Common Core, and good mathematics curricula around the world, expect reasoning and explanation from pupils in mathematics lessons already at elementary school age. That currently is indeed difficult to automate.

This is definitely true. By no means is Khan Academy a complete solution for mathematics instruction, nor is it likely to become one in the next few years. Our site should ideally be used alongside other materials that test reasoning and explanation in a non-automated way, as well as projects that allow students to apply their knowledge. We have early experiments that take a much more interactive approach to teaching certain concepts and encourage more experimentation, but we're probably years away from developing them into a shipping product.

> The author's comments on development of Smarter Balanced Assessment (SBC) as a new platform of online mathematics instruction is of course especially interesting. I will be curious to see how this program continues to develop, and I would be glad to hear from developers of that program in this thread too.

If I'm not mistaken, many of the SBAC problems (especially free-response ones) are hand-graded, which can work if you charge for each test taken, but can't work for us when we have millions of problems done every day, all graded automatically. That said, we're exploring strategies like peer grading in other subject areas (particularly our programming curriculum) and would be excited to bring them to math as well.


> that test reasoning and explanation in a non-automated way

> early experiments that take a much more interactive approach

> we're exploring strategies like peer grading

Are you guys working on any genetic / learning-algorithm implementations of AI sufficient to evaluate responses to arbitrary prompts? It sounds like something that should be feasible with modern tech, especially since the time to respond can be large (a student can wait hours for feedback on a complex essay response) and the scale can be arbitrary (as many server clusters as it's worth); we can already see similar-scale text processing in other spaces.

I can imagine such an implementation where you take some common prompt given to students across hundreds of schools, along with all the teacher-graded responses, and maybe after a few years it gets within an accuracy confidence threshold and can start grading that prompt, and after another few years can provide critique along with that grade. That would be the most revolutionary educational tech since the Internet itself, even if for now it were pigeonholed for several years into studying the same single problem. Sounds awesome either way.
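To sketch what a first pass at that might look like (this is just my own toy example, assuming scikit-learn and a handful of hypothetical teacher-graded responses, and certainly not anything KA has said they do):

    # Toy auto-grader for ONE common prompt, trained on teacher-assigned grades.
    # Everything here (data, model choice, threshold) is hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical responses collected from many classrooms, graded by teachers.
    responses = [
        "the slope is 2 because y goes up by 2 every time x goes up by 1",
        "i think the answer is 7",
        "the rate of change is constant, so it is a line with slope 2",
        "slope means steepness and this one looks steep",
    ]
    teacher_grades = [1, 0, 1, 0]  # 1 = credit, 0 = no credit

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(responses, teacher_grades)

    def auto_grade(new_response, threshold=0.9):
        # Only return a machine grade once the model clears a confidence bar;
        # anything below the bar gets routed to a teacher or peer grader.
        probs = model.predict_proba([new_response])[0]
        return int(probs.argmax()) if probs.max() >= threshold else None

In practice you'd need far more data, a better model, and careful auditing for false positives and negatives, but the "grade only above a confidence threshold, defer the rest to humans" shape is the part that matters.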

You could even use that peer-grading tech to build up a library of problems and solutions to build your causal relations table from. I just hope you guys have someone in a back room working on this, because it seems to me that as real-world-usable computational linguistics becomes more common, a ton of useful applications will arise, even in the minimum case.


False positives or false negatives in this type of fuzzy correction could be very detrimental to students. Khan Academy is doing an excellent job at what it is currently aiming to do. Doing less well by trying to do more would decrease its value.


No, not currently.


While I like the idea of Khan Academy, there are problems with some of the contents of the lessons. For example, the lesson on Hooke's Law definitely has issues:

http://www.leancrew.com/all-this/2012/12/khan/


I visited KA for the first time in a while. It used to drop me into a table of links to videos on various academic topics. Very simple, useful, fast, and the Obviously Right Way (TM) to do it. At least in the default case, and for the casual or first-time visitor.

However, just now it required me to sign up / log in. Signing up asked for PII, which is intrusive and strictly unnecessary for letting me see educational videos. I gave a fake email, along with the other fields. It then complained it couldn't send an email to it. When I pressed the button to fix the email address, it popped up a window for me to enter it, but auto-forced the window down into an obviously stupid position such that it was impossible to hit its Submit button. You can enter a new address but not hit Submit, because the button sits too close to the bottom edge of the screen.

I love the original vision and the core content (Sal's lectures and his style), but it looks like they're now falling down at the simple stuff along the edges of the problem space: failing inside the unnecessarily added complexity (GUI/JS shenanigans, control fetish, privacy intrusion, etc.) rather than in the essential, inescapable parts (text, images, embedded canned video, links, curriculum design, effective communication, etc.).


I think all these sites (treehouse, code academy etc) are going to be hit with class action lawsuits for implying that their app/whatever will make you proficient at coding, and even land you a job. There are no shortcuts to becoming good at coding. It takes much longer and is much harder than these sites make it seem.


yay down-votes. I'm not gonna delete the comment. I'll just make a new account


In case you missed why your comment didn't add to the conversation: Khan Academy and the article are both not about coding or learning to code. This article is about math.


Khan Academy did start to include a programming portion at some point.


yes, Khan Academy has lessons on a huge number of topics. no, nothing in the posted article has anything to do with "implying that [Khan Academy] will make you proficient at coding"




