I agree that matching the variety of possible assignments by some generic method is very difficult, though in my humble opinion we already have a lot of the tools required. There is a ton of improvement to do (lots and lots of usability), but the base is there, and a project of such a scale would not be unheard of.

> It's hard to communicate a fraction to a computer

> "Explain whether 4/3 or 3/4 is closer to 1, and how you know."

You just did that. Twice. Square root? "(3/4)^(1/2)", or maybe "sqrt(3/4)". There's no complexity in parsing that. I do agree it is not as natural as on paper, but maybe tablets will find a way to improve that. That's what innovation is here for, after all.

> "Explain whether 4/3 or 3/4 is closer to 1, and how you know."

I am not familiar with the domain, but don't we have some automatic theorem-proving tools? Validating the answer to such a question would look like a perfect use case to me.

[edit: clarified]

> [the description example]

I'm not sure about this one. On the one hand, I used to have a project in college about reconstructing a picture from an incomplete description, and it's hard. On the other hand, we are expecting a perfect description, so it would be pretty much isomorphic to the code of a program used to draw the picture. Matching the two images is also doable. Pseudocode would actually be the best way to transmit this image.
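To back up the "no complexity in parsing that" claim, here is a minimal sketch in Python of evaluating inputs like "(3/4)^(1/2)" or "sqrt(3/4)" by walking a restricted grammar with the standard `ast` module. The function names (`eval_math`, `_walk`) are invented for illustration, not from any existing tool.

```python
import ast
import math

def eval_math(text):
    # Treat '^' as exponentiation, the way a student would write it.
    tree = ast.parse(text.replace("^", "**"), mode="eval")
    return _walk(tree.body)

def _walk(node):
    # Only numbers, basic arithmetic, unary minus, and sqrt() are allowed.
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp):
        left, right = _walk(node.left), _walk(node.right)
        if isinstance(node.op, ast.Add):  return left + right
        if isinstance(node.op, ast.Sub):  return left - right
        if isinstance(node.op, ast.Mult): return left * right
        if isinstance(node.op, ast.Div):  return left / right
        if isinstance(node.op, ast.Pow):  return left ** right
    if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
        return -_walk(node.operand)
    if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
            and node.func.id == "sqrt"):
        return math.sqrt(_walk(node.args[0]))
    raise ValueError("unsupported syntax")

print(eval_math("(3/4)^(1/2)"))  # same value as...
print(eval_math("sqrt(3/4)"))    # ...this spelling
```

Both spellings come out to the same number, which is the whole point: the surface syntax is a small problem compared to everything else discussed here.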

> I agree it is not as natural as on paper

It's only unnatural because you did not spend 12+ years doing it that way. I spent enough time with my old TI-86 that it became very natural to express math on it despite the poor interface. I have no problem using this machine to say:

```
abs(4/3 - 1) = abs(3/3 + 1/3 - 1) = abs(1/3)
abs(3/4 - 1) = abs(4/4 - 1/4 - 1) = abs(-1/4)
abs(1/3) > abs(-1/4), so 3/4 is closer to 1 than 4/3.
```
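For what it's worth, that comparison can be verified exactly on a machine too. A minimal sketch using Python's standard `fractions` module, so no floating-point rounding is involved:

```python
from fractions import Fraction

# Exact-arithmetic check: how far are 4/3 and 3/4 from 1?
d1 = abs(Fraction(4, 3) - 1)  # distance of 4/3 from 1, i.e. 1/3
d2 = abs(Fraction(3, 4) - 1)  # distance of 3/4 from 1, i.e. 1/4
print(d1, d2)                 # 1/3 1/4
print("3/4 is closer to 1" if d2 < d1 else "4/3 is closer to 1")
```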
 People complained just as hard when transitioning from slide rules to calculators, and they had some valid points, but the net gains were also clear. It's easy to make a bad interface, but there are plenty of great math interfaces that keep a listing of all previous steps above what you're working on, both as you entered them and as they would look on a blackboard. There are also plenty of upsides, like the ability to cut and paste lines so you can avoid a lot of stupid mistakes like dropping signs. But far more important from an educational perspective is a student's improved comfort using a computer to do advanced math vs. the near-phobia that you and many others apparently have.
 Oh, it's not a phobia. Just skepticism. ;)

I don't doubt that one can, and that we will, build a computer system for manipulating higher mathematics that's so much better than a stack of paper, a pencil, and a decent eraser that you won't even own the paper. What I doubt is that it's done yet. But I haven't exactly been looking for it, so maybe I'm wrong. Certainly, once it's done there won't be any problem selling it to me (except for the sad fact that I no longer manipulate equations on an everyday basis).
 He is writing about math education. The characters (3/4)^(1/2) make sense to all of us who have already learned math and know some programming languages, but that syntax is pretty confusing to students who are just developing a real understanding of exponents.

There is plenty of room for automation in math education. But in a really good math education, the automated tools need to be balanced with more socially oriented approaches. Students need to talk to each other and to good teachers about their work. Students need to see each other's approaches and hear each other's ideas, and have face-to-face conversations about math.
 One could make the argument that any mathematical syntax is equally confusing for the novice--so why not start them on something they'll be using later anyways?I think we presume a great deal in suggesting that a simple flat array of characters and operators is somehow less understandable than a nicely typeset equation (especially when you've never written one before!).
 I was thinking the same thing. It might be better to learn this stuff at a console from the get-go.

One advantage would be that you could try invalid syntax and operations (e.g. x/0) and see the errors that result in real time, as opposed to an hour or a day later after the teacher marks up your test. Then you're more likely to stick with it until you get it right, which in turn means the answer is more likely to stick with you.
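The real-time feedback loop could be as simple as evaluating each line a student types and reporting problems immediately. A tiny sketch of the idea in Python; `check_line` is an invented name, and a real console would need a safer evaluator than this:

```python
# Evaluate each line a student enters and report problems right away,
# instead of on a marked-up test a day later.
def check_line(expr):
    try:
        # eval with empty builtins keeps this toy restricted to arithmetic.
        return f"{expr} = {eval(expr, {'__builtins__': {}}, {})}"
    except ZeroDivisionError:
        return f"{expr}: you can't divide by zero -- try again"
    except SyntaxError:
        return f"{expr}: that isn't valid syntax -- try again"

for line in ["3/4", "1/0", "4/3 +"]:
    print(check_line(line))
```

The point isn't the implementation; it's that the student sees "you can't divide by zero" while the attempt is still fresh in their mind.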
 I'm not saying that Matlab, Maple, and LaTeX have the whole "How do we represent numbers to computers well?" thing down, but we sort of do. A simple subset of LaTeX for use in classrooms could be useful (something like MathML or the like).

> "Pseudocode would actually be the best way to transmit this image."

I wholeheartedly agree. A graphics routine or a LOGO program would do a great job of describing that.

The author seems to have an irrational dislike of people trying to use computers in this fashion, which I find strange. I certainly agree that something like a geometric proof (in the absence of a good modeling language) is difficult to automatically check, but at the same time I question whether the human element would be any more useful here. Math teachers, especially at lower levels, are not infallible.

I would almost venture that a better test, one that examines both critical thinking and the ability to reason about a problem, would be a battery of small programming problems to solve some kind of geometric or graphical challenge. It's a bit all-or-nothing, but it would show that the student can both interpret a problem and describe the steps to solving it.
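One way such a graphical challenge could be graded automatically: run the student's LOGO-like drawing commands and a reference description on a small grid, and compare the set of cells each drawing touches. This is a hedged sketch; the command format and the names `render`, `reference`, and `student` are all invented for illustration.

```python
def render(commands, start=(0, 0)):
    """Return the set of grid cells visited by moves like ('E', 3)."""
    steps = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
    x, y = start
    cells = {(x, y)}
    for direction, count in commands:
        dx, dy = steps[direction]
        for _ in range(count):
            x, y = x + dx, y + dy
            cells.add((x, y))
    return cells

# A 3x3 square outline, described by two different programs:
reference = [("E", 2), ("N", 2), ("W", 2), ("S", 2)]
student   = [("N", 2), ("E", 2), ("S", 2), ("W", 2)]
print(render(student) == render(reference))  # True: same picture
```

Note that the two programs differ but draw the same picture, which is exactly the property you want to grade: interpretation of the image, not memorization of one command sequence.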
 This is the exact attitude that the blog post is saying is wrong. For mathematics, computers are tools. Computers don't create the answers; they assist the user in finding the answer. They're time-saving and error-checking devices, which are useful after the student learns the concepts inside and out. They are not supposed to solve the problems directly.

>> "Explain whether 4/3 or 3/4 is closer to 1, and how you know."

> I am not familiar with the domain, but don't we have some automatic theorem-proving tools? This would look like a perfect use case to me.

Theorem-proving tools would work if the students wrote their answers in a format that the tool could work with. In this case, the answer would be a natural-language proof instead of a formal proof, which simply isn't possible to parse right now. Perhaps it will be in the future.
 > What does a student learn from this? They're learning the tool, not the process of solving the problem.

They will learn the process if the tool is only used behind the scenes to validate their answers.
 I misread your post, and did a ninja edit. Sorry about that!
 > Theorem proving tools would work if the students wrote their answers in a format that the tool would work in.

Theorem-proving tools would work if you were writing formal proofs. At the level he's talking about, students are not writing formal proofs -- they're writing explanations.
 Right. It seems to me that the despair is that it is very hard, from knowledge like "student checked box B on this test, which was correct," to deduce "student actually understands the concept."

Perhaps a partial solution to this problem can be found in... wait for it... programming! That is, it is really hard to write a program that finds general answers to problems unless you understand the basic idea. This definitely doesn't work for everything, but I wonder how big the domain is where this is a really good way to mechanically assess understanding, and whether the language that'd be required would itself become a bigger component of the measure than what you're trying to assess -- mathematical reasoning.
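Mechanically, the assessment could amount to running the student's program against a trusted reference on many inputs. A sketch of the idea, assuming the exercise is "write `closer_to_one(a, b)`"; every name here (`reference`, `student_closer_to_one`, `assess`) is invented, and the "student" function just stands in for a real submission:

```python
import random
from fractions import Fraction

def reference(a, b):
    # Trusted answer: whichever argument is nearer to 1 (ties go to a).
    return a if abs(a - 1) <= abs(b - 1) else b

def student_closer_to_one(a, b):
    # A plausible student submission using the same reasoning.
    return a if abs(a - 1) <= abs(b - 1) else b

def assess(submission, trials=1000):
    rng = random.Random(0)  # fixed seed so grading is reproducible
    for _ in range(trials):
        a = Fraction(rng.randint(1, 20), rng.randint(1, 20))
        b = Fraction(rng.randint(1, 20), rng.randint(1, 20))
        if submission(a, b) != reference(a, b):
            return False
    return True

print(assess(student_closer_to_one))  # True
```

A student who merely memorized "3/4 is the answer" fails on the other 999 cases, which is exactly the distinction between checking a box and understanding the concept.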
 You're asking people to do math in a non-mathy way. How would you feel if schools insisted, for the purposes of learning, that all programs be written out by hand in plain English?Or, heck, just think of how annoyed some developers get when asked to write code on a whiteboard during an interview. "That's not how we code! If you want me to write code, give me a computer."
 I see that question as being useful because it assesses several different understandings at the same time; the key ones I see are the understanding of fractions (including the idea of improper fractions) and the ordering of the values they represent.

A human instructor asks for the explanation to explore whether the answer was just a guess or was reasoned out, and whether the reasoning was correct. An automated evaluation system would have an easier time factoring in results from previous evaluations (it can perfectly remember an arbitrary number of tests across an arbitrary number of students...) and could check understanding by presenting several questions (it won't get sick of looking at the answers).

If the proposition is that human instruction can be wholly replaced by automated systems, then yes, that is crazy. Really, I expect most people are looking to supplement it and make it more effective.
 On the interface part, this one works quite well.
