It reminded me of when AI won at Go: my first thought was, "Oh, now we just need to train it to teach humans to improve."
I hadn't thought before about how training the AI to teach would introduce bias. It would be fascinating to train an AI on the best human teachers and then track the results with real kids so it improves over time -- but then who decides what "better" means?
I'm sure at first it would be an augmentation tool helping new teachers improve, but eventually, who teaches it what a better teacher looks like?
Fascinating. Thanks again.