Papert, Kay, etc., are all world-class smart, capable, and inventive people. They designed these systems in their own image, probably with a more or less conscious desire to build something they would have wanted in their own childhoods.
An average person doesn't have the cognitive horsepower or the creativity to get much from these systems. They can play with them in a fairly superficial way, but only a relatively small proportion - I'd guess less than 25% - will get excited by the ability to imagine and build their own systems.
Without that excitement there's no motivation, and without the motivation any kind of programming becomes an exercise in pointlessness.
So - you can move a pointer around and draw lines? By typing words? Cute. But so what? Why should I care?
The real challenge - much harder than designing computer software - is working out political and educational strategies that reach the (nominal) 75% who don't find that kind of problem solving particularly interesting.
I've literally just been reading Kay's mini-book on the history of Smalltalk. Here's a quote on literacy:
"Literacy, for example, is being able to fluently read and follow the 50 page argument in Paine's Common Sense and being able (and happy) to fluently write a critique or defense of it. Another kind of 20th century literacy is being able to hear about a new fatal contagious incurable disease and instantly know that a disastrous exponential relationship holds and early action is of the highest priority. Another kind of literacy would take citizens to their personal computers where they can fluently and without pain build a systems simulation of the disease to use as a comparison against further information."
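To make that "disastrous exponential relationship" concrete, here's a minimal sketch (my own illustration, not Kay's) of the back-of-the-envelope reasoning he has in mind, assuming each case causes R new cases per generation, with no recovery or saturation:

```python
# Toy model of exponential spread: each infected person causes
# r new cases per generation (no recovery, no saturation effects).
def cases_after(generations, r=2, initial=1):
    """Cumulative number of cases after a given number of generations."""
    total = initial
    current = initial
    for _ in range(generations):
        current *= r     # each existing case spawns r new ones
        total += current
    return total

# With r = 2, ten generations turn a single case into over 2000.
print(cases_after(10))  # 2047
```

The point of the exercise isn't the model's accuracy - it's that a "literate" citizen in Kay's sense can dash this off and immediately see why early action dominates everything else.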
This sounds perfect, but I think almost everyone who has been to school knows from experience that a fair proportion of their age cohort not only can't think at this level, but also has absolutely no interest in being able to.
You might - possibly - be able to do something about this with a historically unique social engineering program supported by a media campaign that made abstract skills appear desirable and heroic. (A version of this happened during the Cold War, kind of.) But even then, it's not obvious from history just how far you could take that.
Either way, technology on its own is never going to be a solution. It may support policy, but it's never going to be powerful enough to enlighten the surrounding culture.
What actually happens is that economic and political pressures steer technology towards offering simple - often poor, but understandable - solutions to common problems. Worse wins out over better, and over time the possibilities shrink instead of expanding.
But one of Kay's central arguments is that literacy (meaning reading/writing media, especially print) did precisely what you are saying isn't possible. If computing is such a medium, we can't even imagine what kinds of enlightenment will be possible -- the same way oral cultures cannot imagine the ways of thinking of literate cultures.
Papert would intentionally pick the worst-performing students in a regular school, put them in his "Papertian" math class (i.e. a LOGO playground with very little teacher time), and see them perform on par with or better than the top students by the end of the year.
It's all in the book. Please read the book.
The point that literacy itself offered this kind of leap to begin with is important.
But another thing about systems is that, whether designed by smart or by mediocre men, they have the problem that their difficulty increases much more rapidly with their size than does that of, say, a novel.
A literate world could exchange one essay for another. A programmer couldn't as easily write a rejoinder to a given system without writing nearly as much code as went into the original system. (That sounds clunky, but you have to admit that code-size growth is a whole different situation from the growth of books.)
Disheartening but true.