
Bret Victor: Learnable Programming - siavosh
http://worrydream.com/LearnableProgramming/
======
bretthopper
There are already two comments here about being "harsh" or "ungracious" towards
Khan Academy, which is ridiculous.

The usual HN article that contains criticism is limited to just that: some
rant that took 10 minutes to write and contains nothing constructive.

Bret Victor put an insane amount of time into this (I can only assume) and is
truly advancing mindsets about programming tools and learning. We should all
be thankful that he put this much effort into a "criticism" piece.

~~~
mattmcknight
The analogy of a microwave with blank buttons turned me off as an unnecessary
rhetorical straw man. I know what fill means, I know what ellipse means, and I
know what numbers are. Discovering what those words mean in a different
context is not analogous to blank buttons.

~~~
chipsy
Okay, but what is ellipse(50,25,32,5)?

"ellipse" and numbers are only hints, just as blank buttons and dials are only
hints.

Programmers rely a lot on IDE support now to see the parameters...which is
exactly what Bret is talking about!

~~~
pacala
The comparison is still shallow. It is regrettable that most programming
languages don't support named parameters, but it's fairly trivial to imagine a
language and API:

DrawEllipse(x=50, y=25, width=32, height=5)

No real-time rendering and tracing of values required (I hope!). Furthermore,
a decent IDE will bring in the method's doc blurb, so the programmer has richer
context to make an informed decision.
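
In Python, for instance, keyword-only arguments give calls this self-labeling property without any IDE support. A minimal sketch (`draw_ellipse` is hypothetical, invented for illustration):

```python
# The bare "*" makes every parameter keyword-only, so callers must
# label each argument at the call site.
def draw_ellipse(*, x, y, width, height):
    return {"x": x, "y": y, "width": width, "height": height}

# Reads like the DrawEllipse example above, with no hidden positions:
shape = draw_ellipse(x=50, y=25, width=32, height=5)
```

Calling it positionally, `draw_ellipse(50, 25, 32, 5)`, raises a TypeError, which is arguably the point: the ambiguous form is simply not allowed.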

I appreciate the work Bret put into the article, and I'm halfway convinced
myself. In the end I was somewhat disappointed, because most of the stuff he
talks about is already present in a decent IDE, perhaps in a less polished
form, but present nonetheless.

The deeper put-off is that the article seems to advocate not thinking, and
instead performing random walks using shiny tools. If someone can't think
about their code, no amount of visualization is going to help. That said, the
potential of enhancing one's thinking abilities with properly chosen
visualizations is tantalizing, but I don't think the article makes a good case
for it, and I haven't yet seen a better approach than old-school pen and paper.

~~~
gcr
> If someone can't think about their code, no amount of visualizations is
> going to help.

Most of the article tries to make the point that if someone can't visualize
what their code is doing, they can't think; they can't reason about it.

~~~
pacala
Exactly! There are several levels of learning. At the master level, one does
the job from muscle memory. Graphics help at the early levels, but to be
proficient one needs to master the domain, and at that level graphics slow the
process down rather than helping it.

Building the right visualisations, ones that hold at the master level as well
as the novice level, holds the promise of expanding our logical abilities far
beyond what they are now.

------
scott_s
_Programmers, by contrast, have traditionally worked in their heads, first
imagining the details of a program, then laboriously coding them._

I don't think this describes most real work done by programmers. Rather, what
he says we should do,

 _To enable the programmer to achieve increasingly complex feats of
creativity, the environment must get the programmer out of her head, by
providing an external imagination where the programmer can always be reacting
to a work-in-progress._

is exactly what most programmers already do. We usually don't have a nice,
interactive environment for it; it's usually a combination of coding,
inspecting results, thinking some more about new issues, coding, inspecting
results, and so on until the problem is solved.

In other words, I think that programmers _do_ tend to solve problems by
"pushing paint around." I rarely start with a full appreciation of the
problem. But in order to gain that understanding, I have to start trying to
solve it, which means starting to write some code, and usually looking at
results. As I go through this process, the domain of the problem comes into
focus, and I understand better how to solve it.

We already do what Bret is talking about, but not at the granularity he is
talking about it. For beginners, I can understand why this difference is
important. But I certainly solve problems by pushing paint around.

In general, I think this is a fantastic piece for _teaching_ programming, but
I don't think (so far) that all of it carries over to experienced programmers.
The examples of having an autocomplete interface that immediately shows icons
of the pictures they can draw is great for people just learning. But that's
too fine-grained for experienced programmers. Chess masters don't need to be
shown the legal moves on a board for a bishop; their understanding of the
problem is so innate at that point that they no longer play the game in such
elementary terms. Similarly, experienced programmers develop an intuition for
what is possible in their programming environment, and will solve problems at
a higher level than "I need to draw something." _That_ is the reason we put up
with existing programming environments.

~~~
dxbydt
Bret has written an _amazing_ article, but the world he inhabits is so far
away I can't ever imagine getting there in my lifetime. As it stands,
programming is barely 2-3 levels of abstraction above shoving bits into
registers; sometimes even those few layers slow us down, and we have to
resort to bit-shifting operators and native code every once in a while.
Whereas he is talking about 20-30 layers of abstraction. He wants to visually
depict code, visualize functions, visualize data structures, visualize the
connections between functions, and actually visualize how the program is
running while it's running! Whereas the practical programmer of 2012 is still
buried neck-deep in textual stack traces.

~~~
neilk
I don't think that Bret is advocating that this is the way all programming
should be. That would be strictly impossible, as some functions have totally
non-linear effects on their output, so you couldn't easily connect one to the
other with handy arrows and highlighted stuff.

The geometry example is chosen because it's easy to make a mapping between the
space of function inputs and visual outputs, and the parameters are
independent of each other.

Khan Academy has _already_ implemented some of this for JavaScript, running
right in the browser.

[http://www.khanacademy.org/cs/drawing-bonus-rotation/906448125](http://www.khanacademy.org/cs/drawing-bonus-rotation/906448125)

Try clicking on the coloration functions to see previews, or sweeping with the
mouse to change the values.

As for the more advanced features, many languages exist today which make this
quite possible, at least for teaching tools. Even well-commented Java has the
kind of typing and documentation culture that would allow you to implement a
lot of this today.

~~~
ygra
Java's documentation, while it allows documenting each parameter, still
doesn't make the actual _result_ of changing a parameter obvious to a program.

In Bret's video, hovering over a parameter shows exactly what would change;
that requires either machine-readable metadata (e.g. "_x_ position of the
top-left corner of the shape") or additional programming to make it available.
Javadoc as it exists is just a semi-structured, very thin wrapper around HTML:
not really much a computer could do anything with.

The general problem I see with Bret's approach is that, while it works very
well for restricted programming environments intended for beginners and
learners, it falls short for more complex things. But then again, those of us
who know half the language framework by heart probably won't need as much
guidance or fiddling around. Still, it requires augmenting each and every
function with a piece of visualisation code, or enough metadata that a
development environment can apply the visualisation itself.
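
As a rough sketch of what such machine-readable metadata might look like (the structure and descriptions here are invented for illustration, following the "x position" example above):

```python
# Hypothetical per-parameter metadata a development environment could
# query to explain what hovering over each argument would change.
ELLIPSE_PARAMS = [
    ("x", "x position of the top-left corner of the shape"),
    ("y", "y position of the top-left corner of the shape"),
    ("width", "width of the bounding box"),
    ("height", "height of the bounding box"),
]

def explain_argument(position):
    """Return a human-readable label for the argument at a call site."""
    name, meaning = ELLIPSE_PARAMS[position]
    return f"{name}: {meaning}"
```

A tool with this table could label `ellipse(50, 25, 32, 5)` in place, which plain Javadoc-style prose cannot do.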

~~~
faceyspacey
He's suggesting that we, as tech-startup entrepreneur types, go out and
imagine the actual solutions that can work across a number of problems, based
on the examples he's provided.

I'm absolutely so surprised by the lack of imagination here at HN. This is
startup ideas gold right here, and none of you seem to have the mental
capabilities to dig it out and do something with it. You all want to find
something wrong with it.

Ever since I started tracking Bret Victor, my head has been spinning with tons
of practical ideas we can implement today to make us more productive at
programming. Light Table is a fantastic start. What's the problem here, guys?
You just don't want to admit that the way you've been coding thus far will
soon be obsolete and that you've been wasting your time. Yes, your skills will
eventually be worthless.

------
gfodor
There is nothing new here. Before you downvote: this is actually a huge
compliment to Bret. As he's said before, he is inventing on principle, not
inventing for you to download and install his latest hack. His principles have
been consistent (and, imho, right), and this is another view into them. But if
this opened up some huge new insight for you, then you haven't been paying
close enough attention.

He's always been right and I hope he continues to have patience while he
continues his conversation with the world as the world misunderstands his
ideas. Unfortunately many people are going to latch on to the examples in his
demo movies, and the important parts of the essay will fly over their heads.
(The most important part of this essay being, of course, to read Mindstorms.)

All of his creative output points to the same core message: programming today
is broken because it is not _designed_. His various essays, talks, and so on
are just alternative "projections" of this thesis. This is a sign of clear
thinking.

He's given us all the tools we need to solve this problem. These tools are the
mental framework he lays out, not the specific concrete flavor he demoed in
his latest talk or essay.

The hard part is not building IDEs or visualizations, it's having the guts to
_throw everything out_ and start over again, knowing it's going to be a mess
for a long time and it will take years before things start to make sense
again. It's convincing yourself that most of what you know is useless and that
many of your assumptions are wrong.

Why do that when you can just download the latest whiz bang framework and
start hacking shit together taking advantage of the life-long skill you've
acquired at firing bullets with a blindfold on?

It's scary to be a newborn again, especially when you're in a place where few
have been before (and those that have are largely not around anymore).

~~~
stdbrouw
I don't understand what's wrong with people building on his ideas, taking the
subset they think they can implement and, yes, sometimes bastardizing his
lofty ideas into quick hacks because something is better than nothing. That's
how ideas spread. That's how people get shit done.

------
greggman
Wow! What an awesome critique. I'm in awe.

First off, rather than just saying Khan Academy missed the point, Mr. Victor
goes into extreme detail, with full examples and ideas on how to do it better.

Second, he really went into some detail about how to think about things. Not
just the solutions but ideas and ways of thinking to come up with better
solutions.

Third, he's set the bar for critiques higher than I've ever seen. Next time I
want to critique something I'm going to feel at least some responsibility to
give good and clear examples of both what I think is wrong and what I think
would be better with reasons.

Fourth, even if I never write software to help learning programming or help
programming advance in general I'll certainly be influenced in my API and
system designs by this post.

Thank you Mr. Victor

~~~
adambratt
I totally agree. Check out the rest of his site too, he is a genius.

~~~
dirtyaura
I wasn't aware of Bret Victor before this post hit HN. I feel like I've found
a hidden goldmine of interface design thinking. For example, his Magic Ink
article about avoiding user interaction for information software is a
contrarian gem: <http://worrydream.com/MagicInk/>

------
gojomo
Beautiful and inspirational, _and yet..._

Sometimes becoming able to hold the 'invisible state' in memory is _the_ skill
to learn.

Consider the 'dual N-back' drilling which seems to improve working memory, and
then also (by some studies) other testable fluid intelligence. The whole point
is holding more and more hidden state in your mind. (To 'show the state' would
defeat the purpose of the game.)

Similarly, sometimes struggling with the material is the best way to
internalize it.

Consider some studies that show _noisy, hard-to-read text_ (as on a poorly-
printed copy) is better remembered by readers than clean typography.
Apparently, the effort of making out messy printing also strengthens the
recall.

So I'd be interested in seeing comparative testing of the vivid 'Victor'
style, and other styles that are more flawed in his analysis, to see which
results in the most competence at various followup dates.

We might see that the best approach is some mix: sometimes super-vivid
interactivity, for initial explanation or helping students over certain humps,
but other times _intentionally-obscurantist_ presentations, to test/emphasize
recall and the ability to intuit/impute patterns from minimal (rather than
maximal) clues.

~~~
Ygg2
Having an environment like the one Victor describes for big systems (big as in
millions of lines of code) would prove unfeasible (hell, auto-complete already
hiccups when codebases get into the hundreds of thousands of lines).

The tools he proposes seem very beginner- and RAD-oriented (even if he claims
otherwise). I've seen IDEs choke on smaller code bases, and this has not only
auto-complete and auto-update but also state/frame/time tracking built in.
There is no way in hell it can work for existing languages. Maybe some kind of
VM that remembers all its previous states, all function call times and orders,
and then updates them as the programmer changes them.

~~~
apu
Why do systems need millions of lines of code?

If you can build a complete operating system + major programs in 20,000 lines
of code, what system should need a million?

<http://www.vpri.org/pdf/tr2008004_steps08.pdf>

(They're not quite down to 20,000 yet, but they're getting there)

~~~
Ygg2
No one said they need millions of lines of code. However, large "enterprise"
applications have a way of growing like cancer. This tool won't be
particularly useful for such codebases.

~~~
drewwwwww
If you had this tool, maybe you wouldn't build applications that way anymore.
It's at least a possible outcome.

------
gfunk911
Couple random thoughts:

1\. Is Bret Victor now the Linus of cutting-edge programming environments?

2\. I don't have enough experience with Light Table or the Khan Academy
environment to know whether Khan is just a first step on the way to something
like Bret's vision, or more of a diversion. I was fairly impressed with the
Khan env in my limited time with it.

3\. I HATE telling people they shouldn't speak their mind and/or say what they
think is the truth, and I don't think Bret shouldn't have written anything.
But it's difficult not to seem ungracious. John Resig clearly knows what he's
doing, at least in the general sense, and he was gushing with praise for Bret,
while this reply basically says John did everything wrong.

If Bret feels that way, I truly believe he should say it, but that doesn't
make it fun. This is the essay equivalent of cringe humor I guess.
Hilarious/Informative while making you feel bad.

~~~
paulhodge
> 1\. Is Bret Victor now the Linus of cutting-edge programming environments?

Linus actually implements solutions to the things he rants about and releases
his code, so that analogy isn't quite right. Bret gives us nice big-picture
ideas and leaves the implementation to others.

I think many of the specific ideas mentioned by Bret will quickly fall apart
when trying to actually implement something non-trivial. But that's okay, it's
useful to have a really inspiring big-picture vision.

~~~
rossjudson
Or you could check out his site and notice "tangle - explorable explanations
made easy", which is a Javascript library that implements the very sorts of
interactions he's talking about.

<http://worrydream.com/#!/Tangle>

------
ChuckMcM
I really enjoyed Bret's article. I don't necessarily agree with all of it but
the main argument is quite sound.

Bret writes: "People understand what they can see," which is true for some
people but not all. I've got one daughter who is very verbal and one who is
very visual; they learn differently. This is a minor nit though; his
exploration of the 'code' / 'example' model is good.

I particularly liked the commentary on something like:

    
    
       ellipse(60, 50, 50, 100) 
                \   \   \    \
                 \   \   \    +- What does this mean?
                  \   \   +----- Or this,
                   \   +-------- Or this,
                    +----------- Or this?
    

(We'll see how that comes out in the formatting)

TOPS-20 had a really interesting macro language for programming commands; it
was the inspiration for a lot of self-describing command-line interfaces, like
the ones made popular on Cisco gear. Basically you could write it like:

    
    
       DRAW ellipse AT X=60 Y=50 THAT IS 50 HIGH, 100 WIDE
    

But all of the 'fill text' was really unnecessary for the parser so if you
wrote:

    
    
       ellipse 60 50 50 100
    

It would be just as intelligible. The point being that the training wheels got
out of your way whenever you wanted them to, and if you were ever stuck you
could type ? and it would tell you what your choices were.

Not enough learning environments build this sort of dynamically scaling help
into the system where it is needed, so that it helps novices without slowing
down experts.
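
The noise-word idea can be sketched in a few lines; this toy parser (an illustration of the principle, not actual TOPS-20 behavior) accepts both the verbose and the terse form:

```python
# Filler ("noise") words are skipped, so decorated and bare commands
# parse to the same result.
NOISE = {"draw", "at", "that", "is", "high", "wide"}

def parse_command(text):
    args = []
    for raw in text.replace(",", " ").split():
        word = raw.lower()
        if "=" in word:                      # "X=60" -> "60"
            word = word.split("=", 1)[1]
        if word in NOISE:
            continue                         # training wheels, ignored
        args.append(word)
    name, *params = args
    return name, [int(p) for p in params]

verbose = parse_command("DRAW ellipse AT X=60 Y=50 THAT IS 50 HIGH, 100 WIDE")
terse = parse_command("ellipse 60 50 50 100")  # same parse either way
```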

~~~
UnFleshedOne
Ah, but doesn't this have the same problems all NLP systems have? It gives an
illusion of flexibility that gets shattered every time the user allows himself
to believe in it. What if I prefer to write the parameters in a different
order? Or, if some version of named parameters is in use, what if I like some
other synonym for height? Strict syntax is a good thing as long as it is also
brittle.

Now, an IDE that generates training text like that on the fly and lets you
fill in the values without actually storing the training text would be nice.
Something like IntelliSense popups, but inline and expanded.

~~~
Too
> Now, an IDE that generates training text like that on the fly and lets you
> fill in the values without actually storing the training text would be nice.
> Something like IntelliSense popups, but inline and expanded.

That's the best idea I've heard in a long time. The IDE already has the
information, but hovering over every function call with the mouse to get
parameter information is a PITA and breaks the flow of reading. A hotkey to
inline them across all your code at once would be brilliant.
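
A minimal sketch of that inlining in Python, using signature information the IDE already has (`labeled_call` and `ellipse` are invented for illustration):

```python
import inspect

def labeled_call(func, *args):
    """Render a call with parameter names inlined, as an editor overlay might."""
    names = list(inspect.signature(func).parameters)
    pairs = ", ".join(f"{name}={arg!r}" for name, arg in zip(names, args))
    return f"{func.__name__}({pairs})"

def ellipse(x, y, width, height):
    pass

# What the editor could show for the plain call ellipse(50, 25, 32, 5):
hint = labeled_call(ellipse, 50, 25, 32, 5)
```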

------
andolanra
As far as _learning_ is concerned, I think this is a wonderful idea. I say
this in part because I myself learned on Logo before I taught people
everything from Java to Scheme, and even the simplest visualization tools
could help immeasurably. For example, we had a tool called the Replacement
Modeller that would visualize evaluation and substitution in pure-functional
Scheme snippets, which was great for stepping through complex code and showing
a student what was happening, and it was rocks-and-sticks next to the things
Victor is proposing here.

I'm interested, though, in what the ramifications are for advanced, complex
programming. I am personally a Haskeller, and advanced Haskell regularly deals
with incredibly abstract, difficult-to-visualize structures. Monads are
infamous in this regard, and monads are embarrassingly trivial next to, say,
comonads or iteratees. I have difficulty imagining this sort of model expanded
beyond elementary visualization techniques, and certainly cannot imagine how
one might represent and interact with these code structures.

Victor seems to believe that visual, interactive systems like these should
become standard for _all_ programmers; cf. the section 'These Are Not
Training Wheels.' The idea seems powerful, but: how?

~~~
mamcx
I'm toying with building a new language (more in the "find ideas" stage than
really doing it), and thought: why can't I have events on functions? i.e., why
can't I attach a listener to the entry/exit of a function in a transparent
way? (From <https://gist.github.com/3777791>, where it's still ugly as hell):

    def startDef: self.cache['start'] = now

    def endDef: performance.register(self.function.__name, 'time', now - self.cache['start'])

    hook(sample, pre=startDef, post=endDef)

Now with that ability, it's possible to log and graph the flow of data in the
program in real time. It still lacks the instant play, but it's a good
start...

~~~
andolanra
You'll want to look at defadvice in Common Lisp and Elisp, then, which lets
you attach code to the entry and exit of a function. Python has decorators as
well, which are similar, but the entire purpose of defadvice is to do exactly
what you're talking about.
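
In Python, the entry/exit hooks described above can be sketched with an ordinary wrapper, no language changes required (the `hook` name follows the gist's vocabulary; the implementation is illustrative):

```python
import functools

def hook(func, pre=None, post=None):
    """Wrap func so listeners fire transparently on entry and exit."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if pre:
            pre(func.__name__, args)
        result = func(*args, **kwargs)
        if post:
            post(func.__name__, result)
        return result
    return wrapper

events = []

def sample(x):
    return x * 2

sample = hook(sample,
              pre=lambda name, args: events.append(("enter", name, args)),
              post=lambda name, result: events.append(("exit", name, result)))

result = sample(21)
```

After the call, `events` holds one entry record and one exit record, which is the raw material for the kind of real-time data-flow graph described above.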

------
dkarl
_Alan Perlis wrote, "To understand a program, you must become both the machine
and the program." This view is a mistake, and it is this widespread and
virulent mistake that keeps programming a difficult and obscure art. A person
is not a machine, and should not be forced to think like one._

This is nothing but prejudice, and, ironically, it is contrary to how we work
as human beings. In any field, we celebrate sympathy between an expert and the
matter of his or her expertise. If we say that a pianist "becomes" the piano;
we do not regret the dehumanization of the pianist. If we say that a rider has
learned to "think like" a horse, we do not believe the rider has become less
intelligent thereby. If we say a fireman thinks like a fire, it's a
compliment, not a statement that his mind can be modeled by simple physical
laws. Sympathy is an _expansion_ of one's understanding, not a reduction. For
example, the wonderfully named "Mechanical Sympathy" is a blog that will
improve your grasp of the connection between performance and hardware
architecture without dehumanizing you one bit. Heck, here's a guy who says he
has to "think like a maggot," and he doesn't seem ashamed or degraded in the
least: <http://www.bbc.co.uk/news/uk-england-17700116>

Is it reasonable to ask a programmer to think like a machine? Of course. We
find it natural and admirable for people working with pianos, horses, fires,
or maggots to identify themselves with the subject of their expertise, and
there's no reason why we should make an exception for computers. It's true
that when it comes to usability, for a long time we've known we have to take
very strong negative emotions into account. It isn't an overstatement to say
that some people loathe and fear computers. However, as a general principle, it
seems to me that any educational philosophy grounded in the assumption that
the learners find the subject uniquely distasteful or unworthy is unlikely to
be effective. If someone learning programming finds computers so inherently
distasteful that they are put off by the idea of achieving a more intimate
sympathy with them, then the long-term plan should be to overcome their
aversion, not to try to teach them to understand and control something they
are fundamentally alienated from. Human beings just don't work that way.
Alienation and understanding don't mix.

~~~
szx
Computers are programmable. Pianos, horses, fires are not. Some (lower level)
tasks absolutely require the programmer to think like a machine. Most do not.

We have the power to make it easier for ourselves, and lower the barrier of
entry for others. They might develop that sympathy you speak of later on, but
there's no reason why that should be a prerequisite.

------
adrusi
If such an environment ever existed, it would be amazing.

But I can't think of any way such an environment could exist without having to
program a new environment for every problem. Not from scratch, of course; a
lot of core concepts could be abstracted out. But even with all the
abstractions in place, and all the libraries presenting a standard interface,
I'd imagine a few thousand lines for the features described, just for an
environment limited to 2d graphics.

You would need a new "plugin" for the environment for every different kind of
problem, and you would run into two problems. First, how could these be
composed in a usable way? You don't usually solve problems that are _just_
about 2d graphics or _just_ about parsing text; you're working with 4 or 5 of
these at once. Second, the whole idea behind this is to enhance imagination,
but doesn't the dependence on existing tooling to help you solve a specific
set of problems limit you not by the extent of your imagination, but by the
power of your tools? Currently our imaginations aren't getting much help, but
they're our only limit (that and the speed of the computer, of course).

I'd rather the only real limit to what I can design be myself, not my tools.

------
aaronblohowiak
This is a fascinating response to the Khan Academy curriculum. Some of the
things he raises here are faults of programming languages; I'm still against
the idea of positional parameters. Khan Academy's curriculum is Android to
Bret's iOS: you can copy some of the features, but it isn't a cohesive whole,
because the ideology was not as thoroughly internalized.

~~~
wmf
Yeah, I was surprised that he kept positional parameters and added scaffolding
to explain them. Why not named parameters?

~~~
tree_of_item
I think the goal was to show how _JavaScript_ (and by extension, Khan Academy)
could do the things he's describing. He did show how languages like Smalltalk
get something better via a sort of "named parameters".

------
confluence
> _A live-coding Processing environment addresses neither of these goals.
> JavaScript and Processing are poorly-designed languages that support weak
> ways of thinking, and ignore decades of learning about learning. And live
> coding, as a standalone feature, is worthless._

Whoa, come on Bret, we're getting there; give them a break! I distinctly
remember that this was the work of a couple of interns with the help of Resig.

A couple of things. First, I still don't have live coding for the vast
majority of my programming environments, so that little text box is about 10x
better than the vim/eclipse + run loop with print statements that most of us
use.

Second, JavaScript is brilliant, and lazy ways of thinking are brilliant: you
will not believe how motivating it is to just get shit on the screen as a
learner. I myself have wasted inordinate amounts of time setting up compilers,
interpreters, environments, graphics/audio, etc. when all I wanted to do was
bloody program the thing in my head. Who cares where the files are? Who cares
where the images are? The environment should be designed to get out of my way,
not the other way around.

Most importantly of all, JavaScript is the most forgiving language I have ever
seen, and this is gold. There's a reason Google started with Python, Twitter
with Rails, and Facebook with PHP: no one gives a shit about "strict thinking"
or "brutal languages". That stuff should come way, way later, when you
actually need it.

Strict languages for learners are a case of premature optimisation. My little
brother absolutely loves the new Khan Academy coding environment because it
isn't strict.

------
jules
This is really neat. It does paint a far too optimistic picture, however. The
mini-IDEs that he presents are highly problem specific. That's great when you
are teaching programming and you control exactly what the problem is and what
the IDE does for that problem. But this is presented as a solution for
programming in general (see the section "These are not training wheels", e.g.
"Maybe we don't need a silver bullet. We _just_ need to take off our
blindfolds to see where we're firing.").

The control flow visualisation works great for toy problems when learning
programming, but quickly breaks down in the real world. The iteration counts
become too big to see anything. If you are working with functions that can be
sensibly plotted when the iteration counts get large, that's great, but 99.9%
of code is not like that. You're working with billions of seemingly random
integers, or with strings, or even more complex data structures. How are you
going to visualize that over time? For each problem you can probably come up
with an adequate mini-IDE, but that doesn't really help, because implementing
that mini-IDE is more work than solving the original problem in the first
place. To make this practical you need general-purpose tools with easily
customisable visualisations (and IDE interactions in general).

Another example is the UI for the bouncing ball. Displaying the trajectory of
the ball, faded out like that, works great for an animation or a very simple
game where a single thing changes over time. But what about a more complicated
game where the entire screen changes every frame (as in most 3d games and even
side scrollers)? That's not even considering GUI applications!

This type of visualisation is also highly specific to single imperative loops,
yet the author argues against those. How do you visualize a program structured
functionally? You can try to do something with an unfolded expression tree,
but that quickly gets out of hand too.

All the examples in the post fall into the category "drawing a very simple 2d
scene with at most a singly nested loop". How big a subset of the field of
programming is that? It's also no accident that the author chose _that_
subset: it is the easiest case for this kind of visual interaction. Don't fall
into the trap of extrapolating the results to all of programming, and thinking
we are almost there and the problem lies just in implementing this kind of
IDE. While this is superb work, 99% is still to be discovered.

------
th0ma5
I keep thinking a lot of Bret's points are absolutely wonderful food for
thought, but blur the line between tool and use of a tool so much that they
will never be practical. It is as if he's taking the outcome and suggesting
the language should have known the outcome, but the point of coding is to
enable all kinds of possible outcomes, and that set is not quantifiable before
the fact.

~~~
aaronblohowiak
Except the computer can and does run the code, and can then provide super-
textual information.

~~~
th0ma5
But is that helpful? I mean, I love lots of reference sites, and I've enjoyed
autocomplete and inline "labeling" of functions in IDEs, but I keep thinking
about his talk that went big a while ago, and I keep seeing these things he's
developing as just analytic or test harnesses to zero in on a goal he's
actually already programmed. To ask for our tools to have these qualities
built in assumes they know something about the total picture. For instance,
Processing's language is a small subset, so maybe it isn't the best example,
but someone new to programming in Processing might not realize that specifying
fill 30 times means only the last one is effective for the subsequent drawing.
Anyway, there are a lot of uninformed things you can do with any language, and
I don't know that blurring the lines between input and interface gives us any
new insights into the actual act of coding. I think it is great for analysis
of coding after the fact, however. A JIT compiler or other compile-time
optimizations are great, but they assume a complete and observable solution
that has already been expressed.

Another example that is maybe in the neighborhood of a valid response to your
reply is the heap size in Java. If I know what I'm doing and why, I could set
it rather high to achieve a goal, but mostly I don't mess with it because it's
a great reminder that I'm maybe making a bigger mess than I should be for
whatever problem I'm working on... However, it seems that Victor's ideology is
that the heap should always know how big it should be given anything I might
want to express, and that somehow the halting problem wouldn't apply.

~~~
thedudemabry
Regarding your fill() example, I believe his point was that uninformed
decisions are a reflection of the interface exposed. Since the fill color is
an implicit global variable, and not exposed anywhere in the interface, there
is no way to discover its existence or behavior except via trial and error or
reading the documentation. If the fill color were either exposed via the
programming interface, or were explicit, then no meta-knowledge would be
required to use it.
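
The contrast can be sketched in Python (a hypothetical drawing API, not Processing's actual implementation): the same rectangle call, once depending on a hidden global and once with the dependency in its interface.

```python
# Hypothetical drawing API: implicit global fill state vs. an
# explicit color parameter.

_fill_color = "black"  # hidden global, like Processing's fill state

def fill(color):
    global _fill_color
    _fill_color = color

def rect_implicit(x, y, w, h):
    # the color used here is invisible at the call site
    return ("rect", x, y, w, h, _fill_color)

def rect_explicit(x, y, w, h, color):
    # the dependency is part of the interface itself
    return ("rect", x, y, w, h, color)

fill("red")
fill("blue")  # thirty fill() calls would behave the same way:
r1 = rect_implicit(0, 0, 10, 10)         # only the last fill() matters
r2 = rect_explicit(0, 0, 10, 10, "red")  # no meta-knowledge required
```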

Similarly, the heap size limits in Java are global limits implicit to the
system, not to the program being designed. His solution to the larger problem
(data growing beyond available resources) might involve providing a better
visualization of the data: something more responsive than the feedback loop
provided by heap limits. Getting an OutOfMemoryError doesn't help you
understand how the data grew beyond expectations, even when it is followed by
a heap dump. What if you could better understand the patterns of allocation
during design instead?
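
For what it's worth, Python's standard tracemalloc module already gives a taste of observing allocation patterns while you work, rather than after the fact; a minimal sketch:

```python
import tracemalloc

tracemalloc.start()

# some work whose memory behaviour we want to watch during design
data = [list(range(1000)) for _ in range(100)]

snapshot = tracemalloc.take_snapshot()
current, peak = tracemalloc.get_traced_memory()  # bytes now / at peak
top = snapshot.statistics("lineno")[:3]  # which source lines allocated most

tracemalloc.stop()
```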

------
skybrian
This article is quite thought-provoking. However, I disagree with the notion
that there's something wrong with "unlabeled" programs outside a learning
environment. To learn a language, you need to associate unfamiliar words with
concepts. A text written in French isn't broken because the words aren't all
labelled with their English translations, much as that might be a nice UI for
people learning the language. Nor do you think in a language by translating
each word into English first.

We do a lot of programming using APIs we're not yet fluent in (and may never
be), and that's why IDEs can be so helpful and this isn't black and white.
But at some point you do need to get some core concepts into your head and
communicate them in a reasonably terse language.

------
perfunctory
"In Processing... There are no strong metaphors that allow the programmer to
translate her experiences as a person into programming knowledge. The
programmer cannot solve a programming problem by performing it in the real
world."

We all learn to read at a very early stage in our lives. What is a real world
metaphor for a letter? For a syllable? Or, taking an example from the article,
what's the metaphor for a timeline? Or for time itself for that matter. I'm
sceptical that having a metaphor matters that much.

Also, I have yet to see any evidence that kids who start with Smalltalk learn
programming faster than those who start with BASIC, or Pascal, or anything
else. There are frustratingly few case studies in this area.

------
milkshakes
I really like the last line: _Maybe we don't need a silver bullet. We just
need to take off our blindfolds to see where we're firing._

------
bad_user

        Imagine if the microwave encouraged you to randomly
        hit buttons until you figured out what they did.
    

I don't have to imagine, because that's precisely what I did when I was a
child. As a result, at 6 years old I was the only one in my family able to
configure a video player.

Of course, I've lost my patience since then and I rely more and more on
already-known concepts, but that's one reason why children learn faster than
grown-ups: they start with a blank slate, they have no preconceptions, no
biases, no fear of experiment or failure. You only need to watch a child learn
how to ride a bike, then compare it with a grown-up doing the same.

Parents are also privileged to watch their children learn to use everyday
language. The most marvelous thing happens right before your eyes: your child
does not need to learn grammar rules, or read from a dictionary, or take
lessons from a tutor. All they do is listen, then try out words by themselves
to see how you react.

And you really can't compare a microwave with a programming language. A
microwave is just a stupid appliance with a bunch of buttons for controlling a
state machine. A programming language on the other hand is a _language_ that
can describe anything. And I don't know what the best method is for teaching
programming, but we could probably take some hints from people that learn
foreign languages, especially children.

------
comatose_kid
What an interesting article.

Bret mentions Rocky's boots at the end of his writeup.

When I was in Grade 5, I taught the kindergarteners how to use logic by
running Rocky's Boots on an Apple II. It was an effective way to learn because
of its immediate, graphical feedback. The kids had fun learning.

------
brador
The way to read this is not as a neg on the Khan Academy project but as a push
toward what Bret sees as the absolute solution to the problem being solved,
i.e. a new programming language, which is outside the scope of Khan's project
of teaching.

------
yaxu
Bret Victor is annoyed at his ideas being labelled live coding, but that's
what they are.

Live coding environments are pretty diverse, and there is plenty of prior art
for code timeline scrubbing, tangible values, auto-completion, the
manipulation of history, and many of the other features that Bret argues for.

Some examples: Field - <http://vimeo.com/3001412> SchemeBricks -
<http://blip.tv/nebogeo/dave-griffiths-chmod-x-art-3349411> Overtone -
<http://vimeo.com/22798433>

Live coding isn't just about automatic code interpretation.

That said, other than his strawman beating, I otherwise agree with his thesis,
and enjoy his examples. To advance programming, we can change who programs,
how they do it and what they do it for. All of this is up for grabs. However,
I do think that social interaction in programming environments is an important
piece which he seems to be missing.

------
twelvechairs
It's a smartly written piece, but to me this just scratches the surface. A
couple of issues:

1\. If you are making the conceptual jump from 'text' to 'dynamically
annotated text', why not go a step further and just let people draw a
rectangle with visual tools entirely (like in Illustrator, Inkscape, etc.) and
forget the textual representation?

2\. The real difficulty comes with representing things as they may arise
through a dynamic program, not with just the 'initialization' stage as this
shows. This is where code really really gets complex, and also is much harder
to add to with these visual systems.

[EDIT] To clarify further, relating to comments below - The OP's main intent
is "how do we redesign programming?" [for all programmers] and my comments
relate to this, not just to the use of such techniques for students as a step
towards learning traditional code.

~~~
DigitalJack
1) Because that is not learning programming, which is entirely what this essay
is about. 1b) Learning programming is about learning to reason about problems,
not about learning to use different tools.

2) I think I agree that dynamic programs would be more challenging to do this
with. On the flip side, this demonstration is so far beyond my reasoning that
it doesn't mean much to me. It's all stunning.

~~~
judofyr
1: That is not entirely what he's saying though:

 _A frequent question about the sort of techniques presented here is, "How
does this scale to real-world programming?" This is somewhat like asking how
the internal combustion engine will benefit horses. The question assumes the
wrong kind of change._

 _Here is a more useful attitude: Programming has to work like this.
Programmers must be able to read the vocabulary, follow the flow, and see the
state. Programmers have to create by reacting and create by abstracting.
Assume that these are requirements. Given these requirements, how do we
redesign programming?_

~~~
twelvechairs
Thanks. This was my reading of the OP also and what I was responding to (its
about changing programming paradigms for everyone, not just about learning for
beginners). I've edited my above post to reflect this.

------
agumonkey
Mandatory 'subtext' (from Jonathan Edwards) links

    
    
      - http://www.subtextual.org/
      - http://alarmingdevelopment.org/?p=680
      - http://en.wikipedia.org/wiki/Subtext_%28programming_language%29
    

I wonder if those two know each other.

------
sktrdie
This is all very inspiring and nice, but I hate it that all of his examples
deal with variables that contain numbers.

With numbers it's easy. You can use sliders to increase and decrease their
value. You can see a little preview of the value contained in a variable.

But most of the time variables contain much more complicated information than
just basic numbers. Maybe they're objects, or strings containing large pieces
of HTML.

This type of data is hard to visualize and obtain "immediate feedback" from.
So I think it's still hard to apply the "show the data" concept in a way that
it works well for all kinds of coding exercises, and not just for coding
canvas elements.

------
ianstallings
Can I ask a question - why do I have to hear about a new platform or language
once a week? Is there some problem existing languages aren't solving?
Seriously. It's like this place falls in love with a new platform once a
quarter. Before it was Lisp, Python, Ruby, etc. Now it's Clojure and anything
else that has less than 1000 people actively using it. Did I somehow wander
into the hipster bar of the programmers? I just don't get it. I can't keep
floating from language to language leaving a cluster f __ck of code in my wake
because I'm onto the next big thing.

------
zmitri
Cool demos, but a very long article for what I thought he was trying to
express. A bunch of thoughts -- I enjoy thinking about this, so I'd love some
conversation around any of the points:

If programming is a way of thinking/problem solving, I'm not sure how
supplying the context inline teaches you how to think -- as opposed to
sitting down and figuring out a problem on your own.

My experience has been that the best way to learn to program is to try
something out for yourself. Whether you write it from scratch or use example
code to help you get started. It takes a bit of time, but you get better at
it.

You need to be able to sit down and spend the time thinking to solve a
problem. It's actually quite hard to teach this even in school -- I honestly
believe that the main advantage of taking a CS degree over using the internet
to learn is that you are actually forced to group together and work on
projects, whereas a self-study course would not enforce that.

Often the best way to make something more mainstream is to "dumb it down". I
don't think it's because most people aren't intelligent enough, it's because
it needs to have mass appeal, and therefore needs to interest a wide range of
people. Doing this with programming is quite hard if programming is a way of
thinking -- how can you "dumb down" a way of thinking to make it more
appealing when so much of programming is dealing with detail?

The thing about flow, if/for statements, is that you tend to master them very
quickly. While the visualizations are cool, they have very little usage beyond
the first 3 or 4 lessons.

Very interesting examples, but I don't see these examples helping out much
more than other sites (I agree that sites like CodeAcademy aren't at all what
the press/Mayor Bloomberg/TechCrunch makes them out to be).

~~~
esperluette
I have to disagree ... there are a lot of folks who don't learn well being
thrown in at the deep end of "mess around until you understand it", and for
whom a Bretian visualization of data would be useful over and over again
(especially for bugfinding). And _what_ data needs to be visualized is
different for different people; e.g., I can easily visualize most regex, but a
lot of people love tools like Rubular because they can't. But I have a lot of
trouble intuitively understanding functions like the graphical ones in Bret's
examples.
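
For regular expressions specifically, Python's re module can already dump its parse of a pattern via the re.DEBUG flag, a crude built-in cousin of what Rubular-style tools visualize:

```python
import contextlib
import io
import re

# re.DEBUG prints the compiled structure of the pattern to stdout,
# exposing the repetition and literal nodes the terse syntax hides.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    re.compile(r"a+b", re.DEBUG)

parse_tree = buf.getvalue()
# parse_tree contains lines like "MAX_REPEAT 1 MAXREPEAT" for the "a+" part
```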

As to "dumbing it down", I think programming can definitely have mass appeal,
but there's a lot of "I had to learn closures uphill in the snow both ways"
going on among seasoned programmers -- in the same way that current medical
doctors often valorize their hours and hours of being on call as residents. A
trial by fire may seem useful but in the end you just get a lot of burned
people. But unlike MDs, there's no protective guild for programmers ...

~~~
zmitri
I agree, there are a lot of people who don't learn that well like that. Visual
programming makes it a bit more approachable but whether or not it helps learn
beyond the first couple lessons (or is overkill) -- I'm not sure. With a lot
of these early concepts I feel like just getting one or two reference points
can start a snowball effect. Eg. Tell a philosophy student that object
orientation is like Plato's Theory of Forms.

From personal experience however I still would argue that just building
something is the surest way to go because it requires you to follow through.

After about 2 years of programming full time, I started to develop a sense of
why and when things would go wrong up and down the stack, and I don't really
think it's something you can teach. It's something you get from loads of
accumulated practice -- e.g. oh, that's how indices work in Oracle vs. MySQL;
dynamic proxies on Groovy methods don't work when called from Java; I just
built this but now I see I can refactor and save tons of code next time; what
are **kwargs in Python; etc.

I've tested out little one-offs like Try MongoDB, Try Redis, or interactive JS
tutorials, but I forget what I've just learned until I need to build something
on my own.

------
NickPollard
I think this article raises some brilliant points, and is very well written,
but I also feel that it falls short of the mark Bret was aiming for.

As he himself alludes to, most of what he is teaching is not programming - it
is individual actions. Just as being taught the meaning of individual words
does not teach you to write, being taught what certain functions or statements
do does not teach you to program.

What is important is not spelling, but grammar - the shape of a program. His
parts on Loops and Functions are better on this - the timeline showing loop
instruction order is pretty awesome. However, it's still not perfect. At no
point is the user instructed what a 'function' is, and how to use it. How do
they know that they should be using it? I agree with other commentators who
have suggested that it looks too much like he knows what he is aiming for, and
the tool is designed to aid that.

In fact, my strongest criticism is in regards to his rebuttal to Alan Perlis:

> Alan Perlis wrote, "To understand a program, you must become both the
> machine and the program." This view is a mistake, and it is this widespread
> and virulent mistake that keeps programming a difficult and obscure art. A
> person is not a machine, and should not be forced to think like one.

I'm sorry Bret, but Alan is right. You _do_ need to be able to think like a
machine. Not necessarily an x86 machine, but an abstract turing machine, or a
state machine, or a lambda calculus machine. If you cannot think like the
machine, you cannot outwit the machine. This is incredibly important if you
are relying on the machine to give you feedback on what the system is doing.

In all his examples, very simple things happen, and nothing ever goes wrong
beyond drawing in the wrong place. What happens if he starts causing an
infinite loop? Or creates cycles in a linked list (and remember, sometimes he
may in fact _want_ cycles)?

In "Gödel, Escher, Bach", Douglas Hofstadter suggests that one of the key
ingredients for intelligence is being able to go 'up' a level of abstraction.
Bret's comment about a circle being made up of small steps, and hence
integrating over a differential function, is part of it. A human can recognise
that sequential steps with a consistently changing angle can be viewed as a
circle. A human can realise that certain relationships are iterative,
recursive, self-referential, in a way that (currently) a computer cannot. This
is what needs to be taught, and I fear that what Bret has shown here would not
help in that element.
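
The circle observation can be made concrete: a turtle that repeatedly steps forward and turns by a fixed angle is, step by step, just arithmetic, yet the human-level description is "a circle". A short Python check that the steps really do close into one:

```python
import math

# A turtle-style walk: step forward 1 unit, turn 10 degrees, 36 times.
# Constant turning integrated over a full 360 degrees traces a closed
# circle, so the walk ends back where it started.
x = y = 0.0
heading = 0.0
points = []
for _ in range(36):
    x += math.cos(heading)
    y += math.sin(heading)
    points.append((x, y))
    heading += math.radians(10)

closure_error = math.hypot(x, y)  # distance from the starting point
```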

However, it's still going to be a better intro than anything we have
currently, so I think that in regards to getting people to dip in and try, it
will be a vast help. I just hope that Bret keeps thinking about bridging the
chasm between setting down series' of instructions, and programming.

~~~
pwang
> I'm sorry Bret, but Alan is right. You do need to be able to think like a
> machine.

I would like to bring in another Alan Perlis quote: "You cannot move from the
informal to the formal by formal means."

Programming is the art of formalizing things to a point where they are
executable. Executable by _what_ is the point of contention here. I think you
are saying (and I somewhat agree) that ultimately your programs and ideas have
to execute on a real machine, and as a programmer you need to understand and
model that machine.

OTOH, perhaps what Bret is arguing is that we _should_ make better machines
and software abstractions.

~~~
NickPollard
I don't think it needs to be a _real_ machine in the sense of a physical one,
just in the sense of an execution environment.

What worries me about Bret's tools is that it looks like they make it easier
for someone to produce something without knowing _why_. When you learn maths
at school, you're normally taught to show your working - getting the answer
isn't enough, you need to understand the process. Having so many sliders and
timelines to pull around is fine, but at the end of the day we need to teach
people functions and variables and recursion and combinators and so forth, and
I'm not sure how one does that in this system. In a sense, it is skipping the
architecture stage - working out not just how to build, but _what_ to build in
the first place.

~~~
jeremyjh
I really don't see that at all in what he's shown. Everything there is about
helping people understand how and why the program is working.

------
da02
Sidenote: Dan Ingalls seems to praise Victor's work on making programming
easier: "How I think computers should work and why, said beautifully. Bret
Victor - Inventing on Principle"
<https://twitter.com/daningalls/status/211630799550812160>

------
chefsurfing
"Visualize data, not code. Dynamic behavior, not static structure."

Yes! This reminds me of what Rich Hickey has been enlightening the world about
as well [1]. Bravo Bret! Thank you for writing and sharing these ideas.

[1] <http://www.infoq.com/presentations/Value-Values>

------
ralfn
His interactive demonstrations almost feel like he is reinventing Excel. And I
like it.

This kind of symbiosis between IDE and program code isn't just useful for
teaching, nor for large-scale software development...

It seems extremely useful for "explorations" of data. There is a brilliant
application idea hiding behind these ideas.

------
mikecane
How I wish he was teaching JavaScript. THAT is how you teach. Everything I've
seen/tried online is abysmal.

------
JasonFruit
A lot of these are really neat ideas, but as I read them, I thought: I'd never
have bothered learning programming if all that was available to me was what
he's describing. I like the separation between the problem I'm working on and
the background information I need so I can understand and solve it; having the
problem and background information presented together neither appeals to nor
aids me.

That may be because I've grown accustomed to learning from documentation and
applying it to my work, but I don't think so; I think there's something deeper
going on that may have something to do with the way my mind organizes
information. I wonder if Bret Victor, if he were being honest with himself,
would prefer to learn his way, or the way he actually did.

------
scott_weber
Perhaps Bret Victor's ideas are comparable to something like formal methods:
few doubt their enormous power, but the difficulty is in the extreme effort
one needs to implement them in a project. It is tempting to believe that the
level of instrumentation that Bret proposes could be achieved automatically,
just as it was once dreamed that formal methods could be fully automatic. But
experience with formal methods has shown us that while some of their promise
can be implemented by automatic tools, and this is valuable, realizing their
full potential for a complex project requires substantial, non-reusable
effort.

------
Morendil
One very deep notion in there is "identity within the system". Some people
learned it from Smalltalk; I learned it from LambdaMOO.

I respectfully submit that people who are focusing overmuch attention on Light
Table "because Bret Victor" are mostly missing the point. As Bret points out,
there's a lot you can learn about these things from existing (even old)
systems.

If you already know Smalltalk, Logo, HyperCard and Rocky's Boots (I'd missed
out on this one but it reminds me of Robot Odyssey which I did play), you
could do worse than go and play with LambdaMOO for a little while.

(ETA: it turns out that Robot Odyssey was a sequel to Rocky's Boots.)

------
charlieflowers
Sometimes, you come across something with such an astonishing level of
insight, that it is as if it must have been dropped off by aliens ... because
it is so far ahead of the typical thinking in its field.

This is like that.

------
outworlder
Back in the '80s, when you turned your computer on, you were thrown into a
programming environment (usually BASIC).

I started learning to program at age 8; I just had no idea that the thing I
was doing even had a name - I just typed commands and the computer responded
(no compile, link, run steps).

Took me a while to figure out what 'for' did (I was drawing grids one line at
a time). I still remember what it felt like when I finally figured it out.

An educational programming environment should be installed on every machine.
You never know who's going to get interested in it.

------
andybak
If you really want to label:

    
    
        ellipse(65,50,60,60)
    

Wouldn't using something like Python's keyword arguments be better than some
external labelling?

    
    
        ellipse(radius_x=65, radius_y=50, center_x=60, center_y=60) 
    

More characters but these are just training wheels. Once the learner
understands the basics you can do away with them.

API's and libraries should be designed to allow both forms. Everyone is a
beginner with some aspects of their craft. I'm a beginner when I use a library
I'm not familiar with...
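
In actual Python this already works with a single definition, so the "training wheels" cost nothing at the call site (the ellipse signature here is illustrative, not Processing's real one):

```python
# Illustrative ellipse(): Python accepts both positional and keyword
# calls against the same definition.
def ellipse(center_x, center_y, width, height):
    return (center_x, center_y, width, height)

terse = ellipse(65, 50, 60, 60)                # expert form
labelled = ellipse(center_x=65, center_y=50,
                   width=60, height=60)        # beginner form

# Keywords can also be supplied in any order.
reordered = ellipse(width=60, height=60, center_x=65, center_y=50)
```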

~~~
tree_of_item
I think Bret made it pretty clear that these ideas are not training wheels.
There's a section titled "These are not training wheels". The goal is not to
do away with them but to make programming _about_ them.

~~~
andybak
Ha. I didn't spot that bit...

My point, though, was that allowing keyword and non-keyword forms is good for
everyone. If you can't remember the parameter order you can just use the
keywords (assuming the naming is memorable enough - which might actually be a
can of worms in itself).

------
Hitchhiker
remarkably insightful bits:

"Programming is a way of thinking, not a rote skill. Learning about "for"
loops is not learning to program, any more than learning about pencils is
learning to draw. " [1]

" Transforming flow from an invisible, ephemeral notion into a solid thing
that can be studied explicitly. "

" The create-by-reacting way of thinking could be stated as: start with
something, then adjust until it's right." (It's funny how lean startup could
be compressed into this one bit.)

" Visualize data, not code. Dynamic behavior, not static structure. "

------
modeless
As much as I love Bret Victor's ideas, I feel this is a bit harsh on Khan
Academy. There's nothing wrong with criticism, but it shouldn't be your only
kind of feedback. Even if Khan Academy did nothing right (which I think is
clearly false) they should at least deserve praise for attacking the problem
at all, when so many people are content to ignore it.

If you're trying to lead a revolution in programming, withholding praise from
your strongest supporters isn't the way to go about it.

~~~
phleet
Speaking as one of the interns that worked on the project, I don't really feel
that this was too harsh.

Had he focused on all the things that were wrong with it and torn it to
pieces, I would be inclined to agree, but he's provided a number of very
specific ways in which the environment could be improved.

Some of these ideas were considered and not implemented for practical reasons,
others were left out due to time constraints, and some we honestly just didn't
think of.

On the particular note of practicality, we were trying to make the best thing
we could make exist now. I hope that Khan Academy Computer Science in its
current state looks laughable in a few years - both compared to what it
becomes and compared to what other people have built.

~~~
zethraeus
I'm curious, were you guys ever in contact with Bret during the project?

------
knieveltech
"We change programming. We turn it into something that's understandable by
people."

No, actually we don't. This has been attempted repeatedly since the '70s.
It's not a credible or desirable goal. Programming is complex by nature; all
powerfully flexible systems are. Being capable of (much less excelling at)
mentally modelling complex abstract systems is not a trait that "normal"
people possess. This is neither bad nor wrong. It simply is. Ignoring this is
pure folly.

------
winter_blue
As far as labeling function arguments goes, I've always used an IDE that
supports some kind of feature that gives me exactly that. In Eclipse: Java
(JDT), C/C++ (CDT), Python (PyDev), Go (Goclipse) -- all support the little
pop-up box that appears when you type the name of a function and shows docs
related to that function. This feature is so crucial that I couldn't use a
development environment without it.

~~~
winter_blue
He mentions the context needed for a function call more than once -- it makes
me wonder, doesn't anyone use IDEs? (I don't know why vim/emacs are so popular
when they provide no context - much more crucial to me than editing power.
There's a plug-in for vim that uses ctags to provide context, I hear, but I
don't know many who use it :/)

~~~
brown9-2
IDEs are not as useful for dynamic languages such as JavaScript or Python as
they are for statically typed languages like Java.

Imagine a piece of code like

    
    
      function doSomething(callbackFunc) {
          // ...
          callbackFunc(a, b, c);
      }
    

What context or popups can you display for the positional parameters of a
function call that is only resolvable at runtime?
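
The same limitation is easy to reproduce in Python (the functions here are illustrative): the callable only exists at runtime, so no static inspection of do_something alone can name its parameters.

```python
# do_something cannot know, statically, what callback_func's
# parameters mean: a different callable arrives on every call.
def do_something(callback_func):
    a, b, c = 1, 2, 3
    return callback_func(a, b, c)

def area(width, height, _unused):
    return width * height

def offset(base, delta, scale):
    return (base + delta) * scale

r1 = do_something(area)    # positions mean width, height here...
r2 = do_something(offset)  # ...and something entirely different here
```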

------
gws
I remember being a child and learning BASIC with a manual in a foreign
language (English) I did not understand, and an Italian-English dictionary. It
was thirty years ago (shit, I'm old!) and admittedly my memory is foggy, but I
remember it being fun and easy.

If some kids really need all this hoopla to start programming, I wonder if
they should really try...

------
tednaleid
This is the web version of the talk I saw him give at StrangeLoop yesterday
(<https://thestrangeloop.com/sessions/taking-off-the-blindfold>). I'd highly
recommend the video presentation when it's released (later this year?) on
InfoQ.

------
jiaaro
I think the analogy of looking at a book for its words is somewhat misguided.

Judging a book by the words it contains is wrong if you're judging literature,
but when you're choosing books to teach with, you have to consider the
reader's vocabulary.

In practice, choosing a book based on its words is, in fact, quite common when
you're teaching English.

------
realrocker
I just want to clap with joy after reading this. I learnt Logo when I was 9
years old and those were the best days of my programming life. I still
remember the mad excitement of drawing the first smiley face and the first
house. I just want to ask the new-age educators: Why so serious? It was
supposed to be fun.

------
PeterisP
Very powerful and insightful article. I'm not sure if I agree with everything,
but it's very inspiring nonetheless and I hope that such a learning
environment will be a reality by the time my kids are old enough to think
about programming.

------
perfunctory
"Processing's lack of modularity is a major barrier to recomposition"

JS has functions and objects. How is that not modular? The fact that many Khan
Academy's example programs "are written as one long list of instructions" is
another story.

------
nrbafna
This is incredible. I can only wonder in awe if something like this was
implemented in online courses provided by say Udacity or Coursera. That would
be a revolution in online education.

------
codingwisdom
Man, I couldn't disagree with this essay more. Once I read it, I went out and
wrote my retort. Enjoy! <http://bit.ly/Sdr9Zl>

------
brady747
As someone who has been spending time this year learning to code, I just
wanted to say this essay is EXACTLY what I have been yearning for. Thanks for
putting this together.

------
simonbrown
This reminds me of Up and Down the Ladder of Abstraction:

<http://worrydream.com/LadderOfAbstraction/>

------
debacle
The ability to program is simply the ability to talk on the same level as an
ignorant, autistic, childish, forgetful, narcissistic asshole.

Namely, the computer.

------
lawn
So when will someone make an environment like that? It sounds like a super
good idea and it might even be a good business!

------
Uncompetative
Why do I get the feeling that everyone here is threatened by this attempt to
make programming easier?

Oh, wait... this is Hacker News.

"Geek Central".

------
namin
Does anybody know what tool he is using to make the little demo samples with
the play button?

------
signa11
Although I have only cursorily skimmed the article, it seems to be based
partly on his _excellent_ talk "Inventing on Principle", available here:
<http://vimeo.com/36579366>.

------
mcshaner1
Something bothers me about Bret's writings: He is very big picture (which
isn't bad by any means), but then he often talks in absolutes without
substantiating many of his claims. I suppose speaking in absolutes may be for
rhetorical reasons, something wishy washy is probably less persuasive. He
certainly has some good ideas and a talent for presenting them though.

>Programming is a way of thinking

If teaching programming is meant to teach a way of thinking, how do we ensure
that it transfers to other areas? David Perkins discusses this in his book
"Outsmarting IQ" (pg 224,
[http://books.google.com/books?id=kNbSvy4dQEUC&q=papert#v...](http://books.google.com/books?id=kNbSvy4dQEUC&q=papert#v=snippet&q=papert&f=false)).
Latin was once thought to be a language that taught people how to think, but
the studies didn't show any transfer between learning Latin and other skills.
Obviously, programming isn't Latin, and I'm actually in support of the idea of
teaching programming as a way to teach thinking skills, but any effort to do
so is going to have to address the problem of transfer. One way to potentially
do this is motivation: I think Vygotsky advocated showing children why they
could write (and how they might already be attempting to do so), which would
then give them motivation to learn writing. They'd already understand a reason
for using it...

> Alan Perlis wrote, "To understand a program, you must become both the
> machine and the program." This view is a mistake, and it is this widespread
> and virulent mistake that keeps programming a difficult and obscure art. A
> person is not a machine, and should not be forced to think like one.

There are cases where this is true, but putting it another way, "A teacher is
not a student, and shouldn't be forced to think like one". Any time where a
mind is trying to communicate some concept, there has to be some level of
dialogue or shared context. One could argue that learning programming could
help people understand that others may interpret what they say in a different
way (and why that may occur). I think that is a pretty important concept.

Finally, the US military has funded a lot of research on intelligent tutoring
systems. One of the things a lot of the successful programs have is a means of
getting the user to think more like an expert. The tutor programs often do
this in two ways: by prompting the trainee for a response and then getting
them to compare to something an expert would do, and by providing
feedback/hints (at the right level) as needed. Vygotsky discussed the latter
in "Mind in Society", sometimes all a person needs is a little assistance at
the right time, and then they'll understand why and how to do something. As
for computer-based training systems, "Development of Professional
Expertise" has some interesting papers, though it may not be the best source.

------
allforJesse
Well, that was profoundly inspiring. Time to go read Mindstorms.

------
ArekDymalski
I'm waiting impatiently to see these tips implemented.

------
stefek99
I showed this link to dad... Amazingly clear!

------
enos_feedler
Man these long essays really suck up my day.

------
shinta42
This guy thinks top-down, prefers to visualize things, and definitely thinks
with the right side of his brain.

------
pruett
i finally learned how to tie a tie, thanks bret!

------
ludovicurbain
This article addresses GUI design pretty well.

But it says nothing about what's below, what actually makes that GUI possible:
the backend and all that.

In that sense, I believe Bret is addressing something on another level
entirely, something that _is_ the future, i.e. a dev environment for people
who design stuff.

Not a failed half-step up from C, like Java, Lisp, or all the existing
programming languages.

An unbreakable waterproof abstraction that will enable the less skilled to
create awesome stuff.

Then we will be able to take all the <insert any 1.x-level-of-abstraction
language here, from Java to Ruby> programmers, and send them to a layer where
they belong, instead of leaving them to rot in a system that is both
inadequate for speed and inadequate for productivity.

------
dschiptsov
A programmer _must_ be able to think in terms of how the machine works, how
data structures are represented, how the system is organized, and what other
processes are running and what resources they are sharing.

Not thinking about the machine or the underlying OS is total nonsense and the
cause of problems and suffering.

Imagine a doctor who says, "Doctors must not think about what is inside the
body; they must think in terms of temperature measurements, blood tests and
medicine prescriptions."

Btw, the SICP book treats the subject exceptionally well, and not mentioning
such a fundamental work is an example of ignorance.

 _People understand what they can see_ - yes, they do. That is why we have
box-and-pointer diagrams. That is why we use parentheses.

In other words, all this was solved long ago in the Lisp world.
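For readers who haven't seen box-and-pointer diagrams: here is a minimal
sketch of the cons-cell structure they depict, with Python standing in for
Lisp (the `cons` and `to_python_list` helpers are hypothetical names, not from
the article or SICP):

```python
# A cons cell is a pair: (value, pointer-to-rest). The Lisp list (1 2 3)
# is three chained cells, which a box-and-pointer diagram draws as:
#
#   [1|*]-->[2|*]-->[3|/]
#
def cons(car, cdr):
    """Build one cell: a value plus a pointer to the rest of the list."""
    return (car, cdr)

def to_python_list(cell):
    """Walk the chain of cells, collecting each car along the way."""
    out = []
    while cell is not None:
        car, cdr = cell
        out.append(car)
        cell = cdr
    return out

lst = cons(1, cons(2, cons(3, None)))  # the Lisp list (1 2 3)
print(to_python_list(lst))  # → [1, 2, 3]
```

The diagram in the comment is exactly what the data structure "looks like" in
memory, which is the visibility point the commenter is making.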

