Bret Victor: Learnable Programming (worrydream.com)



There are already two comments here calling this piece "harsh" or "ungracious" towards Khan Academy, which is ridiculous.

The usual HN article that contains criticism is limited to exactly that: some rant that took 10 minutes to write and contains nothing constructive.

Bret Victor put an insane amount of time into this (I can only assume) and is truly advancing mindsets about programming tools and learning. We should all be thankful that he put this much effort into a "criticism" piece.


I agree, and would add that Bret Victor is pointing out what he perceives to be valid flaws in the method Khan Academy uses to teach programming (a live-coding environment, JavaScript, and Processing, which other teaching products happen to use as well). He is not criticizing Khan Academy as a whole; the quality, or lack thereof, of Khan Academy itself is never brought into question or even discussed.


Totally. People are too quick to jump on the "he's ranting" bandwagon.


> Bret Victor is pointing out what he perceives to be valid flaws

Where are the numbers? Where's the testing? Where's the proof that his proposed changes are more effective in teaching a new programmer? Validity comes through numbers, proof, and evidence.


That's a positivist perspective on research and knowledge. While your perspective on validity might fit quantitative research, for an exploratory exposé and for many kinds of qualitative research, some look for other measures of validity and quality. There is an enormous amount of research (papers) on this topic; just look for "qualitative research validity". As it is more a matter of perspective, some incommensurability often seems to creep into these discussions.

In this case, I believe that a professional/experienced educator who has tried teaching introductory programming can, given their own experiences, relate, understand, and build on the concepts in the article to improve their own teaching and understanding of teaching/learning introductory programming. This is sometimes called communicative generalization (see Smaling: http://www.ualberta.ca/~iiqm/backissues/2_1/html/smaling.htm... ).

For example, as a computer science teacher in high school, I recognize a lot of the issues discussed in the article. Reading it, I am able to link these issues and solutions to my experiences in class and decide to what extent it is credible and trustworthy. Furthermore, and more importantly, I can more easily apply the results of this kind of research in my daily practice than I can the results of (rigorous) quantitative research, as the latter is almost always taken out of context and made 'clinical' by fixing variables that aren't fixed in real-life situations (i.e. the classroom, the students (holistic), the environment).


I'm well aware of qualitative research, but he doesn't even have that. And, while I agree that you can translate and interpret the major themes, all you have done at that point is recognize that you have a similar problem and have formed a hypothesis that needs to be tested. You cannot assume that your method of fixing the identified problem will actually work until it is tested.


True. On the other hand, it allows the professional/expert to have a good idea if it might work. And, if so, if it is worth testing out or not. In my opinion the value of this kind of research and/or informed opinion pieces is local in scope and should be treated as such. Don't base your educational policy on this kind of research, for that you'd better look for quantitative and, preferably, longitudinal studies.


You could say something similar about the UI design of things like the iPhone.

Sometimes you have to show people what they want. And then, when they see it, they will know how awesome things are.

Not all projects are started after research and data. That way many projects would never start.


This doesn't make sense, because the UI design of the iPhone WAS tested. They did a ton of research into their solutions. They did exactly the type of testing I'm advocating.


I often hear people talk up how the iPhone was never tested in usability labs with non-Apple employees. Then I hear that the iPhone was heavily tested.

What actually happened? Does anyone know definitively whether or not it was tested by non-Apple employees before launch?


You can find proof in numerous publications from related fields like cognitive psychology and pedagogy. But don't expect the author to gather them for you. He already did an amazing job.


I disagree with the idea that proof of those concepts translates perfectly across all situations. You have to test something before you know if it works or not.


It's amazing how many people have a problem with the idea of testing a solution. I'm simply saying that assumptions need to be challenged and assertions need data to back them up. I'm not saying any of these ideas are inherently wrong; I'm saying they're unproven.

If you down-vote simply because I believe a person should offer proof when they make a claim, you're falling victim to your own biases.


You are not being downvoted because "people have a problem with the idea of testing a solution". You are being downvoted because your tone suggests you categorically dismissed a groundbreaking project because its introductory article wasn't a scholarly work bristling with citations.

"Numbers, proof, evidence" tend to be marks of an excellent article. But an incredible amount of work clearly went into this article, and it presents powerful ideas in a novel and original light; it is highly valuable and fascinating in its own right, even if it is weak in certain areas that academic papers tend to be strong in.

Yes, citations would help tease apart questionable assumptions from reliable ones. Yes, thorough testing would help lend more credence to many of the ideas. Yes, the article is imperfect and could be better in a multitude of ways. Still, the article is highly valuable and fascinating.

Have you heard of the term "strawman argument"? That's what you're doing when you say "It's amazing how many people [are completely unreasonable]". Most people are quite reasonable. If a large community is downvoting you, you should strongly consider the possibility that you, in fact, are the one being unreasonable.


First, this isn't Bret's introductory article on this topic. As far as I know, his first talk about this came in January of this year (watch the Inventing on Principle video on Vimeo). Granted, 7-8 months isn't a super long time, but he has never built/tested his ideas, yet asserts them as if they were proven facts.

Also, I know why I'm being down-voted. Bret is an inspiring person. His Vimeo talk, specifically, is very inspiring. He acts on principle, he believes strongly in his ideals, and there are many people who agree with him. So, the people who like Bret are people who align themselves with his ideals. So, when I challenge his vision by asking for data, people who read it feel like I'm challenging them, personally - like I'm challenging their beliefs (because, well, I am - I'm asking them to prove their beliefs). People respond to that kind of challenge in a negative way. It'd be like if you tried to argue the ideas of Steve Jobs with an Apple fan - they're just not going to listen to you, no matter how solid your arguments are.

I, myself, agree strongly with Bret's principles. He's right that developers need better tools and we need a better way to teach programming to people. However, I try to divorce myself from those emotions and still look for facts and evidence. If someone claims A is better than B, I will look for proof, even if my "gut instinct" tells me that they're right. I very well believe Bret could be right, but that doesn't mean we should all just assume he is without evidence.

My point here is that asking someone to give evidence for their arguments is completely acceptable. But when you challenge the ideas of someone people feel emotionally invested in (because their beliefs and ideals are being challenged), you receive negative feedback. Thus, the down-votes. It's actually a very interesting study in how people can be loyal to someone even without evidence of that person being right.


Russel, your problem here is that we are looking at something that is about to change technology forever, and you are taking the least productive path to discuss it. It's very clear to the majority of us that this is big! It's not tested simply because it's so new and innovative. We are fortunate enough to be the first ones to have this insight shared with us. It's now our job to make amazing projects that put it to use and inherently test/prove it. That's what Bret is trying to do here.

I don't think you really do grasp the magic of what he has come up with. Yes, you saw his video 8 months ago. But if you truly grokked what he is putting forth, you wouldn't make this generic rebuttal about whether it's tested or not. That can be applied to anything.

We come to Hacker News to get the kernels of the newest innovation happening around the world before anyone else. If you require ideas to be tested before you do anything with them, you will be late to the party. Your loss.

Now, Russel, can you please stop standing up for this point? It's unfortunately the first thread on this awesome piece and a waste of our energies to discuss.

Let's discuss how we are going to implement this watershed idea for innovation!!


>"I'm simply saying that assumptions need to be challenged and assertions need data to back them up. "

I hope you have a study to back up that assertion... :p

(Hint: not all knowledge is empirical)


If I were a developer at KA, I would be very grateful for this kind of criticism, no matter how harsh it is. Bret basically did a lot of free research work for them. Some of the ideas are hard to implement, but there are quite a few things that can easily be added to their products.

Innovative ideas are rare and fragile. We should be thankful when someone shares them.


It's important to point out that it was Bret Victor who created this concept, from which Khan Academy took inspiration.

https://vimeo.com/36579366 http://ejohn.org/blog/introducing-khan-cs/


As Bret Victor says in his essay, live coding is not a new concept and he did not create it.

The current trend in live coding seems to have spawned from a discussion between two computer musicians (Fabrice Mogini and Julian Rohrhuber) in 2001, with masses of related prior art going back to Lisp machines, Self, Smalltalk, corewar etc.


Responsive live programming (not just code reload), which is closer to what Bret Victor is doing here, goes back to various visual languages of the 80s and 90s.


Like the ones I mentioned.


You didn't mention any VPLs. I would throw in a few...ThingLab, AgentSheets, LabView, Prograph, Fabrik, and so on. These were the original live languages.


Self is certainly one. http://selflanguage.org/documentation/published/programming-...

But why do you think liveness requires visual programming? I see them as pretty much orthogonal.

Separately, when I see "visual programming language" I see "unusual programming language". When language features become normal (like 2D syntax in Python/Haskell), then we stop calling them visual. A great deal of visual features are just alternative ways of structuring text, ways of constraining syntax, or ways of constructing high-dimensional (and therefore non-visual) graphs. To a large extent, all programming languages are visual.


Why would you call Self a visual language? It has an IDE, but the language is most definitely textual. It's only when you throw in Morphic that the line begins to blur, but you are back to text again if you want to write any code. Many languages we think of as visual are more structured or graphical (graphics baked around text); compare ToonTalk to Scratch! Most languages fall somewhere in a spectrum between heavily visual and heavily textual, but this is my own classification system and there is hardly much consensus on the topic.

Liveness doesn't require visual, of course, but that's where liveness first shows up in history (SketchPad, along with directness). I go into some of the history in my own paper on the topic (http://lamp.epfl.ch/~mcdirmid/mcdirmid07dead.pdf).


I haven't actually used Self, but from reading the paper, the discussion of embodiment, tangible objects, and direct manipulation makes it interesting from a visual perspective.

Personally I think all computer languages are textual by definition, and vary in the amount of extra visual support for syntax and more often secondary notation. The Reactable perhaps pushes beyond this limit by making proximity and orientation part of the syntax, but still is basically about composing textual symbols.

If we're doing that thing, I wrote a chapter on this in my thesis :) http://yaxu.org/thesis/

[ For the benefit of any others, here's a link to Sean's preprint: http://lamp.epfl.ch/~mcdirmid/mcdirmid07live.pdf ]


But that aspect of Self comes mostly from Morphic. Anyways, Self/Morphic were a huge inspiration on my own work. There are deeper distinctions between textual and visual; think of a van Gogh painting vs. a book by Stephen King. They say "a picture is worth a thousand words," but actually, the worths are not very comparable. Text is much better at expressing abstractions than visuals are, which is why most VPLs mix in some text (though there are pure VPLs like SketchPad and ToonTalk).

I skimmed your thesis a few months ago; lots of good work there! But I think we are a bit out of whack on our world views (arts vs. PL), which is why we might not be talking on the same page.

Thanks, I forgot about the DigitalLibrary link :p. If someone is just looking for related work, better to try http://lamp.epfl.ch/~mcdirmid/mcdirmid07dead.pdf (smaller download as it doesn't embed quicktime movies).


Yes we are probably talking past each other here a little, but I enjoy and respect your work on SuperGlue. Not having used Self I don't understand the distinction between it and Morphic.

There clearly is a distinction between textual and visual, but at the same time they support one another: text is made out of pictures, pictures can be described with text (e.g. svg). For me the aim of visual programming language research shouldn't be to disparage or get away from the composition of symbols, but to supplement that with mental imagery. As I see it, mental imagery and language are most valuable in combination.

I'll look into SketchPad and ToonTalk properly though; "truly" visual language is a fascinating area.


Self is a language, Morphic is a GUI toolkit/framework/library. You can play with Morphic implementation in some Smalltalks (I did it in Pharo) and even in JavaScript: http://www.chirp.scratchr.org/blog/?p=34


I agree. There is a lot of constructive criticism in the article, and I'm sure Khan Academy would love this kind of feedback to help improve the platform. Hell, even I learned a couple of syntax usages just watching videos of the learning interface!


He could have delivered the "this is something to build on" message without the "almost worthless" value judgement.


I just found that the analogy of a microwave with blank buttons turned me off as an unnecessary rhetorical straw man. I know what fill means, I know what ellipse means, and I know what numbers are. Discovering what those words mean in a different context is not analogous to blank buttons.


I disagree. You know what buttons are, you know what a microwave is, you know you need to put the food inside and then push some buttons to indicate the time, then press a start button.

The thing that you don't know in the programming example is what those numbers do. I've written Processing.js code before, and I'd be half-guessing if you asked me what each parameter to ellipse() does.


Okay, but what is ellipse(50,25,32,5)?

"ellipse" and numbers are only hints, just as blank buttons and dials are only hints.

Programmers rely a lot on IDE support now to see the parameters...which is exactly what Bret is talking about!


The comparison is still shallow. It is regrettable that most programming languages don't support named parameters, but it's fairly trivial to imagine a language and API:

DrawEllipse(x=50, y=25, width=32, height=5)

No real-time rendering and tracing of values required (I hope!). Furthermore, a decent IDE will bring in the method doc blurb, so the programmer has a richer context to make an informed decision.

I appreciate the work Bret put into the article, and I'm halfway pre-convinced myself. At the end I was somewhat disappointed because most of the stuff he talks about is already present in a decent IDE, perhaps in a less polished form, but present nonetheless.

The deeper put-off is that the article seems to advocate not thinking, instead performing random walks using shiny tools. If someone can't think about their code, no amount of visualizations is going to help. That being said, the potential of enhancing one's thinking abilities by properly chosen visualizations is tantalizing, but I don't think the article makes a good case for it and I haven't yet seen a better approach than old school pen and paper.


> If someone can't think about their code, no amount of visualizations is going to help.

Most of the article tries to make the point that if someone can't visualize what their code is doing, they can't think; they can't reason about it.


Exactly! There are several levels of learning. At the master level one does his job using muscle memory. Graphics help at the early levels, but to be proficient one needs to master the domain. At that level graphics are slowing down the process, not helping it.

Building the right visualisations, ones that hold at the master level as well as at the novice level, holds the promise of expanding our logical abilities far beyond our current ones.


It'll be interesting to read his subsequent comments on graphics state, as extending this example to draw arbitrary ellipses without coordinate changes could make for a simple visual demonstration of function composition, assuming the goal is not "Learn PostScript in floor(π^e - 1) Days."
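To make that concrete, here's a rough sketch of my own (translate/pushMatrix/popMatrix/ellipse are Processing-style calls; the at helper is invented for illustration):

  // Compose a coordinate transform with a drawing function instead of
  // editing the coordinates inside it.
  function at(dx, dy, draw) {
    return function () {
      pushMatrix();                    // save the current graphics state
      translate(dx, dy);               // move the origin
      draw.apply(null, arguments);     // draw relative to the new origin
      popMatrix();                     // restore the old state
    };
  }

  var ellipseAt = at(120, 80, ellipse);
  ellipseAt(0, 0, 50, 100);            // the same ellipse, relocated by composition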


I found myself strongly agreeing with him about fill() specifically. Fill what? It's a verb that performs no action, and is really shorthand for setFillColor, which sets invisible global state. It's not at all obvious from the name what fill() actually does, nor what its parameters are. Only through previous familiarity with graphics APIs and RGB triplets can I infer the meaning, and in a learning environment it's silly to expect that familiarity.
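To make the complaint concrete, here's a minimal sketch of the pattern (my own illustration, not Processing's actual source; renderEllipse is an invented call):

  // The invisible global state the learner never sees
  var fillColor = [255, 255, 255];

  function fill(r, g, b) {
    fillColor = [r, g, b];    // fill() performs no visible action; it only mutates state
  }

  function ellipse(x, y, w, h) {
    // the earlier fill() call only takes effect here, implicitly
    renderEllipse(x, y, w, h, fillColor);
  }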


His argument really seems to be that the relationship between the input and output is not obvious, not that you don't understand the words themselves. The blank button is a fitting analogy to the uncertainty of the arguments.


Sure you do, but I send my kids to KA. To assume a student knows what an ellipse is and what it means to fill it is a leap.


Here's my issue with the "advancement" that Bret is doing. It's all conjecture. He has no product. He has no users. He has theories. Theories that sound good to you and me, but, ultimately, theories that are untested and unproven. There's no evidence that any of these changes would actually help anyone learn to program any better. Who is to say that a person new to coding wouldn't be overwhelmed by the amount of information Bret's system is asking them to process? There's a pretty strong case for cognitive overload when you have lines of code, different code hints that appear not just over every line, but every function and number, a time bar with many different symbols on it, etc. But, again, no benefits or faults can be found with Bret's proposed approach because it's untested and unproven. Conjecture, hypotheses, and assumptions.

This is where Khan Academy excels. They have users. They have real people really trying to learn how to code. This means they can test their assumptions and modify their approach. An imperfect solution delivered today is better than a perfect solution delivered next week.


Khan Academy and Bret Victor are apples and oranges -- they're not even playing the same game. In order to make any major strides, we have to reconcile the need for PRODUCTS, NOW! with the conceptual, exploratory approach taken here.

Real world innovation often makes very small tweaks on proven formulae, whereas bold new ideas can have a long incubation period with seemingly little payoff at first. It's roughly the difference between business and research. While it's not possible to "test" these bare assertions in the sense you imply, they can certainly be argued for and against. And he makes a rather compelling case for it, certainly a more nuanced one than much of the criticism here on HN.

In any event, he's begun sharing some of his code (I believe) and products like Light Table are emerging that appear to draw from very similar inspiration, if not his work directly. It would appear these ideas are gaining some traction.


I disagree that real-world innovation isn't tested, iterated on, and improved. Look at flight and the Wright Brothers. They didn't just run numbers and present ideas. They built prototypes and tested their assumptions. That's what Bret needs to do. He needs to build, test, iterate, and improve upon his ideas.


I think you're misreading my reply. It's not that real world innovation isn't tested and iterated on, quite the opposite. The Wright Brothers are exactly that model and I don't dispute that at all.

What I'm drawing a distinction between is this approach to innovation, which is necessarily iterative and based on incremental improvements to prior art, and another approach which makes larger conceptual leaps and may not have obvious initial practicality. I described these two approaches as being roughly comparable to "business and research", although I admit that's a somewhat crude dichotomy. It is the latter camp that I think Victor falls in.

Both are relevant. For every Wright Brothers you also have a Tim Berners-Lee, whose invention was developed in an international research laboratory (not a startup) and was largely ignored by the public for several years. It took the perfect storm of the late-90s Internet boom for the web to develop into the thing we recognize today.

In any case, it's not as though Victor has been doing nothing but writing papers and photoshopping mockups. If you look for them, you can find videos of him giving talks where he is demonstrating real, live software that implements his ideas. He has been refining them in practice for several years now, and others (such as Light Table's Chris Granger) are doing the same. The question is whether they will catch on, and the jury is still out as far as that's concerned.


You're right, I did misread your reply - I originally thought you said it was innovation that took laboratory time. Though, now that you've explained it further, I'm still not sure I agree with the concept. Pure research follows a specific method - you form ideas, form a hypothesis, create an experiment, and test your hypothesis, then revise and test again. There is nothing, ever, that is not tested. Though, you can certainly make the argument that a few of Bret's projects (such as Tangle) work as small-scale experiments for testing his larger ideas. I may actually concede that point; though it is not exactly the big picture he's talking about, it does test the tools he suggests using to achieve that big picture.

And, I have seen the talks where he demonstrates his software. I just wish he would make it available (open source, sell it, whatever) so that others could use it and test it. Wouldn't you like to try out those tools he shows?


you again, Russel ;). Come on, man. Don't you get it: his point is to inspire the likes of us. It's up to you and me to do something with this. If you're not the one up to the task, fine, but stop putting forth a mindset that won't inspire others to do something with this, e.g. TEST IT!

also, the stuff will work. It's dead obvious that it's the future. Take a step back, take a breath, open your mind and stop trying to backup your initial point about "where's the evidence?" and just agree this is magical and some big things will come out of it.

if you can't see that, and can't allow yourself to be wrong for a second, then fine, you're a regular guy--not someone we expect to see making the amazing startups that will put Bret Victor's stuff to use.


You're reading an imagined reply no one is giving. No one is disagreeing that testing ideas is important. Until you test, you never know if there are problems with your ideas that never would have occurred to you no matter how hard you thought about them, and that makes testing invaluable.

But sometimes some of the problems that testing would reveal, would also be revealed if you just thought and discussed your ideas more, and would cost vastly less time and effort (which is a very high bar when what you're testing is the efficacy of pedagogy for teaching people programming for the very first time). So presenting your ideas in a compelling way and opening up discussion can also be extremely valuable, and that's exactly what Bret Victor did.


I am a user of his ideas. I read through his article without taking a break, which rarely happens these days.


He's also being naive about live coding as a whole: http://toplap.org/?p=212


Yes. His approach presents a proper critique with a path forward. I love what KA is doing, but it is going to be iterative (I presume), and I'd hope they are listening to well thought out ideas like this.


> Programmers, by contrast, have traditionally worked in their heads, first imagining the details of a program, then laboriously coding them.

I don't think this describes most real work done by programmers. Rather, what he says we should do,

> To enable the programmer to achieve increasingly complex feats of creativity, the environment must get the programmer out of her head, by providing an external imagination where the programmer can always be reacting to a work-in-progress.

is exactly what most programmers already do. We usually don't have a nice, interactive environment to do so; it's usually a combination of coding, inspecting results, thinking some more about new issues, coding, inspecting results, and so on until the problem is solved.

In other words, I think that programmers do tend to solve problems by "pushing paint around." I rarely start with a full appreciation of the problem. But in order to gain that understanding, I have to start trying to solve it, which means starting to write some code, and usually looking at results. As I go through this process, the domain of the problem comes into focus, and I understand better how to solve it.

We already do what Bret is talking about, but not at the granularity he is talking about it. For beginners, I can understand why this difference is important. But I certainly solve problems by pushing paint around.

In general, I think this is a fantastic piece for teaching programming, but I don't think (so far) that all of it carries over to experienced programmers. The examples of having an autocomplete interface that immediately shows icons of the pictures they can draw is great for people just learning. But that's too fine-grained for experienced programmers. Chess masters don't need to be shown the legal moves on a board for a bishop; their understanding of the problem is so innate at that point that they no longer play the game in such elementary terms. Similarly, experienced programmers develop an intuition for what is possible in their programming environment, and will solve problems at a higher level than "I need to draw something." That is the reason we put up with existing programming environments.


Bret has written an amazing article, but the world he inhabits is soooooooooo far away I can't ever imagine getting there in my lifetime. As it stands, programming is barely 2-3 levels of abstraction above shoving bits in registers... sometimes even those few layers are slowing us down, and we have to resort to bit-shifting operators and native code every once in a while. Whereas he is talking about 20-30 layers of abstraction. He wants to visually depict code, and then visualize functions, visualize data structures, visualize the connections between functions, and actually visualize how the program is running while it's running!!! Whereas the practical programmer of 2012 is still buried neck-deep in textual stack traces.


> Bret has written an amazing article, but the world he inhabits is soooooooooo far away I can't ever imagine getting there in my lifetime.

True, he doesn't seem afraid to take big leaps. Brilliant, visionary essay. But I don't think it's unrealistic. In my day-to-day (game dev) I spend very little time optimizing (and very rarely need to write any sort of bit shifting); instead, a huge chunk of development is spent slingshotting variables through loops into abstracted rendering directives. The hardest part of my job is by far understanding and visualizing program flows in my head. From the essay:

> Wait. Wait a minute. Were you trying to answer those questions by doing arithmetic in your head? The computer somehow drew that picture, so the computer must have calculated all those scaleFactors itself. Are you seriously recalculating them in your head?

Sadly, yes, very often I am. Depending on the mood I also use a convoluted combination of notebook + pencil, breakpoints, printf statements, and/or isolating the problem by working on a separate program with the problem simplified. And having aids - like some of the great solutions suggested like the timeline and other state/flow displays - would give me tremendous gain today, if they could be integrated in my toolset. I'll give you that I imagine it being quite a bit more challenging technically to make it work with my C/C++/Objective-C environment, but with other more dynamic languages - like Javascript or as suggested: Clojure - I imagine it to be much easier.

This article is, by the way, most likely the best argument I've heard so far for moving to a higher-level language. I like how Bret Victor talks about language (and API; see the autocomplete argument) choices as enablers of better environments; we instead usually think of language choice as something that mitigates how hard this work is to begin with, given the tools we have (see the time and energy this community spends on language choice). If I had access to tools offering this level of abstraction but had to adopt a language I didn't particularly like, the language would become the least of my concerns.


As a counterexample to shoving bits I can point to two things mentioned by Bret: Morphic and Smalltalk. It just so happens that Morphic (referenced in its original Self implementation) was ported to/implemented in Smalltalk (Squeak, Pharo). These are things that exist and work today; I had the pleasure of working with them and have to say they live up to expectations. They're worth trying out.

Another curious example is Erlang, which lets you update the code responsible for some behavior (I mean, of an object) live, without any hassle at all. Then there's ClojureScript, which lets you do the same in the browser (I think; I don't know it very well).
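You can fake a crude version of this in plain JavaScript, too: route behavior through a mutable reference, and a running loop picks up the new code. A toy sketch (all names invented):

  // The loop looks the behavior up through a mutable slot, so reassigning
  // the slot changes the program while it runs.
  var behavior = {
    draw: function (t) { console.log("circle at", Math.sin(t)); }
  };

  setInterval(function () { behavior.draw(Date.now() / 1000); }, 500);

  // Later, from a REPL or a file watcher, swap the implementation live:
  behavior.draw = function (t) { console.log("square at", Math.cos(t)); };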

Foundations of what Bret says are not new, and they are not purely theoretical either. The programmer of 2012 could have been using them for years if he wanted to.

I think Bret is right when he writes that technical possibilities are not the problem; programmers' mentality is. Both Lisp and Smalltalk programmers would, I imagine, welcome Bret's ideas without a shred of hesitation, because they have been working in a similar way since time immemorial. It's just that there are so few of them, and Bret wants to influence programming at large; that's why it seems difficult or novel.


I don't think that Bret is advocating that this is the way all programming should be. That would be strictly impossible, as some functions have totally non-linear effects on their output, so you couldn't easily connect one to the other with handy arrows and highlighted stuff.

The geometry example is chosen because it's easy to make a mapping between the space of function inputs and visual outputs. And each parameter is independent of each other.

Khan Academy has already implemented some of this for JavaScript, running right in the browser.

http://www.khanacademy.org/cs/drawing-bonus-rotation/9064481...

Try clicking on the coloration functions to see previews, or sweeping with the mouse to change the values.

As for the more advanced features, many languages exist today which make this quite possible, at least for teaching tools. Even well-commented Java has the kind of typing and documentation culture that would allow you to implement a lot of this today.


Java's documentation, while it allows documenting each parameter, still doesn't make the actual result of changing a parameter obvious to a program.

In Bret's video, hovering over a parameter shows exactly what would change; that requires either machine-readable metadata (i.e. "x position of top-left corner of the shape") or additional programming to make available. Javadoc as it exists is just a semi-structured and very thin wrapper around HTML. Not really much a computer could do anything with.

The general problem I see with Bret's approach is that while it works very well for restricted programming environments intended for beginners and learners, it falls short for more complex things. But then again, those of us who know half the language framework by heart probably won't need as much guidance or fiddling around. Still, it requires augmenting each and every function with a piece of visualisation code, or enough metadata that a development environment can apply the visualisation itself.
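As a rough illustration, that metadata could be as simple as something like this (an invented format, not Javadoc or any real tool's):

  // Each parameter carries enough structure for an environment to label
  // arguments inline or preview what dragging one would change.
  var ellipseMeta = {
    name: "ellipse",
    params: [
      { name: "x",      meaning: "x of centre",  affects: "horizontal position" },
      { name: "y",      meaning: "y of centre",  affects: "vertical position" },
      { name: "width",  meaning: "total width",  affects: "horizontal size" },
      { name: "height", meaning: "total height", affects: "vertical size" }
    ]
  };

  // An IDE could render ellipseMeta.params[0].affects next to the first
  // argument without ever executing the function.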


He's suggesting we, as tech-startup entrepreneur types, go out and imagine the actual solutions that can work across a number of problems, based on the examples he's provided.

I'm absolutely so surprised by the lack of imagination here at HN. This is startup ideas gold right here, and none of you seem to have the mental capabilities to dig it out and do something with it. You all want to find something wrong with it.

Ever since I've been tracking Bret Victor, my head has been spinning with tons of practical ideas we can implement today to make us more productive at programming. Light Table is a fantastic start. What's the problem here, guys! You guys just don't want to admit the way you've been coding thus far will soon be obsolete and you've been wasting your time. Yes, your skills will eventually be worthless.


This is what he is suggesting for learning to program. I don't think he is suggesting that general-purpose programming languages be built with these aids, only an idealized learning language.


Read the end of the article. He definitely thinks this is the future of programming.


Well, it certainly is the past, I see no problem with it becoming the future. At last.


Oh yeah you are right, my mistake!


And the more abstract your knowledge and understanding, the harder it becomes to visualize. It's relatively easy to visually explain classical mechanics, but not so for quantum mechanics.


There is nothing new here. Before you downvote: this is actually a huge compliment to Bret. As he's said before, he is inventing on principle, not inventing for you to download and install his latest hack. His principles have been consistent (and, imho, right), and this is another view into them. But if this opened up some huge new insight for you, then you haven't been paying close enough attention.

He's always been right and I hope he continues to have patience while he continues his conversation with the world as the world misunderstands his ideas. Unfortunately many people are going to latch on to the examples in his demo movies, and the important parts of the essay will fly over their heads. (The most important part of this essay being, of course, to read Mindstorms.)

All of his creative output points to the same core message: programming today is broken because it is not designed. His various essays, talks, and so on are just alternative "projections" of this thesis. This is a sign of clear thinking.

He's given us all the tools we need to solve this problem. These tools are the mental framework he lays out, not the specific concrete flavor he demoed in his latest talk or essay.

The hard part is not building IDEs or visualizations, it's having the guts to throw everything out and start over again, knowing it's going to be a mess for a long time and it will take years before things start to make sense again. It's convincing yourself that most of what you know is useless and that many of your assumptions are wrong.

Why do that when you can just download the latest whiz bang framework and start hacking shit together taking advantage of the life-long skill you've acquired at firing bullets with a blindfold on?

It's scary to be a newborn again, especially when you're in a place where few have been before (and those that have, are largely not around anymore.)


I don't understand what's wrong with people building on his ideas, taking the subset they think they can implement and, yes, sometimes bastardizing his lofty ideas into quick hacks because something is better than nothing. That's how ideas spread. That's how people get shit done.


> His various essays, talks, and so on are just alternative "projections" of this thesis. This is a sign of clear thinking.

See http://vimeo.com/36579366 for a well-known presentation about "inventing on principle". Fast forward to the 10:45 or 23:00 mark to see some interesting examples of this principle he follows.


Wow! What an awesome critique. I'm in awe.

First off, rather than just saying Khan Academy missed the point, Mr. Victor goes over it in extreme detail, with full examples and ideas on how to do it better.

Second, he really went into some detail about how to think about things. Not just the solutions but ideas and ways of thinking to come up with better solutions.

Third, he's set the bar for critiques higher than I've ever seen. Next time I want to critique something I'm going to feel at least some responsibility to give good and clear examples of both what I think is wrong and what I think would be better with reasons.

Fourth, even if I never write software to help learning programming or help programming advance in general I'll certainly be influenced in my API and system designs by this post.

Thank you, Mr. Victor.


I totally agree. Check out the rest of his site too, he is a genius.


I wasn't aware of Bret Victor before this post hit HN. I feel like I've found a hidden goldmine of interface design thinking. For example, his Magic Ink article about avoiding user interaction for information software is a contrarian gem: http://worrydream.com/MagicInk/


Beautiful and inspirational, and yet...

Sometimes becoming able to hold the 'invisible state' in memory is the skill to learn.

Consider the 'dual N-back' drilling which seems to improve working memory, and then also (by some studies) other testable fluid intelligence. The whole point is holding more and more hidden state in your mind. (To 'show the state' would defeat the purpose of the game.)

Similarly, sometimes struggling with the material is the best way to internalize it.

Consider some studies that show noisy, hard-to-read text (as on a poorly-printed copy) is better remembered by readers than clean typography. Apparently, the effort of making out messy printing also strengthens the recall.

So I'd be interested in seeing comparative testing of the vivid 'Victor' style, and other styles that are more flawed in his analysis, to see which results in the most competence at various followup dates.

We might see that the best approach is some mix: sometimes super-vivid interactivity, for initial explanation or helping students over certain humps, but other times intentionally-obscurantist presentations, to test/emphasize recall and the ability to intuit/impute patterns from minimal (rather than maximal) clues.


An environment such as the one Victor styled would prove unfeasible for big systems (big as in millions of lines of code); hell, auto-complete has a hiccup when lines start getting into the hundreds of thousands.

The tools he proposes seem very beginner- and RAD-oriented (even if he claims otherwise). I've seen IDEs choke on smaller code bases, and this has not only auto-complete and auto-update but also state/frame/time tracking built in. There is no way in hell it can work for existing languages. Maybe some kind of VM that remembers all its previous states, all function call times and orders, and then updates them as the programmer changes them.
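For a toy picture of what that VM would do (illustrative only; real omniscient debuggers share structure instead of copying whole states):

  // Apply each step to a copy of the previous state and keep every snapshot,
  // so scrubbing the timeline is just indexing into the history array.
  function record(steps, initialState) {
    var history = [initialState];
    for (var i = 0; i < steps.length; i++) {
      var prev = JSON.parse(JSON.stringify(history[history.length - 1])); // crude deep copy
      history.push(steps[i](prev));
    }
    return history; // history[n] is the full state after n steps
  }

  var states = record(
    [function (s) { s.x += 1; return s; },
     function (s) { s.y *= 2; return s; }],
    { x: 0, y: 1 }
  );
  // states[1] -> { x: 1, y: 1 }, states[2] -> { x: 1, y: 2 }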


Yes, in a large system, many of Victor's techniques might only be usable in isolatable areas, like well-mocked unit-tests.

Regarding "some kind of VM that remembers all its previous states", check out 'Omniscient Debugging': http://www.lambdacs.com/debugger/


Why do systems need millions of lines of code?

If you can build a complete operating system + major programs in 20,000 lines of code, what system should need a million?

http://www.vpri.org/pdf/tr2008004_steps08.pdf

(They're not quite down to 20,000 yet, but they're getting there)


No one said they need millions of lines of code. However, large "enterprise" applications have a way of growing like cancer. This tool won't be particularly useful for such codebases.


If you had this tool, maybe you wouldn't build applications that way anymore. It's at least a possible outcome.


We just need to design better IDEs and languages. Ya, large code bases suck but not that much. We can actually compile code pretty quickly, and we are only just beginning to study memoization techniques (which you are alluding to at the end of your post). Also, language is a very big deal: you can't really expect as much from the IDE on C++ as you can from C#.

It can scale, we have plenty of smart people to make it scale. The important problem here is "scale what?" I think that is the genius of Bret Victor's innovations.


"Consider some studies that show noisy, hard-to-read text (as on a poorly-printed copy) is better remembered by readers than clean typography. Apparently, the effort of making out messy printing also strengthens the recall."

In the book 'Thinking, Fast and Slow', studies are presented showing that the number of right answers increases if the question is harder to read.

The conclusion is that if you need more effort and consciousness to comprehend a question, then you are already in a mode of thinking that makes it easier to reach the right answer.


Thanks for your post. I was about to write something similar. While the post is surely very impressive, I am not sure I learn better when I click on things. The visualizations make it nice to see what is going on, but writing code that does not work, and fixing it until it is error-free, always was (and still is) the most effective way to learn.


The ability to hold invisible state is important, but at the same time you are mixing things in the learning process, and that makes learning hard. Try learning to speak a new language while learning to juggle... this is definitely difficult. By taking smaller steps, the learner is able to make better progress. First the tools... then the imagination.


Couple random thoughts:

1. Is Bret Victor now the Linus of cutting-edge programming environments?

2. I don't have enough experience with Light Table or the Khan Academy environment to know whether Khan is just a first step on the way to something like Bret's vision, or more of a diversion. I was fairly impressed with the Khan env in my limited time with it.

3. I HATE telling people they shouldn't speak their mind and/or say what they think is the truth, and I don't think Bret shouldn't have written anything. But it's difficult not to seem ungracious. John Resig clearly knows what he's doing, at least in the general sense, and he was gushing with praise for Bret, while this reply basically says John did everything wrong.

If Bret feels that way, I truly believe he should say it, but that doesn't make it fun. This is the essay equivalent of cringe humor I guess. Hilarious/Informative while making you feel bad.


1. I think Victor is the Engelbart of our generation

2. The Academy's program just makes "guess and check" easier, it does not fundamentally present additional information (intermediary computational state) to the user

3. He said that he had to respond because he was cited as the inspiration. It is as if I said "this post is inspired by gfunk911" and then filled it with things that you disagree with.


> 1. Is Bret Victor now the Linus of cutting-edge programming environments?

Linus actually implements his solutions to things that he rants about, and releases his code, so that analogy isn't quite right. Bret gives us nice big-picture ideas and leaves the implementation for others.

I think many of the specific ideas mentioned by Bret will quickly fall apart when trying to actually implement something non-trivial. But that's okay, it's useful to have a really inspiring big-picture vision.


Or you could check out his site and notice "tangle - explorable explanations made easy", which is a Javascript library that implements the very sorts of interactions he's talking about.

http://worrydream.com/#!/Tangle


Light Table should be an interesting implementation to follow then. It seems to be guided by many of the big-picture ideas of Bret Victor, but at the same time has to somehow make money as a real product.


Not to take away from Bret's ideas, because they're great, but here are my responses.

1. I'd say that there's a significant difference between Bret and Linus. Specifically, when Linus believes that developers need better tools, we get git. When Bret believes developers need better tools, we get... blog posts and videos showing faked functionality. Now, that's not to say this won't change in the future. But, as of right now, Bret is starting to look like an "idea guy" in a world of "doers."

3. The reason I'm irked by this post is that I'm under the impression that Bret was specifically asked by Khan Academy to consult on the project (I'm under this impression because this is what I was told). He chose not to consult. So, all of these things he's pointing out now, he had the opportunity to affect and change before they were ever released to the public. He had the opportunity to completely revolutionize how new developers learn, and he chose to turn away from it. If he's so passionate about the topic, why would he turn it down? Regardless of the reason, to me it's a case of "speak now or forever hold your peace." He had the chance to speak and chose not to; turning around after the fact and saying what it could have been is too little, too late.


Linus needs his RMS and vice versa. Don't frame this as 'empty ideas' vs 'noble industry'. Both are necessary.

There are many reasons, none of which need to be justified to the public, as to why a person may turn down an opportunity.

His article made a short criticism of KA missing the forest for the trees and then provided a very long and detailed analysis of ways to improve it. For free. So everyone benefits.

I suspect the videos in his article were not dummies, mocks, or fakes, but actual working code. If you visit his website you will see a mind-numbing amount of examples and demonstrations in the same vein and length as the article. The effort required to fake these would be truly enormous. Again, his reasons for not releasing code don't need to be justified to us.


I don't see this as ungracious. It's not as if he's saying Khan Academy went in the wrong direction. He's just noting improvements that could be made on their current environment. Every new "beginner" programming environment that I've seen has similar flaws. Bret's just pointing them out.


I really enjoyed Bret's article. I don't necessarily agree with all of it but the main argument is quite sound.

Bret writes: "People understand what they can see." which is true for some people but not for all people. I've got one daughter who is very verbal, one very visual. They learn differently. This is a minor nit though; his exploration of the 'code' / 'example' model is good.

I particularly liked the commentary on something like:

   ellipse(60, 50, 50, 100) 
            \   \   \    \
             \   \   \    +- What does this mean?
              \   \   +----- Or this,
               \   +-------- Or this,
                +----------- Or this?
(We'll see how that comes out in the formatting)

TOPS-20 had a really interesting macro language for programming commands, it was the inspiration for a lot of self-describing command line interfaces like the ones made popular on Cisco gear. Basically you could write it like

   DRAW ellipse AT X=60 Y=50 THAT IS 50 HIGH, 100 WIDE
But all of the 'fill text' was really unnecessary for the parser so if you wrote:

   ellipse 60 50 50 100
It would be just as intelligible. The point being that the training wheels got out of your way whenever you wanted them to, and if you were ever stuck you could type ? and it would tell you what your choices were.

Not enough learning environments put this sort of dynamically sizing help into the system where it is needed such that it helps novices and doesn't slow down experts.


In some languages, like Perl, a named-parameter form is becoming more common. If I were to rewrite the example in Perl, it would look like this:

  ellipse(x=>60, y=>50, height=>50, width=>100);
Yes, it is more typing, but you don't need fancy auto-complete to give you hints when reading.

There's also nothing stopping us from making a "training wheels" interface in JavaScript:

  drawEllipse({x:60, y:50, height:50, width:100});

  function drawEllipse(o) {
    // note: Processing's ellipse() takes (x, y, width, height)
    ellipse(o.x, o.y, o.width, o.height);
  }


Ah, but doesn't this have the same problems all NLP systems have? It gives the illusion of flexibility that gets shattered every time the user allows himself to believe in it. What if I prefer to write them in a different order? Or, if some version of named parameters is in use, what if I like some other synonym for height? Strict syntax is a good thing as long as it is also brittle.

Now, an IDE that generates training text like that on the fly and allows you to fill in the values without actually storing the training text would be nice. Something like IntelliSense popups, but inline and expanded.


> Now, an IDE that generates a training text like that on the fly and allows you to fill in the values without actually storing the training text would be nice. Something like intellisence popups, but inline and expanded.

That's the best idea I've heard in a long time. The IDE already has the information, but hovering over every function call with the mouse to get parameter information is a PITA and breaks the flow of reading. A hotkey to inline them on all your code at once would be brilliant.


The problem with making the "training wheels" optional is that people will leave them off. All of a sudden your code is not useful or readable to anyone unfamiliar with it. Even yourself: sure, you might know what the parameters in `ellipse 60 50 50 100` do right now, but what if you haven't worked on drawing code for 6 months?

It seems to me like what he's talking about is less a bicycle with training wheels than a self-balancing Segway. A bicycle with training wheels is annoying to anyone but a beginner... what he's talking about is something that isn't annoying to a pro, even if they don't lean on it as much.


As far as learning is concerned, I think this is a wonderful idea. I say this in part because I myself learned on Logo before I taught people everything from Java to Scheme, and even the simplest visualization tools could help immeasurably. For example, we had a tool called the Replacement Modeller that would visualize evaluation and substitution in pure-functional Scheme snippets, which was great for stepping through complex code and showing a student what was happening, and it was rocks-and-sticks next to the things Victor is proposing here.

I'm interested, though, in what the ramifications are for advanced, complex programming. I am personally a Haskeller, and advanced Haskell regularly deals with incredibly abstract, difficult-to-visualize structures. Monads are infamous in this regard, and monads are embarrassingly trivial next to, say, comonads or iteratees. I have difficulty imagining this sort of model expanded beyond elementary visualization techniques, and certainly cannot imagine how one might represent and interact with these code structures.

Victor seems to believe that visual, interactive systems such as these should become standard for all programmers; cf. the section 'These Are Not Training Wheels.' The idea seems powerful, but: how?


I'm toying with building a new language (more in the "find ideas" stage than actually doing it), and thought: why can't I have events on functions? I.e., why can't I attach a listener to the entry/exit of a function, in a transparent way? (From https://gist.github.com/3777791, where it is still ugly as hell:)

   def startDef: self.cache['start'] = now

   def endDef: performance.register(self.function.__name, 'time', now - self.cache['start'])

   hook(sample, pre = startDef, post = endDef)

With that ability, it's possible to log the flow of data through the program as a graph, in real time. It will still lack the instant play, but it's a good start...
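For what it's worth, plain JavaScript can already get close with a higher-order wrapper; a sketch (this hook helper is invented, not a library call):

  // Wrap a function with pre/post listeners without touching its body.
  function hook(fn, listeners) {
    return function () {
      var ctx = {};                          // per-call scratch space, like self.cache
      if (listeners.pre) listeners.pre(ctx);
      var result = fn.apply(this, arguments);
      if (listeners.post) listeners.post(ctx);
      return result;
    };
  }

  // Usage: time a function transparently.
  var sample = hook(function (x) { return x * x; }, {
    pre:  function (ctx) { ctx.start = Date.now(); },
    post: function (ctx) { console.log("sample took", Date.now() - ctx.start, "ms"); }
  });
  sample(4);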


You'll be wanting to look at defadvice in Common Lisp and elisp, then, which let you attach code to the entry and exit of a function. Python has decorators, as well, which are similar, but the entire purpose of defadvice is to do exactly what you're talking about.


"Aspect Oriented Programming"


> Alan Perlis wrote, "To understand a program, you must become both the machine and the program." This view is a mistake, and it is this widespread and virulent mistake that keeps programming a difficult and obscure art. A person is not a machine, and should not be forced to think like one.

This is nothing but prejudice, and, ironically, it is contrary to how we work as human beings. In any field, we celebrate sympathy between an expert and the matter of his or her expertise. If we say that a pianist "becomes" the piano, we do not regret the dehumanization of the pianist. If we say that a rider has learned to "think like" a horse, we do not believe the rider has become less intelligent thereby. If we say a fireman thinks like a fire, it's a compliment, not a statement that his mind can be modeled by simple physical laws. Sympathy is an expansion of one's understanding, not a reduction. For example, the wonderfully named "Mechanical Sympathy" is a blog that will improve your grasp of the connection between performance and hardware architecture without dehumanizing you one bit. Heck, here's a guy who says he has to "think like a maggot," and he doesn't seem ashamed or degraded in the least: http://www.bbc.co.uk/news/uk-england-17700116

Is it reasonable to ask a programmer to think like a machine? Of course. We find it natural and admirable for people working with pianos, horses, fires, or maggots to identify themselves with the subject of their expertise, and there's no reason why we should make an exception for computers. It's true that, when it comes to usability, we've long known we have to take very strong negative emotions into account. It isn't an overstatement to say that some people loathe and fear computers. However, as a general principle, it seems to me that any educational philosophy grounded in the assumption that the learners find the subject uniquely distasteful or unworthy is unlikely to be effective. If someone learning programming finds computers so inherently distasteful that they are put off by the idea of achieving a more intimate sympathy with them, then the long-term plan should be to overcome their aversion, not to try to teach them to understand and control something they are fundamentally alienated from. Human beings just don't work that way. Alienation and understanding don't mix.


Computers are programmable. Pianos, horses, fires are not. Some (lower level) tasks absolutely require the programmer to think like a machine. Most do not.

We have the power to make it easier for ourselves, and lower the barrier of entry for others. They might develop that sympathy you speak of later on, but there's no reason why that should be a prerequisite.


If such an environment ever existed, it would be amazing.

But I can't think of any way that such an environment could exist without having to program a new environment for every problem. Not from scratch, of course; a lot of core concepts could be abstracted out. But even with all the abstractions in place, and all the libraries presenting a standard interface, I'd imagine a few thousand lines for the features described, just for an environment limited to 2d graphics.

You would need a new "plugin" for the environment for every different kind of problem, and you would run into two problems. First, how could these be composed in a usable way? You don't usually solve problems that are just about 2d graphics or just about parsing text; you're working with 4 or 5 of these at once. Second, the whole idea behind this is to enhance imagination, but doesn't depending on existing tooling to solve a specific set of problems limit you not by the extent of your imagination, but by the power of your tools? Currently our imaginations aren't getting much help, but they're our only limit (that and the speed of the computer, of course).

I'd rather the only real limit to what I can design be myself, not my tools.


This is a fascinating response to the Khan Academy curriculum. Some of the things he is raising here are faults of programming languages; I'm still against the idea of positional parameters. Khan Academy's curriculum is Android to Bret's iOS: you can copy some of the features, but it isn't a cohesive whole, because the ideology was not as thoroughly internalized.


Yeah, I was surprised that he kept positional parameters and added scaffolding to explain them. Why not named parameters?


I think the goal was to show how _JavaScript_ (and by extension, Khan Academy) could do the things he's describing. He did show how languages like Smalltalk get something better via a sort of "named parameters".


That isn't fair. Khan implemented a working system, compared to a vague idea.


> A live-coding Processing environment addresses neither of these goals. JavaScript and Processing are poorly-designed languages that support weak ways of thinking, and ignore decades of learning about learning. And live coding, as a standalone feature, is worthless.

Whoa, come on Bret, we're getting there; give them a break! I distinctly remember that this was the work of a couple of interns with the help of Resig.

A couple of things - I still don't have live coding for the vast majority of my programming environments - so that little text box is about 10x better than the vim/eclipse + run loop with print statements that most of us use.

Second, JavaScript is brilliant - lazy ways of thinking are brilliant - you will not believe how motivating it is to just get shit on the screen as a learner. I myself have wasted inordinate amounts of time setting up compilers, interpreters, environments, graphics/audio, etc., when all I want to do is bloody program the thing in my head. Who cares where the files are? Who cares where the images are? The environment should be designed to get out of my way - not the other way around.

Most importantly of all - JavaScript is the most forgiving language I have ever seen - and this is gold. There's a reason Google started with Python, Twitter with Rails, and Facebook with PHP - no one gives a shit about "strict thinking" or "brutal languages" - that stuff should come way, way later, when you actually need it.

Strict languages for learners are a case of premature optimisation. My little brother absolutely loves the new Khan Academy coding environment/system because of the fact that it isn't strict.


This is really neat. It does paint a far too optimistic picture, however. The mini-IDEs that he presents are highly problem specific. That's great when you are teaching programming and you control exactly what the problem is and what the IDE does for that problem. But this is presented as a solution for programming in general (see the section "These are not training wheels", e.g. "Maybe we don't need a silver bullet. We just need to take off our blindfolds to see where we're firing.").

The control flow visualisation works great for toy problems when learning programming, but quickly breaks down in the real world. The iteration counts become too big to see anything. If you are working with functions that can be sensibly plotted, that's great even when the iteration counts get large, but 99.9% of code is not like that. You're working with billions of seemingly random integers, or with strings, or even more complex data structures. How are you going to visualize that over time? For each problem you can probably come up with an adequate mini-IDE, but that doesn't really help, because implementing that mini-IDE is more work than solving the original problem in the first place. To make this practical you need general-purpose tools with easily customisable visualisations (and IDE interactions in general).

Another example is the UI for the bouncing ball. Displaying the trajectory of the ball faded out like that works great for an animation or a very simple game where a single thing changes over time, but how about a more complicated game where the entire screen changes every frame (as in most 3d games and even side-scrollers)? That's not even considering GUI applications!

This type of visualisation is also highly specific to single imperative loops, yet the author argues against exactly that style. How do you visualize a program structured functionally? You can try to do something with an unfolded expression tree, but that quickly gets out of hand too.

All the examples in the post fall into the category "drawing a very simple 2d scene with at most a singly nested loop". How big a subset of the field of programming is that? It's also no accident that the author chose that subset: it is the easiest case for this kind of visual interaction. Don't fall into the trap of extrapolating the results to all of programming, and of thinking we are almost there and the problem lies just in implementing this kind of IDE. While this is superb work, 99% is still to be discovered.


I keep thinking a lot of Bret's points are absolutely wonderful food for thought, but blur the line between tool and use of a tool so much that they will never be practical. It is as if he's taking the outcome and suggesting the language should have known the outcome, but the point of coding is to enable all kinds of possible outcomes, and that set is not quantifiable before the fact.


Except the computer can and does run the code, and can then provide super-textual information.


But is that helpful? I mean, I love lots of reference sites, and have enjoyed autocomplete and inline "labeling" of functions in IDEs, but I keep thinking about his talk that went big a while ago, and keep thinking of these things he's developing as just analytic or test harnesses for zeroing in on a goal that he's actually already programmed. So asking for our tools to have these qualities built in assumes the tool knows something about the total picture. For instance, Processing's language is a small subset, so maybe it isn't the best example, but someone new to programming in Processing might not realize that specifying fill 30 times means that only the last one is effective for the subsequent drawing. Anyway, there are a lot of uninformed things you can do with any language, and I don't know that blurring the lines between input and interface gives us any new insights for the actual act of coding. I think it is great for analysis of coding after the fact, however. Like a JIT compiler or other compile-time optimizations are great, but they assume a complete and observable solution that has been expressed already.

Another example that is maybe in the neighborhood of a valid response to your reply is the heap size in Java. If I know what I'm doing and why, I could set it rather high to achieve a goal, but mostly I don't mess with it, because it's a great reminder that I'm making a bigger mess than maybe I should be for whatever problem I'm working on. However, it seems that Victor's ideology is that the heap should always know how big it should be given anything I might want to express, and that somehow the halting problem wouldn't apply.


Regarding your fill() example, I believe his point was that uninformed decisions are a reflection of the interface exposed. Since the fill color is an implicit global variable, and not exposed anywhere in the interface, there is no way to discover its existence or behavior except via trial and error or by reading the documentation. If the fill color were either exposed via the programming interface or made explicit, then no meta-knowledge would be required to use it.
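
A toy contrast in Python (a hypothetical fill/rect API, not Processing's actual one):

  _fill = (0, 0, 0)  # hidden global, like Processing's fill state

  def fill(r, g, b):
      global _fill
      _fill = (r, g, b)

  def rect(x, y, w, h):
      # reads hidden state; nothing at the call site reveals it exists
      print("rect at", (x, y), "size", (w, h), "color", _fill)

  def rect_explicit(x, y, w, h, color):
      # the same state made explicit, and therefore discoverable
      print("rect at", (x, y), "size", (w, h), "color", color)

  fill(255, 0, 0)
  rect(10, 10, 50, 50)                        # red, but only because of the line above
  rect_explicit(10, 10, 50, 50, (255, 0, 0))  # no meta-knowledge required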

Similarly, the heap size limits in Java are global limits implicit to the system, not to the program being designed. His solution to the larger problem (data growing beyond available resources) might involve providing a better visualization of the data: something more responsive than the feedback loop provided by heap limits. Getting an OutOfMemoryError doesn't help you understand how the data grew beyond expectations, and it is always followed by a heap dump. What if you could better understand the patterns of allocation during design instead?
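
For a taste of what inspecting allocation patterns during development can look like with today's tools, Python's standard tracemalloc module is a rough analogy (not Victor's proposal, and the Java-side tooling differs):

  import tracemalloc

  tracemalloc.start()

  data = [list(range(1000)) for _ in range(100)]  # the growth we want to observe

  snapshot = tracemalloc.take_snapshot()
  for stat in snapshot.statistics("lineno")[:3]:
      print(stat)  # which source line allocated how much, before anything blows up

  tracemalloc.stop()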


This article is quite thought-provoking. However, I disagree with the notion that there's something wrong with "unlabeled" programs outside a learning environment. To learn a language, you need to associate unfamiliar words with concepts. A text written in French isn't broken because the words aren't all labelled with their English translations, much as that might be a nice UI for people learning the language. Nor do you think in a language by translating each word into English first.

We do a lot of programming using APIs we're not yet fluent in (and may never be), and that's why IDEs can be so helpful and this isn't black and white. But at some point you do need to get some core concepts into your head and communicate them in a reasonably terse language.


"In Processing... There are no strong metaphors that allow the programmer to translate her experiences as a person into programming knowledge. The programmer cannot solve a programming problem by performing it in the real world."

We all learn to read at a very early stage in our lives. What is a real world metaphor for a letter? For a syllable? Or, taking an example from the article, what's the metaphor for a timeline? Or for time itself for that matter. I'm sceptical that having a metaphor matters that much.

Also, I have yet to see any evidence that kids who start with Smalltalk learn programming faster than those who start with Basic, or Pascal, or anything else. There are frustratingly few case studies in this area.


I really like the last line: Maybe we don't need a silver bullet. We just need to take off our blindfolds to see where we're firing.


    Imagine if the microwave encouraged you to randomly
    hit buttons until you figured out what they did.
I don't have to imagine, because that's precisely what I did when I was a child. As a result, at 6 years old I was the only one in my family able to configure a video player.

Of course, I've lost my patience since then and I rely more and more on already known concepts, but that's one reason why children learn faster than grownups - because they start with a blank slate, they have no preconceptions, no biases, have no fear of experiments and of failure. You only need to watch a child learn how to ride a bike, then compare it with a grown-up doing the same.

Parents are also privileged to watch their children learn how to use everyday language. The most marvelous thing happens right before your eyes, as your child does not need to learn grammar rules, or read from a dictionary, or take lessons from a tutor. And during this time all they are doing is listening, then trying out words by themselves to see how you behave.

And you really can't compare a microwave with a programming language. A microwave is just a stupid appliance with a bunch of buttons for controlling a state machine. A programming language on the other hand is a language that can describe anything. And I don't know what the best method is for teaching programming, but we could probably take some hints from people that learn foreign languages, especially children.


What an interesting article.

Bret mentions Rocky's boots at the end of his writeup.

When I was in Grade 5, I taught the kindergarteners how to use logic by running Rocky's Boots on an Apple II. It was an effective way to learn because of the immediate, graphical feedback. The kids had fun learning.


The way to read this is not as a knock on the Khan Academy project but as a push toward what Bret sees as the real solution to the problem being solved, i.e. a new programming language, which is outside the scope of Khan's project of teaching.


Bret Victor is annoyed at his ideas being labelled live coding, but that's what they are.

Live coding environments are pretty diverse, and there is plenty of prior art for code timeline scrubbing, tangible values, auto-completion, the manipulation of history, and many of the other features that Bret argues for.

Some examples:

  Field - http://vimeo.com/3001412
  SchemeBricks - http://blip.tv/nebogeo/dave-griffiths-chmod-x-art-3349411
  Overtone - http://vimeo.com/22798433

Live coding isn't just about automatic code interpretation.

That said, strawman beating aside, I agree with his thesis and enjoy his examples. To advance programming, we can change who programs, how they do it, and what they do it for. All of this is up for grabs. However, I do think that social interaction in programming environments is an important piece, which he seems to be missing.


It's a smartly written piece, but to me this just scratches the surface. A couple of issues:

1. If you are making the conceptual jump from 'text' to 'dynamically annotated text', why not go a step further and just let people draw a rectangle entirely with visual tools (as in Illustrator, Inkscape, etc.) and forget the textual representation?

2. The real difficulty comes with representing things as they may arise through a dynamic program, not with just the 'initialization' stage as this shows. This is where code really really gets complex, and also is much harder to add to with these visual systems.

[EDIT] To clarify further, relating to comments below - The OP's main intent is "how do we redesign programming?" [for all programmers] and my comments relate to this, not just to the use of such techniques for students as a step towards learning traditional code.


1) Because that is not learning programming, which is entirely what this essay is about. 1b) Learning programming is about learning to reason about problems, not about learning to use different tools.

2) I think I agree that dynamic programs would be more challenging to do this with. On the flip side, this demonstration is so far beyond my reasoning that it doesn't mean much to me. It's all stunning.


1: That is not entirely what he's saying though:

A frequent question about the sort of techniques presented here is, "How does this scale to real-world programming?" This is somewhat like asking how the internal combustion engine will benefit horses. The question assumes the wrong kind of change.

Here is a more useful attitude: Programming has to work like this. Programmers must be able to read the vocabulary, follow the flow, and see the state. Programmers have to create by reacting and create by abstracting. Assume that these are requirements. Given these requirements, how do we redesign programming?


Thanks. This was my reading of the OP also, and what I was responding to (it's about changing programming paradigms for everyone, not just about learning for beginners). I've edited my post above to reflect this.


True. I mistakenly thought I had reached the bottom of his essay when I hadn't.


1. Because a big part -- perhaps the biggest part -- of "learning to program" is understanding how to move between the picture in your head and the text that makes it real. From both directions: "What does this code do?" and "What code do I need to write to make this happen?"

Purely visual programming environments are, IMO, noble lies.* You want to start with an environment that echoes the final state, not one that's totally dissimilar.

e.g., training wheels vs. computer bicycle simulator.

*: The exception to this is when you're teaching children ~13 or younger, in my experience. It's always a question of what mental models a student has at their disposal.

2. By the time a student is at this stage, they have more sophisticated, general models the teacher can rely on. In this essay Bret is working in a world where someone might not have the right mental model for assignment, variables, looping, etc.


1) He shows an example of this. The last one in http://worrydream.com/LearnableProgramming/#react

2) I think the examples with the various timelines, wrapping code in functions or for loops, and replacing constants with variables go a long way in that direction


Thanks. I'd missed the first one.

The timelines are good, but what I mean by 'dynamic' is something with user input that affects objects. It's easy to show how a loop can draw the same shape in lots of different ways, but what happens if the loop might draw different things depending on what the user inputs while the program is running? This gets tricky.


Mandatory 'subtext' (from Jonathan Edwards) links

  - http://www.subtextual.org/
  - http://alarmingdevelopment.org/?p=680
  - http://en.wikipedia.org/wiki/Subtext_%28programming_language%29
I wonder if those two know each other.


This is all very inspiring and nice, but I hate that all of his examples deal with variables that contain numbers.

With numbers it's easy. You can use sliders to increase and decrease their value. You can see a little preview of the value contained in a variable.

But most of the time variables contain much more complicated information than just basic numbers. Maybe they're objects, or strings containing large pieces of HTML.

This type of data is hard to visualize and obtain "immediate feedback" from. So I think it's still hard to apply the "show the data" concept in a way that it works well for all kinds of coding exercises, and not just for coding canvas elements.


Can I ask a question - why do I have to hear about a new platform or language once a week? Is there some problem existing languages aren't solving? Seriously. It's like this place falls in love with a new platform once a quarter. Before it was Lisp, Python, Ruby, etc. Now it's Clojure and anything else that has less than 1000 people actively using it. Did I somehow wander into the hipster bar of the programmers? I just don't get it. I can't keep floating from language to language leaving a cluster fck of code in my wake because I'm onto the next big thing.


Cool demos, but a very long article for what I thought he was trying to express. A bunch of thoughts; I enjoy thinking about this, so I'd love some conversation around any of the points:

If programming is a way of thinking/problem solving, I'm not sure how supplying the context in line teaches you how to think -- as opposed to sitting down and figuring out a problem on your own.

My experience has been that the best way to learn to program is to try something out for yourself. Whether you write it from scratch or use example code to help you get started. It takes a bit of time, but you get better at it.

You need to be able to sit down and spend the time thinking to solve a problem. It's actually quite hard to teach this even in school -- I honestly believe that the main advantage of taking a CS degree over using the internet to learn is that you actually are forced to group together and work on projects, whereas a self-study course would not enforce that.

Often the best way to make something more mainstream is to "dumb it down". I don't think it's because most people aren't intelligent enough; it's because it needs to have mass appeal, and therefore needs to interest a wide range of people. Doing this with programming is quite hard if programming is a way of thinking -- how can you "dumb down" a way of thinking to make it more appealing, when a lot of programming is dealing with detail?

The thing about flow, if/for statements, is that you tend to master them very quickly. While the visualizations are cool, they have very little usage beyond the first 3 or 4 lessons.

Very interesting examples, but I don't see these examples helping out much more than other sites (I agree that sites like CodeAcademy aren't at all what the press/Mayor Bloomberg/TechCrunch make them out to be).


I have to disagree ... there are a lot of folks who don't learn well being thrown in at the deep end of "mess around until you understand it", and for whom a Bretian visualization of data would be useful over and over again (especially for bugfinding). And what data needs to be visualized is different for different people; e.g., I can easily visualize most regex, but a lot of people love tools like Rubular because they can't. But I have a lot of trouble intuitively understanding functions like the graphical ones in Bret's examples.

As to "dumbing it down", I think programming can definitely have mass appeal, but there's a lot of "I had to learn closures uphill in the snow both ways" going on among seasoned programmers -- in the same way that current medical doctors often valorize their hours and hours of being on call as residents. A trial by fire may seem useful but in the end you just get a lot of burned people. But unlike MDs, there's no protective guild for programmers ...


I agree, there are a lot of people who don't learn well that way. Visual programming makes it a bit more approachable, but whether it helps learning beyond the first couple of lessons (or is overkill), I'm not sure. With a lot of these early concepts I feel like just getting one or two reference points can start a snowball effect. E.g., tell a philosophy student that object orientation is like Plato's Theory of Forms.

From personal experience however I still would argue that just building something is the surest way to go because it requires you to follow through.

After about 2 years of programming full time, I started to develop a sense of why and when things would go wrong up and down the stack, and I don't really think it's something you can teach. It's something you get from loads of accumulated practice -- eg. Oh that's how indices work in Oracle vs MySQL, dynamic proxies on Groovy methods don't work when called from Java, I just built this but now I see I can refactor and save tons of code next time, what are kwargs in python, etc.

I've tested out little one-offs like try mongodb, try redis, or interactive JS tutorials, but I forget what I've just learned until I need to build something on my own.


As a newbie/wannabe programmer this completely resonates with me. It's not so much 'dumbing down' as beginning with the end in mind and providing frameworks for thinking: it's a lot easier to put the pieces of a puzzle together when you know what the picture is before you start.


I think this article raises some brilliant points, and is very well written, but I also feel that it falls short of the mark Bret was aiming for.

As he himself alludes to, most of what he is teaching is not programming; it is individual actions. Just as being taught the meaning of individual words does not teach you to write, being taught what certain functions or statements do does not teach you to program.

What is important is not spelling, but grammar - the shape of a program. His parts on Loops and Functions are better on this - the timeline showing loop instruction order is pretty awesome. However, it's still not perfect. At no point is the user instructed what a 'function' is, and how to use it. How do they know that they should be using it? I agree with other commentators who have suggested that it looks too much like he knows what he is aiming for, and the tool is designed to aid that.

In fact, my strongest criticism is in regards to his rebuttal to Alan Perlis:

> Alan Perlis wrote, "To understand a program, you must become both the machine and the program." This view is a mistake, and it is this widespread and virulent mistake that keeps programming a difficult and obscure art. A person is not a machine, and should not be forced to think like one.

I'm sorry Bret, but Alan is right. You do need to be able to think like a machine. Not necessarily an x86 machine, but an abstract Turing machine, or a state machine, or a lambda-calculus machine. If you cannot think like the machine, you cannot outwit the machine. This is incredibly important if you are relying on the machine to give you feedback on what the system is doing.

In all his examples, very simple things happen, and never go wrong more than drawing in the wrong place. What happens if he starts causing an infinite loop? Or creates cycles in a linked list (and remember, sometimes he may in fact want cycles).

In "Godel, Escher, Bach", Douglas Hofstadter suggests that one of the key ingredients for intelligence is being able to go 'up' a level of abstraction. Bret's comment about a circle being made up of small steps, and hence integrating over a differential function, is part of it. A human can recognise that sequential steps with a consistently changing angle can be viewed as a circle. A human can realise that certain relationships are iterative, recursive, self-referential, in a way that (currently) a computer cannot. This is what needs to be taught, and I fear that what Bret has shown here would not help in that element.

However, it's still going to be a better intro than anything we have currently, so I think that in regards to getting people to dip in and try, it will be a vast help. I just hope that Bret keeps thinking about bridging the chasm between setting down series' of instructions, and programming.


> I'm sorry Bret, but Alan is right. You do need to be able to think like a machine.

I would like to bring in another Alan Perlis quote: "You cannot move from the informal to the formal by formal means."

Programming is the art of formalizing things to a point where they are executable. Executable by what is the point of contention here. I think you are saying (and I somewhat agree) that ultimately your programs and ideas have to execute on a real machine, and as a programmer you need to understand and model that machine.

OTOH, perhaps what Bret is arguing is that we should make better machines and software abstractions.


I don't think it needs to be a real machine in the sense of a physical one, just in the sense of an execution environment.

What worries me about Bret's tools is that it looks like they make it easier for someone to produce something without knowing why. When you learn maths at school, you're normally taught to show your working; getting the answer isn't enough, you need to understand the process. Having so many sliders and timelines to pull around is fine, but at the end of the day we need to teach people functions and variables and recursion and combinators and so forth, and I'm not sure how one does that in this system. In a sense, it skips the architecture stage: working out not just how to build, but what to build in the first place.


I really don't see that at all in what he's shown. Everything there is about helping people understand how and why the program is working.


Sidenote: Dan Ingalls seems to praise Victor's work on making programming easier: "How I think computers should work and why, said beautifully. Bret Victor - Inventing on Principle" https://twitter.com/daningalls/status/211630799550812160


"Visualize data, not code. Dynamic behavior, not static structure."

Yes! This reminds me of what Rich Hickey has been enlightening the world about as well [1]. Bravo Bret! Thank you for writing and sharing these ideas.

[1] http://www.infoq.com/presentations/Value-Values


His interactive demonstrations almost feel like he is reinventing Excel. And I like it.

This kind of symbiosis between IDE and program code isn't just useful for teaching, nor for large-scale software development...

It seems extremely useful for "explorations" of data. There is a brilliant application idea hiding behind these ideas.


How I wish he was teaching JavaScript. THAT is how you teach. Everything I've seen/tried online is abysmal.


A lot of these are really neat ideas, but as I read them, I thought: I'd never have bothered learning programming if all that was available to me was what he's describing. I like the separation between the problem I'm working on and the background information I need so I can understand and solve it; having the problem and background information presented together neither appeals to nor aids me.

That may be because I've grown accustomed to learning from documentation and applying it to my work, but I don't think so; I think there's something deeper going on that may have something to do with the way my mind organizes information. I wonder if Bret Victor, if he were being honest with himself, would prefer to learn his way, or the way he actually did.


A programmer must be able to think in terms of how a machine works, how data structures are represented, how the system is organized, and what other processes are running and what resources they are sharing.

Not thinking about the machine or the underlying OS is total nonsense and the cause of problems and suffering.

Imagine a doctor who says, "Doctors must not think about what is inside the body; they must think in terms of temperature measurements, blood testing, and medicine prescriptions."

Btw, the SICP book treats the subject exceptionally well, and not mentioning such fundamental work is an example of ignorance.

People understand what they can see - yes, they do. That is why we have box-and-pointer diagrams. That is why we use parentheses.

In other words, all this was solved long ago in the Lisp world.


Perhaps Bret Victor's ideas are comparable to something like formal methods: few doubt their enormous power, but the difficulty is the extreme effort needed to apply them to a project. It is tempting to believe that the level of instrumentation Bret proposes could be achieved automatically, just as it was once dreamed that formal methods could be fully automatic. But experience with formal methods has shown that while some of their promise can be delivered by automatic tools, and this is valuable, realizing their full potential for a complex project requires substantial, non-reusable effort.


One very deep notion in there is "identity within the system". Some people learned it from Smalltalk; I learned it from LambdaMOO.

I respectfully submit that people who are focusing overmuch attention on Light Table "because Bret Victor" are mostly missing the point. As Bret points out, there's a lot you can learn about these things from existing (even old) systems.

If you already know Smalltalk, Logo, HyperCard and Rocky's Boots (I'd missed out on this one but it reminds me of Robot Odyssey which I did play), you could do worse than go and play with LambdaMOO for a little while.

(ETA: it turns out that Robot Odyssey was a sequel to Rocky's Boots.)


Sometimes, you come across something with such an astonishing level of insight, that it is as if it must have been dropped off by aliens ... because it is so far ahead of the typical thinking in its field.

This is like that.


Back in the 80's, when you turned your computer on, you were thrown into a programming environment (usually Basic).

I started learning to program at age 8, I just had no idea that the thing I was doing even had a name - I just typed commands and the computer responded (no compile, link, run steps).

Took me a while to figure out what 'for' did (I was drawing grids one line at a time). I still remember what it felt like when I finally figured it out.

An educational programming environment should be installed on every machine. You never know who's going to get interested in it.


If you really want to label:

    ellipse(65,50,60,60)
Wouldn't using something like Python's keyword arguments be better than some external labelling?

    ellipse(radius_x=65, radius_y=50, center_x=60, center_y=60) 
More characters, but these are just training wheels. Once the learner understands the basics you can do away with them.

APIs and libraries should be designed to allow both forms. Everyone is a beginner with some aspects of their craft. I'm a beginner when I use a library I'm not familiar with...
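
In Python the dual form falls out of ordinary parameter definitions; a toy sketch (parameter names copied from the snippet above, not Processing's actual signature):

  def ellipse(radius_x, radius_y, center_x, center_y):
      print("ellipse", (radius_x, radius_y), "at", (center_x, center_y))

  ellipse(65, 50, 60, 60)                                      # terse, for the fluent
  ellipse(radius_x=65, radius_y=50, center_x=60, center_y=60)  # labelled, for the learner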


I think Bret made it pretty clear that these ideas are not training wheels. There's a section titled "These are not training wheels". The goal is not to do away with them but to make programming about them.


Ha. I didn't spot that bit...

My point was though that allowing keyword and non-keyword forms is good for everyone. If you can't remember the parameter order you can just use the keywords (assuming the naming is memorable enough - which might actually be a can of worms in itself)


I personally like

  ellipse({axes: {horiz:65, vert:50}, center: {x:60, y:60}})
The function takes a specifiable object, and It's All JSON^tm. It also nests objects in objects in a satisfying, somewhat intuitive way.

Argument keywords are not reified and don't require special forms. Instead, they just happen to be the labels of an object.

I can see why you might not want to start with this concept for pedagogical reasons (you have to know a little about defining a Javascript object). But I would think you would want to move to this soon, so you can also get the programmer's special feeling of creating abstractions that make code disappear.


Named keywords help, but they don't give a better overview of what each part represents. In other words, they lower the barrier, but not by much (a novice programmer might still scratch his head and say... Radius what? Of x? And why center y? Is x vertical or horizontal? From where do I calculate?).

However, his examples do strike me as amazingly ambitious. How would one go about implementing a custom way to tell the application "Hey, I'm width! And I'm height! Calculate us from this and that point!"? I'd pay good money for someone to truly deliver on Victor's ideas. Light Table might be a good start, but it's nowhere near completing even 30% of the ideas presented.


Remarkably insightful bits:

"Programming is a way of thinking, not a rote skill. Learning about 'for' loops is not learning to program, any more than learning about pencils is learning to draw."

"Transforming flow from an invisible, ephemeral notion into a solid thing that can be studied explicitly."

"The create-by-reacting way of thinking could be stated as: start with something, then adjust until it's right." (It's funny how lean startup could be compressed into this one bit.)

"Visualize data, not code. Dynamic behavior, not static structure."


As much as I love Bret Victor's ideas, I feel this is a bit harsh on Khan Academy. There's nothing wrong with criticism, but it shouldn't be your only kind of feedback. Even if Khan Academy did nothing right (which I think is clearly false) they should at least deserve praise for attacking the problem at all, when so many people are content to ignore it.

If you're trying to lead a revolution in programming, withholding praise from your strongest supporters isn't the way to go about it.


Speaking as one of the interns that worked on the project, I don't really feel that this was too harsh.

Had he focused on all the things that were wrong with it and torn it to pieces, I would be inclined to agree, but he's provided a number of very specific ways in which the environment could be improved.

Some of these ideas were considered and not implemented for practical reasons, others were left out due to time constraints, and some we honestly just didn't think of.

On the particular note of practicality, we were trying to make the best thing we could make exist now. I hope that Khan Academy Computer Science in its current state looks laughable in a few years - both compared to what it becomes and compared to what other people have built.


I'm curious, were you guys ever in contact with Bret during the project?


I can see how it can be read as harsh.

What I see is someone who cares very deeply about the message he is trying to get out. He cares so much that when someone falls short while citing his work as inspiration, he feels it's doing the world a disservice not to address the shortcomings.


"We change programming. We turn it into something that's understandable by people."

No, actually we don't. This has been attempted repeatedly since the 70's. It's not a credible or desirable goal. Programming is complex by nature; all powerfully flexible systems are. Being capable of (much less excelling at) mentally modelling complex abstract systems is not a trait that "normal" people possess. This is neither bad, nor wrong. It simply is. Ignoring this is pure folly.


As far as labeling function arguments goes, I've always used an IDE that supports some kind of feature that gives me exactly that. In Eclipse, Java (JDT), C/C++ (CDT), Python (PyDev), and Go (Goclipse) all support the little pop-up box that appears when you type the name of a function and shows docs related to that function. This feature is so crucial that I couldn't use a development environment without it.


He mentions the context needed for a function call more than once -- it makes me wonder, doesn't anyone use IDEs? (I don't know why vim/emacs are so popular when they provide no context - much more crucial to me than editing power. There's a plug-in for vim that uses ctags to provide context, I hear, but I don't know many who use it :/)


They do not, by default. I guess the issue here is that we have too many different environments, so it would be impractical.

If you do take some effort, you end up with something like:

http://emacsrocks.com/e11.html


IDEs are not as useful for dynamic languages such as JavaScript or Python as they are for statically typed languages like Java.

Imagine a piece of code like

  function doSomething(callbackFunc) {
      ... 
      callbackFunc(a, b, c);
  }
What context or popups can you display for the positional parameters of a function call that is only resolvable at runtime?
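
In Python at least, optional type hints have since given IDEs something to work with for exactly this case; a sketch as an analogy (typing.Callable is real stdlib, the function names here are made up, and this doesn't fix JavaScript):

  from typing import Callable

  # Declaring the callback's shape gives an IDE parameter context,
  # even though the concrete function is only known at runtime.
  def do_something(callback: Callable[[int, int, int], None]) -> None:
      callback(1, 2, 3)

  def report(a: int, b: int, c: int) -> None:
      print(a + b + c)

  do_something(report)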


I remember being a child and learning Basic with a manual in a foreign language (English) that I did not understand, and an Italian-English dictionary. It was thirty years ago (shit, I'm old!) and admittedly my memory is foggy, but I remember it being fun and easy.

If some kids really need all this hoopla to start programming, I wonder if they should really try...


This is the web version of the talk I saw him give at StrangeLoop yesterday (https://thestrangeloop.com/sessions/taking-off-the-blindfold). I'd highly recommend the video presentation when it's released (later this year?) on InfoQ.


I think the analogy of looking at a book for its words is somewhat misguided.

Judging a book by the words it contains is wrong if you're judging literature, but when you're choosing books to teach with, you have to consider the reader's vocabulary.

In practice, choosing a book based on its words is, in fact, quite common when you're teaching English.


I just want to clap with joy after reading this. I learnt Logo when I was 9 years old, and those were the best days of my programming life. I still remember the mad excitement of drawing the first smiley face and the first house. I just want to ask the new-age educators: why so serious? It was supposed to be fun.


Very powerful and insightful article. I'm not sure if I agree with everything, but it's very inspiring nonetheless and I hope that such a learning environment will be a reality by the time my kids are old enough to think about programming.


"Processing's lack of modularity is a major barrier to recomposition"

JS has functions and objects. How is that not modular? The fact that many of Khan Academy's example programs "are written as one long list of instructions" is another story.


This is incredible. I can only wonder in awe what it would be like if something like this were implemented in the online courses provided by, say, Udacity or Coursera. That would be a revolution in online education.


Man, I couldn't disagree with this essay more. Once I read it, I went out and wrote my retort. Enjoy! http://bit.ly/Sdr9Zl


As someone who has been spending time this year learning to code, I just wanted to say this essay is EXACTLY what I have been yearning for. Thanks for putting this together.


This reminds me of Up and Down the Ladder of Abstraction:

http://worrydream.com/LadderOfAbstraction/


The ability to program is simply the ability to talk on the same level as an ignorant, autistic, childish, forgetful, narcissistic asshole.

Namely, the computer.


So when will someone make an environment like that? It sounds like a super good idea and it might even be a good business!


Why do I get the feeling that everyone here is threatened by this attempt to make programming easier?

Oh, wait... this is Hacker News.

"Geek Central".


Does anybody know what tool he is using to make the little demo samples with the play button?


Although I have only cursorily skimmed the article, it seems to be based partly on his excellent talk "Inventing on Principle", available here: http://vimeo.com/36579366.


Something bothers me about Bret's writings: he is very big picture (which isn't bad by any means), but then he often talks in absolutes without substantiating many of his claims. I suppose speaking in absolutes may be for rhetorical reasons; something wishy-washy is probably less persuasive. He certainly has some good ideas and a talent for presenting them, though.

>Programming is a way of thinking

If teaching programming is meant to teach a way of thinking, how do we ensure that it transfers to other areas? David Perkins discusses this in his book "Outsmarting IQ" (pg 224, http://books.google.com/books?id=kNbSvy4dQEUC&q=papert#v...). Latin was once thought to be a language that taught people how to think, but the studies didn't show any transfer between learning Latin and other skills. Obviously, programming isn't Latin, and I'm actually in support of the idea of teaching programming as a way to teach thinking skills, but any effort to do so is going to have to address the problem of transfer. One way to potentially do this is motivation: I think Vygotsky advocated showing children why they could write (and how they might already be attempting to do so), which would then give them motivation to learn writing. They'd already understand a reason for using it...

> Alan Perlis wrote, "To understand a program, you must become both the machine and the program." This view is a mistake, and it is this widespread and virulent mistake that keeps programming a difficult and obscure art. A person is not a machine, and should not be forced to think like one.

There are cases where this is true, but putting it another way, "A teacher is not a student, and shouldn't be forced to think like one". Any time where a mind is trying to communicate some concept, there has to be some level of dialogue or shared context. One could argue that learning programming could help people understand that others may interpret what they say in a different way (and why that may occur). I think that is a pretty important concept.

Finally, the US military has funded a lot of research on intelligent tutoring systems. One of the things a lot of the successful programs have is a means of getting the user to think more like an expert. The tutor programs often do this in two ways: by prompting the trainee for a response and then getting them to compare it to what an expert would do, and by providing feedback/hints (at the right level) as needed. Vygotsky discussed the latter in "Mind in Society": sometimes all a person needs is a little assistance at the right time, and then they'll understand why and how to do something. As far as computer-based training systems go, "Development of Professional Expertise" has some interesting papers, though it may not be the best source.


Well, that was profoundly inspiring. Time to go read Mindstorms.


I'm waiting impatiently to see these tips implemented.


I showed this link to dad... Amazingly clear!


Man these long essays really suck up my day.


This guy thinks top-down, prefers to visualize things, and definitely thinks with the right side of his brain.


I finally learned how to tie a tie, thanks Bret!


This article addresses GUI design pretty well.

But it does nothing about what's below, what actually makes that GUI possible, backend and all that.

In that sense, I believe Bret is addressing something on another level entirely, something that _is_ the future, i.e. a dev environment for people who design stuff.

Not a failed half-step up from C, like Java, Lisp, or all the existing programming languages.

An unbreakable waterproof abstraction that will enable the less skilled to create awesome stuff.

Then we will be able to take all the <insert any 1.x-levels-of-abstraction language here, from Java to Ruby> programmers and send them to a layer where they belong, instead of leaving them to rot in a system that is inadequate for both speed and productivity.



