
Working Toward Bret Victor's “Learnable Programming” - Permit
http://blog.comealive.io/Working-Toward-Bret-Victors-Learnable-Programming/
======
dkersten
_How do we generalize this solution such that it helps programmers building
compiler platforms like LLVM and Roslyn? How will it help programmers
researching neural networks? How will it help programmers writing line of
business applications? Databases? Apps? Financial backends?_

In many (but certainly not all) of these domains, we already visualise (parts
of) them in 2D (as charts), so I see no reason why this wouldn't apply here
too.

In my opinion, it's not so important to solve the issue for every problem
domain one might program in, but rather to do so for as many as are
applicable. Just because a tool doesn't help with X doesn't mean it's a
failure if it improves on Y.

------
Buetol
But most things we see are 2D outputs (webpages, the world if you close one
eye, text consoles, ...).

I think the closest we are to Bret Victor's ideal (of seeing what you do
directly) is this:
[https://www.youtube.com/watch?v=xsSnOQynTHs](https://www.youtube.com/watch?v=xsSnOQynTHs)
(react-hot-loader + redux). And this is already practical.

Another nice tool that comes close to his ideal is the Chrome dev console:
visual interaction, live editing, start/stop debugging, ...

The future is bright.

~~~
orodley
But the amount of code that deals directly with producing such output is
usually very low, as a proportion of the total amount of code in the system.
Having a pane on the right say '825' next to the 'printf("%d\n",
do_the_actual_work(foo, bar));' in my code isn't useful. I already get that
from the terminal. The place where the benefit would actually appear is
visualizing the parts of the code that are doing the complex and error-prone
tasks in the system.

If you're a frontend web developer, then sure, it could be useful for you. But
with the kind of stuff I do personally, the 2D output demonstrated in the
article is irrelevant. Most stuff is much more abstract, and has zero
relationship to direct production of visual output. It looks cool and all, but
ultimately I feel like it's focusing on the easy problem rather than the
important one.

There are a few examples in there of visualizing the values in a for loop over
time, but again, I feel like this is unrealistic. The proportion of my code
that only executes for a fixed number of iterations that is easily determined
at compile-time is negligible. As is iteration over a fixed, compile-time set
of values. Most code lives inside functions that can be called with different
combinations of parameters, interspersed with multiple nested loops and
conditionals at different levels in the call stack, etc. Visualizing something
like that in a useful way is much more difficult. If they can show an example
of that, I'd be very interested. Until then, it seems more like a toy. A very
interesting, promising toy, but still a toy.

The unit test example gets a bit closer, but it's still just dealing with the
output of all the code that actually does the work, not the details of that
code itself. A tool like this should be helping you with the hard tasks rather
than the easy ones, because the easy ones are already, well, easy.

I'm not trying to be hostile, it certainly looks cool, and it looks like it
could potentially be very useful in the future. It's just that currently it
doesn't look like it would have any practical utility for me.

------
stillsut
Attempts like these might be barking up the wrong tree.

People who are going to be successful in bootstrapping themselves from 0 to
dev are going to naturally grasp logic, state, or any other general
programming concept. (Not master, but grasp).

What people need is a real problem they can solve with programming: automate
an office process, build a website, crunch some numbers, etc. And this is
usually best done on a stack that has solutions for these problems. That's why
"the future" of learning to program looks more like this:
[http://automatetheboringstuff.com/](http://automatetheboringstuff.com/) than
a noob-IDE.

~~~
ThomPete
That's not the point of what Bret has been trying to advocate, IMO. Instead he
is trying to establish better frameworks for thinking about programming by
providing a better feedback loop.

~~~
seiji
Smalltalk was the pinnacle of programmatic pedagogy. HyperCard was the easiest
way for regular humans to use computers for computer-like automations.

But, then you run into the "No Real Programmer" problem. The nerds take over,
hyper-complicate everything, then we're back to nobody understanding anything.
(Also see: "No Real Unix User" when people say "Well, _I_ don't consider OS X
users to be unix users.")

~~~
jowiar
The other major tool I'd put up there with Smalltalk and Hypercard is Excel,
at least when it comes to automation of real-world business problems. To an
extent, it made a large enough set of problems easy enough to solve that tools
have stagnated as a result.

~~~
stillsut
An under-appreciated aspect of Excel: you store the data on a spreadsheet.

A dev: hmm... should it be Mongo-esque NoSQL, in-memory SQLite, MySQL vs. SQL
Server? But Postgres has the most open-source momentum... let's do pros vs. cons.

Excel dude: Put the data in the grid. Then move on to what you're trying to do
with it.

~~~
bitwize
Unix dude: Fuck it! I'll spit out the data in text/csv and grovel through it
with awk/perl.

------
swah
I think many game developers build these kinds of tools into their games, like
Eskil's tools for Love
([https://en.wikipedia.org/wiki/Love_(video_game)](https://en.wikipedia.org/wiki/Love_\(video_game\))).

~~~
corysama
His presentation about designing the tools to make an MMO as a solo dev is
definitely worth watching.

[https://vimeo.com/6010060](https://vimeo.com/6010060)

~~~
swah
That's what I'm talking about, great video. I wish Love had gotten more love.

(To be honest I didn't enjoy playing it, but I also don't enjoy Minecraft,
and I do love this one's aesthetics.)

------
erokar
I think Elm makes good strides toward this goal:
[http://debug.elm-lang.org/](http://debug.elm-lang.org/)

~~~
Permit
I regret neglecting to mention Elm in the original article. I've only played
around with their tooling, but it looks really awesome.

It's probably even more in the spirit of Bret's original article as they've
gone beyond designing tooling and also designed their own language.

------
seagreen

      In the second example how does a live programming
      system synthesize a "Person" object?
    

Most data types should have `example` methods. They're too useful not to have,
for documentation generation as well as live programming. `Person` in this
case should definitely have one.

    
    
      What if there are multiple constructors?
      What if it depends on ten other objects
      when being created?
    

Each of those other objects should also have `example` methods. Person's
example method should use those, possibly modifying them after they're called
so that they make more sense in the context of a `Person`.
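A rough sketch of what that could look like (Python; `Person` and `Address` are hypothetical types standing in for the article's example):

```python
from dataclasses import dataclass

@dataclass
class Address:
    street: str
    city: str

    @classmethod
    def example(cls) -> "Address":
        # One canonical sample instance, reusable by docs, tests and live tools.
        return cls(street="221B Baker St", city="London")

@dataclass
class Person:
    name: str
    age: int
    address: Address

    @classmethod
    def example(cls) -> "Person":
        # Build on the dependency's example, tweaking it to fit this context.
        return cls(name="Ada", age=36, address=Address.example())
```

A live environment could then call `Person.example()` whenever it needs a value to feed into the function being edited, instead of asking the user to construct one by hand.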

    
    
      What if this object only originates from
      a database? Should the user constantly create
      these objects by hand when they want to edit
      any function in their program?
    

Zomg no. Make `example` methods. The idea of having beginners program without
static types and immutability seems misguided to me, because it makes them
keep the whole program in their head instead of just the function they're
working on. The idea of making them keep the whole program in their head
_while also not having an example of what they're operating on_ is just silly.

PS As to the "generating unit tests" point, the place to start looking for
inspiration is QuickCheck:
[http://www.stuartgunter.org/intro-to-quickcheck/](http://www.stuartgunter.org/intro-to-quickcheck/)
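The core QuickCheck idea fits in a few lines; here's a toy sketch in Python (real tools add counterexample shrinking, smarter generators, and so on):

```python
import random

def check_property(prop, gen, trials=200):
    """QuickCheck-style check: run `prop` against many random inputs.

    Returns (True, None) if every trial passed, or (False, counterexample).
    """
    for _ in range(trials):
        x = gen()
        if not prop(x):
            return False, x
    return True, None

# Generator: random lists of small integers, of random length.
def random_list():
    return [random.randint(-50, 50) for _ in range(random.randint(0, 10))]

# Property: reversing a list twice gives back the original list.
ok, counterexample = check_property(
    lambda xs: list(reversed(list(reversed(xs)))) == xs,
    random_list,
)
print(ok)  # True
```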

EDIT: PPS: If any lispers are reading this, I'd be curious to hear how you
generate example objects at the REPL. Since the lisp community places such a
high priority on REPL programming maybe it's already explored this area in
depth.

~~~
dkersten
_If any lispers are reading this, I'd be curious to hear how you generate
example objects at the REPL. Since the lisp community places such a high
priority on REPL programming maybe it's already explored this area in depth._

Clojure is my language of choice. What I've been doing so far is keeping
sample data in my unit test files. I then use a mixture of running code
directly from my editor [0], importing the sample data from my (non-editor)
REPL [1] and actually running the unit tests.

It's worth noting that this usually happens in reverse: I construct the data in
the REPL while experimenting and then copy it to the unit tests after for
reuse.

    
    
        how does a live programming system synthesize a "Person" object?
    

That just got me thinking. In Clojure, we generally use raw data (primitives,
maps, lists, sets, vectors) rather than hiding behind objects, so synthesizing
them is the same as any other data.

Realistically, there would generally be a constructor function, but _most_
functions are deterministic pure functions. I feel like OO languages may be
harder to make play nice with live programming because OO hides a lot of stuff
and encourages impurity.

[0] I can eval the entire file, the form that the cursor is on, or I can eval
code I type into a REPL.

[1] (require '[my-test-namespace.my-test-file :as alias]) Then access
alias/sample-data

~~~
seagreen
Very slick, thanks for sharing :)

------
bcheung
One of the other problems is that people keep trying to find the one perfect
tool or solution that will work for all cases. This is the wrong attitude.

When manipulating any kind of object in the real world there are a series of
tools. Each has their own feel, way you interact with it, and is used for a
specific purpose. A hammer, saw, sander, file, router, nail, all do different
things. The interface to them is unique.

Programming needs to follow this model. Each scenario is different so they
need to be handled separately. Allow for a comprehensive general framework and
then people create tools for each scenario. The tools can be general purpose
or very specific.

Work with a general-purpose AST with tons of annotations (comments, 2D
coordinates for visual layout tools in the NoFlo style, etc.).

Also, depending on what you are focusing on at the time, you may want a
different view of the code. Think of how architecture does this: framers have
their own annotations. Then there are plumbing and electrical overlays. There
are also wireframes and fully rendered views. Top, bottom, orthogonal,
perspective, etc. You get the idea.

Start with model, view, controller. Each is unique. It can be visualized and
written differently. Right now we use the same text based language to describe
each and to read each.

When you have a general purpose AST then the code can be "read" in different
modes (each with their own strengths and weaknesses).

Also, sometimes it is easier to write in one form and read in another. For
example, writing math using postfix is really easy, but reading it requires
too much managing of the stack state in your head (cognitive overload). For
reading, mathematical notation works better.
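The asymmetry is easy to see in code: a postfix (RPN) evaluator is tiny, precisely because the bookkeeping the reader must do in their head is made explicit as a stack. A minimal sketch in Python:

```python
def eval_rpn(expr):
    """Evaluate a postfix (RPN) expression, e.g. '3 4 + 2 *' -> 14."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for tok in expr.split():
        if tok in ops:
            b = stack.pop()  # note the operand order: a <op> b
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn("3 4 + 2 *"))  # 14.0 -- infix: (3 + 4) * 2
```

Writing `3 4 + 2 *` is mechanical; reading it back means simulating that stack yourself, which is exactly the cognitive overload described above.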

------
krosaen
Nice. I think a mention of figwheel / devcards would have been worthwhile:

[http://rigsomelight.com/2014/06/03/devcards-taking-interactivity-to-the-next-level.html](http://rigsomelight.com/2014/06/03/devcards-taking-interactivity-to-the-next-level.html)

~~~
ch4s3
Very cool. I'm constantly jealous of what you folks have in Clojure land.

------
bcheung
> How do we generalize this solution such that it helps programmers building
> compiler platforms like LLVM and Roslyn? How will it help programmers
> researching neural networks? How will it help programmers writing line of
> business applications? Databases? Apps? Financial backends?

I've given this idea quite a bit of thought. I've been wanting to do something
in this space but haven't had the time to work on it yet.

The idea I have proposed but haven't seen implemented anywhere is to follow
what I call the "workbench model".

The problem is we are "programming blind". Instead of thinking of programming
as a series of abstract steps in a recipe, just start with some sample input
and manipulate it towards the output.

When you want to do woodworking you use a hammer, saw, sander, etc and
manipulate it directly. It's intuitive and you can see what you are doing. The
result is immediate. Modern programming is like writing G-code to run through
a CNC machine.

Instead of defining things in terms of a series of steps and trying to figure
out how to visualize that, flip it around and do it the other way. Pass in
some parameters and let the user manipulate them however they want in real
time. Record the steps they used to go from input to output (like a macro).
There is your function. This works especially well in, but is not limited to,
a postfix environment like Factor (or the HP-48 calculator's RPN language if
you are more familiar with that).
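A crude sketch of that record-and-replay idea in Python (all names are invented for illustration; this is not an existing tool):

```python
class Recorder:
    """Record direct manipulations of a sample value, then replay
    the recorded steps as a reusable function (the 'workbench model')."""

    def __init__(self, sample):
        self.value = sample
        self.steps = []

    def apply(self, fn, *args):
        self.steps.append((fn, args))       # remember the step
        self.value = fn(self.value, *args)  # show the result immediately
        return self.value

    def as_function(self):
        def replay(x):
            for fn, args in self.steps:
                x = fn(x, *args)
            return x
        return replay

# "Manipulate" a concrete sample toward the desired output...
r = Recorder("  Hello, World  ")
r.apply(str.strip)
r.apply(str.lower)

# ...and the recorded steps become a function for any other input.
clean = r.as_function()
print(clean("  OTHER Input  "))  # "other input"
```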

Now obviously there are multiple input scenarios that need to be handled
uniquely. How do we do that? Simple, run each case separately and use pattern
matching to qualify your actions.

Ideally, this would be combined with "first-class citizen tests" (another
pattern I came up with). Basically, when you define a function you give it some
sample inputs for the different scenarios and edge cases. You specify the
outputs for each corresponding input. When working on the function you choose
an input and then work towards the output. When the output is correct it turns
green automatically. This happens in real time as you are manipulating the
input values. It also tests the other input scenarios as well. If one or more
of the other inputs don't match the required output then you add more pattern
matching. When all tests are green your function is done.
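A loose sketch of how such first-class examples might attach to a function (Python; the decorator and its names are hypothetical, not an existing API):

```python
def with_examples(cases):
    """Attach input/output examples to a function.

    `cases` maps an argument tuple to its expected result. A live
    environment would re-run `check()` on every edit and turn each
    case green as soon as it matches.
    """
    def wrap(fn):
        fn.examples = cases
        fn.check = lambda: {args: fn(*args) == expected
                            for args, expected in cases.items()}
        return fn
    return wrap

@with_examples({(0,): 0, (1,): 1, (7,): 13})
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib.check())  # {(0,): True, (1,): True, (7,): True}
```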

------
phantarch
Personally, I would be wary of any IDE that attempted to interpret what my
code is doing for me. Each person probably has a different mental image or
interpretation of how their code works, and enforcing a singular
interpretation of code via IDE might only enable people to 'learn' how to
program within that environment. Teach a man to fish.

~~~
seanmcdirmid
You have a computer right in front of you, why not use it to figure out what
your code is doing rather than do the mental gymnastics in your head! This
reminds me of a story, I think it was Dijkstra or maybe Knuth, who was
lamenting that their students were being too sloppy because they got to
program on interactive terminals and not the punch cards of yesterday.
Programming without an IDE is the same... it's using punch cards when
something much better is available.

~~~
jrbrtsn
Kernel developers don't use an IDE for the Linux kernel, and they are
arguably some of the best programmers in the world. Great programmers use vim
or emacs, because the imagery in their mind is far more powerful than anything
an IDE could display. Besides, any screen real estate used for "interpretive"
purposes is just an annoyance.

~~~
ch4s3
Are they the best developers, and if so how do you measure that?

> Great programmers use vim or emacs

This is another form of No True Scotsman, and not a valid argument.

> because the imagery in their mind is far more powerful than anything an IDE
> could display

How do you know, and how do you measure this? Surely, you would admit that Vi
and Emacs are better than pen and paper, or punch cards, right? So does it not
follow that Vi and Emacs could be improved upon? Or are they the pinnacle of
inputting instructions into a computer? If they are which one is better? Why?
How do we measure that?

~~~
pekk
The steering wheel could be improved upon in some theoretical sense, but the
chances are that any new car steered by, say, an iPad is much worse and
definitely isn't going to be used in races any time soon.

------
solomatov
It looks impressive, but only in these superficially simple examples. If we
have a really complex problem, e.g., designing a distributed algorithm, we
need to create domain-specific visualizations, and these bells and whistles
won't help us.

------
jasode
I may understate Bret Victor's thesis and may overstate Josh Varty's
counterargument but I believe JV's criticisms are not applicable to BV's
particular essay.

Some key quotes from each for context:

 _> Bret Victor: "How do we get people to understand programming?"_[1]

 _> Josh Varty: "Problems getting to Learnable Programming [...] However, we
need to stop and think deeply about how this system would handle typical code.
How do we generalize this solution such that it helps programmers building
compiler platforms like LLVM and Roslyn? How will it help programmers
researching neural networks? How will it help programmers writing line of
business applications? Databases? Apps? Financial backends? The vast majority
of functions in today's codebases do not map nicely to a 2D output."_

I see BV's essay as focused on _learning_ programming for people unfamiliar
with programming. For non-programmers, even simple syntax such as "x=0; x=x+1"
looks strange, and beginners can't hold in their head what it does. So instead
of trying to teach beginners the LOGO programming language to move turtles
around on the screen, or the BASIC of "10 PRINT "HELLO" \n 20 GOTO 10", Bret
shows a visualization where code syntax updates its 2D output in a realtime
feedback loop. This can help novices make the leap from abstract syntax in a
text editor to concrete changes in the output.

Josh Varty is _going beyond the scope of newbies learning programming
concepts_. He's trying to _generalize_ it to _working practitioners who
already understand programming_ and make the "Learnable Programming Model"
work for any arbitrary code to any type of visualization (beyond 2D if
necessary). "Live coding for everything" is an interesting concept to pursue,
but I don't believe Bret's essay had this wide a scope.

I don't believe working programmers who have already mastered how "programming
syntax maps to changing machine state" need visualizations for every line of
source code. JV's generalized scope is, however, applicable for working
programmers to _verify_ code for correctness and do sanity checks on what they
think the code is doing.[2] But JV's ideas are not necessary for working
programmers to _understand_ programming.

In one case, JV uses this as an example:

    
    
      var result = DoTaxes(person);
    

The professional programmer (not the "learning programmer") doesn't need a
clever realtime visualization to "understand" DoTaxes() in the BV sense. The
programmer already has some idea that it probably has some code that performs
multiplication and addition. Possibly even a lookup table for different
countries or taxing authorities. The realtime visualization would help the
professional programmer _verify_ what the code actually does. Since realtime
isn't available today, programmers get by with watch windows in visual
debuggers, or manually insert printf()/console.log() statements. However, the
programmer doesn't need BV visualizations to _grok the compsci topics_ of the
changing machine state inside DoTaxes().

tldr: BV and JV are tackling 2 different issues.

[1][http://worrydream.com/LearnableProgramming/](http://worrydream.com/LearnableProgramming/)

[2]See how JV's ALIVE demo visualization serves a different cognitive function
from BV's. Those little red annotations are helpful for professional
programmers to verify behaviors but not for beginners to grok compsci:
[https://embed.gyazo.com/4dc7ac656863cbd02a8e213598f85a4f.gif](https://embed.gyazo.com/4dc7ac656863cbd02a8e213598f85a4f.gif)

~~~
Permit
I actually wondered the same: Are these ideas meant to apply more broadly to
programming in general, or are they a framework for learning?

So I reached out to Bret on Twitter:
[https://twitter.com/ThisIsJoshVarty/status/63156215172802560...](https://twitter.com/ThisIsJoshVarty/status/631562151728025600)
(I see now he's deleted his Tweet...)

In his response he told me to read the section "These are not training wheels"
near the end of the blog.

Here he says:

> A frequent question about the sort of techniques presented here is, "How does
> this scale to real-world programming?" This is a reasonable question, but it's
> somewhat like asking how the internal combustion engine will benefit horses.
> The question assumes the wrong kind of change.

> Here is a more useful attitude: Programming has to work like this.
> Programmers must be able to read the vocabulary, follow the flow, and see the
> state. Programmers have to create by reacting and create by abstracting.
> Assume that these are requirements. Given these requirements, how do we
> redesign programming?

I think the ideas he explores can apply more generally, outside of creating
environments for learning. And my understanding is that Bret believes they
should apply to programming in general as well.

~~~
jasode
_> And my understanding is that Bret believes they should apply to programming
in general as well._

Yes I agree, but I think his 2D visualizations that you criticized were
focused on "learning programming".[1]

BV wasn't saying that extending it into "non-learning" scenarios for
professional programmers _must_ be a 2D feedback loop. Consider a new
programming language with specialized syntax or a library of functions for
moving a physical robot in 3D space. The feedback loop could be a live
Bluetooth or wifi connection to an articulating robot arm on the programmer's
desk. I wouldn't think BV would criticize that and say, "no, the robot arm
must be 2d image on the screen".

[1] Key phrase from BV: _"These design principles were presented in the
context of systems for learning."_

~~~
jules
The problem isn't things that we know how to visualize but do not fit the 2D
model, the problem is the things that we _do not know_ how to visualize well
(which is most of the data in our programs). Visualizing the output of a
program that does a 2D drawing is easy, because it's already visual. The hard
part is doing it for general programming, and the essay is handwaving around
that. I think the answer is that for each domain, or even for each data
structure, visualizing it well is itself a complex research problem.

------
hellofunk
The article suggests that only Alive and Light Table provide this form of
programming interaction. However, many (though certainly not all) of the
features in this article are part of the Swift Playgrounds in recent
iterations of Xcode, which were probably inspired by this work. Just worth
pointing out.

Update: I missed the part further down in the article where the Playgrounds
were indeed mentioned. Whoops.

~~~
seanmcdirmid
Here is a fascinating thesis written in 2003 on live programming:

[https://llk.media.mit.edu/papers/ch-phd.pdf](https://llk.media.mit.edu/papers/ch-phd.pdf)

The ideas are quite old, much older than Bret Victor's work.

~~~
Permit
>The ideas are quite old, much older than Bret Victor's work.

Absolutely. Your work in particular predates Bret's and I've enjoyed reading
it and hope it catches on and inspires more folks at Microsoft and Microsoft
Research! :)

I think what makes Bret's work a little different than the linked work is that
Bret's work managed to escape academia and appeal to an audience that might
not have otherwise been exposed to these ideas in the first place.

~~~
pjmlp
It is great that Bret's work made the ideas more mainstream, but from my
research on Xerox PARC's work I would say the ideas go back at the very least
to the interactive coding available in Smalltalk, Interlisp-D and Mesa/Cedar.

Interactive coding has been with us since the early 80's; the ideas just
failed to go mainstream.

~~~
seanmcdirmid
Live programming goes way beyond Smalltalk fix-and-continue, Morphic
directness, or a Lisp REPL. We've caught up to that past; it is now time to go
further. And of course, no one is really quite sure what the Lisp machines
ever did, it's not like they have a video on YouTube :)

It is better to have that argument with Gilad Bracha, I guess.

~~~
pjmlp
Actually there are a few videos. :)

But I guess they all fall short of how things really were.

[http://www.loper-os.org/?cat=10](http://www.loper-os.org/?cat=10)

[http://www.loper-os.org/?p=932](http://www.loper-os.org/?p=932)

[https://www.youtube.com/watch?v=o4-YnLpLgtk](https://www.youtube.com/watch?v=o4-YnLpLgtk)

~~~
seanmcdirmid
I still am interested in piecing together the experience for history. I hear
stories but never any real details!

~~~
jules
I think most of the amazement came from the time they were in. In that time
this was truly revolutionary. Having a whole OS based on it is still arguably
revolutionary, but from a programming/IDE perspective I don't think they are
anything special any more. As far as I know a modern Lisp + Emacs gives you
roughly the same experience.

~~~
pjmlp
Emacs falls short because it doesn't allow for live editing of inline data
structures, unless that has changed in the last few years.

DrRacket's REPL is probably closer to the experience.

Also very few IDEs enjoy the same edit-continue experience, maybe commercial
Common Lisp environments.

~~~
TeMPOraL
Not sure what you mean by "live editing of inline data structures" here (care
to give an example?), but the interactive debugger in Emacs/SLIME can do quite
a lot of nice things, including modifying arbitrary data on the fly, live.

For instance, if I have a hashtable returned from a function I called in REPL,
I can inspect it and modify its values and properties. Also, within the REPL
itself, text is "smart" and copy-paste tracks references, so I can paste the
"unreadable objects" (i.e. #<Foo 0xCAFECAFE>) directly into REPL calls and
have it work, because SLIME will track the reference linked to a particular
piece of text output.

~~~
lispm
The presentation based REPL in Emacs + SLIME was inspired by the Symbolics
Lisp Machine presentation feature.

But I can assure you, there is a difference of a REPL feature in an editor and
a GUI using it system wide, as on the Lisp Machine. Both in depth of the
features, integration and the feel of the user interface.

~~~
TeMPOraL
Haven't had a chance to experience it myself yet. I'm watching the videos now
though and I think I begin to see the difference.

~~~
lispm
Check out this video (which I made some time ago), which shows the
presentation UI from an application perspective (here a document authoring
system) and as a bonus, the application integrates Zmacs (the Emacs of the
Lisp Machine)...

[https://vimeo.com/83886950](https://vimeo.com/83886950)

Think of the Documentation Examiner as a version of Emacs Info. Think of
Concordia as a version of an Emacs buffer editing documentation records. The
listener is a version of the SLIME listener. You can also catch a short
glimpse of the graphics editor, IIRC.

~~~
TeMPOraL
Thanks a lot! I'll watch it after work and get back with impressions :).

------
bcheung
> This attitude is a losing one. For all its warts, bumps, pains and bugs,
> programming today works somewhat decently enough.

Sometimes pragmatism is the enemy of progress. As stated earlier in the
article, you don't get the combustion engine by thinking how we can make horse
drawn carriages faster.

Thiel also recommends this approach in Zero to One.

------
kevinSuttle
Chris Granger just released Eve as well:
[http://www.chris-granger.com/2015/08/17/version-0/](http://www.chris-granger.com/2015/08/17/version-0/)

------
RogerL
I urge the people working on this to recognize that we don't all think the
same. (I know, you don't claim that we do, but I don't see these counterpoints
expressed often.)

I can't explain how I think, but it isn't visual, and doesn't seem to involve
language much. Probably the closest is to say I think in math and geometry.
That doesn't make any descriptive sense, but it is what I experience.

I fell deeply in love with math, and then programming. A few words and control
structures, and you have Turing completeness! It's as beautiful as language.
Minimizing expression is sometimes far more powerful than unconstrained
expression. I can hold these structures in my head, and so can most
programmers that I know. Pictures are a pale, weak thing in comparison. "The
Illustrated Guide to Kant's Critique of Pure Reason" has never been published
(SFAIK), and for good reason. The simplest rules of grammar allow us to
generate and express extremely complicated and nuanced ideas. Ya, sure, we
could make a nice chart of synthetic/analytic and a priori/posterior, and I
think Kant did that, but beyond that what do pictures get you? I bet there are
visual thinkers reading this that have rebuttals or examples, and that is
great. Einstein was a great admirer of Kant, and was a visual thinker, so I
imagine him raising objections! But I think in the end visualizations may
have illustrative power, but rarely investigative power. Einstein put the lie
to that with his work in Relativity, but he was an extraordinarily unique
thinker (he did his work visually, and then struggled to get the math to
prove his ideas).

Anyway, my somewhat inarticulate argument is that the programming language was
the great invention. Anything that is Turing complete lets us express anything
computable.

I think if you can find ways to _complement_ that it will be a great
contribution to knowledge. But I don't think you can replace or improve upon a
Turing complete language (unsupported assertion requires citation here!). It
would be great to be proven wrong.

I use computer languages to do math, computer vision, and AI-type stuff.
Others use them for different things. It all works. There is no universal
visual paradigm to replace it. Engineering is optimization in multiple
dimensions. Visualization limits us to 2 or, imperfectly, 3 dimensions. So you
can sort of slice out representations of this large, multidimensional space,
but you are now working like the blind men on the elephant. You'll never get
the complete 'picture'. You are just sort of poking at it with a stick.
Whereas with a couple of equations I can describe the entire space AND now
have powerful tools to explore that space, describe it, and determine its
properties. I think back to one of Bret's videos where he uses a live
environment to compute the trajectory for a character in a video game. In math
that is known as the 'shooting method'. It kind of works, for some problems.
There is also a universe of problems for which it doesn't work. How can you
even tell if it works or not visually? The language of math gives us that
tool.

I love pictures, and produce charts all the time for my math code. But they do
not replace the math, they illustrate it. I do my work in math, and in state,
and in sw architecture, and sometimes use visual tools to help check the work.
There is no language of visualizations, and without one you will either be
illustrating the work of math and computer languages (which is not a bad
thing, I'll happily use the tools when appropriate) or severely limiting what
we can do. I don't work or think in 2D or 3D space and cannot be limited to
such a restricted view (pun intended). I work in small spaces usually (R^18 or
so) and visualization is a non-starter except as a great way to learn some of
the concepts. I know plenty of you work in far larger spaces.

tl;dr: the amazing advance was Turing complete languages; visualizations are
not Turing complete; languages are powerful, visualizations do not have a
language and hence aren't analyzable unless your name is Einstein.

~~~
RogerL
I'm going to reply to myself to share some history.

I came up when things like 'structured analysis' and 'Yourdon diagrams' were a
thing. I was repeatedly told that if I wasn't doing this I was "hacking" in
the pejorative sense.

These diagrams were the worst hack that I've ever seen. There is no language,
there is no verification, you can literally draw _anything_. There were CASE
tools that attempted to balance all your arrows so that all ins had an out,
and so on, but it was just a disaster. Hack, hack, hack.

In contrast, in code I could quickly sketch out an API design. It was
concrete, it was testable, it was understandable, and it was a 1-to-1 match to
what the eventual code would be. It was wonderfully, powerfully expressive. It
wasn't limited to 2D, I could express complicated relationships without
someone arguing "these lines cross, move this over there to improve the
layout", and other nonsense that had nothing to do with solving the problem.

The diagram approach was not a language, it didn't have a grammar, and it was
untestable. It was extraordinarily limited. You couldn't show that this module
is used by 10 different modules in different situations. You could express
impossible things. You had no way to analyze it for correctness. Sure, there
were CASE tools that put in things like state diagrams and simulation and
such, but it was all just terrible. It was either impossibly constrained or
impossibly free-form.

In contrast, my stubbed APIs were an exact representation of my ideas. If I
want to test a hypothesis, I'd just implement part of the API, stub out parts
that weren't important, and have running proof of my ideas. I was doing a lot
of concurrent stuff then, and this was important. Visual depictions were
incredibly cumbersome, untestable, and were just terrible, terrible hacks.

I went through more than one project where we spent a tremendous amount of
time generating these things, they collapsed under their own weight (you just
can't reason well about these things once relationships go past 2D), they'd
all get discarded, and then the real design work would begin, in code.

I argue, without proof, that without a language, visual styles of design will
always have these problems. I also argue that it is not incumbent on me to
provide that proof. The power of Turing complete languages and math has been
proven. A viable alternative needs to prove not only that it is equal to the
existing approach, but is better in some important way.

~~~
carapace
I agree with what you're saying.

Related, you might enjoy [http://iconicmath.com/](http://iconicmath.com/) ;-)

------
arxpoetica
Here's my simplistic (but accurate?) summation of Bret Victor's evangelism:
make the programming world designer-friendly and automated.

------
huuu
Currently Alive offers two licenses: Individual and Enterprise. I searched the
site but I can't find any information.

Does anyone know the difference?

~~~
Permit
Individual is if you're purchasing for yourself. (You can use it at work if
you'd like).

Enterprise is if you're purchasing multiple licenses (i.e., licenses for
employees).

~~~
dkersten
Going by the wording on a similar licensing structure for another programming
tool:

Individual allows you to use the tool commercially, and you (the individual)
own the license.

Enterprise allows you to use the tool commercially, and your employer (the
enterprise) owns the license.

------
nullz
Here's a JS implementation of Bret Victor's "Learnable Programming" demo:
[http://www.fullstack.io/choc/](http://www.fullstack.io/choc/)

