In many (but certainly not all) of these domains, we already visualise (parts of) them in 2D (as charts), so I see no reason why this wouldn't apply here too.
In my opinion, it's not so important to solve the issue for every problem domain one might program in, but rather to do so for as many as are applicable. Just because a tool doesn't help with X doesn't mean it's a failure if it improves on Y.
I think the closest we've come to Bret Victor's ideal (of seeing what you do directly) is this: https://www.youtube.com/watch?v=xsSnOQynTHs (react-hot-loader + redux). And this is already practical.
Another nice tool that comes near this ideal is the Chrome dev console: visual interaction, live editing, start/stop debugging, etc.
The future is bright.
If you're a frontend web developer, then sure, it could be useful for you. But with the kind of stuff I do personally, the 2D output demonstrated in the article is irrelevant. Most stuff is much more abstract, and has zero relationship to direct production of visual output. It looks cool and all, but ultimately I feel like it's focusing on the easy problem rather than the important one.
There are a few examples in there of visualizing the values in a for loop over time, but again, I feel like this is unrealistic. The proportion of my code that only executes for a fixed number of iterations that is easily determined at compile-time is negligible. As is iteration over a fixed, compile-time set of values. Most code lives inside functions that can be called with different combinations of parameters, interspersed with multiple nested loops and conditionals at different levels in the call stack, etc. Visualizing something like that in a useful way is much more difficult. If they can show an example of that, I'd be very interested. Until then, it seems more like a toy. A very interesting, promising toy, but still a toy.
The unit test example gets a bit closer, but it's still just dealing with the output of all the code that actually does the work, not the details of that code itself. A tool like this should be helping you with the hard tasks rather than the easy ones, because the easy ones are already, well, easy.
I'm not trying to be hostile, it certainly looks cool, and it looks like it could potentially be very useful in the future. It's just that currently it doesn't look like it would have any practical utility for me.
Take the CSS editing: incredibly annoying, impractical, opaque behaviour. Or every option using regexes, so just to exclude some files you have to muck around with the bloody things. Or hover-to-display-the-local-value-of-a-variable kind of working, but not always; highlighting usually works, but watch variables are stupidly crammed into the tiny window on the right, so you can't read the end of the variable name, which is usually the really important bit.
And don't even get me started about their shitty redesign of the files, making it harder to find the local files, the only ones you actually care about, and giving far too much visibility to external libraries.
But then they do genius things like unobtrusively display the method input variables.
Don't get me wrong, dev tools is useful, but quite obviously designed by a programmer who's experimenting with various different UX paradigms. Chrome dev tools is a joke when you put it against a real dev tool like visual studio.
People who are going to be successful in bootstrapping themselves from 0 to dev are going to naturally grasp logic, state, or any other general programming concept. (Not master, but grasp).
What people need is a real problem they can solve with programming: automate an office process, build a website, crunch some numbers, etc. And this is usually best done on a stack that has solutions for these problems. That's why "the future" of learning to program looks more like this: http://automatetheboringstuff.com/ than a noob-IDE.
Von Neumann considered an assembler a waste.
Fortran was considered bad by many because you weren't working with the instructions the computer was running.
Auto-completion, debuggers, syntax highlighting, fast compilation, interpreters, virtual machines and GUI building interfaces have all been derided by someone as terrible because everyone should man up and get closer to the metal.
It has never been on the right side of history to refuse to embrace new tools that make creating software easier.
But, then you run into the "No Real Programmer" problem. The nerds take over, hyper-complicate everything, then we're back to nobody understanding anything. (Also see: "No Real Unix User" when people say "Well, I don't consider OS X users to be unix users.")
A dev: hmm...Should it be Mongo-esque NoSQL, in-memory sqlite, MySQL vs SQLServer? But Postgres has most open source momentum...let's do pros v cons.
Excel dude: Put the data in the grid. Then move on to what you're trying to do with it.
(To be honest I didn't enjoy playing it, but then I don't enjoy Minecraft either, and I do love this one's aesthetics.)
It's probably even more in the spirit of Bret's original article as they've gone beyond designing tooling and also designed their own language.
In the second example, how does a live programming system synthesize a "Person" object? What if there are multiple constructors? What if it depends on ten other objects when being created? What if this object only originates from a database? Should the user constantly create these objects by hand when they want to edit any function in their program?
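For what it's worth, one naive answer a tool could try is constructor introspection: walk the constructor's parameters and fill each one with a declared default or a canned sample value. Here's a rough Python sketch (the `Person` class, `DEFAULTS` table, and `synthesize` helper are all invented for illustration); it immediately shows where the approach breaks down, since anything without a default or a known type still needs the user's help:

```python
import inspect
from dataclasses import dataclass

# Hypothetical domain object; a real "Person" might only come from a database.
@dataclass
class Person:
    name: str
    age: int

# Canned sample values per type; anything not covered needs the user's input.
DEFAULTS = {str: "example", int: 0, float: 0.0, bool: False}

def synthesize(cls):
    """Naively build an instance by inspecting the constructor signature."""
    args = {}
    for name, param in inspect.signature(cls).parameters.items():
        if param.default is not inspect.Parameter.empty:
            args[name] = param.default            # reuse a declared default
        elif param.annotation in DEFAULTS:
            args[name] = DEFAULTS[param.annotation]
        else:
            # Multiple constructors, DB-backed objects, etc. all land here.
            raise TypeError(f"cannot synthesize parameter {name!r}")
    return cls(**args)

print(synthesize(Person))  # Person(name='example', age=0)
```

This only handles the happy path; the questions above (multiple constructors, ten dependencies, database-only objects) are exactly the cases where the `TypeError` branch fires and a human has to step in.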
PS As to the "generating unit tests" point, the place to start looking for inspiration is QuickCheck: http://www.stuartgunter.org/intro-to-quickcheck/
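In case the link goes stale, the core QuickCheck idea fits in a few lines: generate many random inputs and check that a stated property holds for all of them. A minimal Python sketch (function names are mine; real QuickCheck/Hypothesis also shrink counterexamples down to minimal ones):

```python
import random

def quickcheck(prop, gen, runs=100, seed=0):
    """Minimal QuickCheck-style loop: random inputs, check a property on each."""
    rng = random.Random(seed)
    for _ in range(runs):
        case = gen(rng)
        if not prop(case):
            return case  # counterexample (real tools would shrink it, too)
    return None          # property held on every generated input

# Generator: small random integer lists. Property: reversing twice is identity.
gen_list = lambda rng: [rng.randint(-100, 100) for _ in range(rng.randint(0, 10))]
prop_reverse = lambda xs: list(reversed(list(reversed(xs)))) == xs

print(quickcheck(prop_reverse, gen_list))  # None, i.e. no counterexample found
```

The relevance to the question above: the generator plays the same role as a hypothetical object synthesizer, and writing one per type is exactly the part that takes human effort.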
EDIT: PPS: If any lispers are reading this, I'd be curious to hear how you generate example objects at the REPL. Since the lisp community places such a high priority on REPL programming maybe it's already explored this area in depth.
Clojure is my language of choice. What I've been doing so far is keeping sample data in my unit test files. I then use a mixture of running code directly from my editor, importing the sample data from my (non-editor) REPL, and actually running the unit tests.
It's worth noting that this usually happens in reverse: I construct the data in the REPL while experimenting and then copy it to the unit tests afterwards for reuse.
how does a live programming system synthesize a "Person" object?
Realistically, there would generally be a constructor function, but most functions are deterministic pure functions. I feel like OO languages may be harder to make play nice with live programming because OO hides a lot of stuff and encourages impurity.
I can eval the entire file, the form that the cursor is on, or I can eval code I type into a REPL.
(require '[my-test-namespace.my-test-file :as alias]), then access alias/sample-data.
So we sort of run into an issue here with "What people should do" vs. "What people actually do".
The creator of such a tool needs to design their tool around the reality of "what people actually do". And in our current target language, C#, there is no existing culture around building example methods. (Often people don't even have unit tests.)
The biggest challenges we've faced when building Alive are running into problems like this one, where there isn't a clear, nice solution.
When manipulating any kind of object in the real world there are a series of tools. Each has their own feel, way you interact with it, and is used for a specific purpose. A hammer, saw, sander, file, router, nail, all do different things. The interface to them is unique.
Programming needs to follow this model. Each scenario is different so they need to be handled separately. Allow for a comprehensive general framework and then people create tools for each scenario. The tools can be general purpose or very specific.
Work with a general-purpose AST carrying tons of annotations (comments, 2D coordinates for visual layout tools like NoFlo, etc.).
Also, depending on what you are focusing on at the time, you may want a different view of the code. Think of how architecture does this: framers have their own annotations, then there are plumbing and electrical overlays, and there are also wireframes and fully rendered views; top, bottom, orthogonal, perspective, etc. You get the idea.
Start with model, view, controller. Each is unique. It can be visualized and written differently. Right now we use the same text based language to describe each and to read each.
When you have a general purpose AST then the code can be "read" in different modes (each with their own strengths and weaknesses).
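A toy sketch of that idea, with all names invented for illustration: one annotated tree, and two "read" modes that each consume only the annotations they care about:

```python
# A toy "general purpose AST" node: the code structure plus open-ended
# annotations (comments, 2D layout coordinates, ...) that different
# tools can read or ignore.

class Node:
    def __init__(self, kind, children=(), **annotations):
        self.kind = kind
        self.children = list(children)
        self.annotations = annotations  # e.g. comment="...", xy=(10, 20)

def render_text(node, depth=0):
    """One 'view': a plain indented outline of the tree, with comments."""
    line = "  " * depth + node.kind
    if "comment" in node.annotations:
        line += "  # " + node.annotations["comment"]
    lines = [line]
    for child in node.children:
        lines.append(render_text(child, depth + 1))
    return "\n".join(lines)

def render_layout(node):
    """Another 'view': only nodes with 2D coordinates, for a NoFlo-style canvas."""
    out = [(node.kind, node.annotations["xy"])] if "xy" in node.annotations else []
    for child in node.children:
        out.extend(render_layout(child))
    return out

tree = Node("add", [Node("x", xy=(0, 0)), Node("y", xy=(40, 0))], comment="sum inputs")
print(render_text(tree))
print(render_layout(tree))  # [('x', (0, 0)), ('y', (40, 0))]
```

The point is that neither view owns the program: both are projections of the same annotated structure, which is what lets each mode have its own strengths and weaknesses.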
Also, sometimes it is easier to write in one form and read in another. For example, writing math in postfix is really easy, but reading it requires too much management of stack state in your head (cognitive overload). For reading, conventional mathematical notation works better.
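To make the write-postfix/read-infix point concrete, here's a tiny Python sketch: the same token stream is both evaluated with a stack and pretty-printed back into conventional infix notation:

```python
def eval_postfix(tokens):
    """Evaluate postfix (RPN) tokens with a simple operand stack."""
    stack = []
    for t in tokens:
        if t in "+-*/":
            b, a = stack.pop(), stack.pop()
            stack.append({"+": a + b, "-": a - b, "*": a * b, "/": a / b}[t])
        else:
            stack.append(float(t))
    return stack[0]

def postfix_to_infix(tokens):
    """Rebuild parenthesized infix notation from the same tokens, for reading."""
    stack = []
    for t in tokens:
        if t in "+-*/":
            b, a = stack.pop(), stack.pop()
            stack.append(f"({a} {t} {b})")
        else:
            stack.append(t)
    return stack[0]

tokens = "3 4 2 * +".split()
print(eval_postfix(tokens))      # 11.0
print(postfix_to_infix(tokens))  # (3 + (4 * 2))
```

Writing `3 4 2 * +` is trivial to emit mechanically, but `(3 + (4 * 2))` is what you'd rather read, which is exactly the write-one-form, read-another idea.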
I've given this idea quite a bit of thought. I've been wanting to do something in this space but haven't had the time to work on it yet.
The idea I have proposed but haven't seen implemented anywhere is to follow what I call the "workbench model".
The problem is we are "programming blind". Instead of thinking of programming as a series of abstract steps in a recipe, just start with some sample input and manipulate it towards the output.
When you want to do woodworking you use a hammer, saw, sander, etc., and manipulate the material directly. It's intuitive, you can see what you are doing, and the result is immediate. Modern programming is like writing G-code to run through a CNC machine.
Instead of defining things as a series of steps and trying to figure out how to visualize that, flip it around and do it the other way. Pass in some parameters and let the user manipulate them however they want in real time. Record the steps they used to go from input to output (like a macro). There is your function. This also works quite well with, but is not limited to, a postfix environment like Factor (or the HP-48 calculator's RPN language, if you are more familiar with that).
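A minimal sketch of that record-and-replay idea in Python (the `Recorder` class is invented for illustration): each direct manipulation is logged, and the log replays as a reusable function:

```python
class Recorder:
    """Log each manipulation of a sample value; replay the log as a function."""

    def __init__(self, value):
        self.value = value
        self.steps = []

    def apply(self, fn, *args):
        """Perform one manipulation and remember it."""
        self.steps.append((fn, args))
        self.value = fn(self.value, *args)
        return self.value

    def as_function(self):
        """Replay the recorded steps on any new input."""
        steps = list(self.steps)  # snapshot, so later edits don't leak in
        def replay(x):
            for fn, args in steps:
                x = fn(x, *args)
            return x
        return replay

# "Manipulate" a sample input interactively...
r = Recorder([3, 1, 2])
r.apply(sorted)                     # value is now [1, 2, 3]
r.apply(lambda xs, n: xs + [n], 4)  # value is now [1, 2, 3, 4]

f = r.as_function()                 # ...and out falls a function
print(f([9, 8]))                    # [8, 9, 4]
```

This is the macro analogy made literal: the user never wrote `f`, they just demonstrated it on one input, and the recording generalizes to others.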
Now obviously there are multiple input scenarios that need to be handled uniquely. How do we do that? Simple: run each case separately and use pattern matching to qualify your actions.
Ideally, this would be combined with "first-class citizen tests" (another pattern I came up with). Basically, when you define a function you give it some sample inputs for the different scenarios and edge cases, and you specify the output for each corresponding input. When working on the function you choose an input and then work towards the output. When the output is correct it turns green automatically, in real time, as you manipulate the input values. The other input scenarios are tested as well; if one or more of them don't match the required output, you add more pattern matching. When all tests are green your function is done.
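A rough Python sketch of the "first-class citizen tests" pattern (the decorator and checker names are mine): sample inputs and expected outputs ride along on the function itself, and a checker reruns them all to decide green or red:

```python
def examples(*cases):
    """Attach (input, expected-output) pairs directly to a function."""
    def wrap(fn):
        fn.examples = cases
        return fn
    return wrap

def check(fn):
    """Rerun every attached example; 'green' only if all of them pass."""
    failures = [(x, fn(x), want) for x, want in fn.examples if fn(x) != want]
    return ("green", []) if not failures else ("red", failures)

@examples(([2, 1], [1, 2]), ([], []), ([5], [5]))
def my_sort(xs):
    return sorted(xs)

print(check(my_sort))  # ('green', [])
```

In the envisioned environment, `check` would run automatically on every edit, so the green/red status updates in real time as you manipulate the function body.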
In very extreme circumstances, portions of an application may need to be rewritten at a lower level for optimizations, but that doesn't negate the value provided by abstractions that allow the developer to be more productive expressing higher level concepts while the computer handles the tedium of how those concepts are translated into machine code.
Is Acme an IDE? Is Emacs an IDE? One has the plumber, the other has the buffer. They're both small abstractions that can be relentlessly scaled to greatly complex and intelligent workflows.
In contrast, I do not consider the standard IDE to be a great leap. They're actually quite static and the visualization tools they provide through graph structures are primitive for any real white-box analysis.
> Great programmers use vim or emacs
This is another form of No True Scotsman, and not a valid argument.
> because the imagery in their mind is far more powerful than anything an IDE could display
How do you know, and how do you measure this? Surely, you would admit that Vi and Emacs are better than pen and paper, or punch cards, right? So does it not follow that Vi and Emacs could be improved upon? Or are they the pinnacle of inputting instructions into a computer? If they are which one is better? Why? How do we measure that?
Well, even better programmers like Alan Kay designed and used complete image-based GUI environments like Smalltalk.
And Rich Hickey used IntelliJ to write Clojure.
>Great programmers use vim or emacs, because the imagery in their mind is far more powerful than anything an IDE could display.
How do you measure this? Is this Computer SCIENCE or merely metaphysical beliefs?
These kind of generalizations aren't helpful, simply because they aren't true. Are vi and emacs still going to be in wide use 500 years from now? Likely not. Then it stands to reason that there might be a way to improve upon them.
I use an IDE because I don't have to sink an inordinate amount of time into customizing my environment, since that activity doesn't deliver any value to the folks that pay me. I'd prefer to let a really smart team of engineers set those tools up for me. I do not, however, assume that anybody that doesn't use an IDE must be inferior.
The people who made Visual Studio have not made something that is smarter for everyone's work. Maybe your work just doesn't require any customization. That doesn't mean nobody should ever want customization.
I fail to see where I made that claim. IDEs are also customizable - I would argue that they are, in fact, much more customizable than either vim or emacs, simply because of the breadth of features one may customize. I was addressing the parent's claim that "great programmers use vim or emacs."
Maybe if IDEs were better at helping people see code the way a great programmer does, more people would be great programmers.
Actually, there is explicit interpretation in any IDE. It's just the level of abstraction that Mr. Victor has articulated. Generally speaking, the programming world is tethered to the notion that programming makes the most sense through logical linguistic feats, but under the surface of anything code-driven or lingual is an implicit type of semiotic.
Some key quotes from each for context:
>Bret Victor: "How do we get people to understand programming?"
>Josh Varty: "Problems getting to Learnable Programming [...] However, we need to stop and think deeply about how this system would handle typical code. How do we generalize this solution such that it helps programmers building compiler platforms like LLVM and Roslyn? How will it help programmers researching neural networks? How will it help programmers writing line of business applications? Databases? Apps? Financial backends? The vast majority of functions in today's codebases do not map nicely to a 2D output."
I see BV's essay as focused on teaching programming to people unfamiliar with it. For non-programmers, even simple syntax such as "x=0; x=x+1" looks strange, and beginners can't hold in their head what it does. So instead of trying to teach beginners the LOGO programming language to move turtles around on the screen, or the BASIC of "10 PRINT "HELLO" \n 20 GOTO 10", Bret shows a visualization where the code's 2D output updates in a realtime feedback loop. This can help novices make the leap from abstract syntax in a text editor to concrete changes in the output.
Josh Varty is going beyond the scope of newbies learning programming concepts. He's trying to generalize it to working practitioners who already understand programming and make the "Learnable Programming" model work for any arbitrary code with any type of visualization (beyond 2D if necessary). "Live coding for everything" is an interesting concept to pursue, but I don't believe Bret's essay had this wide a scope.
I don't believe working programmers who have already mastered how programming syntax maps to changing machine state need visualizations for every line of source code. That said, JV's generalized scope is applicable for working programmers to verify code for correctness and sanity-check what they think the code is doing. But JV's ideas are not necessary for working programmers to understand programming.
In one case, JV uses this as example:
var result = DoTaxes(person);
tldr: BV and JV are tackling 2 different issues.
See how JV's ALIVE demo visualization serves a different cognitive function from BV's. Those little red annotations are helpful for professional programmers to verify behaviors but not for beginners to grok compsci: https://embed.gyazo.com/4dc7ac656863cbd02a8e213598f85a4f.gif
The live programming story is a bit more general: it is about merging editing and debugging into one fluid activity, allowing you to aim your code like a water hose at a problem and hit it easily because your feedback loop is continuous. See Hancock's dissertation for that story.
So I reached out to Bret on Twitter: https://twitter.com/ThisIsJoshVarty/status/63156215172802560... (I see now he's deleted his Tweet...)
In his response he told me to read the section "These are not training wheels" near the end of the blog.
Here he says:
>A frequent question about the sort of techniques presented here is, "How does this scale to real-world programming?" This is a reasonable question, but it's somewhat like asking how the internal combustion engine will benefit horses. The question assumes the wrong kind of change.
>Here is a more useful attitude: Programming has to work like this. Programmers must be able to read the vocabulary, follow the flow, and see the state. Programmers have to create by reacting and create by abstracting. Assume that these are requirements. Given these requirements, how do we redesign programming?
I think the ideas he explores can apply more generally outside of creating environment for learning. And my understanding is that Bret believes they should apply to programming in general as well.
Yes I agree, but I think his 2D visualizations that you criticized were focused on "learning programming".
BV wasn't saying that extending it into "non-learning" scenarios for professional programmers must be a 2D feedback loop. Consider a new programming language with specialized syntax, or a library of functions, for moving a physical robot in 3D space. The feedback loop could be a live Bluetooth or wifi connection to an articulating robot arm on the programmer's desk. I wouldn't expect BV to criticize that and say, "no, the robot arm must be a 2D image on the screen".
Note BV's own qualifier about the context being learning: "These design principles were presented in the context of systems for learning,"
Update: I missed the part further down in the article where the Playgrounds were indeed mentioned. Whoops.
The ideas are quite old, much older than Bret Victor's work.
Absolutely. Your work in particular predates Bret's and I've enjoyed reading it and hope it catches on and inspires more folks at Microsoft and Microsoft Research! :)
I think what makes Bret's work a little different than the linked work is that Bret's work managed to escape academia and appeal to an audience that might not have otherwise been exposed to these ideas in the first place.
Interactive coding has been with us since the early '80s; the ideas just failed to go mainstream.
It is better to have that argument with Gilad Bracha, I guess.
But I guess they all fall short of what the originals really were.
Dr Racket's REPL is probably closer to the experience.
Also, very few IDEs enjoy the same edit-and-continue experience; maybe the commercial Common Lisp environments do.
For instance, if I have a hashtable returned from a function I called in the REPL, I can inspect it and modify its values and properties. Also, within the REPL itself, text is "smart" and copy-paste tracks references, so I can paste "unreadable objects" (i.e. #<Foo 0xCAFECAFE>) directly into REPL calls and have them work, because SLIME tracks the reference linked to a particular piece of text output.
But I can assure you, there is a difference between a REPL feature in an editor and a GUI using it system-wide, as on the Lisp Machine: in the depth of the features, the integration, and the feel of the user interface.
Think of the Documentation Examiner as a version of Emacs Info. Think of Concordia as a version of an Emacs buffer for editing documentation records. The Listener is a version of the SLIME listener. You can also catch a short glimpse of the graphics editor, IIRC.
Actually, we didn't. The vast majority of programmers don't have access even to those, and even those who do don't have them in any much more advanced form than those older environments offered.
Sometimes pragmatism is the enemy of progress. As stated earlier in the article, you don't get the combustion engine by thinking how we can make horse drawn carriages faster.
Thiel also recommends this approach in Zero to One.
I can't explain how I think, but it isn't visual, and doesn't seem to involve language much. Probably the closest is to say I think in math and geometry. That doesn't make any descriptive sense, but it is what I experience.
I fell deeply in love with math, and then programming. A few words and control structures, and you have Turing completeness! It's as beautiful as language. Minimizing expression is sometimes far more powerful than unconstrained expression. I can hold these structures in my head, and so can most programmers that I know. Pictures are a pale, weak thing in comparison. "The Illustrated Guide to Kant's Critique of Pure Reason" has never been published (SFAIK), and for good reason. The simplest rules of grammar allow us to generate and express extremely complicated and nuanced ideas. Ya, sure, we could make a nice chart of synthetic/analytic and a priori/a posteriori, and I think Kant did that, but beyond that what do pictures get you? I bet there are visual thinkers reading this that have rebuttals or examples, and that is great. Einstein was a great admirer of Kant, and was a visual thinker, so I imagine him raising objections! But I think in the end visualizations may have illustrative power, but rarely investigative power. Einstein put the lie to that with his work on Relativity, but he was an extraordinarily unique thinker (he did his work visually, and then struggled to get the math to prove his ideas).
Anyway, my somewhat inarticulate argument is that programming languages was the great invention. Anything that is Turing complete lets us express anything computable.
I think if you can find ways to complement that it will be a great contribution to knowledge. But I don't think you can replace or improve upon a Turing complete language (unsupported assertion requires citation here!). It would be great to be proven wrong.
I use computer languages to do math, computer vision, and AI-type stuff. Others use them for different things. It all works. There is no universal visual paradigm to replace it. Engineering is optimization in multiple dimensions; visualization limits us to 2 or, imperfectly, 3. So you can sort of slice out representations of this large, multidimensional space, but you are now working like the blind men with the elephant: you'll never get the complete 'picture', you are just sort of poking at it with a stick. Whereas with a couple of equations I can describe the entire space AND have powerful tools to explore it, describe it, and determine its properties. I think back to one of Bret's videos where he uses a live environment to compute the trajectory for a character in a video game. In math that is known as the 'shooting method'. It kind of works, for some problems. There is also a universe of problems for which it doesn't work. How can you even tell whether it works visually? The language of math gives us that tool.
I love pictures, and produce charts all the time for my math code. But they do not replace the math, they illustrate it. I do my work in math, and in state, and in sw architecture, and sometimes use visual tools to help check the work. There is no language of visualizations, and without one you will either be illustrating the work of math and computer languages (which is not a bad thing, I'll happily use the tools when appropriate) or severely limiting what we can do. I don't work or think in 2D or 3D space and cannot be limited to such a restricted view (pun intended). I work in small spaces usually (R^18 or so) and visualization is a non-starter except as a great way to learn some of the concepts. I know plenty of you work in far larger spaces.
tl;dr: the amazing advance was Turing complete languages; visualizations are not Turing complete; languages are powerful, visualizations do not have a language and hence aren't analyzable unless your name is Einstein.
I came up when things like 'structured analysis' and 'Yourdon diagrams' were a thing. I was repeatedly told that if I wasn't doing this I was "hacking" in the pejorative sense.
These diagrams were the worst hack I've ever seen. There is no language, there is no verification; you can literally draw anything. There were CASE tools that attempted to balance all your arrows so that all ins had an out, and so on, but it was just a disaster. Hack, hack, hack.
In contrast, in code I could quickly mock up an API design. It was concrete, it was testable, it was understandable, and it was a 1-to-1 match to what the eventual code would be. It was wonderfully, powerfully expressive. It wasn't limited to 2D, and I could express complicated relationships without someone arguing "these lines cross, move this over there to improve the layout", and other nonsense that had nothing to do with solving the problem.
It was not a language, it didn't have a grammar, and it was untestable. It was extraordinarily limited. You couldn't show that this module is used by 10 different modules in different situations. You could express impossible things. You had no way to analyze it for correctness. Sure, there were CASE tools that put in things like state diagrams and simulation and such, but it was all just terrible. It was either impossibly constrained, or impossibly free-form.
In contrast, my stubbed APIs were an exact representation of my ideas. If I wanted to test a hypothesis, I'd just implement part of the API, stub out the parts that weren't important, and have a running proof of my ideas. I was doing a lot of concurrent stuff then, and this was important. Visual depictions were incredibly cumbersome, untestable, and just terrible, terrible hacks.
I went through more than one project where we spent a tremendous amount of time generating these things, they collapsed under their own weight (you just can't reason well about these things once relationships go past 2D), they'd all get discarded, and then the real design work would begin, in code.
I argue, without proof, that without a language visual types of design will always have these problems. I also argue that it is not incumbent on me to provide that proof. The power of Turing complete languages and math has been proven. A viable alternative needs to prove not only that it is equal to the existing approach, but is better in some important way.
Related, you might enjoy http://iconicmath.com/ ;-)
Does anyone know the difference?
Enterprise is if you're purchasing multiple licenses (i.e. licenses for employees).
Individual allows you to use the tool commercially and you (the individual) owns the license
Enterprise allows you to use the tool commercially and your employer (the enterprise) owns the license