- Large programs / graphs? I can browse maps and photos of the entire planet from my web browser. ZUIs are well understood now. Also, computers got really big and fast. My graphics card has 8 GB (!) of RAM. Find a VPL that was designed for the era where graphics are cheap.
- Automatic layout is hard? Yes, an optimal solution to graph layout is NP-complete, but so is register allocation, and my compiler still works (and that isn't even its bottleneck). There's plenty of cheap approximations that are 99% as good.
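As a rough illustration of how cheap a "good enough" layout can be, here's a minimal force-directed sketch in Python (a hypothetical toy, not any particular VPL's algorithm): springs pull connected nodes together, repulsion pushes all pairs apart, and a few hundred iterations give a readable drawing of a small graph in milliseconds.

```python
import math
import random

def layout(nodes, edges, iters=200, k=1.0, seed=42):
    """Crude Fruchterman-Reingold-style force-directed layout."""
    rng = random.Random(seed)
    pos = {n: [rng.random(), rng.random()] for n in nodes}
    for _ in range(iters):
        force = {n: [0.0, 0.0] for n in nodes}
        # Pairwise repulsion keeps unrelated nodes apart.
        for a in nodes:
            for b in nodes:
                if a == b:
                    continue
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                force[a][0] += f * dx / d
                force[a][1] += f * dy / d
        # Spring attraction pulls connected nodes together.
        for a, b in edges:
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            force[a][0] -= f * dx / d
            force[a][1] -= f * dy / d
            force[b][0] += f * dx / d
            force[b][1] += f * dy / d
        # Move each node a small, capped step along its net force.
        for n in nodes:
            fx, fy = force[n]
            m = math.hypot(fx, fy) or 1e-9
            step = min(m, 0.05)
            pos[n][0] += fx / m * step
            pos[n][1] += fy / m * step
    return pos

pos = layout(["a", "b", "c", "d"],
             [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")])
```

Not optimal, but for the graph sizes a human can read anyway, approximations like this are plenty.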
- No formal spec? They sound desperate to come up with excuses why something can't work. How many (textual) programming languages had a formal spec before they got popular? I tried to look up when Ruby got a formal BNF grammar, for comparison, and it seems the answer in 2020 is still "nope, whatever parse.y does (13,000+ lines of C), that's Ruby".
- Difficulty to create editors? This is a legitimate concern, as it's true "there are no general purpose Visual Language editors", but that's an economic issue, not a technical one. Plain text is one of the very few formats where we have general-purpose editors. In most areas of computing, profitability has taken over, and companies refuse to cooperate. There's no one word processing or spreadsheet or database format, either.
- Lack of evidence? Again, they're holding VPLs to a higher standard. When a language like Ruby appeared, nobody asked for proof. Simply creating something that some programmers liked was sufficient.
- Poor representation? That's an issue with all kinds of programs! If I had a nickel for every horribly written program I've had to deal with in my life...
- Lack of portability? This is another symptom of the editor problem, above. Make a standard file format, and actually figure out how to get people to use it, and there's no problem.
These issues are from an article written in the 1980s. On the same day as its publication, Apple released the Macintosh IIci, with a 40 or 80 MB hard disk, 1 or 4 MB of RAM, and a 640x480 pixel display. (For all that computing power, the starting price was $6269 -- only slightly less than the $6615 starting price for a Honda Civic that year.) Clearly, visual programming was a non-starter in that day and age. So were 90% of the other things I do on my computer every day. That doesn't mean they're bad ideas.
If you'd asked me to write down a list of issues with C++ in 1989, I'd have produced a list that was similarly useless in 2020.
The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo. This is a demonstration of the pie menus, architectural editing tools, and Edith visual programming tools that I developed for The Sims with Will Wright at Maxis and Electronic Arts.
The Sims Steering Committee - June 4 1998. A demo of an early pre-release version of The Sims for The Sims Steering Committee at EA, developed June 4 1998.
Mod The Sims Wiki: SimAntics:
SimsTek Wiki: SimAntics
From Ken Forbus's game design course:
Under the hood of The Sims:
Programming Objects in The Sims:
The Watson language just doesn't provide any other way to do anything. It's a common problem amongst visual programming languages as mentioned in the article.
C32: CMU's Clever and Compelling Contribution to Computer Science in CommonLisp which is Customizable and Characterized by a Complete Coverage of Code and Contains a Cornucopia of Creative Constructs, because it Can Create Complex, Correct Constraints that are Constructed Clearly and Concretely, and Communicated using Columns of Cells, that are Constantly Calculated so they Change Continuously, and Cancel Confusion
Spreadsheet-like system that allows constraints on objects to be specified by demonstration. Intelligent cut and paste. Implemented using Garnet. 1991
Brad A. Myers. "Graphical Techniques in a Spreadsheet for Specifying User Interfaces," Proceedings SIGCHI'91: Human Factors in Computing Systems. New Orleans, LA. April 28-May 2, 1991. pp. 243-249.
C32 Spreadsheet 1991:
If you enjoyed the full audacious name of C32, then you may also enjoy some of the other acronym names of projects by Brad and his students:
(My meager contribution was "GLASS: Graphical Layer And Server Simplifier".)
Why can't we get people from major tech companies together and have them develop a good on-disk format which supports the features we all need, and then everybody can go back and write their own implementations? Like we do with Unicode, or IP. And like we should do for many other features.
For all the talk about the importance of separation of interface and implementation, operating system designers really suck at it.
How well an operating system maintains a separation of concerns between interface and implementation really has very little (unfortunately) to do with business incentives. Even if the Linux kernel had perfect separation of concerns, there's no way to force Microsoft or Apple to support it in their products.
Not to mention the problem with standards: https://xkcd.com/927/
- Here's a standard that kind of works and everybody is using it.
- Hey, we invented this cool new thing that isn't part of that standard but people seem to like it! (XHR, Canvas, ...)
- OK everybody let's put that into the standard as well so we can all agree on how it works. Thanks!
There was no way to "force Microsoft or Apple" to make a web browser, either, and yet they both did, and made it part of their operating systems.
And I think that for sharing a filesystem, "the cloud" is nowadays the best option for most people.
If anyone finds the time to update these claims, please consider our vvvv in your research:
- A visual programming environment for the .NET ecosystem
- Compiles to C# using Roslyn
- Can use code from any .NET library as visual code blocks
- Its language VL augments dataflow with features from OOP and functional programming, supports generics and easy multithreading
- Useful since 2002: https://vimeo.com/371511910
Free for non-commercial use without restrictions (Windows only): http://visualprogramming.net
vvvv definitely doesn't solve all known issues but we hope it raises the bar for visual programming and such discussions.
That said, this screenshot from one of your tutorial videos highlights the type of thing I think people are worried about: https://i.xkqr.org/screenshot_2020-04-27T09:47:46.png
I'm not saying it's bad – it might just be a question of habit whether or not graph edges are better than named identifiers in text, but it is hard to build away, even with best in class tooling.
If there was a visual programming language with anywhere near the popularity of Ruby, I'd be willing to consider that maybe the idea has some merit.
EDIT: Media Molecule's Dreams has a lot of users using its visual language to implement game logic! It sold about 8.5 million copies. Obviously not all people who bought it are using its visual language, but I imagine a very large percentage have at least tinkered a bit. I doubt Dreams would be as popular if it required textual programming.
I could turn your argument around: If Ruby were anywhere near as popular, widely used, and successful as Excel, I'd be willing to consider that maybe the idea that Ruby is a viable programming language has some merit.
But I won't, because whether or not something is a visual programming language isn't up to a popularity contest.
Can you come up with a plausible definition of visual programming languages that excludes Excel, without being so hopelessly contrived and gerrymandered that it also arbitrarily excludes other visual programming languages?
How is Excel, as widely used, a programming language? Excel sheets in their widely used form are not instructions or behaviour; you can't run them, only manually edit them. Perhaps there is a way to turn Excel sheets into programs, but I highly doubt such a way of using Excel would be more popular than Ruby.
> Can you come up with a plausible definition of visual programming languages that excludes Excel, without being so hopelessly contrived and gerrymandered that it also arbitrarily excludes other visual programming languages?
I'd say a visual programming language is a programming language for which the primary way of making changes to the program is visual. Even if we were to consider Excel a programming language, the primary way in which people make changes to Excel sheets - especially advanced users who are doing more programming-like things - is by editing textual formulae, IME.
Microsoft Excel spreadsheets certainly do contain instructions and behavior. That's the whole point of spreadsheets, and what distinguishes them from plain CSV files! They continuously run every time you make any change to them. And Excel is several orders of magnitude more popular and successful than Ruby: there is no comparison.
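To make the "a sheet runs on every change" point concrete, here's a toy sketch in Python (an illustration of the dataflow idea, not Excel's actual implementation; for simplicity, dependent cells here are recomputed lazily when read):

```python
class Sheet:
    """Cells hold plain values or formulas (callables over the sheet)."""

    def __init__(self):
        self.cells = {}

    def set(self, name, value):
        # Every edit changes what the dependent formulas will produce.
        self.cells[name] = value

    def get(self, name):
        v = self.cells[name]
        # Formulas are re-evaluated against the current state of the sheet.
        return v(self) if callable(v) else v

s = Sheet()
s.set("A1", 2)
s.set("A2", 3)
s.set("A3", lambda sh: sh.get("A1") + sh.get("A2"))  # like =A1+A2
print(s.get("A3"))  # 5
s.set("A1", 10)     # edit one cell...
print(s.get("A3"))  # ...and the dependent result is now 13
```

The edit-then-recompute loop is the "program running" that distinguishes a sheet from static data.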
Excel even supports "Programming by Demonstration" through its macro recorder. Programming by Demonstration happens to be one of Brad Myers's favorite areas of research!
Brad wrote the 1989 paper about visual programming that we're discussing, and Visual Programming, Programming by Demonstration, and Programming by Example are closely interrelated topics that he has done a lot of work with and written a lot about.
I think it's important to consider some of his and other people's more recent work, before extrapolating on the technological limitations that he wrote about in 1989, without taking into account 31 years of progress, research, and development.
"Myers is a leading researcher in the field of programming by demonstration and created the Garnet and Amulet toolkits."
"In computer science, programming by demonstration (PbD) is an end-user development technique for teaching a computer or a robot new behaviors by demonstrating the task to transfer directly instead of programming it through machine commands."
Watch What I Do: Programming by Demonstration. Edited by Allen Cypher, Daniel Conrad Halbert, David Kurlander, Henry Lieberman, David Maulsby, Brad A. Myers, and Alan Turransky.
Watch What I Do: Foreword by Alan Kay
>I don't know who first made the parallel between programming a computer and using a tool, but it was certainly implicit in Jack Licklider's thoughts about "man-machine symbiosis" as he set up the ARPA IPTO research projects in the early sixties. In 1962, Ivan Sutherland's Sketchpad became the exemplar to this day for what interactive computing should be like--including having the end-user be able to reshape the tool. [...]
Microsoft Excel Macro Programming History
"History: From its first version Excel supported end-user programming of macros (automation of repetitive tasks) and user-defined functions (extension of Excel's built-in function library). In early versions of Excel, these programs were written in a macro language whose statements had formula syntax and resided in the cells of special-purpose macro sheets (stored with file extension .XLM in Windows.) XLM was the default macro language for Excel through Excel 4.0. Beginning with version 5.0 Excel recorded macros in VBA by default but with version 5.0 XLM recording was still allowed as an option. After version 5.0 that option was discontinued. All versions of Excel, including Excel 2010 are capable of running an XLM macro, though Microsoft discourages their use."
>I'd say a visual programming language is a programming language for which the primary way of making changes to the program is visual.
Your arbitrarily gerrymandered definition turns C++ into a visual programming language, which it clearly is not.
So how is making changes to a C++ program in VI or Emacs not visual, then? People use visual editors to make changes to textual C++ programs all the time.
FYI, the "V" in "VI" STANDS FOR "Visual", because it is the "visual mode" of the line editor called ex.
But the actual structure and syntax of a C++ program that you edit in VI is simply a one-dimensional stream of characters, not a two-dimensional grid of interconnected objects, values, graphical attributes, and formulas, with relative and absolute two-dimensional references, like a spreadsheet.
>The original code for vi was written by Bill Joy in 1976, as the visual mode for a line editor called ex that Joy had written with Chuck Haley. Bill Joy's ex 1.1 was released as part of the first Berkeley Software Distribution (BSD) Unix release in March 1978. It was not until version 2.0 of ex, released as part of Second BSD in May 1979 that the editor was installed under the name "vi" (which took users straight into ex's visual mode), and the name by which it is known today. Some current implementations of vi can trace their source code ancestry to Bill Joy; others are completely new, largely compatible reimplementations.
>The name "vi" is derived from the shortest unambiguous abbreviation for the ex command visual, which switches the ex line editor to visual mode. The name is pronounced /ˈviːˈaɪ/ (the English letters v and i).
Which is very much unlike what a program does.
> FYI, the "V" in "VI" STANDS FOR "Visual", because it is the "visual mode" of the line editor called ex. Your arbitrarily gerrymandered definition turns C++ into a visual programming language, which it clearly is not.
I appreciate your condescension, but I'm well aware of the history of vi and in any case quite capable of using wikipedia myself.
How is taking the plain meanings of the words in the definition "arbitrarily gerrymandered"? You tell me what visual means to you and how C++ is not a visual programming language, if that's somehow clear in a way that does not apply to Excel.
>Which is very much unlike what a program does.
You're saying "Programs that run continuously every time you make any change are very much unlike what a program does"? That doesn't make any sense to me at all; can you please try to rephrase it?
Speaking of programs that run continuously, have you ever seen Bret Victor's talks "The Future of Programming" and "Inventing on Principle", or heard of Doug Engelbart's work?
The Future of Programming
Inventing on Principle
"I'm totally confident that in 40 years we won't be writing code in text files. We've been shown the way [by Doug Engelbart NLS, Grail, Smalltalk, and Plato]." -Bret Victor
Do you still maintain that "Excel sheets in their widely used form are not instructions or behaviour", despite the examples and citation I gave you? If so, I'm pretty sure we're not talking about the same Microsoft Excel, or even using the same Wikipedia.
Your definition is arbitrarily gerrymandered because you're trying to drag the editor into the definition of the language, while I'm talking about the representation and structure of the language itself, which defines the language, not the tools you use to edit it, which don't define the language.
I'll repeat what I already wrote, defining how you can distinguish a non-visual text programming language like C++ from a visual programming language like a spreadsheet or Max/MSP by the number of dimensions and structure of its syntax:
>But the actual structure and syntax of a C++ program that you edit in VI is simply a one-dimensional stream of characters, not a two-dimensional grid of interconnected objects, values, graphical attributes, and formulas, with relative and absolute two-dimensional references, like a spreadsheet.
Text programming languages are one-dimensional streams of characters.
Visual programming languages are two-dimensional and graph structured instead of sequential (or possibly 3d, but that makes them much harder to use and visualize).
The fact that you can serialize the graph representation of a visual programming language into a one-dimensional array of bytes to save it to a file does not make it a text programming language.
The fact that you can edit the one-dimensional stream of characters that represents a textual programming language in a visual editor does not make it a visual programming language.
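A minimal sketch of that distinction, using a hypothetical JSON on-disk format: the serialized bytes are one-dimensional, but the program's meaning is carried by the recovered graph structure, not by the order of characters.

```python
import json

# A tiny dataflow program: two constants feeding an "add" node.
graph = {
    "nodes": {"n1": "const:2", "n2": "const:3", "n3": "add"},
    "edges": [["n1", "n3"], ["n2", "n3"]],
}

blob = json.dumps(graph)      # a 1-D stream of characters on disk...
restored = json.loads(blob)   # ...but what comes back is still a graph

def evaluate(g, node):
    # Meaning is driven by incoming edges, not by textual position.
    inputs = [evaluate(g, src) for src, dst in g["edges"] if dst == node]
    op = g["nodes"][node]
    if op.startswith("const:"):
        return int(op.split(":", 1)[1])
    if op == "add":
        return sum(inputs)
    raise ValueError(op)

print(evaluate(restored, "n3"))  # 5
```

Reordering the bytes of `blob` arbitrarily would break the JSON, but reordering the node entries within it would change nothing: the syntax is the graph.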
Microsoft Visual Studio doesn't magically transform C++ into a visual programming language.
PSIBER is an interactive visual user interface to a graphical PostScript programming environment that I wrote years after the textual PostScript language was designed at Adobe and defined in the Red Book. It didn't magically, retroactively transform PostScript into a visual language; it just implemented a visual graphical user interface to the textual PostScript programming language, much like Visual Studio implements a visual interface to C++, which remains a one-dimensional textual language. And the fact that PostScript is a graphical language that can draw on the screen or on paper doesn't necessarily make it a visual programming language.
It's all about the representation and syntax of the language itself, not what you use it for, or how you edit it.
Do you have a better definition, that doesn't misclassify C++ or PostScript or Excel or Max/MSP?
Running continuously every time you make any change is very much unlike what a program does. Programming is characteristically about controlling the sequencing of instructions/behaviour, and someone editing a spreadsheet in the conventional (non-macro) way is not doing that.
> Do you still maintain that "Excel sheets in their widely used form are not instructions or behaviour", despite the examples and citation I gave you? If so, I'm pretty sure we're not talking about the same Microsoft Excel, or even using the same Wikipedia.
This is thoroughly dishonest of you. You edited those points and examples into your comment, there was no mention of macros or "programming by demonstration" at the point when I hit reply.
To respond to those added arguments now: I suspect those features are substantially less popular than Ruby. Your own source states that Microsoft themselves discourage the use of the things you're talking about. Excel is popular and it may be possible to write programs in it, but writing programs in it is not popular and the popular uses of Excel are not programs. Magic: The Gathering is extremely popular and famously Turing-complete, but it would be a mistake to see that as evidence for the viability of a card-based programming paradigm.
> Your definition is arbitrarily gerrymandered because you're trying to drag the editor into the definition of the language, while I'm talking about the representation and structure of the language itself, which defines the language, not the tools you use to edit it, which don't define the language.
Anything "visual" is necessarily going to be about how the human interacts with the language, because vision is something that humans have and computers don't (unless you're talking about a language for implementing computer vision or something).
> I'll repeat what I already wrote, defining how you can distinguish a non-visual text programming language like C++ from a visual programming language like a spreadsheet or Max/MSP by the number of dimensions and structure of its syntax:
But you can't objectively define whether a given syntactic construct is higher-dimensional or not. Plenty of languages have constructs that describe two- or more-dimensional spaces - e.g. object inheritance graphs, effect systems. Whether we consider these languages to be visual or not always comes down to how programmers typically interact with them.
> PSIBER is an interactive visual user interface to a graphical PostScript programming environment that I wrote years after the textual PostScript language was designed at Adobe and defined in the Red Book, but it didn't magically retroactively transform PostScript into a visual language
There's nothing magical about new tools changing what kind of language a given language is. Lisp was a theoretical language for reasoning about computation until someone implemented an interpreter for it and turned it into a programming language.
Lisp was designed and developed as a real programming language. That it was a theoretical language first is wrong.
For example, consider all the things that have a physical order and position in a file even though that order is actually irrelevant. Why can't methods (for instance) get displayed as convenient, without worrying about where they are in the file?
Limiting programs to fixed-width, single-font text also seems like a historical artifact. (The Xerox Alto allowed full WYSIWYG text for program code.) Having to draw diagrams with ASCII graphics just seems wrong.
Admittedly, these examples are somewhat trivial, but my point is that program structure is probably a local maximum, not the best possible solution.
You can see an example at 15:00 of this video: some snippets of code, and then a demonstration of the System Browser.
I've mentioned this a few times on past articles on HN, but I think the issue is that we're marrying two different concepts with text-based programming languages, namely presentation and representation. Consider, in an idealized context, HTML and CSS. One is the representation; the other is the presentation, or more accurately, how to present the representation. A user can change how the HTML is presented to them without changing the HTML directly. I think something like this would be useful for programming source code.
So as a thought experiment for this in terms of programming, the source code would be some backing representation that you edit which can be rendered in any way a user desires, including back to regular text as we have now. Tooling (compilers/interpreters, version control, editors, etc) would deal primarily with the representation rather than the presentation.
I think this would also solve a lot of problems that tooling ends up dealing with. Consider version control not having to deal with formatting changes because that's just an artifact of how a programmer chooses to render the representation rather than the representation itself. Autoformatters (gofmt, clang-format, etc) would be less about putting source code in a canonical format for the sake of tooling and more about rendering source code as text in a particular chosen way for individual programmers. Similar for tooling such as IDE code folding or refactoring, document generation, etc.
This is actually something I have been pondering for a decade at this point. I have no idea how a system such as this could be implemented such that it is general and sound enough to be usable. But I still think it would be a neat way to program.
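As a small proof of concept of the representation/presentation split, Python's stdlib `ast` module already behaves this way (a narrow sketch, not the general system imagined above): differently formatted sources parse to the same representation, and a renderer produces one possible presentation from it.

```python
import ast

# Two differently formatted presentations of the same program...
a = ast.parse("def area( w ,h ):\n    return w*h")
b = ast.parse("def area(w, h):\n    return (w * h)")

# ...share one backing representation: whitespace, extra parentheses,
# and other formatting choices never reach the AST at all.
assert ast.dump(a) == ast.dump(b)

# A renderer (here ast.unparse, Python 3.9+) turns the representation
# back into one canonical textual presentation.
print(ast.unparse(a))
```

In a system built around this idea, version control would diff the representation, so the formatting-only change between the two sources above would simply not exist as a change.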
So the value ends up being in giving people who are unskilled or less skilled in programming a way to express "programmatic thinking" and algorithms.
I have taught dozens of kids Scratch, and that's a great application that makes programming accessible to "more" kids.
At work, my team abstracts complex feature creation logic into KNIME components that our analysts (who struggle with Python programming) can mix and match and run analysis.
This is something Unison is playing with in its codebase manager.
Get displayed as convenient what?
> Limiting programs to fixed-width, single-font text also seems like a historical artifact. (The Xerox Alto allowed full WYSIWYG text for program code.) Having to draw diagrams with ASCII graphics just seems wrong.
I can imagine benefits of better diagrams, for documentation, but how would more fonts in the code help?
More comment text would fit on a line with a proportional font.
Headings would be great for breaking up sections of code, and could be collapsible. Maybe linkable as well.
Things that are arrays of arrays or tuples, like enums, could be tables.
When you pass three functions to create an observer, there could be a table with columns "Parameter name, signature, implementation".
Async code like observables and a thread's run method could be in a different font than code that executes right when you interpret it.
Here's an interactive visual programming and debugging user interface to the PostScript programming language, implemented in the NeWS dialect of PostScript:
The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines — October 1989
Abstract: The PSIBER Space Deck is an interactive visual user interface to a graphical programming environment, the NeWS window system. It lets you display, manipulate, and navigate the data structures, programs, and processes living in the virtual memory space of NeWS. It is useful as a debugging tool, and as a hands on way to learn about programming in PostScript and NeWS.
The PSIBER Space Deck is a programming tool that lets you graphically display, manipulate, and navigate the many PostScript data structures, programs, and processes living in the virtual memory space of NeWS.
The Network extensible Window System (NeWS) is a multitasking object oriented PostScript programming environment. NeWS programs and data structures make up the window system kernel, the user interface toolkit, and even entire applications.
The PSIBER Space Deck is one such application, written entirely in PostScript, the result of an experiment in using a graphical programming environment to construct an interactive visual user interface to itself.
Instead of a box-and-lines diagram, I think the stuff that people really want is:
- Structured editing. Valid change operations are obvious (maybe listed in a menu). No invalid states.
- Cross-cutting connections are explicitly displayed (with lines or arrows or etc) instead of being implicit.
- UI does a better job of highlighting just the relevant information and hiding everything else.
- Overall more simplicity. As in, I think some people imagine that their thousand-lines-of-code project could be fully represented as a few boxes and lines with labels. There's some unrealistic expectations tied up in this area. But simplicity is still a good goal.
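The first bullet, structured editing with no invalid states, can be sketched as a toy hole-based tree editor (the operation names are entirely hypothetical): the program is a tree, holes mark unfinished spots, and the menu only ever offers edits that keep the tree well-formed.

```python
HOLE = "?"

def valid_ops(node):
    # The menu of legal edits for the selected node; there is no way
    # to reach an intermediate "syntax error" state.
    if node == HOLE:
        return ["insert-number", "insert-add"]
    return ["wrap-in-add", "delete-to-hole"]

def apply_op(node, op):
    # Each operation maps a well-formed tree to another well-formed tree.
    if op == "insert-number":
        return 1
    if op == "insert-add":
        return ["+", HOLE, HOLE]
    if op == "wrap-in-add":
        return ["+", node, HOLE]
    if op == "delete-to-hole":
        return HOLE
    raise ValueError(f"{op} is not a valid edit here")

tree = HOLE
tree = apply_op(tree, "insert-add")           # ['+', '?', '?']
tree[1] = apply_op(tree[1], "insert-number")  # fill the left hole
print(tree)                                   # ['+', 1, '?']
```

Note that nothing here requires boxes and lines: the structure guarantees come from the edit operations, not from the rendering.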
Those are all good goals, and there are different approaches to solving them. A lot of those asks can be solved by keeping the text document interface and adding enough cleverness on top of that (like EVE / LightTable). And there are other visual systems besides boxes-and-lines. People just need to think outside the box. (ha ha, sorry)
C++ is not a visual programming language, no matter how much you colorize and pretty-print and fold it, because it's represented as a one-dimensional stream of characters, one after the other.
Excel is a visual programming language (and one of the most widely used programming languages in the world of any kind, visual or not), because it's represented as a two-dimensional grid of numbers, expressions, strings, and relative and absolute two-dimensional references. It also (though not necessarily as part of the definition of a VPL) supports all kinds of graphical attributes: colored text and backgrounds, fonts, text styles, formatting, boxes and lines, freezing, grouping, and outlining rows and columns, embedded graphs and graphics, etc.
The hardest part of looking at an existing code base is "mentally mapping" the call stacks, and over time, at least for the way my brain operates, I tend to build a 2D map in my mind. It doesn't have concrete positions for blocks (functions), but there is a sense of structure in the brain that persists. I can come back to the code base and it is evoked automatically.
We need visual tools for examining codebases just like we have tools for examining database schemas. Perhaps it's not that simple though.
Ironically, the convoluted programs that people think this would be a solution for make for the most disgusting and hard-to-follow diagrams.
For visualization types other than dependency graphs or class hierarchies, it's tricky to do any sort of automatic code-to-diagram conversion, so I wouldn't expect there to be a lot of value in that. Even something simple like a state machine might be much simpler to see in diagram form, but it would take quite a bit of sophistication for a visualization program to look at 1000 lines of C and recognize that you're implementing a state machine.
The visual form is more abstract, so you need to start there and have the computer compile it down. Going the other way is always harder. It's 1000 times easier to start with Python code and automatically convert it into machine code, than it is to take machine code and automatically extract a readable Python program from it. That's not because Python is inherently bad for reading or writing. The less abstract form, by its nature, is full of implementation details that don't matter in the more abstract form.
There may be multiple ways to look at something - sequence graphs, data flows - and additionally you don't want to capture all the details in the visualization, you want to be able to cut-off the diagrams at choke points where they're not helpful (e.g. I don't care about package X in the code) which is what all visualization packages try to do. This is directly analogous to geographic maps.
Imagine if you viewed the map for a whole state/province & it showed the local road network in full detail. That map would be unusable & discarded quickly. Diagramming tools need to be able to take text & generate a visualization that you can zoom into/out of & lose/gain the appropriate level of detail, and be able to switch between different aspects (like you can with street view vs regular vs satellite vs topography). I think the reason we don't have equivalent tools for programming is three-fold:
1. There's a lot of popular languages. Maintaining support for parsing & mapping that to the visualization abstraction is going to be a lot of work. Probably manageable though.
2. As engineers we tend to think in terms of either solving the problem or not. No visualization is going to be as good as a hand-drawn diagram in the moment, nor is it probably good enough (at least in the first version) as a vehicle for making the modification.
3. There's a lot of type erasure/dynamism that's impossible to capture in code analysis (e.g. type-erased callbacks like `std::function`, `std::any`). That makes the visualization incorrect or causes it to fail prematurely.
But often I'm in a function and I'd like to know "who calls this", or "what does this call". So being able to get a graph limited to the current function (node) and a couple of levels out on either end could be very helpful. Or maybe being able to mark two functions and see the ways they are connected (if at all).
I guess there might be some tools that do this though, haven't looked.
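Such tools do exist, but a rough "who calls this / what does this call" query can even be sketched with Python's stdlib `ast` module (a toy that only handles plain top-level functions, not methods, imports, or dynamic dispatch):

```python
import ast
from collections import defaultdict

SRC = """
def a(): b(); c()
def b(): c()
def c(): pass
"""

# Build a caller -> callees map by walking each function body.
calls = defaultdict(set)
for fn in ast.parse(SRC).body:
    if isinstance(fn, ast.FunctionDef):
        for node in ast.walk(fn):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                calls[fn.name].add(node.func.id)

def callers_of(name):
    # The reverse query: which functions call `name`?
    return sorted(caller for caller, callees in calls.items()
                  if name in callees)

print(sorted(calls["a"]))  # ['b', 'c']
print(callers_of("c"))     # ['a', 'b']
```

Restricting the view to one node and a couple of hops, as suggested above, is then just a breadth-first walk over this map.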
I’m looking at ways to improve the analysis to get more detail from apps, but it’s tricky. As far as I know, nobody’s made an ontology of computer science or source code patterns yet.
Creating the graph is easy compared to how much you might want to do to interpret it. Then we've got data flow graphs to consider, and UI component graphs, dependency graphs, supported platform annotations, and ideally, telemetry from code execution in tests and production, and each can supplement or annotate control flow graphs and source code.
It’s a problem we’ll solve as a community but it’ll take a number of years before it’s general purpose enough that step one is load a program and step two is read and modify it... with everything you need neatly linked and annotated...
For more on my thoughts of “understanding code automatically,” see https://github.com/CoatiSoftware/Sourcetrail/issues/750#issu... as an example.
My preference would be just as books, programmers and IDEs build mental models from source code, that eventually we could generically go from a collection of source files to identifying languages, build commands, and app starting points to multiple layers of graphs: control flow, data flow, cross-project references, and eventually — help programmers like an IDE would, but smarter, more like documentation written by a human with annotated examples from test cases and live code, etc.
For instance, when you look at hex in a file, you see a bunch of numbers, but it would be nice if there were some inference engine and cloud database of commonly used tropes/functions that could infer concepts and ideas of what the programmer was thinking.
Currently with code, you get someone's thoughts that are then translated into a language, and then translated into machine code.
It would be nice not to lose information regarding what does what in human readable language, I understand for performance reasons why programming languages as they currently are do what they do.
But C and C++ were really designed when hardware and memory were expensive, so they are "to the metal" in their model.
I think reimplementing a basic machine (that is, doing actual hardware design) has to happen at the same time a compiler is made, working out the theoretical concepts behind both together.
The fragility of von Neumann machines and their code has always bugged me. The machine can only blindly do what the hardware is designed to do, and since hardware is all about speed, you want to minimize information. But for coding, it would make sense to have machines developed just for development, and "light code" for when you release it out into the wild.
Either way I think there is plenty of innovation left, it's just a lot of research at the hardware and compiler level so that you have base foundations and abstractions to make real innovations in understanding how programs behave.
One of the massive issues is not being able to understand cause and effect in programming which leads to bugs, because the brain can't make easy models of how code or algorithms will behave.
The graphs are overwhelming for anything beyond the smallest project, but I find them useful as an overview. In other words, you can see "what is there" but maybe not understand "how it works."
I also found that when good software engineering principles are applied, visual languages can look very clear. It’s just that most users of visual languages aren’t professional programmers and never learned software engineering principles like abstraction. Here’s the thing: textual programming languages are also impossible to understand and follow if they don’t follow software engineering practices, but that’s often ignored because we’re used to programming in them. Well-abstracted textual code is usually compared to badly abstracted visual code, and textual languages are declared the “winner” (even though it shouldn’t be a competition at all).
I agree we need visual tools, especially when trying to maintain a large codebase, but only the largest companies can sponsor development of an in-house tool for their codebase, and many of them won't. Maintaining code isn't sexy and doesn't make for new features to sell.
A related point this work makes is this association people make between “visual” and “flow diagram” coding. Not all “visual” coding is flow-based.
I don't pretend that Scratch's drag-and-drop block paradigm is a viable replacement for text-based programming, but I do think it demonstrates that there might be a text-like system we could create that would free us from keyboards, without reducing our source code to piles of spaghetti.
Also, is declarative programming typically mappable to a visual paradigm without severe compromise? Think of all the applications that produce non-executable documents; don't they factor into the visual programming discussion?
GoAnywhere MFT is drag-and-drop and available for commercial use. I enjoyed it. My programs were pretty and powerful, if not well architected.
Tasker programming is visual; it lets you program on a mobile phone with taps and not typing.
The biggest unmentioned one from a learning perspective, though, is that with node-based editors, while you can still make logic errors, it’s difficult to actually make syntax errors. Once you’re used to text, occasionally forgetting to match brackets, add semicolons, or use the right keywords in the right places might not bother you much, but as a beginner, all of these things can be quite frustrating and opaque, with often-cryptic error messages.
With visual node-graphs though, many of these things are just non-issues, which I think helps beginners gain confidence way faster without bouncing off and giving up in frustration. You can’t even connect pins of the wrong type most of the time, so you get some pretty well communicated type safety to boot.
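That pin-level type safety can be sketched in a few lines. `Pin` and `connect` here are illustrative names of mine, not any real editor's API:

```python
# Sketch of the "can't connect mismatched pins" rule from node editors:
# a wire is only allowed when the output type matches the input type.
from dataclasses import dataclass

@dataclass(frozen=True)
class Pin:
    node: str    # which node this pin belongs to
    name: str    # pin label shown in the editor
    dtype: type  # the pin's data type

def connect(out_pin: Pin, in_pin: Pin) -> bool:
    """Refuse the connection unless the types line up."""
    return out_pin.dtype is in_pin.dtype

length_out = Pin("StringLength", "result", int)
text_in = Pin("SetText", "value", str)
count_in = Pin("SetCount", "value", int)

print(connect(length_out, text_in))   # False: an int pin can't feed a str pin
print(connect(length_out, count_in))  # True
```

A real editor enforces this at drag time, so the type error is simply impossible to express rather than reported after the fact.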
I’d actually love to see an evolution of a fairly advanced visual programming tool like Blueprint into a more general-use language, although preferably with files still stored as plain text (so they at least play nicely with git etc.) rather than the binary black boxes that are currently chewing up my LFS...
I want to build it myself, but I have neither the time nor the expertise.
The whole thing was tree-based; basically you were gluing together a bunch of domain-specific objects. The result was something like a visual abstract syntax tree, but not quite. There were general-purpose nodes on the tree that handled conditions or arithmetic, and specialised ones that mapped data.
The people who grasped it well would have been equally happy writing code. Unfortunately, it was often sold as a tool that didn't require programmers; that wasn't the case. If you don't have the ability to decompose your requirements into their constituent parts, no tool is going to help.
There was often debate about whether it was programming or not; however, it was certainly Turing-complete. I managed to write a Tiny BASIC interpreter in it.
It was very productive in its narrow niche.
And yes to floating tables. Probably allowing users to also reference other sheets/tables semantically rather than just by row-column.
Also creating explicitly defined (visual) pipelines of how the data is being transformed so users can audit and understand complex "spreadsheets".
Auditing deeply linked calculations by following text formulas is too cumbersome for most users.
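Making that audit trail explicit amounts to extracting the hidden dependency graph into something you can walk. A toy sketch, assuming a made-up formula subset containing only cell references:

```python
# Sketch: turn implicit spreadsheet links into an explicit, auditable
# dependency trace. The formula syntax is a toy subset (cell refs only).
import re

sheet = {
    "A1": "10",
    "A2": "20",
    "B1": "=A1+A2",
    "C1": "=B1*2",
}

def refs(formula: str) -> list:
    """Cells referenced by a formula; literals reference nothing."""
    return re.findall(r"[A-Z]+[0-9]+", formula) if formula.startswith("=") else []

def audit(cell: str, depth: int = 0) -> None:
    """Print the chain of cells a value flows through, indented by depth."""
    print("  " * depth + f"{cell}: {sheet[cell]}")
    for r in refs(sheet[cell]):
        audit(r, depth + 1)

audit("C1")
```

The same traversal, rendered as boxes and arrows instead of indentation, is exactly the "visual pipeline" view being asked for.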
Put a bag over Java’s ugly face, that kind of thing.
I don’t want to look at IntelliJ’s “staircase of doom” formatting, I want to see one parameter per line when there is more than one parameter.
I don’t want to trace down data “COME FROM”s (local variables that are only used once and have to be tracked down from whence they came).
Give me an alternate layout which directly tracks between the text and the diagram, similar to a structure layout, but showing details within functions/methods as a tree, as well. Show me clearly how values/expressions funnel or pipeline into each other, but filter out some of the noise.
And auto-display fuglyCaps names as kebab-case, while we’re at it :-). Cuz, science.
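The kebab-case display is a purely cosmetic transform the editor could apply without ever touching the stored source. A minimal sketch:

```python
# Toy version of the "display fuglyCaps as kebab-case" idea: insert a
# hyphen before each interior capital, then lowercase everything.
import re

def to_kebab(name: str) -> str:
    return re.sub(r"(?<=[a-z0-9])([A-Z])", r"-\1", name).lower()

print(to_kebab("fuglyCapsName"))  # → fugly-caps-name
print(to_kebab("setText"))        # → set-text
```

Since it's a view-layer rename, the on-disk identifiers (and diffs, and grep) stay unchanged.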
The core idea is that the code is just a model (AST, graph...) and your editor projects it to whatever format or syntax you like.
Development then is coding in these representations of the model or building new smart abstractions and model transformations (DSLs) for going from your screen representation to lower-level building blocks and eventually compiling to e.g. something that runs on the JVM.
It quickly becomes just as complicated as it sounds.
The best implementation in my experience is that of JetBrains’ free tool MPS (Meta Programming System). They start with the model (your AST) rather than the code.
The learning curve is steep and it is difficult to adopt it in teams where people are more focused on shipping features than building exotic tooling to help them do it.
The inherent problem is the lack of primitives for building abstractions to "move the language toward the problem domain". These primitives can be first-class functions, objects, macros, etc. As this article points out, the visual programming environments I've seen also fail to provide any such mechanisms.
Having used my fair share of LabVIEW, I know such examples are real. A common mistake of the model is the sentiment that because it's visual, everybody can do it! The biggest WTF I experienced was when systems engineers wrote half-completed "reference implementations" in LabVIEW, which coders then had to translate to Java for production to sort out all the edge cases, because these visual tools (and the people who use them) are very bad at expressing edge cases. They might as well have given us graphs drawn on paper, because the references were so far from reality you could never run them.
Together with that, when the only tool you have is a hammer, everything looks like a nail. Take https://blueprintsfromhell.tumblr.com/image/162487395171 as an example; it's actually one of the simpler graphs from that page. But if you take a look at it, you see it's just doing standard function calls in a sequence, like truncate(), toText(), setText(). What's the point, then, of making it visual?! For simple signal control loops with only mathematical operations the LabVIEW model works really well, but when you start doing things like subString, groupBy, or indexOf and don't realize you should change domain, that's when you get the worst spaghetti.
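For comparison, here is that kind of node chain as plain text. The helper functions are stand-ins modeled on the node names in the screenshot, not a real engine API:

```python
# Three nodes and two wires from a "blueprint from hell", as one line of
# text. Each function stands in for one node in the graph.
def truncate(x: float) -> int:
    return int(x)  # drop the fractional part

def to_text(n: int) -> str:
    return str(n)

widget_text = ""
def set_text(s: str) -> None:
    global widget_text
    widget_text = s

set_text(to_text(truncate(3.7)))  # the whole graph, in one expression
print(widget_text)  # → 3
```

When the graph is a straight line, the boxes and wires add visual area without adding information; that's the commenter's point.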
The benefit of a visual representation is that there are different ways to visualize things and you can choose which one fits the problem best. You have sequence, dependency, state, class, call-graphs, signals, lexical parsers, tables, charts, etc. Most visual environments only implement one of these and very soon you are using the wrong tool for the job.
I would very strongly argue that Media Molecules’ Dreams would not be as popular as it is if it used textual programming to implement game logic.
I recently wrote software that allows you to transform data step-by-step visually (https://www.easydatatransform.com). I think the visual, flow-based approach here is a real win, especially for non-programmers.
However I wrote the software in a primarily text-based programming environment (C++ and QtCreator). I think it would have been a nightmare to do in a purely visual programming environment.
Horses for courses.
Maybe we just haven't figured out good visualization types yet for some kinds of programs we write. Interestingly, the opposite is not true. I have yet to see a case where we had a half-decent visualization, and people still said "This would be easier to understand if it were 1000 lines of C++".
People have been using visual representations of programs for ~70 years. I'm not sure how much room there is for major improvements to visualization.
There turned out to be lots of room for improvements at the top! We could put almost all of the old housekeeping and busywork into our tools, and use a much simpler language for describing the unique parts of our program that we actually care about.
I see no evidence that our current style of object-oriented programming is going to turn out to be the pinnacle of computing abstraction. The median language is (slowly but steadily) climbing the abstraction ladder, and each step brings it closer to an abstract visualization.
This is bullshit. A lot of musicians and visual artists are super successful with tools like Max/MSP, PureData, vvvv, or the ton of graph-based shader editors out there. Max/MSP is taught to music students in a lot of places, e.g. in conservatories. I promise that by the time you get them to understand for() and if(), they've already made some music in Max, and the better ones have already started toying with sensors and controllers.
Disclaimer: I worked 3.something years in a company where the owner routinely shipped software entirely built in Max.
People without any programming abilities are able to use Blender's node editor to make super nice visuals: https://www.youtube.com/watch?v=JhLVzcCl1ug and at the other end of the spectrum, code that runs in airplanes is routinely designed using SCADE (https://www.ansys.com/products/embedded-software/ansys-scade... - see e.g. these slides which talk about its use in the A380: http://www.artist-embedded.org/docs/Events/2008/Autrans/SLID...) and other tools based on the synchronous reactive model.
There are benefits to visual coding such as being able to visually group or cluster related code, being able to use zoom in and zoom out to create 'region's of code visible from further out with large labels, and being able to follow the lines connecting your code and state visually.
But there are also big, big drawbacks. I don't need to mention the literal visual spaghetti code, or the more difficult search, find-and-replace, and regex operations: most text manipulation tools simply aren't available in visual coding. It's hard to say whether visual coding in game frameworks is a boon or a curse. For large teams that include non-technical folks, they do seem to be a net benefit.
Text can be grouped as well.
> being able to use zoom in and zoom out to create 'region's of code visible from further out with large labels
Can you elaborate on this?
The visual language is gravy compared to the standard library and the integrated programming environment.
I can also write programs in LabVIEW in a few days that would take weeks in another language. And I can write programs in LabVIEW in many months that would take weeks in another language.
The biggest marker of a bad language is that it doesn't matter how long you work in it: you just get less and less productive, and your program actually goes nowhere.
It's easier to reason about when you can see the branching, merging, and direction of flow.
This ambiguity clearly does not exist with a pipeline-like, single-source/single-sink "flow", but it does arise as soon as one has multiple juxtaposed 'flows' in a single diagram or section of a diagram. I'm not aware of visual programming methodologies that try to handle both semantics with equal clarity, and without introducing significant confusion between them.
All code, or at the very least functional code, can be directly mapped to a DFG. Should all functional programming be done visually?
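The claim itself is easy to make concrete: a pure expression really is a dataflow graph. The sketch below walks the AST of `(a + b) * (a - c)` and emits its edges; the node-labeling convention is my own:

```python
# Sketch: map a pure expression to its dataflow graph. Edges run from
# child (value producer) to parent (value consumer).
import ast
from itertools import count

def dfg_edges(expr: str) -> list:
    tree = ast.parse(expr, mode="eval")
    ids = count()
    edges = []

    def label(node) -> str:
        if isinstance(node, ast.Name):
            return node.id
        if isinstance(node, ast.Constant):
            return repr(node.value)
        return f"{type(node.op).__name__}_{next(ids)}"  # e.g. "Add_1"

    def walk(node) -> str:
        nid = label(node)
        for child in ast.iter_child_nodes(node):
            if isinstance(child, (ast.BinOp, ast.Name, ast.Constant)):
                edges.append((walk(child), nid))
        return nid

    walk(tree.body)
    return edges

for src_node, dst_node in dfg_edges("(a + b) * (a - c)"):
    print(src_node, "->", dst_node)
```

That the mapping is mechanical is precisely why the question is fair: the graph always exists, so the argument is about whether drawing it helps, not whether it can be drawn.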
By contrast, in music, for instance, patches were already the main means of interaction and creation even before computers existed; see e.g. music studio patchbays, modular synths, or even guitarists' pedalboards. So people in that field already have the mental model of a signal flow and bricks connected together. Both models can mix: a lot of people use small snippets of code in Max/MSP, for instance. In the software I develop, https://ossia.io, it's super common that people will rely on a JS script embedded in a timeline for one small part of their score. Etc.
The linked article for this thread would be much more meaningful if it mentioned a) specific visual programming languages found to be lacking and b) domains where the visual models map directly to the underlying processes being modeled. As you note, electronic music processes/devices like modular synths, recording studios, and electric guitar rigs are ripe for modeling with boxes/wires in node-based systems.
Then it's not functional programming. I'm asking specifically about functional programming.
That depends on what is being built, and who is building it.
Like most things, it isn't one size fits all.
And you're going to have to actually try it out to discover what works best.
Python and R code can be embedded within the graphs, which are all C++ under the hood.
I have thought of doing something like that in the browser with a WYSIWYG editor (document.designMode='on') and web components. The DOM is like the AST of the program and is fed into a runtime. Non-expression tags are basically just comments.
Not for regular programming, but for a type of scientific notebook interface for exploratory programming. Anyway, one of these days.
Personally my favourite VPL was Automator, which had the control flow primitives equivalent to the bash operators | and ;. This made layout and control flow trivial at the expense of expressiveness. However, as the goal of the thing was basically “terminal for the rest of us”, arguably those two operators are what most people end up using at the command line anyway.
Of course Automator failed because of a number of other issues (lack of abstraction and bad package management I think are at the top of the list). But I think tools like that have a good amount of potential.
My general feeling is that visual programming languages excel as domain-specific languages where the domain itself is quite visual.
I've seen two problems with visual programming languages which are quite severe, in my opinion.
1. Merging sucks. Merging visuals is not straightforward. I've seen companies do trunk-based development, before it was hip, just to get around it.
2. As with all high level languages, without an understanding about the overhead of different constructs, it is easy to create very inefficient solutions. On cloud systems it may be easy to throw more processing power at the problem. On embedded systems, less so.
Solved by the ability to pan and zoom. Node graph tools in professional visual effects environments can grow to >20,000 nodes in a document, especially given that nodes can be nested in groups or cross-included from other documents.
> Need for automatic layout. [...] For example, generating an optimal layout of graphs and trees is NP-Complete.
Doesn't need to be absolutely optimal to make the problem go away; making something that helps the user enough that they don't have to think about it is completely tractable.
Many professional node graph tools don't do any automatic layout at all and let the user handle it; which while less than perfect, doesn't stop the tools from being incredibly powerful.
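A cheap force-directed pass illustrates the point: nowhere near optimal, but the kind of approximation that makes the NP-completeness objection moot in practice. A minimal sketch (the constants and step counts are arbitrary choices of mine):

```python
# Minimal force-directed graph layout: all-pairs repulsion plus
# attraction along edges, with a clamped step for stability.
import math
import random

def layout(nodes, edges, steps=200, k=1.0):
    random.seed(0)  # deterministic starting positions
    pos = {n: [random.random(), random.random()] for n in nodes}
    for _ in range(steps):
        force = {n: [0.0, 0.0] for n in nodes}
        for a in nodes:                      # repulsion between all pairs
            for b in nodes:
                if a == b:
                    continue
                dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                force[a][0] += f * dx / d
                force[a][1] += f * dy / d
        for a, b in edges:                   # attraction along edges
            dx, dy = pos[b][0] - pos[a][0], pos[b][1] - pos[a][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            for p, s in ((a, 1), (b, -1)):
                force[p][0] += s * f * dx / d
                force[p][1] += s * f * dy / d
        for n in nodes:                      # small, clamped update step
            for i in (0, 1):
                pos[n][i] += max(-0.05, min(0.05, 0.01 * force[n][i]))
    return pos

pos = layout(["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("c", "d")])
print({n: [round(c, 2) for c in p] for n, p in pos.items()})
```

This is roughly the Fruchterman-Reingold idea; real tools layer incremental and hierarchical refinements on top, but even this naive loop untangles small graphs well enough that the user stops thinking about layout.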
> Lack of formal specification. Currently, there is no formal way to describe a Visual Language.
First, a formal specification is not necessary in order to build something useful.
Second, if you really need a formal specification, then write/invent one. There is nothing about visual systems that makes them less possible to specify than any other software system.
> Tremendous difficulty in building editors and environments. [...] These editors are hard to create [...] the language designer must create a system for display [...] which usually requires low-level graphics programming.
So... do that stuff? What about that work is more difficult than the other parts of writing a compiler; or any other piece of complex software? Maybe this was more of a big deal in 1989, but basically all software today is based on a visual UI, so the complaint that the need for a UI makes visual languages intractable is... pretty ridiculous now. None of this would be harder than, say, writing a simple video game.
> Lack of evidence of their worth. There are not many Visual Languages that would be generally agreed are "successful"
"Nothing good exists yet, so nothing good is possible" doesn't hold water for me.
> Metrics might include learning time, execution speed, retention, etc.
From my experience in visual effects, non-coders can pick up a node graph tool very quickly and get very creative/inventive with nodes, but the moment they have to fall back to writing a script, they struggle or don't even try. It seems abundantly clear to me that node graphs are far easier to learn than writing with a machine grammar, while being just as powerful.
I also did an image processing development project in a node graph tool that took about 4 days of experimentation to get to patentable IP. If I had to do the same project in Python or C++, it would have taken weeks or months, if I was even able to solve the problem at all without the fluidity and responsiveness of the node based editor. So even for "serious developers", a good node-based editor can multiply speed by a factor of 10.
> Poor representations. Many visual representations are simply not very good.
Can't argue there— but the solution is to make one that is good. :)
> Lack of Portability of Programs. [...] Graphical languages require special software to view and edit
...Like basically all other file formats in the universe. :)
It is not 1989 anymore, and it's time to fix this stuff.
On the other hand, there's no fundamental reason that visual editors shouldn't be able to load and save source code; it's just a different way of thinking about program structure and user interface. There are already a lot of tools out there that generate dependency graphs within or alongside an IDE; the main shortcoming I see is a lack of nesting. There are few instances in which I want to see the entire structure of a program all at once, any more than I want to read all the source code in a single giant text buffer.
One might equally ask if textual languages are so great, why are there so many of them? Couldn't everything be written in a nice capable high-level language like C? And of course it could, but lots of people don't like C's minimalism and lack of guardrails. It seems to me that most new general-purpose languages originate in a desire to have the computer do some of the tedious housekeeping (like casting variables) and frustration with awkward syntax.
I'm old, so when I was learning to write code it was just flat text files, and every build necessitated a round of swearing over forgotten semicolons, mismatched brackets, or incorrectly spelled variables. Syntax highlighting, code folding, autocompletion, and bracket matching, which do so much to make development in a modern IDE pleasant and productive, were once viewed as undesirable crutches that would lead to poor practices, lack of portability, and inaccurate second-guessing of developers' intentions by the IDE, subjecting hapless coders to the tyrannical will of the interface illuminati.
I suspect that behind the aversion to visual programming lies a fear of standardization and automation that will result in widespread developer redundancy as it becomes easier for domain experts to build out their tools without code specialists. It's somewhat parallel to the development of electronics: from hobby clubs where people helped each other with soldering irons, multimeters, and resistor identification charts, to today's electronics factories populated only by pick-and-place machines assembling components too small for adult hands to manipulate, at dizzying speeds.
The more automation and ML that finds its way into IDEs, the more standardized the underlying libraries are likely to become. To the extent that programmers are freed from syntax production and housekeeping duties, and processes can be read (from visual graphs/charts) with domain knowledge alone, the more likely it is that the most common functions become wholly standardized, in the same way that chips are. 555 timer ICs have been around since 1971, and the most popular derivatives are dual and quad versions. Nobody is writing new and improved 555 timers, and there's no real demand for them.
>Brad A. Myers is a Professor in the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University. He was chosen to receive the ACM SIGCHI Lifetime Achievement Award in Research in 2017, for outstanding fundamental and influential research contributions to the study of human-computer interaction. He is an IEEE Fellow, ACM Fellow, member of the CHI Academy, and winner of 15 Best Paper type awards and 5 Most Influential Paper Awards. He is the author or editor of over 500 publications, including the books "Creating User Interfaces by Demonstration" and "Languages for Developing User Interfaces," and he has been on the editorial board of six journals. He has been a consultant on user interface design and implementation to over 90 companies, and regularly teaches courses on user interface design and software. Myers received a PhD in computer science at the University of Toronto where he developed the Peridot user interface tool. He received the MS and BSc degrees from the Massachusetts Institute of Technology during which time he was a research intern at Xerox PARC. From 1980 until 1983, he worked at PERQ Systems Corporation. His research interests include user interfaces, programming environments, programming language design, end-user software engineering (EUSE), API usability, developer experience (DevX or DX), interaction techniques, programming by example, mobile computing, and visual programming. He belongs to ACM, SIGCHI, IEEE, and the IEEE Computer Society.
For example, he wrote this one year later in 1990:
Creating User Interfaces Using Programming by Example, Visual Programming, and Constraints. Brad A Myers. ACM Transactions on Programming Languages and Systems, Vol 12, No. 2, April 1990.
I really enjoyed this 1999 paper “A Taxonomy of Simulation Software: A work in progress” from Learning Technology Review by Kurt Schmucker at Apple. It covered many of my favorite visual programming languages.
And it's also worth reading the more modern and comprehensive "Gadget Background Survey" that Chaim Gingold did at HARC, which includes Alan Kay's favorites, Rockey’s Boots and Robot Odyssey, and Chaim's amazing SimCity Reverse Diagrams and lots of great stuff I’d never seen before:
I've also been greatly inspired by the systems described in the classic books “Visual Programming” by Nan C. Shu, and “Watch What I Do: Programming by Demonstration” edited by Allen Cypher.
Visual Programming. Nan C. Shu.
Watch What I Do: Programming by Demonstration. Allen Cypher, Daniel Conrad Halbert.
In short, textual programming is just a subset of visual. People think of the dichotomy as a duality of text vs. visual, it is not.
Text just has features people tend to like: a fixed, line-based structure; an isomorphism with English vocabulary... Who says other visual programs can't have these as well?
The issue with a lot of these "visual programs" is that they, too, represent only a small subset of visual programming, just as text does. What you are in fact seeing with most "visual programs" is programs represented as graphs.
The real dichotomy people are discussing is not text vs. visual but text vs. graph. Whenever I see something like LabVIEW, or anything labeled a "visual program", it's always a program represented as a series of nodes and lines, a.k.a. a graph.
Hopefully by illustrating the reality of what's going on, people can see beyond representing programs as just text or just graphs.
There are other ways to represent modules and processes.
Here's a great piece of exploratory research that imagines how things could be different, by an amazing visionary visual thinker and artist, Scott Kim:
"Viewpoint" is part of Scott Kim's PhD dissertation, which he did at Stanford in 1988 (with Donald Knuth as principal advisor). It was programmed in Cedar and ran on a color Dorado workstation at Xerox PARC.
>Demo and explanation of Viewpoint, a computer system that imagines how computers might be different had they been designed by visual thinkers instead of mathematicians. Caution: this is basic research, not a proposal for a practical piece of software. Part of my PhD Dissertation at Stanford University in 1988.
At 12:22, he starts with a blank screen, and performs a "visual boot", to use Viewpoint to build itself.
This is a visualraccoon Perspective on Scott Kim's very graphic and revolutionary exploration, Viewpoint: Toward a Computer for Visual Thinkers.
What would it be like to go back to visual first principles and take a fresh look at graphic user interfaces?
The Viewpoint Thesis is that a small number of pixel manipulation primitives can be defined such that if they are bound to keyboard and mouse actions it is then possible to build a simple text-graphic editor by drawing it, and that that editor can be used to draw-build itself.*
The Viewpoint Thesis & Editor is part of a larger project founded on the hypothesis that:
“Only by treating the screen itself as a first class citizen will we be able to build computers that are truly for visual thinkers.” Scott, 1987.
This project includes building visual programming languages for such thinkers.
Viewpoint: Toward a Computer for Visual Thinkers
When I started my PhD project in 1981, the IBM PC was brand new, and it would be years before the Mac and Microsoft Windows would appear. Nonetheless I was familiar with graphical user interfaces because of my internship at nearby Xerox PARC, the visionary research center that spawned, among other things, the laser printer and the bitmapped display. I was well-acquainted with Alan Kay's vision of a dynabook (he envisioned the notebook computer in 1975), and understood the power of graphic user interfaces and graphic tools like paint programs to bring the power of computers to artists and visual thinkers.
At the time I was part of the digital typography program within the Stanford computer science department, which built computer programs for digital typeface designers. I was also enthralled by the Visual Thinking course at Stanford, which taught engineers in the product design program how to think visually.
It struck me as odd, and deeply wrong, that we were building tools for visual artists in a programming language that was utterly symbolic, and lacking in visual sophistication. I yearned for a programming language that had the same visual clarity that graphic user interfaces had.
So I set about wondering what a visual programming language might look like. If computers had been invented by artists and visually oriented people, instead of by mathematicians and engineers, how might they write programs? It seemed to me an important question, but one that hardly bothered most computer scientists. I read about a few attempts to build visual programming languages, and decided there was something fundamental I needed to understand.
My journey took me deep into the foundations of computer science, where I asked fundamental questions like “what is programming” and “what is a user interaction” — questions that often get passed over in computer science (any definition of “programming” that starts with “a sequence of symbols that…” is not deep enough to encompass visual programming languages). I never did build a visual programming language, but I did figure out a fundamental idea (things get interesting when the user and the computer share the same mental model of data), and built a rudimentary visual editor that demonstrated my ideas.
I’m posting my dissertation and the accompanying video demo to re-open the conversation. What in my dissertation is interesting to you? What other work do you know of along these lines? Do you know of foundational research in interaction design and visual programming? And most importantly, what would be a good next step to push the work forward? (My research stalled for lack of a juicy application domain.) Please email me your thoughts. I'd love to hear them.
The possibilities are boundless, but you have to temper that boundlessness with the fact that our brains are biased and specialized to work in certain ways. Case in point: can you communicate your entire post in pictures? Is there any form of non-linear diagramming you can do on a 2D canvas that delivers your point better than text? If I give you the freedom of a 2D canvas, can you convey your point better than with the restricted form of text? No. In fact, text was the best way to communicate your point. Restriction and lack of freedom are often more effective tools than complete, boundless freedom.
It doesn't matter whether you're a visual thinker. When it comes to parsing and conveying complex information, our brains have a dedicated module for handling information that comes in as a linear, serial string of symbols. If you can talk, listen, read, or write, then you can be exceptional at writing programs this way. Your brain is biologically biased to operate this way.
That being said if I show you a picture of a dog, you will learn more about what that dog looks like than any amount of textual description. There is no denying that certain things are communicated better visually and other things are communicated serially.
It's hard to delineate axioms for exactly what is better communicated visually and what is better communicated textually. I can only talk in terms of examples: your ideas on visual programming are better communicated through text; what your car or face looks like is better communicated through pictures.
One thing that has worked throughout history is a combination of text and pictures. Powerpoint. A linear slide show where each slide delivers serial text and pictures in parallel to the human brain.
The future of visual programming, if it occurs at all, will ultimately be some combination of diagrams and restricted serial strings of symbols, following the same principles as PowerPoint. I don't believe it has anything to do with foundational computer science; rather, the language must be built around the biases and limitations of the human brain. We build biased textual languages because our brains are partly biased towards text.
I have an idea about what that would look like and how it would work in terms of maintaining programmer productivity. Also, I didn't see your dissertation (assuming your name is Don Hopkins) in your post.
Because if you actually watched the video, you would realize that Viewpoint is all about text. Nobody claimed it was "better than text", or that it was trying to get rid of text. You'd know that if you had watched the video. But you don't know that, so I can conclude that me simply writing text was not enough to get you to watch the video.
>I have an idea about what that would look like and how that would work in terms of maintaining programmer productivity.
That's very nice. And do you know how much that's worth? Nothing. Not only are ideas worthless without an implementation, but ideas that you haven't even bothered to write down, let alone implement, are worth even less, because they're a waste of everyone's time, not tempered by reality or experience or empirical testing or iterative design and development or stakeholder requirements or user feedback.
What point is there to telling me that you have an idea about something you haven't implemented, or even written down, or drawn a picture of? Or did you forget to provide a link to your own dissertation, a video, and a runnable demo?
It's much better for you to spend your precious time doing difficult, tedious things like writing down and implementing and testing your brilliant ideas yourself, instead of having so much fun talking about how great it would be if only you wrote it down for other people to read, paid somebody on Fiverr to draw a picture of your idea for you, or found some out-of-work but highly skilled programmer with nothing better to do, who was willing to pair up with a starry-eyed "idea guy" and work night and day for free, on your sincere verbal promise of equity and exposure, implementing your grandiose but unwritten ideas, in the hopes that you don't get angry at them for not precisely interpreting and efficiently engineering your brilliant vision correctly, or suddenly change your mind once you see what they actually implemented that did not perfectly live up to your idealistic expectations and nebulous handwaving.
Good luck with that! Let me know when it ships.
Viewpoint is all over the place. It's basically a 2D canvas with no order. Sure, it has text, but that text is used more like a random label at random coordinates. There's no linear process of instructions delivered or ingested. What you see is basically a reactive canvas with arbitrary coordinates.
The universe we live in has a linear, one-dimensional property that follows the arrow of time. Viewpoint fails to represent this concept symbolically: the process of time, the process of execution. However you choose to represent this concept, all programming languages must have some way of representing time as a spatial dimension. For text it is left-to-right, top-to-bottom reading order; for Viewpoint it does not exist.
What's more, all the symbols can be destroyed, as text is just an abstraction over a pixel drawing interface. The axioms (pixels) of this interface are too low level. But this is just a minor flaw.
The main flaw I see with Viewpoint is that it's not clear how computation can be modeled with these pixels. How do you represent logic? Or, and, nor, xor? Is Viewpoint Turing complete?
Time is an intrinsic part of logic; a programming language must be able to represent logic and time using only spatial dimensions.
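To make the "time as a spatial dimension" point concrete: in a dataflow-style visual language, execution order is usually recovered from the wiring rather than from reading order, via a topological sort of the graph. A minimal sketch in Python (the node names and `execution_order` helper are hypothetical, purely for illustration):

```python
# Minimal sketch: in a dataflow graph, "time" (execution order) is not
# the left-to-right, top-to-bottom reading order of text; it is derived
# from the wiring by a topological sort. All node names are hypothetical.
from graphlib import TopologicalSorter

def execution_order(wires):
    """wires maps each node to the set of nodes whose outputs it consumes."""
    return list(TopologicalSorter(wires).static_order())

# Two inputs feed an AND gate, whose output feeds a NOT gate.
wires = {
    "and": {"in_a", "in_b"},
    "not": {"and"},
}
order = execution_order(wires)
# However the nodes are laid out on a 2D canvas, the wiring alone
# fixes a valid order in time: inputs first, then "and", then "not".
assert order.index("and") < order.index("not")
```

In this reading, the spatial freedom of the canvas and the temporal order of execution are decoupled: the wires, not the coordinates, carry the arrow of time.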
Did you notice the part in the video description that I quoted in my post that said "Caution: this is basic research, not a proposal for a practical piece of software. Part of my PhD Dissertation at Stanford University in 1988"? While you're at it, read Scott's introduction and PhD dissertation that I linked to, as well.
Do you even know who Scott Kim or his thesis advisor Donald Knuth are, or how the research at Xerox PARC in the 80's contributed to the fields of programming language and user interface design? What were you doing in 1981?
So what's that great idea you claim to have? Where's the link to your dissertation and a runnable demo?
That's mine. When it's ready I'll reveal it.
>I think you need to go back and watch it again, because what you're saying doesn't align with what the video shows, and it sure sounds like you misunderstand, and that you don't know what you're talking about.
In his video he even called Viewpoint an "editor" as opposed to a "visual programming language." It's clearly not Turing complete, and I would say it's outside the topic of visual programming languages. I think he's more in the domain of researching user interfaces for the common computer user rather than a visual interface for programmers and programming in general.
Oh, it looks like we have ourselves a classic "idea guy"! I've met a whole lot of people like you before. It must be hard getting a job in such a crowded field, especially at a time like this.
So that's a hard no, you've never heard of Scott Kim or Donald Knuth or Xerox PARC, right?
I can see you have excellent soft "people skills", and it must be a delight to work with you. Good luck talking somebody else into implementing your ideas for you, but don't let anybody else know what they are, and keep them top secret, and never discuss them with anyone who's done similar work, and make sure everyone you discuss them with signs NDAs with Non-Compete clauses (and make sure you're in a state where they're enforceable) and Reach-Around Clauses (in case they fuck you up the ass), because your ideas are so easy to implement once you know them, that somebody might steal them from you before you get around to conning somebody else into implementing them for you for free, if you leak any details of all those original ideas of yours that are all yours and nobody else's and they belong to you and are yours. ahem ahem cough
Better see somebody about that cough!
It's just a fuzzy idea. Not a business idea. I don't see how I can make any money off of it. Also, I'm not the type of guy who can get other people interested in working on it. Likely even if I do build it myself and open source it one day, it won't take off without external support or luck.
>So that's a no, you've never heard of Scott Kim or Donald Knuth or Xerox PARC, right?
No who are those guys. Never heard of them. I only know about Mark Zuckerberg, Elon Musk, Jeff Bezos and Bill Gates. That's all I know.
>I can see you have excellent soft "people skills", and it must be a delight to work with you.
Yeah, I figured you might get pissed off. Note that it was nothing personal, and none of my comments (other than the sarcasm above) ever deviated from attacking the idea. I was just extremely direct with my criticism. I attacked the idea, but inevitably, as with all people, if I attack the idea, the person behind the idea feels he's being attacked too.
I tend to be more direct on the internet to cut through all the politics, as the consequences of not playing the game on the internet are not that serious.
Well I'm sure it proves your point, whatever it is, and definitively wins your argument, hands down. Who am I to question such extreme self confidence? I graciously concede. Congratulations!
>Yeah, I figured you might get pissed off. Note that it was nothing personal, and none of my comments (other than the sarcasm above)
Why I'm not pissed off at you at all, and I don't mean for you to take any of this personally, and I non-apologetically apologize for you choosing to be offended by any of my words, plus I give you the full and sincere benefit of the doubt that you would never retroactively claim you were being sarcastic, simply because you didn't want to admit you made a mistake. Who in their right mind would ever do something like that, in this day and age? I'm sure your brilliant idea is good enough to win the Nobel Prize in Journalism for curing Coronavirus by injecting bleach.
>No who are those guys. Never heard of them. I only know about Mark Zuckerberg, Elon Musk, Jeff Bezos and Bill Gates. That's all I know.
Yeah, I didn't think you knew who you were criticizing, or even had the curiosity to google them.
I was being sarcastic. Either way, you shouldn't worship people. Everyone is infallible. Clearly Scott made an editor, and clearly that editor is inferior in every way to Photoshop. You can literally do the same thing with Brushes. Maybe Viewpoint was revolutionary for its time, but no longer.
>Why I'm not pissed off at you at all, and I don't mean for you to take any of this personally
You are pissed off, and you took it 100% personally.
>Well I'm sure it proves your point, whatever it is, and definitively wins your argument, hands down. Who am I to question such extreme self confidence? I graciously concede. Congratulations!
My idea could be a piece of shit for all I care. I didn't share it with you so there's really nothing to talk about.
The topic at hand is Viewpoint and how Viewpoint isn't even Turing complete. Viewpoint can't express any logic, so it doesn't even fit the overall topic of this thread: visual programming languages.
Then who are Scott Kim and Donald Knuth and Larry Tesler and Jeff Raskin, and what did they do that relates to this discussion? If you were being sarcastic about not knowing, then you should be able to easily answer that question. And here's another easy question: which one of them did HN most recently turn the header black for, and why?
>You are pissed off, and you took it 100% personally.
No, I was only sarcastically pretending to be pissed off and take it personally. ;)
Because the question is obvious. Who doesn't know Knuth? As for Scott Kim and the rest, I don't know them and couldn't care less. They aren't the topic of conversation.
That's optimistic of you!
infallible: incapable of making mistakes or being wrong. "doctors are not infallible"
And in every way prior to Photoshop, too:
>Adobe Photoshop is a raster graphics editor developed and published by Adobe Inc. for Windows and macOS. It was originally created in 1988 by Thomas and John Knoll.
>Scott Kim's 1988 PhD Dissertation: History: Timeline: p. 26:
>Undergrad school 1973: Computers, computer music, and computer graphics. Visual thinking class with Robert McKim. Basic graphic design class with Matt Kahn. Met Doug Hofstadter. BA in music, with studies in mathematics. Wrote article on four-dimensional optical illusions.
>Graduate school 1979: Started as graduate student (Masters in CS) Fall 1979.
>Metafont 1980: Gave Metafont demos. Programming AMS-Euler font in Metafont. Xerox PARC. Started as consultant.
>Inversions 1981: Worked with Richard Weyrauch on computational philosophy. Wrote and produced Inversions book. Viz Din (visual programming language discussion group with Fred Lakin and Warren Robinett) started. Started doing graphic design jobs.
>Computer languages 1982: Started interdisciplinary PhD. Taught visual thinking. Dave Siegel took over Euler.
>Visual programming 1983: July started work at Information Appliance. August ATypI (Association Typographique Internationale) conference.
>Thesis Proposal 1984: Wrote dissertation proposal. Dropped idea of programming. Added idea of pixels. Introduction of Apple Macintosh computer.
>Programming 1985: Lecture at Hewlett-Packard: the field of user interface design and why it doesn't yet exist. Learned to use Cedar programming environment at Xerox. Wrote first draft of Viewpoint program.
>Thesis, draft 1986: Taught "Graphic Invention for User Interfaces" at Stanford with William Verplank. Wrote dissertation, draft 1.
>Thesis, final 1987: Rewrote program to match writeup. Revised theory. Dissertation defense, including a videotaped demonstration of Viewpoint. Wrote final version of dissertation.
And speaking of Adobe Photoshop: Scott Kim acknowledged John Warnock, the President of Adobe, in his thesis, who supported his stay at Xerox PARC:
>Xerox PARC: Leo Guibas. John Warnock. Maureen Stone. Michael Plass. Leo invited me to PARC. John and Maureen sponsored my stay. Michael helped with programming.
The timeline proves how idiotic it was for you to complain that Scott's dissertation wasn't as good as Photoshop (while his stay at Xerox PARC was SPONSORED by John Warnock, president of Adobe). Again: Photoshop was released AFTER Scott's dissertation. And a commercial product developed and supported by thousands of full time people over decades is in no way comparable to a PhD dissertation written by one person that is clearly labeled "Caution: this is basic research, not a proposal for a practical piece of software."
Do you also attack Scott's thesis advisor Donald Knuth, because Metafont isn't as powerful as Adobe Illustrator?
You're baselessly attacking Scott for his "exploratory research" that was completed BEFORE Photoshop was even released. So your indefensible position is that it was wrong for me to post his thesis to this discussion because his exploratory research wasn't better than a product released AFTER he received his PhD from the guy you claim not to know, Donald Knuth.
It's ironic you'd criticize Scott for his exploratory research not being better than a commercial product from Adobe that was released later, since Adobe respects him enough to invite him to give distinguished lectures like this:
>The Adobe Distinguished Lecture Series brings our industry’s most eminent researchers and creative professionals to the Adobe campus for a lecture and full day of informal discussions. Sponsored by Adobe Research, the series provides an opportunity to meet and learn from visionaries at the cutting edge of digital media.
>Scott Kim (Shufflebrain). February 05, 2009, 12:30pm. Games for Visual Thinking.
>In this talk veteran puzzle designer Scott Kim will show you original puzzles that stimulate visual imagination. You will see magazine puzzles that introduce ideas in visual design. You will see animated ambigrams that playfully combine typography and illusion. And you will see how his new online game Photograb exercises visual thinking skills. Finally Scott will discuss what makes a good puzzle, and how games can change the way you think.
>BIO: Scott Kim is one of the world’s most prolific and versatile puzzle designers. He has designed thousands of puzzles for such web game companies as PopCap, Gamehouse and the Tetris Company. He is the puzzle columnist for Discover magazine, and writes the annual Brainteasers page-a-day calendar. His strategy game MetaSquares is now available on iPhone. He has designed educational games for puzzle toy company ThinkFun. Scott has a PhD in Computers and Graphic Design from Stanford University. He is now designing smart games for his new company Shufflebrain.
Do you really believe you're infallible? And that I and everyone else is infallible too? Or was that just your sarcastic spelling?
Which visual programming languages have you actually developed or even used? Or are you just all talk and baseless criticism for other people with decades more experience than you?
I worked on developing and documenting and programming in the visual programming language in The Sims, SimAntics, which worked well enough to ship in an award winning product that was the top selling PC game of all time, so a lot of people have used it (over 200 million copies sold, making EA over $5 billion), and it works quite well. So by any measure, that's at least one example of a successful visual programming language.
>The Sims Franchise Has Officially Made Over $5 Billion
>The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo. This is a demonstration of the pie menus, architectural editing tools, and Edith visual programming tools that I developed for The Sims with Will Wright at Maxis and Electronic Arts.
>The Sims Steering Committee - June 4 1998. A demo of an early pre-release version of The Sims for The Sims Steering Committee at EA, developed June 4 1998.
Documentation that I wrote about the visual programming language in The Sims:
Ken Forbus taught SimAntics programming with Edith in his game design course:
And I've also worked on developing other visual programming languages, like Bounce / Body Electric, and others that weren't released as commercial products. And I've actually used (and studied the visual code and source code and extension APIs of) a bunch of other visual programming languages, too.
Which visual programming languages have you used, and what did you use them for? Have you ever actually designed or developed any yourself?
So now are you finally willing to share any of your great ideas and experiences, since you admit that you'll never get around to actually implementing those ideas yourself? What do you have to lose by sharing them, then? And if you refuse to share your ideas that you'll never implement, then why did you even bring them up in the first place?
Not a misspelling. A misuse of a word. Can we move on from this?
>So now are you finally willing to share any of your great ideas and experiences, since you admit that you'll never get around to actually implementing those ideas yourself? What do you have to lose by sharing them, then? And if you refuse to share your ideas that you'll never implement, then why did you even bring them up in the first place?
The topic of conversation is Viewpoint, and in my opinion Viewpoint is not an example of a good visual programming language, as it's not even Turing complete. I illustrated my thoughts on what a good visual programming language would entail. That is the topic.
Whatever your background is, great. Whatever my background is, it doesn't matter. I will say it's not as illustrious as yours, and I have not implemented a visual programming language for a best-selling game. Either way, credentials have nothing to do with the topic at hand.
I don't care about the timeline of things, and the real topic at hand isn't who made a better Photoshop. The topic at hand is visual programming languages. I made a comment on my thoughts about Viewpoint and what I thought a visual language should look like. It appears that you disagree with me, except none of my thoughts were addressed. I mostly just see everyone's credentials being mentioned everywhere.
I'm willing to talk about visual programming languages, but I'm not going to introduce my idea to you, because I don't know you and you haven't been very friendly to me. Additionally, I only want to introduce it at a point in time when I have the resources to be very involved with the project, if that ever happens in my lifetime. I'm not going to just give it to you so you can run with it.
Also, it's not that great of an idea anyway; it wasn't brought up to be talked about. "Mentioned" is a better word for it. Likely it will never get implemented, as I don't have the time to get it working. Visual programming is an interdisciplinary field that requires expertise in programming language theory, user interface design, and graphics programming. My specialty is just web, so while I'm learning those things on the side because I'm interested in them, whether all of that coalesces into a visual programming language remains to be seen.
Scott Kim's Viewpoint is irrelevant because it's not programming. It's an editor for pixels.
I am contributing by telling you what's wrong with Viewpoint and the ideas behind it. You're contributing by spouting off about everyone's credentials.
But I don't understand why you have wasted so much time and effort trying to convince me not to discuss it, when you could have simply not said anything at all, since you don't have anything useful to contribute.
If your problem is that there is too much irrelevant noise in this discussion, you have nobody to blame but yourself, because you have added absolutely nothing useful or positive or interesting, just a lot of useless whining. You've already said you don't care FOUR times, so stop denying so frequently that you care, or simply stop posting.
People who don't have anything useful to contribute, or refuse to contribute anything useful that they claim to know but want to keep secret because they don't trust anyone not to steal their idea, shouldn't mention their secrets in the first place, and shouldn't attack other people for contributing things to the discussion.
Stop complaining, and start contributing.
Now back to the point:
I had a discussion in 1999 with Jaron Lanier about the 3D tree node data structure (Swivel3D trees) and plug-in COM data structures in Body Electric / Bounce, and he raised some interesting points about visual thinking and explicit visual representation of data, which parallel what Scott was exploring in his Viewpoint thesis.
Jaron, who founded VPL Research, developed and used Body Electric extensively, programming real-time virtual reality simulations and musical instruments by integrating data gloves, body suits, EyePhones, two separate SGI workstations to render for two eyes, 3D input trackers, music synthesizers, Convolvotrons, and other I/O devices.
In case you're not familiar with his work, here is a classic 1986 interview with Jaron from the book "Programmers at Work" (which also interviews Scott):
>Today I posted the Jaron Lanier interview from his early days as a young programmer in California. Back then, this free-spirited guy was touting Virtual Reality to disbelieving stares. Jaron has become a great spokesman and sage for the industry, questioning where we are going and where we have come from. He is an author, computer scientist, and gadfly of the industry. You can read all about his recent work here.
>INTERVIEWER: What are you doing with programming languages now?
>LANIER: Well, basically, I’m working on a programming language that’s much easier to use.
>INTERVIEWER: Easier because it uses symbols and graphics?
>LANIER: It needs text, too. It’s not exclusively graphics. With a regular language, you tell the computer what to do and it does it. On the surface, that sounds perfectly reasonable. But in order to write instructions (programs) for the computer, you have to simulate in your head an enormous, elaborate structure. Anytime there’s a flaw in this great mental simulation, it turns into a bug in the program. It’s hard for people to simulate that enormous structure in their heads. Now, what I am doing is building very visual, concrete models of what goes on inside the computer. In this way, you can see the program while you’re creating it. You can mold it directly and alter it when you want. You will no longer have to simulate the program in your head.
Jaron designed the musical visual program (which Scott Kim cited in his thesis) on the cover of the September 1984 Scientific American on Computer Software (a wonderful issue, with many articles about programming languages and software by some amazing people).
To draw a parallel, pixels are to Viewpoint as the Swivel3D scene graph is to Body Electric, and they both share the ideal that "the virtual world and the knowledge base were the same thing" and "it's user interface all the way to the bottom":
"I had always thought the swivel tree was ridiculous, of course, but on
the other hand I liked the idea that the virtual world and the
knowledge base were the same thing- that unity encourages the
visibility and grabbability of the underlying concepts. I think the
brain works that way- there isn't some barrier behind which everything
gets abstract- instead, it's user interface all the way to the bottom!
What I think would be the coolest long-term destination of BE would be
extending the scenegraph so that it was as powerful a knowledge base
as you'd want..." -Jaron Lanier
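The unity Jaron describes can be sketched as a single node type that is at once a scene-graph node and a queryable store of facts. This is a hypothetical illustration of the idea (the `WorldNode` class and its methods are invented for this sketch), not Body Electric's actual data structures:

```python
# Hedged sketch of "the virtual world and the knowledge base were the
# same thing": one node type serves both as a scene-graph node
# (parent/children) and as a queryable attribute store. This is an
# illustration of the idea, not Body Electric's actual data structures.

class WorldNode:
    def __init__(self, name, **facts):
        self.name = name
        self.facts = dict(facts)   # knowledge lives on the node itself
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def query(self, **criteria):
        """Walk the scene graph as if it were a knowledge base."""
        if all(self.facts.get(k) == v for k, v in criteria.items()):
            yield self
        for child in self.children:
            yield from child.query(**criteria)

world = WorldNode("world")
hand = world.add(WorldNode("hand", grabbable=True))
hand.add(WorldNode("finger", grabbable=False))

# The same tree that would be rendered is the thing you query:
names = [n.name for n in world.query(grabbable=True)]
assert names == ["hand"]
```

Querying the very tree that gets rendered is one way to read "it's user interface all the way to the bottom": every fact in the knowledge base is attached to something visible and grabbable.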
At 12:31 AM -0400 7/8/99, Hopkins, Don wrote:
Hi, Jaron. We've met briefly once or twice - I'm a friend of David
What's this about Sun acquiring the rights to the VPL patents, and
Does Sun actually have the Body Electric source code? What version do
Does anybody at Sun even know how to compile a Mac program?
Last time I worked there (admittedly a long time ago), they were too
embarrassed to allow Macs in the building (which might make somebody
realize how bad Unix sucks in comparison).
Has anyone tried to rewrite it in Java?
When I was working with David at Levity and Interval, I totally
overhauled the source code to Bounce: porting it to the latest version
of CodeWarrior and the PowerPC, cleaning up C code translated from
Pascal, that no human had ever touched before, and just generally
re-indenting and adding white space so it was pretty to look at.
Then I implemented a new interface for plugging in DM's, based on
ActiveX (yes, ActiveX runs on the Mac). I added a new data type that
you can flow along blue wires: a COM object (aka a plug-in ActiveX
object in a shared library).
Then we made plug-ins modules that produced and consumed the new
plug-in data types, like strings and polymorphic dictionaries. With
these new data types, we were able to model very complex simulations
as nested trees of dictionaries, strings and numbers, treat
dictionaries as high level objects, pass them all around on wires,
reading and modifying them at will.
The thing that the original Body Electric was missing is a way to
dynamically model structured data like Lisp s-expressions and
association lists, which is now possible in Bounce, using
dictionaries. (The swivel3d trees just don't cut it for representing
Bounce still had the "M4" Director player rendering engine, but it
didn't do everything we needed, so I implemented my own graphics
library for drawing sprites, playing sounds, and stuff like that. We
used it to implement a simulation of Rush Limbaugh and Jesse Jackson
watching TV and arguing over the closed captioning stream.
A big fat housefly would skitter around the screen, land on their
faces, and tickle them into waving their hands around and
I just got a 400 mhz G3 powerbook, and fired up a 2 year old copy of
Bounce and the crazy Limbaugh/Jackson demo, and it still works,
actually like a bat out of hell! I ran into David and John Szinger
(another Bounce programmer from Interval) a few days ago at a 4th of
July party, and they really got a kick out of seeing the old demo that
we worked together on, still running!
I live in Oakland just south of Berkeley. Drop me a line if you're in
the bay area, and would like to see what Bounce has evolved into.
From: Jaron Lanier
To: Don Hopkins
Sent: Thursday, July 08, 1999 6:13 AM
Subject: Re: Body Electric lives?
Hey there, and thanks for writing!
Yup, Sun owns VPL and Body Electric, and my guess is that if anyone
looked very closely it would turn out Interval doesn't have rights to
Bounce. But no one is likely to look very closely, so let's forget
about that. (I don't think Sun is aware of Bounce or the work at
I had never asked what was done with Bounce at Interval- it's
fascinating to hear what you were up to. I live in NYC for the most
part, but I'd love to see it sometime when I'm in the Bay Area. Also
Chuck Blanchard lives in SF and you and he might want to trade demos
There IS a community of Body Electric users. It is STILL building the
most interactive 3D virtual worlds of any tool (though Alice, from
Carnegie Mellon, is the other hot contender). That's SHAMEFUL! While
BE sucks in every other way, all the more recent vr design tools,
especially the vrml ones, simply avoid the problem of deep
interactivity. How could the community be so whimpy, at this late
On the Body Electric side of things, there has also been some work
updating to CodeWarrior/PowerPC (shame we didn't share that work!), as
well as support for Quickdraw3D, OpenGL on the Mac, OMS, and some
There have been some changes to the interface, but not as ambitious as
yours. The neatest thing is a debugger/tracer tool that is a pleasure
to use. Since BE is used mostly in 3D domain, there are also some
tools dealing with textures, lights, etc.
I had always thought the swivel tree was ridiculous, of course, but on
the other hand I liked the idea that the virtual world and the
knowledge base were the same thing- that unity encourages the
visibility and grabbability of the underlying concepts. I think the
brain works that way- there isn't some barrier behind which everything
gets abstract- instead, it's user interface all the way to the bottom!
What I think would be the coolest long-term destination of BE would be
extending the scenegraph so that it was as powerful a knowledge base
as you'd want...
I'd LOVE to see a fancy BE release, in JAVA, or at least spitting JAVA
out. The question, of course, is where the money would come from.
I've tried to talk Sun into an open source release so the community
could hack on it voluntarily (the source was included in the patents
that were granted, so it's actually already released by the US
government anyway, though only on paper). Unfortunately, Sun doesn't
want to do that. What they've said instead is that I can choose up to
six sites with free hacking privileges at a given time. Bizarre!
With so few sites, I think there'd need to be money to make sure the
sites stayed focused on it... You'd have thought Sun would know
better by now.
The body electric community is surprisingly NOT entertainment
oriented, though I and a few others still use it that way. There are
people hidden away who are still using it for ergonomic simulations,
simple surgical planners (because the fancy systems are too rigid to
model some situations), cognitive test rigs, and some work with kids.
You should know there was some tension about BE/Bounce at one point.
The problem was that Chuck Blanchard wasn't credited as the lead
designer/programmer of BE/Bounce when David brought the program to
Interval. Chuck's name was reduced in stature in the "about" and he
was not mentioned in some important semi-public demos at Interval. He
was also offered a pseudo-position to help with Bounce at Interval,
but without health benefits - and Chuck has MS and absolutely NEEDS
health insurance. I at one point yelled at the guy who runs Interval
(forgot his name..) about their treatment of Chuck- and David is
STILL mad at me for making a fuss about it. But I felt I had to.
So that's the scoop! Both sides of the mystery revealed!
All the best,
Jaron on the web: