Maybe visual programming is the answer, maybe not (metaobject.com)
136 points by mpweiher on April 25, 2020 | 149 comments



These issues are old.

- Large programs / graphs? I can browse maps and photos of the entire planet from my web browser. ZUIs are well understood now. Also, computers got really big and fast. My graphics card has 8 GB (!) of RAM. Find a VPL that was designed for the era where graphics are cheap.

- Automatic layout is hard? Yes, an optimal solution to graph layout is NP-complete, but so is register allocation, and my compiler still works (and that isn't even its bottleneck). There's plenty of cheap approximations that are 99% as good.

- No formal spec? They sound desperate to come up with excuses why something can't work. How many (textual) programming languages had a formal spec before they got popular? I tried to look up when Ruby got a formal BNF grammar, for comparison, and it seems the answer in 2020 is still "nope, whatever parse.y does (13,000+ lines of C), that's Ruby".

- Difficulty to create editors? This is a legitimate concern, as it's true "there are no general purpose Visual Language editors", but that's an economic issue, not a technical one. Plain text is one of the very few formats where we have general-purpose editors. In most areas of computing, profitability has taken over, and companies refuse to cooperate. There's no one word processing or spreadsheet or database format, either.

- Lack of evidence? Again, they're holding VPLs to a higher standard. When a language like Ruby appeared, nobody asked for proof. Simply creating something that some programmers liked was sufficient.

- Poor representation? That's an issue with all kinds of programs! If I had a nickel for every horribly written program I've had to deal with in my life...

- Lack of portability? This is another symptom of the editor problem, above. Make a standard file format, and actually figure out how to get people to use it, and there's no problem.
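On the automatic-layout bullet above: a heuristic that is "99% as good" really does fit in a screenful. Here's a toy force-directed sketch in Python, in the spirit of Fruchterman-Reingold (my own illustration; no real VPL uses exactly this):

```python
import math
import random

def layout(nodes, edges, iterations=200, k=1.0, seed=0):
    """Cheap force-directed layout: repel all node pairs, attract along
    edges, cool over time. A rough approximation, not an optimal layout."""
    rng = random.Random(seed)
    pos = {n: [rng.random(), rng.random()] for n in nodes}
    for step in range(iterations):
        disp = {n: [0.0, 0.0] for n in nodes}
        # Repulsive force between every pair of nodes.
        for a in nodes:
            for b in nodes:
                if a == b:
                    continue
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[a][0] += dx / d * f
                disp[a][1] += dy / d * f
        # Attractive force along edges.
        for a, b in edges:
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[a][0] -= dx / d * f
            disp[a][1] -= dy / d * f
            disp[b][0] += dx / d * f
            disp[b][1] += dy / d * f
        # Move each node a small, cooling step along its net force.
        t = 0.1 * (1 - step / iterations)
        for n in nodes:
            d = math.hypot(*disp[n]) or 1e-9
            pos[n][0] += disp[n][0] / d * min(d, t)
            pos[n][1] += disp[n][1] / d * min(d, t)
    return pos

pos = layout(["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("c", "d")])
print(sorted(pos))  # ['a', 'b', 'c', 'd'], each mapped to an (x, y) position
```

It's O(n²) per iteration and far from optimal, which is exactly the point: "good enough to read" is a much lower bar than the NP-complete optimum.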

These issues are from an article written in the 1980s. The same day as its publication, Apple released the Macintosh IIci, with a 40 or 80 MB hard disk, 1 or 4 MB of RAM, and a 640x480 pixel display. (For all that computing power, the starting price was $6269 -- only slightly less than the $6615 starting price for a Honda Civic that year.) Clearly, visual programming was a non-starter in that day and age. So were 90% of the other things I do on my computer every day. That doesn't mean they're bad ideas.

If you'd asked me to write down a list of issues with C++ in 1989, I'd have produced a list that was similarly useless in 2020.


Visual programming languages are commonly used for the creation of chatbots in frameworks such as IBM Watson Assistant. They end up being a complete mess. The reliance on global variables and go-to statements for control flow makes them extremely bug prone and unintelligible.


What does that have to do with visual programming? Any program written with a "reliance on global variables and go-to statements" is going to be "extremely bug prone and unintelligible" -- visual or otherwise. There are certainly VPLs with scope and flow control constructs.


But who’s using them? From the symptoms you describe I would guess it’s people very far removed from software development.


The Sims uses a visual programming language called "SimAntics" for scripting behaviors and "AI" of people and objects.

The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo. This is a demonstration of the pie menus, architectural editing tools, and Edith visual programming tools that I developed for The Sims with Will Wright at Maxis and Electronic Arts.

https://www.youtube.com/watch?v=-exdu4ETscs

https://medium.com/@donhopkins/the-sims-pie-menus-49ca02a74d...

The Sims Steering Committee - June 4 1998. A demo of an early pre-release version of The Sims for The Sims Steering Committee at EA, developed June 4 1998.

https://www.youtube.com/watch?v=zC52jE60KjY

https://medium.com/@donhopkins/the-sims-pie-menus-49ca02a74d...

Mod The Sims Wiki: SimAntics:

https://modthesims.info/wiki.php?title=SimAntics

Simantics Wiki:

http://simantics.wikidot.com/

SimsTek Wiki: SimAntics

https://simstek.fandom.com/wiki/SimAntics

From Ken Forbus's game design course:

Under the hood of The Sims:

http://www.cs.northwestern.edu/~forbus/c95-gd/lectures/

Programming Objects in The Sims:

http://www.qrg.northwestern.edu/papers/Files/Programming_Obj...


Thanks for taking the time to write this up!


A bunch of disgruntled computer science grads questioning their life choices, at the large telecommunications provider for which I was working.

The Watson language just doesn't provide any other way to do anything. It's a common problem amongst visual programming languages as mentioned in the article.


Here's another paper about visual programming that Brad Myers wrote a couple years after his 1989 paper.

C32: CMU's Clever and Compelling Contribution to Computer Science in CommonLisp which is Customizable and Characterized by a Complete Coverage of Code and Contains a Cornucopia of Creative Constructs, because it Can Create Complex, Correct Constraints that are Constructed Clearly and Concretely, and Communicated using Columns of Cells, that are Constantly Calculated so they Change Continuously, and Cancel Confusion

Spreadsheet-like system that allows constraints on objects to be specified by demonstration. Intelligent cut and paste. Implemented using Garnet. 1991

Brad A. Myers. "Graphical Techniques in a Spreadsheet for Specifying User Interfaces," Proceedings SIGCHI'91: Human Factors in Computing Systems. New Orleans, LA. April 28-May 2, 1991. pp. 243-249.

https://www.researchgate.net/publication/221518856_Graphical...

C32 Spreadsheet 1991: https://www.youtube.com/watch?v=IsINJ8mlD5A

If you enjoyed the full audacious name of C32, then you may also enjoy some of the other acronym names of projects by Brad and his students:

http://www.cs.cmu.edu/~bam/acronyms.html

(My meager contribution was "GLASS: Graphical Layer And Server Simplifier".)


One more general-purpose thing I miss very much is a filesystem. Really, what is a good option for a USB disk filesystem that you would like to use on Mac, Windows, and Linux? It seems to me that without using Paragon/Tuxera/etc. you're screwed.


Agreed. For sharing a filesystem, I assume the best you can do today is probably FAT32 (or whatever the latest of that family is), which is not great.

Why can't we get people from major tech companies together and have them develop a good on-disk format which supports the features we all need, and then everybody can go back and write their own implementations? Like we do with Unicode, or IP. And like we should do for many other features.

For all the talk about the importance of separation of interface and implementation, operating system designers really suck at it.


The modern answer is exFAT. Last August, Microsoft opened up the specification and pledged that it won't be patent-encumbered[1].

How well an operating system separates interface from implementation unfortunately has very little bearing on this; the real obstacle is business incentives. Even if the Linux kernel had perfect separation of concerns, there's no way to force Microsoft or Apple to support it in their products.

Not to mention the problem with standards: https://xkcd.com/927/

[1] https://cloudblogs.microsoft.com/opensource/2019/08/28/exfat...


Amazingly, the web folks seem to be doing OK on this front. They often manage to miss the xkcd#927 trap. What I see (admittedly the simplified public version) is more like:

- Here's a standard that kind of works and everybody is using it.

- Hey, we invented this cool new thing that isn't part of that standard but people seem to like it! (XHR, Canvas, ...)

- OK everybody let's put that into the standard as well so we can all agree on how it works. Thanks!

There was no way to "force Microsoft or Apple" to make a web browser, either, and yet they both did, and made it part of their operating systems.


FAT32 is too modern in some cases. I recently had to fall back to it for use with a fairly recent commercial photocopying machine (of the kind that can do full color, A3 paper, stapled and folded output, etc., so not bottom of the line).

And I think that, "for sharing a filesystem", nowadays the best option for most people is "the cloud".


UDF is secretly well-supported for read/write across devices. That said, exFAT or NTFS seems to be the right option nowadays; NTFS-3g works well on Linux, and there are various options for macOS (not perfect, but oh well).


Agreed! Many of those claims completely disregard the research in visual programming that has happened in the past 30 years. Specifically the idea that it is about visual vs. textual seems utterly outdated now that more and more modern development environments find ways to combine the best of both worlds.

If anyone finds the time to update these claims, please consider our vvvv in your research:

- A visual programming environment for the .NET ecosystem

- Compiles to C# using Roslyn

- Can use code from any .NET library as visual code blocks

- Its language VL augments dataflow with features from OOP and functional programming, supports generics and easy multithreading

- Useful since 2002: https://vimeo.com/371511910

Free for non-commercial use without restrictions (Windows only): http://visualprogramming.net

vvvv definitely doesn't solve all known issues but we hope it raises the bar for visual programming and such discussions.


I'll preface by saying that vvvv looks impressive and certainly seems like one of the more polished options in this space. We have a type of visual programming as part of an internal configuration tool, and I will definitely look into vvvv for how things can be done differently.

That said, this screenshot from one of your tutorial videos highlights the type of thing I think people are worried about: https://i.xkqr.org/screenshot_2020-04-27T09:47:46.png

I'm not saying it's bad – it might just be a question of habit whether or not graph edges are better than named identifiers in text, but it is hard to design away, even with best-in-class tooling.


It has its advantages and disadvantages, and the same goes for text programming: sometimes in text you store results in badly named variables because it is hard to make up a meaningful term for each step of a computation. In those cases visual code shines, because instead of forcing you to come up with a name it lets you connect things directly, with the added benefit of visualizing the dataflow and thus structuring your code. If you have an algorithm where each interim result can be clearly named, then text code shines.

So roughly we argue that the more high-level your code is, the better it works visually; the more low-level it is, the better it is expressed in text. It is therefore about a combination of both worlds. What we don't have yet in vvvv gamma (but do have in vvvv beta) is the option to write text code at any point if you prefer, and wrap it in a node for further use in your visual graph.


> Lack of evidence? Again, they're holding VPLs to a higher standard. When a language like Ruby appeared, nobody asked for proof. Simply creating something that some programmers liked was sufficient.

If there was a visual programming language with anywhere near the popularity of Ruby, I'd be willing to consider that maybe the idea has some merit.


I don’t know how many people use Ruby; it’s certainly not as popular as it was during the Rails hype, so it’s hard to say, and I don’t have numbers for the visual languages either. But Unreal Engine’s Blueprints has a lot of users. Max/MSP does too, and apparently a lot of artists use tools like Substance Designer to create visual materials, and those tools have visual programming too.

EDIT: Media Molecule’s Dreams has a lot of users using its visual language to implement game logic! It sold about 8.5 million copies. Obviously not all the people who bought it are using its visual language, but I imagine a very large percentage have at least tinkered a bit. I doubt Dreams would be as popular if it required textual programming.


Excel.

I could turn your argument around: If Ruby were anywhere near as popular, widely used, and successful as Excel, I'd be willing to consider that maybe the idea that Ruby is a viable programming language has some merit.

But I won't, because whether or not something is a visual programming language isn't up to a popularity contest.

Can you come up with a plausible definition of visual programming languages that excludes Excel, without being so hopelessly contrived and gerrymandered that it also arbitrarily excludes other visual programming languages?


> I could turn your argument around: If Ruby were anywhere near as popular, widely used, and successful as Excel, I'd be willing to consider that maybe the idea that Ruby is a viable programming language has some merit.

How is Excel, as widely used, a programming language? Excel sheets in their widely used form are not instructions or behaviour; you can't run them, only manually edit them. Perhaps there is a way to turn Excel sheets into programs, but I highly doubt such a way of using Excel would be more popular than Ruby.

> Can you come up with a plausible definition of visual programming languages that excludes Excel, without being so hopelessly contrived and gerrymandered that it also arbitrarily excludes other visual programming languages?

I'd say a visual programming language is a programming language for which the primary way of making changes to the program is visual. Even if we were to consider Excel a programming language, the primary way in which people make changes to Excel sheets - especially advanced users who are doing more programming-like things - is by editing textual formulae, IME.


Are we talking about the same "Excel"? I mean Microsoft Excel, in case there's something else called Excel that I haven't heard of.

Microsoft Excel spreadsheets certainly do contain instructions and behavior. That's the whole point of spreadsheets, that distinguish them from plain CSV files! They continuously run every time you make any change to them. And Excel is several orders of magnitude more popular and successful than Ruby: there is no comparison.
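To make "continuously run" concrete, here is a toy Python sketch of the spreadsheet evaluation model: cells hold values or formulas, and every edit re-runs the sheet. This is my own illustration of the dataflow idea, not Excel's actual recalculation engine (which tracks dependencies rather than brute-forcing a fixed point):

```python
class Sheet:
    """Toy spreadsheet: cells hold constants or formulas (callables over
    the sheet), and every edit triggers recomputation."""
    def __init__(self):
        self.cells = {}    # name -> constant value or callable(sheet)
        self.values = {}   # name -> last computed value

    def __setitem__(self, name, cell):
        self.cells[name] = cell
        self.recalc()      # every change re-runs the sheet

    def __getitem__(self, name):
        return self.values[name]

    def recalc(self):
        # Naive fixed point: keep evaluating until values stop changing.
        for _ in range(len(self.cells) + 1):
            new = {}
            for name, cell in self.cells.items():
                new[name] = cell(self) if callable(cell) else cell
            if new == self.values:
                break
            self.values = new

s = Sheet()
s["A1"] = 2
s["A2"] = 3
s["A3"] = lambda s: s["A1"] + s["A2"]   # like entering =A1+A2
print(s["A3"])                          # 5
s["A1"] = 10                            # editing a cell re-runs the formula
print(s["A3"])                          # 13
```

The program "runs" every time you touch it; there is no separate edit/compile/run cycle, which is exactly the property under dispute here.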

Excel even supports "Programming by Demonstration" through its macro recorder. Programming by Demonstration happens to be one of Brad Myers's favorite areas of research!

Brad wrote the 1989 paper about visual programming that we're discussing, and Visual Programming, Programming by Demonstration, and Programming by Example are closely interrelated topics that he has done a lot of work with and written a lot about.

I think it's important to consider some of his and other people's more recent work, before extrapolating from the technological limitations that he wrote about in 1989, without taking into account 31 years of progress, research, and development.

https://en.wikipedia.org/wiki/Brad_A._Myers

"Myers is a leading researcher in the field of programming by demonstration and created the Garnet and Amulet toolkits."

https://en.wikipedia.org/wiki/Programming_by_demonstration

"In computer science, programming by demonstration (PbD) is an end-user development technique for teaching a computer or a robot new behaviors by demonstrating the task to transfer directly instead of programming it through machine commands."

Watch What I Do: Programming by Demonstration. Edited by Allen Cypher, Daniel Conrad Halbert, David Kurlander, Henry Lieberman, David Maulsby, Brad A. Myers, and Alan Turransky.

https://archive.org/details/watchwhatido00alle

http://acypher.com/wwid

Watch What I Do: Foreword by Alan Kay

http://acypher.com/wwid/FrontMatter/

>I don't know who first made the parallel between programming a computer and using a tool, but it was certainly implicit in Jack Licklider's thoughts about "man-machine symbiosis" as he set up the ARPA IPTO research projects in the early sixties. In 1962, Ivan Sutherland's Sketchpad became the exemplar to this day for what interactive computing should be like--including having the end-user be able to reshape the tool. [...]

Microsoft Excel Macro Programming History

https://en.wikipedia.org/wiki/Microsoft_Excel#Macro_programm...

"History: From its first version Excel supported end-user programming of macros (automation of repetitive tasks) and user-defined functions (extension of Excel's built-in function library). In early versions of Excel, these programs were written in a macro language whose statements had formula syntax and resided in the cells of special-purpose macro sheets (stored with file extension .XLM in Windows.) XLM was the default macro language for Excel through Excel 4.0. Beginning with version 5.0 Excel recorded macros in VBA by default but with version 5.0 XLM recording was still allowed as an option. After version 5.0 that option was discontinued. All versions of Excel, including Excel 2010 are capable of running an XLM macro, though Microsoft discourages their use."

>I'd say a visual programming language is a programming language for which the primary way of making changes to the program is visual.

Your arbitrarily gerrymandered definition turns C++ into a visual programming language, which it clearly is not.

So how is making changes to a C++ program in VI or Emacs not visual, then? People use visual editors to make changes to textual C++ programs all the time.

The "V" in "VI" STANDS FOR "Visual", because it is the "visual mode" of the line editor called ex.

But the actual structure and syntax of a C++ program that you edit in VI is simply a one-dimensional stream of characters, not a two-dimensional grid of interconnected objects, values, graphical attributes, and formulas, with relative and absolute two-dimensional references, like a spreadsheet.

https://en.wikipedia.org/wiki/Vi

>The original code for vi was written by Bill Joy in 1976, as the visual mode for a line editor called ex that Joy had written with Chuck Haley.[3] Bill Joy's ex 1.1 was released as part of the first Berkeley Software Distribution (BSD) Unix release in March 1978. It was not until version 2.0 of ex, released as part of Second BSD in May 1979 that the editor was installed under the name "vi" (which took users straight into ex's visual mode),[4] and the name by which it is known today. Some current implementations of vi can trace their source code ancestry to Bill Joy; others are completely new, largely compatible reimplementations.

>The name "vi" is derived from the shortest unambiguous abbreviation for the ex command visual, which switches the ex line editor to visual mode. The name is pronounced /ˈviːˈaɪ/ (the English letters v and i).[5][6]



> They continuously run every time you make any change to them.

Which is very much unlike what a program does.

> FYI, the "V" in "VI" STANDS FOR "Visual", because it is the "visual mode" of the line editor called ex. Your arbitrarily gerrymandered definition turns C++ into a visual programming language, which it clearly is not.

I appreciate your condescension, but I'm well aware of the history of vi and in any case quite capable of using wikipedia myself.

How is taking the plain meanings of the words in the definition "arbitrarily gerrymandered"? You tell me what visual means to you and how C++ is not a visual programming language, if that's somehow clear in a way that does not apply to Excel.


>> They continuously run every time you make any change to them.

>Which is very much unlike what a program does.

You're saying "Programs that run continuously every time you make any change are very much unlike what a program does?" That doesn't make any sense to me at all, can you please try to rephrase it?

Speaking of programs that run continuously, have you ever seen Bret Victor's talks "The Future of Programming" and "Inventing on Principle", or heard of Doug Engelbart's work?

The Future of Programming

https://www.youtube.com/watch?v=8pTEmbeENF4

Inventing on Principle

https://www.youtube.com/watch?v=8QiPFmIMxFc

HN discussion:

https://news.ycombinator.com/item?id=16315328

"I'm totally confident that in 40 years we won't be writing code in text files. We've been shown the way [by Doug Engelbart NLS, Grail, Smalltalk, and Plato]." -Bret Victor

Do you still maintain that "Excel sheets in their widely used form are not instructions or behaviour", despite the examples and citation I gave you? If so, I'm pretty sure we're not talking about the same Microsoft Excel, or even using the same Wikipedia.

Your definition is arbitrarily gerrymandered because you're trying to drag the editor into the definition of the language, while I'm talking about the representation and structure of the language itself, which defines the language, not the tools you use to edit it, which don't define the language.

I'll repeat what I already wrote, defining how you can distinguish a non-visual text programming language like C++ from a visual programming language like a spreadsheet or Max/MSP by the number of dimensions and structure of its syntax:

>But the actual structure and syntax of a C++ program that you edit in VI is simply a one-dimensional stream of characters, not a two-dimensional grid of interconnected objects, values, graphical attributes, and formulas, with relative and absolute two-dimensional references, like a spreadsheet.

Text programming languages are one-dimensional streams of characters.

Visual programming languages are two-dimensional and graph structured instead of sequential (or possibly 3d, but that makes them much harder to use and visualize).

The fact that you can serialize the graph representation of a visual programming language into a one-dimensional array of bytes to save it to a file does not make it a text programming language.
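To illustrate the serialization point with a toy example (a hypothetical box-and-wire program of my own invention, not any real VPL's file format):

```python
import json

# A hypothetical dataflow program: boxes (nodes) and wires (edges) -- the
# two-dimensional, graph-structured syntax described above.
program = {
    "nodes": [
        {"id": 1, "op": "const", "value": 2},
        {"id": 2, "op": "const", "value": 3},
        {"id": 3, "op": "add"},
    ],
    "wires": [[1, 3], [2, 3]],   # outputs of nodes 1 and 2 feed node 3
}

def run(prog, node_id):
    """Evaluate a node by pulling values along its incoming wires."""
    nodes = {n["id"]: n for n in prog["nodes"]}
    inputs = [src for src, dst in prog["wires"] if dst == node_id]
    n = nodes[node_id]
    if n["op"] == "const":
        return n["value"]
    if n["op"] == "add":
        return sum(run(prog, src) for src in inputs)

# Serializing the graph to a one-dimensional stream of bytes for storage...
blob = json.dumps(program)

# ...and loading it back. The file is flat text, but the program's syntax --
# what you actually edit and reason about -- is still a graph.
assert json.loads(blob) == program
print(run(program, 3))  # 5
```

The bytes on disk are one-dimensional; the language's structure is not, which is the distinction being drawn.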

The fact that you can edit the one-dimensional stream of characters that represents a textual programming language in a visual editor does not make it a visual programming language.

Microsoft Visual Studio doesn't magically transform C++ into a visual programming language.

PSIBER is an interactive visual user interface to a graphical PostScript programming environment. I wrote it years after the textual PostScript language was designed at Adobe and defined in the Red Book, but it didn't magically retroactively transform PostScript into a visual language; it just implemented a visual graphical user interface to the textual PostScript programming language, much like Visual Studio implements a visual interface to C++, which remains a one-dimensional textual language. And the fact that PostScript is a graphical language that can draw on the screen or paper doesn't necessarily make it a visual programming language.

https://medium.com/@donhopkins/the-shape-of-psiber-space-oct...

It's all about the representation and syntax of the language itself, not what you use it for, or how you edit it.

Do you have a better definition, that doesn't misclassify C++ or PostScript or Excel or Max/MSP?


> You're saying "Programs that run continuously every time you make any change are very much unlike what a program does?" That doesn't make any sense to me at all, can you please try to rephrase it?

Running continuously every time you make any change is very much unlike what a program does. Programming is characteristically about controlling the sequencing of instructions/behaviour, and someone editing a spreadsheet in the conventional (non-macro) way is not doing that.

> Do you still maintain that "Excel sheets in their widely used form are not instructions or behaviour", despite the examples and citation I gave you? If so, I'm pretty sure we're not talking about the same Microsoft Excel, or even using the same Wikipedia.

This is thoroughly dishonest of you. You edited those points and examples into your comment, there was no mention of macros or "programming by demonstration" at the point when I hit reply.

To respond to those added arguments now: I suspect those features are substantially less popular than Ruby. Your own source states that Microsoft themselves discourage the use of the things you're talking about. Excel is popular and it may be possible to write programs in it, but writing programs in it is not popular and the popular uses of Excel are not programs. Magic: The Gathering is extremely popular and famously Turing-complete, but it would be a mistake to see that as evidence for the viability of a card-based programming paradigm.

> Your definition is arbitrarily gerrymandered because you're trying to drag the editor into the definition of the language, while I'm talking about the representation and structure of the language itself, which defines the language, not the tools you use to edit it, which don't define the language.

Anything "visual" is necessarily going to be about how the human interacts with the language, because vision is something that humans have and computers don't (unless you're talking about a language for implementing computer vision or something).

> I'll repeat what I already wrote, defining how you can distinguish a non-visual text programming language like C++ from a visual programming language like a spreadsheet or Max/MSP by the number of dimensions and structure of its syntax:

But you can't objectively define whether a given syntactic construct is higher-dimensional or not. Plenty of languages have constructs that describe two- or more-dimensional spaces - e.g. object inheritance graphs, effect systems. Whether we consider these languages to be visual or not always comes down to how programmers typically interact with them.

> PSIBER is an interactive visual user interface to a graphical PostScript programming environment that I wrote years after the textual PostScript language was designed at Adobe and defined in the Red Book, but it didn't magically retroactively transform PostScript into a visual language

There's nothing magical about new tools changing what kind of language a given language is. Lisp was a theoretical language for reasoning about computation until someone implemented an interpreter for it and turned it into a programming language.


> Lisp was a theoretical language for reasoning about computation until someone implemented an interpreter for it and turned it into a programming language.

Lisp was designed and developed as a real programming language. That it was a theoretical language first is wrong.


color tree seems to embody a lot of the things you’ve mentioned. it uses a zui to edit lisp code. i hope the author finishes it soon! http://colortree.app


The model of a program as a linear file of 80-column text is a historical accident based on IBM mainframes in the 1950s, and the decks of punch cards they read. It seems like there must be a way to treat text in a more flexible way without going to a full visual programming language.

For example, consider all the things that have a physical order and position in a file that is actually irrelevant. Why can't methods (for instance) get displayed as convenient, without worrying about where they are in the file?

Limiting programs to fixed-width, single-font text also seems like a historical artifact. (The Xerox Alto allowed full WYSIWYG text for program code.) Having to draw diagrams with ASCII graphics just seems wrong.

Admittedly, these examples are somewhat trivial, but my point is that our current way of structuring programs is probably a local maximum, not the best possible solution.
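As a sketch of what "displayed as convenient" could mean, here is Python's own ast module listing a file's functions independent of their physical order (just an illustration of the idea, not a proposal for a real editor; the function names are made up):

```python
import ast

# File order is zebra, apple, mango -- an accident of editing history.
source = """
def zebra(): return 1

def apple(): return 2

def mango(): return 3
"""

tree = ast.parse(source)
funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]

# Display order is up to the viewer, not the file.
by_name = sorted(f.name for f in funcs)
print(by_name)  # ['apple', 'mango', 'zebra']
```

An editor built on this principle could show methods alphabetically, by call graph, or by recent edits, while the storage order stays irrelevant.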


You might like Smalltalk. Programs are stored as images. An image holds the entire (fully introspectable, editable) runtime and your program. You can browse through your image with a powerful suite of built-in tools. Pieces of the image can be "filed out", or serialized to a file format.

You can see an example at 15:00 of this video - some snippets of code and then showing the System Browser.

https://www.youtube.com/watch?v=eGaKZBr0ga4


> It seems like there must be a way to treat text in a more flexible way without going to a full visual programming language.

I've mentioned this a few times on past articles on HN, but I think the issue is that we're marrying two different concepts with text-based programming languages, namely presentation and representation. Consider - in an idealized context - HTML and CSS: one is the representation, the other is the presentation, or more accurately a description of how to present the representation. A user can change how the HTML is presented to them without changing the HTML directly. I think something like this would be useful for program source code.

So as a thought experiment for this in terms of programming, the source code would be some backing representation that you edit which can be rendered in any way a user desires, including back to regular text as we have now. Tooling (compilers/interpreters, version control, editors, etc) would deal primarily with the representation rather than the presentation.

I think this would also solve a lot of problems that tooling ends up dealing with. Consider version control not having to deal with formatting changes because that's just an artifact of how a programmer chooses to render the representation rather than the representation itself. Autoformatters (gofmt, clang-format, etc) would be less about putting source code in a canonical format for the sake of tooling and more about rendering source code as text in a particular chosen way for individual programmers. Similar for tooling such as IDE code folding or refactoring, document generation, etc.
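A minimal sketch of that representation/presentation split, using Python's ast module (requires Python 3.9+ for ast.unparse; this illustrates the idea, it is not a complete system):

```python
import ast

# Two differently formatted versions of the "same" code...
v1 = "def f(a, b):\n    return a + b\n"
v2 = "def f( a , b ) :\n        return (a    +   b)\n"

# ...share one backing representation (the AST). A tool that stores and
# diffs the representation sees no change between them.
canon1 = ast.dump(ast.parse(v1))
canon2 = ast.dump(ast.parse(v2))
print(canon1 == canon2)  # True

# Each programmer can then render the representation in their preferred
# presentation; ast.unparse is one such renderer.
print(ast.unparse(ast.parse(v2)))
```

In this model an autoformatter is just one renderer among many, and version control diffs the canonical form, so formatting-only changes vanish.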

This is actually something I have been pondering for a decade at this point. I have no idea how a system such as this could be implemented such that it is general and sound enough to be usable. But I still think it would be a neat way to program.


I think the problem ends up being that the visual programming implementations I have seen over the years don't really make a moderately skilled programmer more productive or efficient.

So the value ends up being in giving more people who are unskilled or less skilled in programming a way to express "programmatic thinking" and algorithms.

I have taught dozens of kids scratch and that's a great application that makes programming accessible to "more" kids.

At work, my team abstracts complex feature creation logic into KNIME components that our analysts (who struggle with Python programming) can mix and match and run analysis.


You make me think of Azure ML studio.


> Why can't methods (for instance) get displayed as convenient, without worrying about where they are in the file?

This is something Unison is playing with in their codebase manager.

https://www.youtube.com/watch?v=gCWtkvDQ2ZI


Actually, Pilot works in the same way. It treats all code as independent of the file system, identified by the SHA256 of the content. https://github.com/zubairq/pilot
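The content-addressing idea can be sketched in a few lines of Python (my own toy normalization via the ast module; this is neither Pilot's nor Unison's actual hashing scheme):

```python
import ast
import hashlib

def code_hash(source):
    """Identify a definition by the SHA-256 of its normalized form,
    not by its file or position."""
    normalized = ast.unparse(ast.parse(source))   # strip formatting noise
    return hashlib.sha256(normalized.encode()).hexdigest()

# Reformatting doesn't change identity...
a = code_hash("def inc(x): return x+1")
b = code_hash("def inc(x):\n    return (x + 1)")
print(a == b)  # True

# ...but changing behavior does.
c = code_hash("def inc(x): return x+2")
print(a == c)  # False
```

With identity derived from content, "where in which file" stops being part of a definition's name, which is the property the parent comments are after.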


> Why can't methods (for instance) get displayed as convenient, without worrying about where they are in the file?

Get displayed as convenient what?

> Limiting programs to fixed-width, single-font text also seems like a historical artifact. (The Xerox Alto allowed full WYSIWYG text for program code.) Having to draw diagrams with ASCII graphics just seems wrong.

I can imagine benefits of better diagrams, for documentation, but how would more fonts in the code help?


Comments could use native italics and links instead of HTML markup.

More comments would fit on a line with a proportional font.

Headings would be great for breaking up sections of code, and could be collapsible. Maybe linkable as well.

Things that are arrays of arrays or tuples, like enums, could be tables.

When you pass three functions to create an observer, there could be a table with columns "Parameter name, signature, implementation".

Async code like observables and a thread's run method could be in a different font than code that executes right when you interpret it.


It's possible to make visual user interfaces to text based programming languages. And that's especially convenient for programming languages that are good at drawing graphics.

Here's an interactive visual programming and debugging user interface to the PostScript programming language, implemented in the NeWS dialect of PostScript:

https://medium.com/@donhopkins/the-shape-of-psiber-space-oct...

The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines — October 1989

Abstract: The PSIBER Space Deck is an interactive visual user interface to a graphical programming environment, the NeWS window system. It lets you display, manipulate, and navigate the data structures, programs, and processes living in the virtual memory space of NeWS. It is useful as a debugging tool, and as a hands on way to learn about programming in PostScript and NeWS.

The PSIBER Space Deck is a programming tool that lets you graphically display, manipulate, and navigate the many PostScript data structures, programs, and processes living in the virtual memory space of NeWS.

The Network extensible Window System (NeWS) is a multitasking object oriented PostScript programming environment. NeWS programs and data structures make up the window system kernel, the user interface toolkit, and even entire applications.

The PSIBER Space Deck is one such application, written entirely in PostScript, the result of an experiment in using a graphical programming environment to construct an interactive visual user interface to itself.


When a programmer says "I want visual programming" I think it's often one of those situations where 1) the asker has correctly identified a pain point, but 2) the thing they are asking for isn't necessarily the right way to do it.

Instead of a box-and-lines diagram, I think the stuff that people really want is:

- Structured editing. Valid change operations are obvious (maybe listed in a menu). No invalid states.

- Cross-cutting connections are explicitly displayed (with lines or arrows or etc) instead of being implicit.

- UI does a better job of highlighting just the relevant information and hiding everything else.

- Overall more simplicity. As in, I think some people imagine that their thousand-lines-of-code project could be fully represented as a few boxes and lines with labels. There's some unrealistic expectations tied up in this area. But simplicity is still a good goal.

Those are all good goals and there's different approaches to solve them. A lot of those asks can be solved by keeping the text document interface and adding enough cleverness on top of that (like EVE / LightTable). And there's other visual systems other than boxes-and-lines. People just need to think outside the box. (ha ha, sorry)


I disagree. I use boxes and lines all the time, on paper, when programming. I think in boxes and lines. I briefly wrote some Max/MSP visual code some years ago and I found that I no longer needed to do it on paper, because thinking directly in Max was enough.


But why? In every case other than compilers, when we say we want these types of features, we never mean "let's stick with plain text, like the previous generation did, and not even consider a different type of solution".


Visual programming doesn't just mean boxes and lines. It could mean anything other than plain text.


That's right. It has to do with the dimensionality and topology of the language syntax, not just how it looks.

C++ is not a visual programming language, no matter how much you colorize and pretty-print and fold it, because it's represented as a one-dimensional stream of characters, one after the other.

Excel is a visual programming language (and one of the most widely used programming languages in the world of any kind, visual or not), because it's represented as a two-dimensional grid of numbers, expressions, strings, relative and absolute two-dimensional references, and it also (though not necessarily for the definition of a VPL) supports all kinds of graphical attributes, colored text and background, fonts, text styles, formatting, boxes and lines, freezing and grouping and outlining rows and columns, embedded graphs and graphics, etc.


I agree with this. I think people don't mind text being there, as even all so-called Visual Languages have text in their boxes. People want to understand "what" their program does in a clearer way, and structured editing, lines or arrows, highlighting just the relevant information, and more simplicity do just that.


I think it would be great to do the reverse - take existing programs and map out the calls in a graph structure that can be visually displayed.

The hardest part of looking at an existing code base is "mentally mapping" the call stacks, and over time, at least for the way my brain operates, I tend to build a 2D map in my mind. It doesn't have concrete positions for blocks (functions), but there is a sense of structure in the brain that persists. I can come back to the code base and it is evoked automatically.

We need visual tools for examining codebases just like we have tools for examining database schemas. Perhaps it's not that simple though.


Having worked in actual visual programming environments, I'd argue that even the most trivial applications are too complicated to be mapped out visually. There just isn't enough visual resolution in a screen, or for that matter, in your eye to render it sufficiently. Zoom out and lose detail. Zoom in and lose context. I've seen programs printed out into huge graphs covering entire boardroom tables and I can assure you nobody benefited from the display.

Ironically the convoluted programs that people think this would be a solution for make for the most disgusting and hard to follow diagrams.


Whenever I've tried making dependency graphs of programs (usually the easiest type of visualization to create), I find they always turn up some crazy dependencies you never expected, and don't want. It's easy to hide one weird #include in a list of 20 other #includes. It's hard to hide an arrow going sideways halfway across your diagram.

For visualization types other than dependency graphs or class hierarchies, it's tricky to do any sort of automatic code-to-diagram conversion, so I wouldn't expect there to be a lot of value in that. Even something simple like a state machine might be much simpler to see in diagram form, but it would take quite a bit of sophistication for a visualization program to look at 1000 lines of C and recognize that you're implementing a state machine.

The visual form is more abstract, so you need to start there and have the computer compile it down. Going the other way is always harder. It's 1000 times easier to start with Python code and automatically convert it into machine code, than it is to take machine code and automatically extract a readable Python program from it. That's not because Python is inherently bad for reading or writing. The less abstract form, by its nature, is full of implementation details that don't matter in the more abstract form.
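A small, hypothetical Python sketch of why the state-machine case is special: when the machine is written as an explicit transition table, the "diagram" is already present in the data and a visualizer has nothing to reverse-engineer, whereas the same machine buried in 1000 lines of control flow has to be recognized first. All state and event names here are invented:

```python
# Hypothetical traffic-light machine: the transition table IS the diagram.
# Each (state, event) pair maps to a next state, i.e. one labeled arrow.
TRANSITIONS = {
    ("red", "timer"): "green",
    ("green", "timer"): "yellow",
    ("yellow", "timer"): "red",
}

def step(state, event):
    # Unknown (state, event) pairs keep the current state.
    return TRANSITIONS.get((state, event), state)

state = "red"
for _ in range(3):
    state = step(state, "timer")
# Three timer events walk the cycle and return to "red".
```

The same logic written as nested if/else branches carries identical information, but extracting the arrow list from it requires exactly the kind of sophistication the comment above describes.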


I found the reverse. I had to do a tech debt cleanup of the code and found that doing an abstract diagram made it much easier to map out the interrelation between all the classes which was hard to keep in my head all at once. Also made it easier to communicate to others visually why it was spaghetti & easy to create a diagram about how I'd like to clean it up. I used PlantUML because what a lot of text -> visual things miss is that they don't provide the necessary knobs to tune the level of detail so doing the diagram by hand was a necessity.

There may be multiple ways to look at something - sequence graphs, data flows - and additionally you don't want to capture all the details in the visualization, you want to be able to cut-off the diagrams at choke points where they're not helpful (e.g. I don't care about package X in the code) which is what all visualization packages try to do. This is directly analogous to geographic maps.

Imagine if you viewed the map for a state/province whatever & had the local road network detailed. That map would be unusable & discarded quickly. Diagramming tools need to be able to take text & generate a visualization that you can zoom into/out of & lose/gain the appropriate level of detail necessary, be able to switch between different aspects (like you can with street view vs regular vs satellite, vs topography). I think the reason we don't have equivalent tools for programming is three-fold:

1. There's a lot of popular languages. Maintaining support for parsing & mapping that to the visualization abstraction is going to be a lot of work. Probably manageable though.

2. As engineers we tend to think in terms of either solving the problem or not. No visualization is going to be as good as a hand-drawn diagram in the moment, nor is it probably good enough (at least the first version) as a vehicle for making the modification.

3. There's a lot of type erasure/dynamism that's impossible to capture in code analysis (e.g. type-erased callbacks like `std::function`, `std::any`). That makes the visualization incorrect or causes it to fail prematurely.


Agree with you here. When working on more complex problems, I keep a hand-drawn diagram of the relevant code next to my keyboard. Often I'll spend a day sketching out 5 or 6 rough drafts before I arrive at an accurate understanding of both the existing code and how to portray it visually.


The whole thing maybe.

But often I'm in a function and I'd like to know "who calls this", or "what does this call". So being able to get a graph limited to the current function (node) and a couple of levels out on either end could be very helpful. Or maybe being able to mark two functions and see the ways they are connected (if at all).

I guess there might be some tools that do this though, haven't looked.
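For Python code at least, a rough function-level "who calls this" can be scraped from the syntax tree without any IDE. A minimal sketch using the stdlib `ast` module (it only catches direct calls by name, ignoring methods, aliases, and anything dynamic; the function names are made up):

```python
import ast
from collections import defaultdict

source = """
def load(): parse()
def parse(): validate()
def validate(): pass
def main(): load()
"""

calls = defaultdict(set)  # caller name -> set of callee names
tree = ast.parse(source)
for fn in tree.body:
    if isinstance(fn, ast.FunctionDef):
        for node in ast.walk(fn):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                calls[fn.name].add(node.func.id)

def callers_of(name):
    # Invert the edge direction: who has an arrow pointing at `name`?
    return sorted(f for f, callees in calls.items() if name in callees)

print(callers_of("parse"))  # ['load']
```

Limiting the output to a couple of levels out from the current node, as suggested above, is then just a breadth-first walk over these edges.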


Sourcetrail is one such app. It recently went open source. I’ve been trying on spare time to contribute to it, it currently supports C++, Java and Python.

I’m looking at ways to improve the analysis to get more detail from apps, but it’s tricky. As far as I know, nobody’s made an ontology of computer science or source code patterns yet.

Creating the graph is easy compared to how much you might want to do to interpret it. Then there are data flow graphs to consider, and UI component graphs, dependency graphs, supported platform annotations, and ideally, telemetry from code execution in tests and production, and each can supplement or annotate control flow graphs and source code.

It’s a problem we’ll solve as a community but it’ll take a number of years before it’s general purpose enough that step one is load a program and step two is read and modify it... with everything you need neatly linked and annotated...

https://www.sourcetrail.com/

https://github.com/CoatiSoftware/Sourcetrail

https://www.patreon.com/sourcetrail/posts

For more on my thoughts of “understanding code automatically,” see https://github.com/CoatiSoftware/Sourcetrail/issues/750#issu... as an example.

My preference would be just as books, programmers and IDEs build mental models from source code, that eventually we could generically go from a collection of source files to identifying languages, build commands, and app starting points to multiple layers of graphs: control flow, data flow, cross-project references, and eventually — help programmers like an IDE would, but smarter, more like documentation written by a human with annotated examples from test cases and live code, etc.


Visual Studio can show a call graph, containing both callers and callees to/from a function. "Who calls this" is a standard feature of any IDE worth its name; it's usually called find usages or find references.


Please someone do a PyCharm plugin for this!


It's just a matter of a lot of research. I've been thinking for a long time that now that we have tonnes of memory and CPU power, it's time to add hardware that keeps track of stuff and makes things easier to understand.

For instance, when you look at hex in a file, you see a bunch of numbers, but it would be nice if there were some inference engine and cloud database of commonly used tropes/functions that could infer the concepts and ideas of what the programmer was thinking.

Currently with code, you get someones thoughts that are then translated into a language, then are then translated into machine code.

It would be nice not to lose information regarding what does what in human-readable language. I understand why, for performance reasons, programming languages as they currently are do what they do.

But C and C++ were really designed when hardware and memory were expensive, so they are "to the metal" in terms of their model.

I think reimplementing a basic machine (i.e. actual hardware design) has to be done at the same time as the compiler is made, working out the theory and concepts behind it.

The fragility of von Neumann machines and their code has always bugged me: the machine can only blindly do what the hardware is designed to do. Since hardware is all about speed, you want to minimize information, but for coding it would make sense to have machines developed just for development, and have "light code" for when you release it out into the wild.

Either way I think there is plenty of innovation left, it's just a lot of research at the hardware and compiler level so that you have base foundations and abstractions to make real innovations in understanding how programs behave.

One of the massive issues is not being able to understand cause and effect in programming which leads to bugs, because the brain can't make easy models of how code or algorithms will behave.


If your codebase is in Python, you can generate a module-level dependency graph using `pydeps`: https://pydeps.readthedocs.io/en/latest/#usage

The graphs are overwhelming for anything beyond the smallest project, but I find them useful as an overview. In other words, you can see "what is there" but maybe not understand "how it works."
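The per-file edge extraction that a tool like pydeps automates can be sketched with the stdlib `ast` module; this toy version only sees literal import statements, not dynamic imports, and the example source is invented:

```python
import ast

# A tiny stand-in for one file of a codebase being scanned.
source = """
import os
import json
from collections import defaultdict
"""

imports = []
for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.Import):
        # `import a, b` contributes one alias per module.
        imports.extend(alias.name for alias in node.names)
    elif isinstance(node, ast.ImportFrom):
        imports.append(node.module)

print(imports)  # ['os', 'json', 'collections']
```

Run this over every file, draw an edge from each file to each name it imports, and you have the raw module-level graph; all of pydeps' real value is in filtering and laying out the result.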


My experience was different. I really enjoyed working in visual languages and found them quite pleasant to navigate.

I also found that when applying good software engineering principles, visual languages can look very clear. It's just that most users of visual languages aren't professional programmers and never learned software engineering principles like abstraction. Here's the thing: textual programming languages are also impossible to understand and follow if they don't follow software engineering practices, but that's often ignored because we're used to programming in them. Well-abstracted textual code is usually compared to badly abstracted visual code, and textual languages are declared the "winner" (even if it shouldn't be a competition at all).


There is an absolute dearth of tools, especially modern ones, in this space, but tools to generate call graphs have existed for quite a while. Egypt will parse output from gcc and produce a dot file for Graphviz. There may also be D3.js support in places.

I agree we need visual tools, especially when trying to maintain a large codebase, but only the largest companies can sponsor development of an in-house tool for their codebase, and many of them won't. Maintaining code isn't sexy and doesn't make for new features to sell.


It would be pretty cool if, when you zoom out, you first see methods and how they are linked, then classes and how they are linked together.


We look at compute graphs in deep learning, but it isn't very valuable because the graphs are so complicated. Among other issues, it's difficult to create a visualization of a compute graph that represents the time dimension well.


Have you tried sourcetrail?


The issue is this dichotomy people keep fabricating between the “textual” and the “visual.” This makes it seem like one is “better” than the other for all time and we just need to get to the bottom of it. That distinction is made up and is itself part of the problem, and what we should be focusing attention to: https://dl.acm.org/doi/abs/10.1145/3313831.3376731

A related point this work makes is this association people make between “visual” and “flow diagram” coding. Not all “visual” coding is flow-based.


During times when the world isn't falling apart, I teach Scratch to kids on weekends: https://scratch.mit.edu/

I don't pretend that Scratch's drag-and-drop block paradigm is a viable replacement for text-based programming, but I do think it demonstrates that there might be a text-like system we could create that would free us from keyboards, without reducing our source code to piles of spaghetti.

Also, is declarative programming typically mappable to a visual paradigm without severe compromise? Think of all the applications that produce non-executable documents; don't they factor into the visual programming discussion?


Scratch is really fun and with a big budget I bet a lot of improvements could be made to it.

GoAnywhere MFT is drag-and-drop and available for commercial use. I enjoyed it. My programs were pretty and powerful, if not well architected.

Tasker programming is visual; it lets you program on a mobile phone with taps and not typing.


I do a fair bit of UE4 coding in both Blueprint & C++. Each has benefits and drawbacks, many of which have been mentioned already.

The biggest unmentioned one from a learning perspective though, is that with node-based editors, while you can still make logic errors, it’s difficult to actually make syntax errors. When you get used to it, you might not mind forgetting occasionally to match brackets, remember semicolons, use the right keywords in the right places etc, but as a beginner, all of these things can be quite frustrating and opaque, with often cryptic error messages.

With visual node-graphs though, many of these things are just non-issues, which I think helps beginners gain confidence way faster without bouncing off and giving up in frustration. You can’t even connect pins of the wrong type most of the time, so you get some pretty well communicated type safety to boot.

I’d actually love to see an evolution of a fairly advanced visual programming tool like Blueprint turn into a more general-use language, although preferably with files that are still stored as plain text (so at least play nicely with git etc) rather than the binary black boxes that are currently chewing up my LFS...
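The pin-type check described above is cheap to model. A toy Python sketch (all node and type names invented, loosely Blueprint-flavored) of why wrong-type connections never even become runtime errors: the editor simply refuses to create them.

```python
# Toy model of node-editor pins: a connection is only created when the
# output pin's type matches the input pin's type, so type errors are
# caught at edit time, before the graph can ever run.
def connect(out_pin, in_pin):
    if out_pin["type"] != in_pin["type"]:
        raise TypeError(f"cannot connect {out_pin['type']} to {in_pin['type']}")
    return (out_pin["node"], in_pin["node"])

vec_out = {"node": "GetActorLocation", "type": "vector"}
vec_in = {"node": "SetActorLocation", "type": "vector"}
str_in = {"node": "PrintString", "type": "string"}

edge = connect(vec_out, vec_in)   # accepted: vector -> vector
try:
    connect(vec_out, str_in)      # rejected: vector -> string
except TypeError as exc:
    error = str(exc)
```

In a text language the equivalent mistake still parses and only surfaces later as a compile or runtime error; here the invalid state simply cannot be expressed.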


I really don't get why they didn't want to serialize blueprints into XML or something. Things have improved a bit now with the blueprint diffing feature in terms of merging, but having to put BPs into LFS is always going to be a pain point.


This is exactly what we're building vvvv for, please have a look: http://visualprogramming.net/


I just want a modern spreadsheet-like interface that isn't limited to the concept of sheets. A visually-represented functional language that lets a user with no programming knowledge run a pipeline of data transformation and ultimately output the data in a management/client-ready format.

I want to build it myself, but I have neither the time nor the expertise.


Years ago I worked with a project called FormScape that did that. The end result was usually a printed report or PDF but there is no reason it couldn't be some other data format.

The whole thing was tree based, basically you were gluing together a bunch of domain specific objects. The result was something like a visual abstract syntax tree but not quite. There were general purpose nodes on the tree that handled conditions or arithmetic and specialised ones that mapped data.

The people who grasped it well would have been equally happy writing code. Unfortunately it was often sold as a tool that didn't require programmers, that wasn't the case. If you don't have the ability to decompose your requirements to their constituent parts no tool is going to help.

There was often debate about whether it was programming or not however it was certainly Turing complete. I managed to write a tiny basic interpreter in it.

It was very productive in its narrow niche.


Do you mean like Node-RED, or something different? https://nodered.org/


I'd say that certainly has some of the elements I'm thinking of, but what I'm describing is a little more spreadsheet like


What limitations are you looking to remove? 2d to multi-dimensional? Tables that float instead of living on sheets?


EDIT: I think users can only really easily operate with 2D data, but to borrow someone else's acronym, a ZUI with multiple layers / levels of data could be more ergonomic than a vanilla GUI, so in a way I am indeed arguing for 3D.

And yes to floating tables. Probably allowing users to also reference other sheets/tables semantically rather than just by row-column.

Also creating explicitly defined (visual) pipelines of how the data is being transformed so users can audit and understand complex "spreadsheets".

Auditing deeply linked calculations by following text formulas is too cumbersome for most users.


I've sat through a presentation done with a ZUI (Zoomable User Interface) and it was nauseating. I literally almost threw up due to the unexpected motion every few seconds.


I believe you, but I think there's a difference between watching someone work and working – if you control the motions, they're not unexpected after all


I would love to have a tool that would display code as some kind of annotated AST, rather than the layout imposed by most IDEs or tools such as gofmt.

Put a bag over Java’s ugly face, that kind of thing.

I don’t want to look at IntelliJ’s “staircase of doom” formatting, I want to see one parameter per line when there is more than one parameter.

I don’t want to trace down data “COME FROM”s (local variables that are only used once and have to be tracked down from whence they came).

Give me an alternate layout which directly tracks between the text and the diagram, similar to a structure layout, but showing details within functions/methods as a tree, as well. Show me clearly how values/expressions funnel or pipeline into each other, but filter out some of the noise.

And auto-display fuglyCaps names as kebab-case, while we’re at it :-). Cuz, science.


What you are describing is closely related to some of the ideas of Intentional Programming.

The core idea is that the code is just a model (AST, graph...) and your editor projects it to whatever format or syntax you like.

Development then is coding in these representations of the model or building new smart abstractions and model transformations (DSLs) for going from your screen representation to lower-level building blocks and eventually compiling to e.g. something that runs on the JVM.

It quickly becomes just as complicated as it sounds.

The best implementation in my experience is that of JetBrains’ free tool MPS (Meta Programming System). They start with the model (your AST) rather than the code.

The learning curve is steep and it is difficult to adopt it in teams where people are more focused on shipping features than building exotic tooling to help them do it.


Thanks. That goes on my to-do list, then.


This article provides a good overview of the problems and pitfalls of visual programming languages and environments and why their perceived benefits have not been realized up to now. I'd add one more. Cobol has gotten a lot of attention lately, and although it is not "visual" in any sense whatsoever, it had the goal of making programming accessible to managers and business users by adopting paragraphs and sentences to add structure to programs. Which is great and fine for trivial examples, but in practice the 25,000+ LOC real programs (and thousands of these in typical systems) were/are inscrutable to everyone but the specialist.

The inherent problem is the lack of primitives to build abstractions to "move the language toward the problem domain". These primitives can be first-class functions, objects, macros, etc [1]. The visual programming environments I've seen also fail to provide any such mechanisms, as this article supports.

[1] http://www.paulgraham.com/avg.html


Maybe the path is to merge visual and code programming. We're doing that for data engineering - you can draw in visual editor, and press a button and you have code. So as long as you constrain the domain, it seems doable. https://prophecy.io/, https://medium.com/prophecy-io/spark-deserves-a-better-ide-9...


Here is a long list of reasons this is a bad idea: https://blueprintsfromhell.tumblr.com/

Having used my fair share of LabVIEW, I know such examples are reality. A common mistake of the model is the sentiment that because it's visual, everybody can do it! The biggest WTF I experienced was when system engineers wrote half-completed "reference implementations" in LabVIEW, which coders then had to translate to Java for production, to sort out all the edge cases, because these visual tools (and the people that use them) are very bad at expressing edge cases. They might as well have given us graphs drawn on paper, because the references were so far from reality you could never run them.

Together with that, when the only tool you have is a hammer, everything looks like a nail. Take https://blueprintsfromhell.tumblr.com/image/162487395171 as an example; it's actually one of the simpler graphs from that page. But if you take a look at it, you see it's just doing standard function calls in a sequence, like truncate(), toText(), setText(), so what's the point of making it visual?! For simple signal control loops with only mathematical operations the LabVIEW model works really well, but when you start doing things like subString, groupBy or indexOf and don't realize you should change domain, that's when you get the worst spaghetti.

The benefit of a visual representation is that there are different ways to visualize things and you can choose which one fits the problem best. You have sequence, dependency, state, class, call-graphs, signals, lexical parsers, tables, charts, etc. Most visual environments only implement one of these and very soon you are using the wrong tool for the job.


And yet Blueprints enables a lot of non-programmers to write game logic. Actually, that's why you have so many bad examples: they're not programmers, they don't necessarily know about software engineering principles like abstraction, or they don't care because they consider it throwaway code. There are a LOT of similarly bad examples for textual languages (dailywtf anyone?), so it's unfair to say that this is proof of "visual bad, textual good."

I would very strongly argue that Media Molecules’ Dreams would not be as popular as it is if it used textual programming to implement game logic.


Visual programming is great for some application areas.

I recently wrote software that allows you to transform data step-by-step visually (https://www.easydatatransform.com). I think the visual, flow-based approach here is a real win, especially for non-programmers.

However I wrote the software in a primarily text-based programming environment (C++ and QtCreator). I think it would have been a nightmare to do in a purely visual programming environment.

Horses for courses.


"Flow-based" is only one kind of visual programming. UML alone has over a dozen different kinds of standardized diagrams. I can write a state machine with a big switch statement, for example, but I still find them easier to read when presented visually.

Maybe we just haven't figured out good visualization types yet for some kinds of programs we write. Interestingly, the opposite is not true. I have yet to see a case where we had a half-decent visualization, and people still said "This would be easier to understand if it were 1000 lines of C++".
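The switch statement and the diagram are two projections of the same transition table, and going from table to picture is mechanical once the table is explicit. A sketch (hypothetical states and events) that emits Graphviz DOT text:

```python
# One transition = one labeled arrow; emitting Graphviz DOT from the
# table is a straight loop, which is why diagram-from-code is easy
# exactly when the code already has this table-like shape.
transitions = [
    ("idle", "start", "running"),
    ("running", "pause", "paused"),
    ("paused", "start", "running"),
    ("running", "stop", "idle"),
]

lines = ["digraph fsm {"]
for src, event, dst in transitions:
    lines.append(f'  {src} -> {dst} [label="{event}"];')
lines.append("}")
dot = "\n".join(lines)
print(dot)
```

Piping the output through `dot -Tpng` would render the diagram; the hard direction, as argued above, is recovering the table from a big switch in the first place.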


I think a visual approach is great for representing the overall structure of a C++ program (e.g. inheritance hierarchy), but I much prefer code for the lower level logic.

People have been using visual representations of programs for ~70 years. I'm not sure how much room there is for major improvements to visualization.


Back in the 1980's, I heard virtually the same argument -- just replace "C++" and "visualization" with "assembly language" and "compilers".

There turned out to be lots of room for improvements at the top! We could put almost all of the old housekeeping and busywork into our tools, and use a much simpler language for describing the unique parts of our program that we actually care about.

I see no evidence that our current style of object-oriented programming is going to turn out to be the pinnacle of computing abstraction. The median language is (slowly but steadily) climbing the abstraction ladder, and each step brings it closer to an abstract visualization.


> Lack of evidence of their worth. There are not many Visual Languages that would be generally agreed are ‘‘successful,’’ and there is little in the way of formal experiments or informal experience that shows that Visual Languages are good.

This is bullshit. A lot of musicians & visual artists are super successful with tools like Max/MSP, PureData, vvvv, or the ton of graph-based shader editors out there. Max/MSP is taught in a lot of places to music students, e.g. in conservatories. I promise that by the time you get them to understand for() and if() they've already made some music in Max, and the better ones have already started toying with sensors and controllers. Disclaimer: I worked 3.something years in a company where the owner routinely shipped software entirely built in Max.

People without any programming abilities are able to use Blender's node editor to make super nice visuals : https://www.youtube.com/watch?v=JhLVzcCl1ug and at the other end of the spectrum, code that runs in airplanes is routinely designed using SCADE (https://www.ansys.com/products/embedded-software/ansys-scade... - see e.g. these slides which talk about its use in the A380: http://www.artist-embedded.org/docs/Events/2008/Autrans/SLID...) and other tools based in the synchronous reactive model.


Game dev frameworks Unity and Unreal also have good visual coding support for things like developing shaders, AI behavior trees, or just regular game logic.

There are benefits to visual coding, such as being able to visually group or cluster related code, being able to zoom in and out to create 'regions' of code visible from further out with large labels, and being able to follow the lines connecting your code and state visually.

But there are also big big big drawbacks. I don't even need to mention the literal visual spaghetti code, or that searching is harder: find-and-replace, regex find-and-replace, and most other text manipulation tools are simply not available in visual coding. Hard to say whether visual coding in game frameworks is a boon or a curse. For large teams that include non-technical folks they do seem to be a net benefit.


> being able to visually group or cluster related code

Text can be grouped as well.

> being able to use zoom in and zoom out to create 'region's of code visible from further out with large labels

Can you elaborate on this?


Venn-like overlapped groupings are notoriously error prone in text via annotations.


luna-lang.org has a hero gif that shows zooming in a visual editor.


I can write programs in LabVIEW in a few hours that would take me weeks in another language. Not all of that is due to the graphical nature, most probably due to the domain specificity and IDE integration. LabVIEW makes it super easy to create GUIs, visualise data and debug whilst running.


I've used a bit of LabVIEW; my hunch is that LabVIEW's standard library has high-level components, which it can provide because its niche is so narrow. You could make a similar framework or library in a general-purpose language. The other thing LabVIEW provides is a comprehensive solution for project management--many general-purpose programming languages give you half-assed package and dependency management (less than half-assed if you're dealing with C or C++), and some make you learn an entirely different programming or configuration language just to list your dependencies or make your tests runnable. Those are the big rocks that make LabVIEW easier than most general-purpose programming languages; however, if you're comparing LabVIEW to C, there are many other advantages, notably garbage collection and abstraction--things that are common in high-level programming languages (anyone who says C is a HLL can see themselves out).

The visual language is gravy compared to the standard library and the integrated programming environment.


Ouch. I can write programs in LabVIEW in a few hours that would take me weeks in another language.

I can also write programs in LabVIEW in a few weeks that would take weeks in another language. And I can write programs in LabVIEW in many months that would take weeks in another language.

The biggest marker of a bad language is that it doesn't matter how long you work in it: you just get less and less productive, and your program goes nowhere.


Was going to mention Blender and Houdini, visual effects people use node graph editors all the time. Parametric CAD can be done very effectively with Rhino Grasshopper, which has such good tools for splitting and merging data stored in tables that I'm surprised it's not marketed for Extract/Transform/Load operations.


In what circumstances, generally speaking, do you think visual programming is more desirable than textual programming? Amateur programmers? Media production? Something else?


When you're building a data or signal flow.

It's easier to reason about when you can see the branching, merging, and direction of flow.


Part of the problem with visual programming in the "flow" style is the very fact that "branching" and "merging" has two internally-consistent, equally worthwhile but totally incompatible interpretations. Namely, parallel flow of data (known from "dataflow" diagrams; with parallel juxtaposition representing a product, the empty product standing for a unit flow, and a natural branching operation standing for "copy" or "clone") vs. divergent choice (known from "flowchart" diagrams; with parallel juxtaposition representing an alternative, the empty case standing for a zero flow, and a natural merging operation standing for a processing step that can be reached in multiple ways).

This ambiguity clearly does not exist with a pipeline-like, single-source/single-sink "flow", but it does arise as soon as one has multiple juxtaposed 'flows' in a single diagram or section of a diagram. I'm not aware of visual programming methodologies that try to handle both semantics with equal clarity, and without introducing significant confusion between them.
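The two incompatible readings of the same picture can be made concrete. Here is a minimal Python sketch (function names invented for illustration, not from any real VPL) of one node with two outgoing wires, interpreted first as a dataflow "copy" and then as a flowchart "choice":

```python
# Two readings of the same visual "branch": a node with two outgoing wires.

def dataflow_branch(value, consumers):
    """Dataflow reading: branching = copy/clone.
    Every consumer runs, each on its own view of the value
    (parallel juxtaposition is a product of flows)."""
    return [consume(value) for consume in consumers]

def flowchart_branch(value, predicate, if_true, if_false):
    """Flowchart reading: branching = divergent choice.
    Exactly one downstream path runs
    (parallel juxtaposition is an alternative of flows)."""
    return if_true(value) if predicate(value) else if_false(value)

# The same diagram means both, depending on the semantics you assume:
print(dataflow_branch(3, [lambda x: x + 1, lambda x: x * 2]))   # both wires fire
print(flowchart_branch(3, lambda x: x % 2 == 0,
                       lambda x: "even", lambda x: "odd"))      # one wire fires
```

A visual language has to commit to one of these readings per node (or distinguish them visually), which is exactly the ambiguity described above.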


Perhaps a step in the right direction would be to use different shapes of components for flowchart-like flows (like how in flowcharts diamonds are used for conditionals) and for parallel dataflow (rectangles in flowcharts). Perhaps not perfect, but at least you can visually see “this is a branch” vs “this is a parallel split”.


They aren't incompatible. Conditionals can be modeled as data flows. See circuit design.


> When you're building a data or signal flow.

All code, or at the very least functional code, can be directly mapped to a DFG. Should all functional programming be done visually?


Of course not, because if you write, say, a sorting algorithm, the standard tool for that is structured programming primitives. It would be explained as text in a textbook, with ifs, whiles, and variable assignments, not CFG / DFG primitives.

In contrast, in music for instance, patches were already the main means of interaction and creation even before computers existed - see e.g. music studio patchbays, modular synths, or even guitarist pedalboards. So people in that field already have the mental model of a signal flow and bricks connected together. Both models can mix - a lot of people use small snippets of code in Max/MSP, for instance. In the software I develop, https://ossia.io, it's super common that people will rely on a JS script embedded in a timeline for one small part of their score. Etc. etc.


Wow, Ossia looks very cool - I've wondered if there was something like a "visual OSC-aware API tool" and there definitely is.

The linked article for this thread would be much more meaningful if it mentioned a) specific visual programming languages found to be lacking and b) domains where the visual models map directly to the underlying processes being modeled. As you note, electronic music processes/devices like modular synths, recording studios, and electric guitar rigs are ripe for modeling with boxes/wires in node-based systems.


> It would be explained as text in a textbook, with ifs, whiles, and variable assignments, not CFG / DFG primitives.

Then it's not functional programming. I'm asking specifically about functional programming.


If a visual interface makes it easier, yes. If it makes it more difficult, no.

That depends on what is being built, and who is building it.

Like most things, it isn't one size fits all.

And you're going to have to actually try it out to discover what works best.


Well, of course. That's a tautology.


When you're working on a touch screen device without a keyboard.


Very true! I’ve often wished there was a good visual language I could use to program on my iPad.


Just to add to your point, Alteryx is a popular enterprise software app that is widely used and growing. Nobody on HN knows it exists, but it's very successful in its space, and growing fast.

Python and R code can be embedded within the graphs, which are all C++ under the hood.


> routinely shipped software entirely built in Max

Care to elaborate? Music-related software?


museum installations and touch UIs, as well as visual stuff for events


I think this is relevant to the discussion: Fructure: A Structured Editing Engine in Racket https://m.youtube.com/watch?v=CnbVCNIh1NA


Every time I try a visual programming environment, I end up ditching it for code, not because I can't do it in the visual language, but because it is very slow, tedious, and boring.


There might be an under-explored middle ground here, somewhere between representing a program as ASCII text and a full-blown graph ZUI. Language is already a very efficient way to lay out a graph structure, so let's keep that and borrow from maths the notation for summation, integration, etc., and from physics maybe Dirac's bracket notation for matrices.

I have thought of doing something like that in the browser with a WYSIWYG editor (document.designMode='on') and web components. The DOM is like the AST of the program and is fed into a runtime. Non-expression tags are basically just comments.

Not for regular programming, but for a type of scientific notebook interface for exploratory programming. Anyway, one of these days.
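The "markup tree as AST, fed into a runtime" idea above can be sketched without a browser. This is a hedged illustration in Python, with an XML tree standing in for the DOM; the tag names (`add`, `mul`, `num`, `note`) are invented, and non-expression tags are skipped like comments, as the comment suggests:

```python
# Sketch: a markup tree as the AST of a program, walked by a tiny runtime.
import xml.etree.ElementTree as ET

def evaluate(node):
    """Recursively evaluate an expression tree encoded as markup."""
    if node.tag == "num":
        return float(node.text)
    # Non-expression tags (here, <note>) are treated as comments and skipped.
    children = [evaluate(c) for c in node if c.tag != "note"]
    if node.tag == "add":
        return sum(children)
    if node.tag == "mul":
        result = 1.0
        for c in children:
            result *= c
        return result
    raise ValueError(f"unknown tag: {node.tag}")

doc = ET.fromstring(
    "<add><num>1</num>"
    "<note>non-expression tags are just comments</note>"
    "<mul><num>2</num><num>3</num></mul></add>")
print(evaluate(doc))  # 1 + (2 * 3) = 7.0
```

In a browser the same walker would run over live DOM nodes, and a WYSIWYG editor would double as the structure editor.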



Whatever happened to Luna Lang? https://luna-lang.org/


The Luna project received an additional million in funding recently; they're still cranking away in Poland. It is a very challenging project: they are trying to make a hybrid text-and-graphical language, where you can switch between the two forms. That's a very complicated thing to build. I wish them luck; they will need it.


I think the lack of auto layout is actually a pretty significant part of the problem. It is an unsolved problem in data viz on how to layout complex graphs in a way that humans can parse. That also means that most humans are not very good at it.

Personally my favourite VPL was Automator, which had the control flow primitives equivalent to the bash operators | and ;. This made layout and control flow trivial at the expense of expressiveness. However, as the goal of the thing was basically “terminal for the rest of us”, arguably those two operators are what most people end up using at the command line anyway.
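Those two operators are easy to make precise. A minimal Python sketch of the `|` and `;` primitives described above (function names invented; Automator itself is of course not Python):

```python
# Automator-style control flow reduced to shell's "|" and ";".

def pipe(*stages):
    """'|': each stage's output feeds the next stage's input."""
    def run(value):
        for stage in stages:
            value = stage(value)
        return value
    return run

def seq(*actions):
    """';': run each action in order, independently, keeping every result."""
    def run():
        return [action() for action in actions]
    return run

double_then_inc = pipe(lambda x: x * 2, lambda x: x + 1)
print(double_then_inc(5))            # 11
print(seq(lambda: "fetch", lambda: "resize")())  # ['fetch', 'resize']
```

With only these two combinators, layout is trivial because every workflow is a straight line, which is exactly the trade-off of expressiveness for simplicity described above.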

Of course Automator failed because of a number of other issues (lack of abstraction and bad package management I think are at the top of the list). But I think tools like that have a good amount of potential.


I’ve seen some projects succeed with (because of?) visual programming languages and others fail because of them.

My general feeling is that a visual programming language excels as a domain-specific language where the domain itself is quite visual.

I've seen two problems with visual programming languages which are quite severe in my opinion.

1. Merging sucks. Merging visuals is not straightforward. I've seen companies do trunk-based development before it was hip just to get around it.

2. As with all high level languages, without an understanding about the overhead of different constructs, it is easy to create very inefficient solutions. On cloud systems it may be easy to throw more processing power at the problem. On embedded systems, less so.


I like the idea of visual programming for the same reason I like the idea of my compiler acting as a theorem prover. However, my sense is that both visual programming and formal systems run into the same problem, in that the space of possibilities, the space that needs to be expressed visually, explodes combinatorially, and so we find that the process of working in a visual environment is necessarily less expressive, always a bit behind what can be said by just writing it out in a program. That's why we do our best with compilers and type systems, but still have to write tests.


Surprised not to see schematic design entry for FPGAs mentioned in the comments. It's still considered heresy by professionals, but I think there can be some benefit to doing the top-level design schematically, so you can see at a glance how the major components connect to each other and to the external interfaces. Things do become unmanageable quickly if you try to define all the logic that way, but if you limit it to wiring black boxes together, I would argue it can improve readability.



> Difficulty with large programs or large data. [...] too little will fit on the screen

Solved by the ability to pan and zoom. Node graph tools in professional visual effects environments can grow to >20,000 nodes in a document, especially given that nodes can be nested in groups or cross-included from other documents.

> Need for automatic layout. [...] For example, generating an optimal layout of graphs and trees is NP-Complete [95].

Doesn't need to be absolutely optimal to make the problem go away; making something that helps the user enough that they don't have to think about it is completely tractable.

Many professional node graph tools don't do any automatic layout at all and let the user handle it; which while less than perfect, doesn't stop the tools from being incredibly powerful.

> Lack of formal specification. Currently, there is no formal way to describe a Visual Language.

First, a formal specification is not necessary in order to build something useful.

Second, if you really need a formal specification, then write/invent one. There is nothing about visual systems that makes them less possible to specify than any other software system.

> Tremendous difficulty in building editors and environments. [...] These editors are hard to create [...] the language designer must create a system for display [...] which usually requires low-level graphics programming.

So... do that stuff? What about that work is more difficult than the other parts of writing a compiler; or any other piece of complex software? Maybe this was more of a big deal in 1989, but basically all software today is based on a visual UI, so the complaint that the need for a UI makes visual languages intractable is... pretty ridiculous now. None of this would be harder than, say, writing a simple video game.

> Lack of evidence of their worth. There are not many Visual Languages that would be generally agreed are "successful"

"Nothing good exists yet, so nothing good is possible" doesn't hold water for me.

> Metrics might include learning time, execution speed, retention, etc.

From my experience in visual effects, non-coders can pick up a node graph tool very quickly and get very creative/inventive with nodes— but the moment they have to fall back to writing a script, they struggle or don't even try. It seems abundantly clear to me that node graphs are far easier to learn (but can be just as powerful) as writing with a machine grammar.

I also did an image processing development project in a node graph tool that took about 4 days of experimentation to get to patentable IP. If I had to do the same project in Python or C++, it would have taken weeks or months, if I was even able to solve the problem at all without the fluidity and responsiveness of the node based editor. So even for "serious developers", a good node-based editor can multiply speed by a factor of 10.

> Poor representations. Many visual representations are simply not very good.

Can't argue there— but the solution is to make one that is good. :)

> Lack of Portability of Programs. [...] Graphical languages require special software to view and edit

...Like basically all other file formats in the universe. :)

It is not 1989 anymore, and it's time to fix this stuff.


This is a good and thoughtful essay, but in some respects reads like a rationalization of the author's preferences rather than an attempt to engage with the problems. He makes good points about the lack of formal specification, portability, and specialty software; this is indeed a problem, as all the best visual programming platforms are proprietary.

On the other hand, there's no fundamental reason that visual editors shouldn't be able to load and save source code; it's just a different way of thinking about program structure and user interface. There are already a lot of tools out there that generate dependency graphs within or alongside an IDE; the main shortcoming I see is a lack of nesting. There are few instances in which I want to see the entire structure of a program all at once, any more than I want to read all the source code in a single giant text buffer.

One might equally ask if textual languages are so great, why are there so many of them? Couldn't everything be written in a nice capable high-level language like C? And of course it could, but lots of people don't like C's minimalism and lack of guardrails. It seems to me that most new general-purpose languages originate in a desire to have the computer do some of the tedious housekeeping (like casting variables) and frustration with awkward syntax.

I'm old, so when I was learning to write code it was just flat text files, and every build necessitated a round of swearing over forgotten semicolons, mismatched brackets, or misspelled variables. Syntax highlighting, code folding, autocompletion, and bracket matching, which do so much to make development in a modern IDE pleasant and productive, were once viewed as undesirable crutches that would lead to poor practices, lack of portability, and inaccurate second-guessing of developers' intentions by the IDE, subjecting hapless coders to the tyrannical will of the interface illuminati.

I suspect that behind the aversion to visual programming lies a fear of standardization and automation that will result in widespread developer redundancy as it becomes easier for domain experts to build out their tools without code specialists, somewhat parallel to the development of electronics from hobby clubs where people helped each other with soldering irons, multimeters, and resistor identification charts to today's electronic factories populated only by pick-and-place machines assembling components too small for adult hands to manipulate at dizzying speeds.

The more automation and ML that finds its way into IDEs, the more standardized the underlying libraries are likely to become. To the extent that programmers are freed from syntax production and housekeeping duties, and processes can be read (from visual graphs/charts) with domain knowledge alone, the more likely it is that the most common functions become wholly standardized, in the same way that chips are. 555 timer ICs have been around since 1971, and the most popular derivatives are dual and quad versions. Nobody is writing new and improved 555 timers, and there's no real demand for them.


By the way there's a new version of ndepend, which is not quite the same thing but might be of interest to people here as it's conceptually related: https://www.ndepend.com/docs/visual-studio-dependency-graph


Doxygen can generate similar diagrams with C/C++ code: http://doxygen.nl/ At a former employer, we made it part of the CI pipeline and the class hierarchy diagrams and call diagrams were great for newcomers to get oriented with the codebase.


Simulink is a very successful and powerful visual language for doing all sorts of wonderful things. In the right hands, of course.


I visually program using vi and have for decades. Graphical programming is cumbersome for anything but trivial applications.


If you're going to criticize visual programming languages in general based on a 31-year-old paper written by Brad Myers in 1989, then you should at least look at some of the work he has done addressing those problems he set out in that paper, over the intervening three decades.

http://www.cs.cmu.edu/~bam/

>Brad A. Myers is a Professor in the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University. He was chosen to receive the ACM SIGCHI Lifetime Achievement Award in Research in 2017, for outstanding fundamental and influential research contributions to the study of human-computer interaction. He is an IEEE Fellow, ACM Fellow, member of the CHI Academy, and winner of 15 Best Paper type awards and 5 Most Influential Paper Awards. He is the author or editor of over 500 publications, including the books "Creating User Interfaces by Demonstration" and "Languages for Developing User Interfaces," and he has been on the editorial board of six journals. He has been a consultant on user interface design and implementation to over 90 companies, and regularly teaches courses on user interface design and software. Myers received a PhD in computer science at the University of Toronto where he developed the Peridot user interface tool. He received the MS and BSc degrees from the Massachusetts Institute of Technology during which time he was a research intern at Xerox PARC. From 1980 until 1983, he worked at PERQ Systems Corporation. His research interests include user interfaces, programming environments, programming language design, end-user software engineering (EUSE), API usability, developer experience (DevX or DX), interaction techniques, programming by example, mobile computing, and visual programming. He belongs to ACM, SIGCHI, IEEE, and the IEEE Computer Society.

For example, he wrote this one year later in 1990:

Creating User Interfaces Using Programming by Example, Visual Programming, and Constraints. Brad A Myers. ACM Transactions on Programming Languages and Systems, Vol 12, No. 2, April 1990.

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.460...

I really enjoyed this 1999 paper “A Taxonomy of Simulation Software: A work in progress” from Learning Technology Review by Kurt Schmucker at Apple. It covered many of my favorite visual programming languages.

http://donhopkins.com/home/documents/taxonomy.pdf

And it's also worth reading the more modern and comprehensive "Gadget Background Survey" that Chaim Gingold did at HARC, which includes Alan Kay's favorites, Rockey’s Boots and Robot Odyssey, and Chaim's amazing SimCity Reverse Diagrams and lots of great stuff I’d never seen before:

http://chaim.io/download/Gingold%20(2017)%20Gadget%20(1)%20S...

I've also been greatly inspired by the systems described in the classic books “Visual Programming” by Nan C Shu, and “Watch What I Do: Programming by Demonstration” edited by Alan Cypher.

Visual Programming. Nan C Shu:

https://archive.org/details/visualprogrammin00shu_2pf

Watch What I Do: Programming by Demonstration. Allen Cypher,. Daniel Conrad Halbert.

https://archive.org/details/watchwhatido00alle


All programming is visual, think about it. Text is visual. I'm not just pointing out a semantic discrepancy here. Text is a specific method of encoding meaning and processes in a visual way.

In short, textual programming is just a subset of visual. People think of the dichotomy as a duality of text vs. visual, it is not.

Text just has features people tend to like: a fixed, line-based structure; isomorphism with English vocabulary... Who says other visual programs can't have these too?

The issue with a lot of these "visual programs" is that they also represent a subset of Visual Programming similar to how text is just a tiny subset of visual programming. What you are in fact seeing with most "visual programs" is mostly programs represented as graphs.

The real dichotomy people are discussing is not text vs. visual but text vs. graph. Whenever I see something like labView or something labeled as a "visual program" it's always a program represented with a series of nodes and lines, aka a graph.

Hopefully by illustrating the reality of what's going on, people can see beyond representing programs as just text or just graphs.

There are other ways to represent modules and processes.


Well put! Text is graphics, but technology has historically limited how it looks and what you can do with it. The idea of text with control characters like "carriage return" and "newline", fixed width text, limited length lines, and text editors that wrap words when the cursor gets to the edge of the screen, and terminals with multiple lines that scroll, all incrementally evolved out of emulating punched card machines. And now we emulate VT100s emulating punched card machines. But it doesn't have to be that way.

Here's a great piece of exploratory research that imagines how things could be different, by an amazing visionary visual thinker and artist, Scott Kim:

"Viewpoint" is part of Scott Kim's PhD dissertation that he did at Stanford in 1988 (with Donald Knuth as principal advisor). It was programmed in Cedar, and ran on a color Dorado workstation at Xerox PARC.

https://www.youtube.com/watch?v=9G0r7jL3xl8

>Demo and explanation of Viewpoint, a computer system that imagines how computers might be different had they been designed by visual thinkers instead of mathematicians. Caution: this is basic research, not a proposal for a practical piece of software. Part of my PhD Dissertation at Stanford University in 1988.

At 12:22, he starts with a blank screen, and performs a "visual boot", to use Viewpoint to build itself.

http://www.scottkim.com/

https://en.wikipedia.org/wiki/Scott_Kim

https://visualraccoon.wordpress.com/2013/08/14/the-viewpoint...

This is a visualraccoon perspective on Scott Kim's very graphic and revolutionary exploration, Viewpoint: Toward a Computer for Visual Thinkers.

What would it be like to go back to visual first principles and take a fresh look at graphic user interfaces?

The Viewpoint Thesis is that a small number of pixel manipulation primitives can be defined such that, if they are bound to keyboard and mouse actions, it is then possible to build a simple text-graphic editor by drawing it, and that that editor can be used to draw-build itself.

The Viewpoint Thesis & Editor is part of a larger project founded on the hypothesis that:

“Only by treating the screen itself as a first class citizen will we be able to build computers that are truly for visual thinkers.” Scott, 1987.

This project includes building visual programming languages for such thinkers.

Viewpoint: Toward a Computer for Visual Thinkers

http://www.scottkim.com.previewc40.carrierzone.com/viewpoint...

http://www.scottkim.com.previewc40.carrierzone.com/viewpoint...

When I started my PhD project in 1981, the IBM PC was brand new, and it would be years before the Mac and Microsoft Windows would appear. Nonetheless I was familiar with graphical user interfaces because of my internship at nearby Xerox PARC, the visionary research center that spawned, among other things, the laser printer and the bitmapped display. I was well-acquainted with Alan Kay's vision of a dynabook (he envisioned the notebook computer in 1975), and understood the power of graphic user interfaces and graphic tools like paint programs to bring the power of computers to artists and visual thinkers.

At the time I was part of the digital typography program within the Stanford computer science department, which built computer programs for digital typeface designers. I was also enthralled by the Visual Thinking course at Stanford, which taught engineers in the product design program how to think visually.

It struck me as odd, and deeply wrong, that we were building tools for visual artists in a programming language that was utterly symbolic, and lacking in visual sophistication. I yearned for a programming language that had the same visual clarity that graphic user interfaces had.

So I set about wondering what a visual programming language might look like. If computers had been invented by artists and visually oriented people, instead of by mathematicians and engineers, how might they write programs? It seemed to me an important question, but one that hardly bothered most computer scientists. I read about a few attempts to build visual programming languages, and decided there was something fundamental I needed to understand.

My journey took me deep into the foundations of computer science, where I asked fundamental questions like “what is programming” and “what is a user interaction” — questions that often get passed over in computer science (any definition of “programming” that starts with “a sequence of symbols that…” is not deep enough to encompass visual programming languages). I never did build a visual programming language, but I did figure out a fundamental idea (things get interesting when the user and the computer share the same mental model of data), and built a rudimentary visual editor that demonstrated my ideas.

I’m posting my dissertation and the accompanying video demo to re-open the conversation. What in my dissertation is interesting to you? What other work do you know of along these lines? Do you know of foundational research in interaction design and visual programming? And most importantly, what would be a good next step to push the work forward? (My research stalled for lack of a juicy application domain.) Please email me your thoughts. I'd love to hear them.


I do not agree with the freeform nature of viewpoint.

The possibilities are boundless, but you have to temper the boundlessness with the fact that our brains are biased and specialized to work in certain ways. Case in point: can you communicate your entire post in terms of pictures? Is there any form of non-linear diagramming you can do on a 2D canvas that delivers your point better than text? If I give you the freedom of a 2D canvas to convey your point, can you convey it better than in the restricted form of text? No. In fact, text was the best way to communicate your point. Restriction and lack of freedom are often more effective tools than complete, unrestricted boundlessness.

It doesn't matter whether you're a visual thinker. When it comes to parsing and conveying complex information, our brains have a dedicated module for parsing information that comes in as a linear, serial string of symbols. If you can talk, listen, read, or write, then you can be exceptional at writing programs this way. Your brain is biologically biased to operate this way.

That being said if I show you a picture of a dog, you will learn more about what that dog looks like than any amount of textual description. There is no denying that certain things are communicated better visually and other things are communicated serially.

It's hard to delineate axioms for exactly what is better communicated visually and what is better communicated textually. I can only talk in terms of examples: your ideas on visual programming are better communicated through text; what your car or face looks like is better communicated through pictures.

One thing that has worked throughout history is a combination of text and pictures: PowerPoint. A linear slide show where each slide delivers serial text and pictures in parallel to the human brain.

The future of visual programming, if it happens at all, will ultimately be some combination of diagrams and restricted serial strings of symbols, following the same principles as PowerPoint. I don't believe it has anything to do with foundational computer science; rather, the language must be built around the biases and limitations of the human brain. We build biased textual languages because our brains are partly biased towards text.

I have an idea about what that would look like and how that would work in terms of maintaining programmer productivity. Also I didn't see your dissertation (assuming your name is Don Hopkins) in your post.


>Is there any form of non linear diagramming you can do on a 2D canvas that can deliver your point better than text?

I could write a JavaScript program just for you, that drew blinking animated sparkly text in a canvas which said "WATCH THE VIDEO I POSTED", but there would still be no point to that, since you can lead a horse to water, but you can't make him drink.

Because if you actually watched the video, you would realize that Viewpoint is all about text. Nobody claimed it was "better than text", or that it was trying to get rid of text. You'd know that if you had watched the video. But you don't know that, so I can conclude that me simply writing text was not enough to get you to watch the video.

>I have an idea about what that would look like and how that would work in terms of maintaining programmer productivity.

That's very nice. And do you know how much that's worth? Nothing. Not only are ideas worthless without an implementation, but also ideas that you haven't even bothered to write down without implementing are worth even less, because they're a waste of everyone's time, and not tempered by reality or experience or empirical testing or iterative design and development or stakeholder requirements or user feedback.

What point is there to telling me that you have an idea about something you haven't implemented, or even written down, or drawn a picture of? Or did you forget to provide a link to your own dissertation, a video, and a runnable demo?

It's much better for you to spend your precious time doing difficult, tedious things like writing down, implementing, and testing your brilliant ideas yourself, instead of having so much fun talking about how great it would be if only you wrote them down for other people to read, paid somebody on Fiverr to draw a picture of your idea for you, or found some out-of-work but highly skilled programmer with nothing better to do, who was willing to pair up with a starry-eyed "idea guy" and work night and day for free, on your sincere verbal promise of equity and exposure, implementing your grandiose but unwritten ideas, in the hopes that you don't get angry at them for not precisely interpreting and efficiently engineering your brilliant vision, or suddenly change your mind once you see that what they actually implemented did not perfectly live up to your idealistic expectations and nebulous handwaving.

Good luck with that! Let me know when it ships.


You misunderstand. I watched it. I'm talking about information delivered in a serial format. A linear and ordered ingestion of information. I'm not talking about text per se, but the property of text that makes all programs follow a linear order of execution. Viewpoint is anything but.

Viewpoint is all over the place. It's basically a 2D canvas with no order. Sure it has text but that text is used more like a random label with random coordinates. There's no linear process of instructions delivered or ingested. What you see is basically a reactive canvas with arbitrary coordinates.

The universe we live in has a linear, one-dimensional property that follows the arrow of time. Viewpoint fails to represent this concept symbolically: the process of time, the process of execution. However you choose to represent this concept, all programming languages must have some way of representing time as a spatial dimension. For text it is left-to-right, top-to-bottom reading order; for Viewpoint it does not exist.

What's more, all the symbols can be destroyed, since text is just an abstraction over a pixel-drawing interface. The axioms (pixels) of this interface are too low-level. But this is just a minor flaw.

The main flaw I see with Viewpoint is that it's not clear how computation can be modeled with these pixels. How do you represent logic? AND, OR, NOR, XOR? Is Viewpoint Turing complete?

Time is an intrinsic part of logic; a programming language must be able to represent logic and time using only spatial dimensions.
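To make that concrete, here is a toy sketch. The `nodes`/`wires` representation and `evaluate` function are hypothetical, my own illustration rather than Viewpoint's or any real VPL's model: gate nodes carry 2D coordinates purely for display, while the "time" dimension is recovered by evaluating the wire graph in dependency order.

```python
# Toy dataflow sketch: a "visual" boolean circuit where node positions
# are purely spatial (for display only), and execution order (time) is
# recovered from the wire graph by repeated readiness checks.
# Hypothetical example, not Viewpoint's actual model.

OPS = {
    "AND": lambda a, b: a and b,
    "OR":  lambda a, b: a or b,
    "XOR": lambda a, b: a != b,
    "NOR": lambda a, b: not (a or b),
}

def evaluate(nodes, wires, inputs):
    """nodes: {name: (op, (x, y))}; wires: {name: (src_a, src_b)}."""
    values = dict(inputs)
    remaining = set(nodes) - set(values)
    while remaining:  # crude topological evaluation
        ready = {n for n in remaining
                 if all(src in values for src in wires[n])}
        if not ready:
            raise ValueError("cycle: no linear order of execution exists")
        for n in ready:
            op, _pos = nodes[n]  # position is ignored by execution
            a, b = (values[s] for s in wires[n])
            values[n] = OPS[op](a, b)
        remaining -= ready
    return values

# A half-adder laid out on a canvas; coordinates don't affect the result.
nodes = {"sum": ("XOR", (10, 0)), "carry": ("AND", (10, 20))}
wires = {"sum": ("a", "b"), "carry": ("a", "b")}
result = evaluate(nodes, wires, {"a": True, "b": True})
print(result["sum"], result["carry"])  # False True
```

The "arrow of time" is not drawn anywhere on the canvas; it falls out of the dependency structure, which is one answer to how a 2D layout with arbitrary coordinates can still have a linear order of execution.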


I think you need to go back and watch it again, because what you're saying doesn't align with what the video shows, and it sure sounds like you misunderstand, and that you don't know what you're talking about.

Did you notice the part in the video description that I quoted in my post that said "Caution: this is basic research, not a proposal for a practical piece of software. Part of my PhD Dissertation at Stanford University in 1988"? While you're at it, read Scott's introduction and PhD dissertation that I linked to, as well.

Do you even know who Scott Kim or his thesis advisor Donald Knuth are, or how the research at Xerox PARC in the 80's contributed to the fields of programming language and user interface design? What were you doing in 1981?

So what's that great idea you claim to have? Where's the link to your dissertation and a runnable demo?


>So what's that great idea you claim to have? Where's the link to your dissertation and a runnable demo?

That's mine. When it's ready I'll reveal it.

>I think you need to go back and watch it again, because what you're saying doesn't align with what the video shows, and it sure sounds like you misunderstand, and that you don't know what you're talking about.

In his video he even called Viewpoint an "editor" as opposed to a "visual programming language." It's clearly not Turing complete, and I would say it falls outside the topic of visual programming languages. I think he's more in the domain of researching user interfaces for the common computer user rather than a visual interface for programmers and programming in general.


>That's mine. When it's ready I'll reveal it.

https://www.youtube.com/watch?v=RKMNPQ35OUc

Oh, it looks like we have ourselves a classic "idea guy"! I've met a whole lot of people like you before. It must be hard getting a job in such a crowded field, especially at a time like this.

So that's a hard no, you've never heard of Scott Kim or Donald Knuth or Xerox PARC, right?

I can see you have excellent soft "people skills", and it must be a delight to work with you. Good luck talking somebody else into implementing your ideas for you, but don't let anybody else know what they are, and keep them top secret, and never discuss them with anyone who's done similar work, and make sure everyone you discuss them with signs NDAs with Non-Compete clauses (and make sure you're in a state where they're enforceable) and Reach-Around Clauses (in case they fuck you up the ass), because your ideas are so easy to implement once you know them, that somebody might steal them from you before you get around to conning somebody else into implementing them for you for free, if you leak any details of all those original ideas of yours that are all yours and nobody else's and they belong to you and are yours. ahem ahem cough

https://www.youtube.com/watch?v=Xs7r5xfucPs

Better see somebody about that cough!


>Oh, it looks like we have ourselves a classic "idea guy"! I've met a whole lot of people like you before. It must be hard getting a job in such a crowded field, especially at a time like this.

It's just a fuzzy idea. Not a business idea. I don't see how I can make any money off of it. Also I'm not the type of guy who can get other people interested in working on it. Likely even if I do build it myself and open source it one day it won't take off without external support or luck.

>So that's a no, you've never heard of Scott Kim or Donald Knuth or Xerox PARC, right?

No who are those guys. Never heard of them. I only know about Mark Zuckerberg, Elon Musk, Jeff Bezos and Bill Gates. That's all I know.

>I can see you have excellent soft "people skills", and it must be a delight to work with you.

Yeah I figured you might get pissed off. Note that it was nothing personal, and none of my comments (other than the sarcasm above) ever deviated from just attacking the idea. I was just extremely direct with my criticism. I attacked the idea, but inevitably as all people are... if I attack the idea, the person behind the idea feels he's getting attacked too.

I tend to be more direct on the internet to cut through all the politics as the consequences of not playing the game on the internet are not that serious.


>It's just a fuzzy idea.

Well I'm sure it proves your point, whatever it is, and definitively wins your argument, hands down. Who am I to question such extreme self confidence? I graciously concede. Congratulations!

>Yeah I figured you might get pissed off. Note that it was nothing personal, and none of my comments (other than the sarcasm above)

Why I'm not pissed off at you at all, and I don't mean for you to take any of this personally, and I non-apologetically apologize for you choosing to be offended by any of my words, plus I give you the full and sincere benefit of the doubt that you would never retroactively claim you were being sarcastic, simply because you didn't want to admit you made a mistake. Who in their right mind would ever do something like that, in this day and age? I'm sure your brilliant idea is good enough to win the Noble Prize in Journalism for curing Coronavirus by injecting bleach.

>No who are those guys. Never heard of them. I only know about Mark Zuckerberg, Elon Musk, Jeff Bezos and Bill Gates. That's all I know.

Yeah, I didn't think you knew who you were criticizing, or even had the curiosity to google them.


>Yeah, I didn't think you knew who you were criticizing, or even had the curiosity to google them.

I was being sarcastic. Either way you shouldn't worship people. Everyone is infallible. Clearly Scott made an editor and clearly that editor is inferior in every way to photoshop. You can literally do the same thing with Brushes. Maybe viewpoint was revolutionary for its time, but no longer.

>Why I'm not pissed off at you at all, and I don't mean for you to take any of this personally

You are pissed off, and you took it 100% personally.

>Well I'm sure it proves your point, whatever it is, and definitively wins your argument, hands down. Who am I to question such extreme self confidence? I graciously concede. Congratulations!

My idea could be a piece of shit for all I care. I didn't share it with you so there's really nothing to talk about.

The topic at hand is Viewpoint and how Viewpoint isn't even Turing complete. Viewpoint can't express any logic, so it doesn't even fit the overall topic of this thread: visual programming languages.


>I was being sarcastic.

Then who are Scott Kim and Donald Knuth and Larry Tesler and Jeff Raskin, and what did they do that relates to this discussion? If you were being sarcastic about not knowing, then you should be able to easily answer that question. And here's another easy question: which one of them did HN most recently turn the header black for, and why?

>You are pissed off, and you took it 100% personally.

No, I was only sarcastically pretending to be pissed off and take it personally. ;)


>Then who are Scott Kim and Donald Knuth and Larry Tesler and Jeff Raskin, and what did they do that relates to this discussion? If you were being sarcastic about not knowing, then you should be able to easily answer that question. And here's another easy question: which one of them did HN most recently turn the header black for, and why?

Because the question is obvious. Who doesn't know Knuth? As for Scott Kim and the rest, I don't know them and could care less. They aren't the topic of conversation.


[flagged]


Did you just call me moron?


>Everyone is infallible.

That's optimistic of you!

    infallible
    /ɪnˈfalɪb(ə)l/
    adjective
    incapable of making mistakes or being wrong.
    "doctors are not infallible"
>Clearly Scott made an editor and clearly that editor is inferior in every way to photoshop.

And in every way prior to Photoshop, too:

https://en.wikipedia.org/wiki/Adobe_Photoshop

>Adobe Photoshop is a raster graphics editor developed and published by Adobe Inc. for Windows and macOS. It was originally created in 1988 by Thomas and John Knoll.

http://www.scottkim.com.previewc40.carrierzone.com/viewpoint...

>Scott Kim's 1988 PhD Dissertation: History: Timeline: p. 26:

>Undergrad school 1973: Computers, computer music, and computer graphics. Visual thinking class with Robert McKim. Basic graphic design class with Matt Kahn. Met Doug Hofstadter. BA in music, with studies in mathematics. Wrote article on four-dimensional optical illusions.

>Graduate school 1979: Started as graduate student (Masters in CS) Fall 1979.

>Metafont 1980: Gave Metafont demos. Programming AMS-Euler font in Metafont. Xerox PARC. Started as consultant.

>Inversions 1981: Worked with Richard Weyrauch on computational philosophy. Wrote and produced Inversions book. Viz Din (visual programming language discussion group with Fred Lakin and Warren Robinett) started. Started doing graphic design jobs.

>Computer languages 1982: Started interdisciplinary PhD. Taught visual thinking. Dave Siegel took over Euler.

>Visual programming 1983: July started work at Information Appliance. August ATypI (Association Typographique Internationale) conference.

>Thesis Proposal 1984: Wrote dissertation proposal. Dropped idea of programming. Added idea of pixels. Introduction of Apple Macintosh computer.

>Programming 1985: Lecture at Hewlett-Packard: the field of user interface design and why it doesn't yet exist. Learned to use Cedar programming environment at Xerox. Wrote first draft of Viewpoint program.

>Thesis, draft 1986: Taught "Graphic Invention for User Interfaces at Stanford with William Verplank. Wrote dissertation, draft 1.

>Thesis, final 1987: Rewrote program to match writeup. Revised theory. Dissertation defense, including a videotaped demonstration of Viewpoint. Wrote final version of dissertation.

And speaking of Adobe Photoshop: Scott Kim acknowledged John Warnock, the President of Adobe, in his thesis, who supported his stay at Xerox PARC:

>Xerox PARC: Leo Guibas. John Warnock. Maureen Stone. Michael Plass. Leo invited me to PARC. John and Maureen sponsored my stay. Michael helped with programming.


What do his credentials have to do with anything? I don't care.


You're obviously being sarcastic when you say "I don't care." Otherwise you would have let the topic drop a long time ago.

The timeline proves how idiotic it was for you to complain that Scott's dissertation wasn't as good as Photoshop (while his stay at Xerox PARC was SPONSORED by John Warnock, president of Adobe). Again: Photoshop was released AFTER Scott's dissertation. And a commercial product developed and supported by thousands of full time people over decades is in no way comparable to a PhD dissertation written by one person that is clearly labeled "Caution: this is basic research, not a proposal for a practical piece of software."

Do you also attack Scott's thesis advisor Donald Knuth, because Metafont isn't as powerful as Adobe Illustrator?

You're baselessly attacking Scott for his "exploratory research" that was completed BEFORE Photoshop was even released. So your indefensible position is that it was wrong for me to post his thesis to this discussion because his exploratory research wasn't better than a product released AFTER he received his PhD from the guy you claim not to know, Donald Knuth.

It's ironic you'd criticize Scott for his exploratory research not being better than a commercial product from Adobe that was released later, since Adobe respects him enough to invite him to give distinguished lectures like this:

https://research.adobe.com/distinguished-lecture-series/

>The Adobe Distinguished Lecture Series brings our industry’s most eminent researchers and creative professionals to the Adobe campus for a lecture and full day of informal discussions. Sponsored by the Adobe Research, the series provides an opportunity to meet and learn from visionaries at the cutting edge of digital media.

>FEBRUARY 05, 2009. 12:30PM. Scott Kim (Shufflebrain)

https://research.adobe.com/lecture/scott-kim-shufflebrain/

>Scott Kim (Shufflebrain). February 05, 2009 12:30pm. Games for Visual Thinking.

>In this talk veteran puzzle designer Scott Kim will show you original puzzles that stimulate visual imagination. You will see magazine puzzles that introduce ideas in visual design. You will see animated ambigrams that playfully combine typography and illusion. And you will see how his new online game Photograb exercises visual thinking skills. Finally Scott will discuss what makes a good puzzle, and how games can change the way you think.

>BIO: Scott Kim is one of the world’s most prolific and versatile puzzle designers. He has designed thousands of puzzles for such web game companies as PopCap, Gamehouse and the Tetris Company. He is the puzzle columnist for Discover magazine, and writes the annual Brainteasers page-a-day calendar. His strategy game MetaSquares is now available on iPhone. He has designed educational games for puzzle toy company ThinkFun. Scott has a PhD in Computers and Graphic Design from Stanford University. He is now designing smart games for his new company Shufflebrain.

Do you really believe you're infallible? And that I and everyone else is infallible too? Or was that just your sarcastic spelling?

Which visual programming languages have you actually developed or even used? Or are you just all talk and baseless criticism of people with decades more experience than you?

I worked on developing and documenting and programming in the visual programming language in The Sims, SimAntics, which worked well enough to ship in an award-winning product that was the top-selling PC game of all time, so a lot of people have used it (over 200 million copies sold, making EA over $5 billion), and it works quite well. So by any measure, that's at least one example of a successful visual programming language.

https://www.ladbible.com/technology/gaming-the-sims-franchis...

>The Sims Franchise Has Officially Made Over $5 Billion

https://www.youtube.com/watch?v=-exdu4ETscs

>The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo. This is a demonstration of the pie menus, architectural editing tools, and Edith visual programming tools that I developed for The Sims with Will Wright at Maxis and Electronic Arts.

https://www.youtube.com/watch?v=zC52jE60KjY

>The Sims Steering Committee - June 4 1998. A demo of an early pre-release version of The Sims for The Sims Steering Committee at EA, developed June 4 1998.

Documentation that I wrote about the visual programming language in The Sims:

https://donhopkins.com/home/TheSimsDesignDocuments/VirtualMa...

Ken Forbus taught SimAntics programming with Edith in his game design course:

https://donhopkins.com/home/TheSimsDesignDocuments/Programmi...

And I've also worked on developing other visual programming languages, like Bounce / Body Electric, and others that weren't released as commercial products. And I've actually used (and studied the visual code and source code and extension APIs of) a bunch of other visual programming languages, too.

https://medium.com/@donhopkins/bounce-stuff-8310551a96e3

https://www.donhopkins.com/home/archive/visual-programming/b...

Which visual programming languages have you used, and what did you use them for? Have you ever actually designed or developed any yourself?

So now are you finally willing to share any of your great ideas and experiences, since you admit that you'll never get around to actually implementing those ideas yourself? What do you have to lose by sharing them, then? And if you refuse to share your ideas that you'll never implement, then why did you even bring them up in the first place?


>Do you really believe you're infallible? And that I and everyone else is infallible too? Or was that just your sarcastic spelling?

Not a misspelling. A misuse of a word. Can we move on from this?

>So now are you finally willing to share any of your great ideas and experiences, since you admit that you'll never get around to actually implementing those ideas yourself? What do you have to lose by sharing them, then? And if you refuse to share your ideas that you'll never implement, then why did you even bring them up in the first place?

The topic of conversation is Viewpoint, and in my opinion Viewpoint is not an example of a good visual programming language, as it's not even Turing complete. I illustrated my thoughts on what a good visual programming language would entail. That is the topic.

Whatever your background is, great. Whatever my background is, it doesn't matter. I will say it's not as illustrious as yours, and I have not implemented a visual programming language for a best-selling game. Either way, credentials have nothing to do with the topic at hand.

I don't care about the timeline of things and the real topic at hand isn't who made a better photoshop. The topic at hand is visual programming languages. I made a comment on my thoughts about viewpoint and what I thought about what a visual language should look like. It appears that you disagree with me except, none of my thoughts were addressed. I just mostly see credentials of everyone being mentioned everywhere.

I'm willing to talk about visual programming languages but I'm not going to introduce my idea to you because I don't know you and you haven't been very friendly to me. Additionally I only want to introduce it at a point in time when I have the resources to be very involved with the project if that ever happens in my lifetime. I'm not going to just give it to you so you can run with it.

Also, it's not that great of an idea anyway; it wasn't brought up to be talked about, it was just "mentioned" is a better word for it. Likely it will never get implemented, as I don't have the time to get it working. Visual programming is an interdisciplinary field that requires expertise in programming language theory, user interface design, and graphics programming. My specialty is just web, so while I'm learning those things on the side because I'm interested in them, whether all of that coalesces into a visual programming language remains to be seen.


Instead of complaining that you're not interested in all these different topics and people you don't want to know anything about, and then mentioning your own things as being superior to the things you're criticizing, but then refusing to say anything more about your own things when asked, and then complaining that Scott Kim didn't make a better Photoshop, but then complaining that "the real topic at hand isn't who made a better photoshop", and then being sarcastic some of the time, but also accidentally using words that mean the exact opposite of what you intend (which makes it hard to tell your sarcasm from your mistakes, just like Trump), it would have been a better idea for you to simply not participate in this discussion that you find so uninteresting and refuse to contribute to.


No.

Scott Kim's Viewpoint is irrelevant because it's not programming. It's an editor for pixels.

I am contributing by telling you what's wrong with Viewpoint and the ideas behind it. You're contributing by spouting off about everyone's credentials.

We're done.


I quite strongly disagree that Scott Kim's PhD thesis on Viewpoint isn't relevant to a discussion of visual programming languages, and I'll be happy to explain why.

But I don't understand why you have wasted so much time and effort trying to convince me not to discuss it, when you could have simply not said anything at all, since you don't have anything useful to contribute.

If your problem is that there is too much irrelevant noise in this discussion, you have nobody to blame but yourself, because you have added absolutely nothing useful or positive or interesting, just a lot of useless whining. You've already said you don't care FOUR times, so stop denying so frequently that you care, or simply stop posting.

People who don't have anything useful to contribute, or refuse to contribute anything useful that they claim to know but want to keep secret because they don't trust anyone not to steal their idea, shouldn't mention their secrets in the first place, and shouldn't attack other people for contributing things to the discussion.

Stop complaining, and start contributing.

Now back to the point:

I had a discussion in 1999 with Jaron Lanier about the 3D tree node data structure (Swivel3D trees) and plug-in COM data structures in Body Electric / Bounce, and he raised some interesting points about visual thinking and explicit visual representation of data, which parallel what Scott was exploring in his Viewpoint thesis.

Jaron, who founded VPL Research, developed and used Body Electric extensively, programming real-time virtual reality simulations and musical instruments by integrating data gloves, body suits, EyePhones, two separate SGI workstations to render for two eyes, 3D input trackers, music synthesizers, Convolvotrons, and other I/O devices.

https://en.wikipedia.org/wiki/Jaron_Lanier

https://en.wikipedia.org/wiki/VPL_Research

In case you're not familiar with his work, here is a classic 1986 interview with Jaron from the book "Programmers at Work" (which also interviews Scott):

>Today I posted the Jaron Lanier interview from his early days as a young programmer in California. Back then, this free-spirited guy was touting Virtual Reality to disbelieving stares. Jaron has become a great spokesman and sage for the industry, questioning where we are going and where we have come from. He is an author, computer scientist, and gadfly of the industry. You can read all about his recent work here.

https://programmersatwork.wordpress.com/jaron-lanier-1986/

>INTERVIEWER: What are you doing with programming languages now?

>LANIER: Well, basically, I’m working on a programming language that’s much easier to use.

>INTERVIEWER: Easier because it uses symbols and graphics?

>LANIER: It needs text, too. It’s not exclusively graphics. With a regular language, you tell the computer what to do and it does it. On the surface, that sounds perfectly reasonable. But in order to write instructions (programs) for the computer, you have to simulate in your head an enormous, elaborate structure. Anytime there’s a flaw in this great mental simulation, it turns into a bug in the program. It’s hard for people to simulate that enormous structure in their heads. Now, what I am doing is building very visual, concrete models of what goes on inside the computer. In this way, you can see the program while you’re creating it. You can mold it directly and alter it when you want. You will no longer have to simulate the program in your head.

Jaron designed the musical visual program (which Scott Kim cited in his thesis) on the cover of the September 1984 Scientific American on Computer Software (a wonderful issue, with many articles about programming languages and software by some amazing people).

https://www.scientificamerican.com/magazine/sa/1984/09-01/

To draw a parallel, pixels are to Viewpoint as the Swivel3D scene graph is to Body Electric, and they both share the ideal that "the virtual world and the knowledge base were the same thing" and "it's user interface all the way to the bottom":

"I had always thought the swivel tree was ridiculous, of course, but on the other hand I liked the idea that the virtual world and the knowledge base were the same thing- that unity encourages the visibility and grabbability of the underlying concepts. I think the brain works that way- there isn't some barrier behind which everything gets abstract- instead, it's user interface all the way to the bottom! What I think would be the coolest long-term destination of BE would be extending the scenegraph so that it was as powerful a knowledge base as you'd want..." -Jaron Lanier

https://macintoshgarden.org/apps/swivel-3d

https://www.donhopkins.com/home/archive/visual-programming/b...
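Jaron's point that "the virtual world and the knowledge base were the same thing" can be sketched in a few lines. This is a hypothetical structure of my own (the nested-dict layout and `query` helper are illustrative, not Body Electric's or Swivel3D's actual format): one tree is simultaneously the transform hierarchy a renderer would walk and a symbolic database you can query.

```python
# Toy sketch of "the virtual world and the knowledge base are the same
# thing": one nested-dictionary tree serves both as a scene graph
# (transform hierarchy) and as a queryable knowledge base.
# Hypothetical structure, not Body Electric's actual data model.

world = {
    "name": "room",
    "position": (0, 0, 0),
    "children": [
        {"name": "lamp", "position": (1, 0, 2), "on": True, "children": []},
        {"name": "table", "position": (3, 0, 1), "children": [
            {"name": "book", "position": (0, 1, 0), "title": "Inversions",
             "children": []},
        ]},
    ],
}

def query(node, predicate, path=()):
    """Walk the scene graph as a knowledge base, yielding matches."""
    here = path + (node["name"],)
    if predicate(node):
        yield here, node
    for child in node["children"]:
        yield from query(child, predicate, here)

# The same tree the renderer would traverse answers symbolic questions:
for path, node in query(world, lambda n: "title" in n):
    print("/".join(path), "->", node["title"])  # room/table/book -> Inversions
```

This unity is what the Bounce dictionaries described in the email below were reaching for: structured data you can both render and reason about, passed around on wires.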

    At 12:31 AM -0400 7/8/99, Hopkins, Don wrote:

    Hi, Jaron. We've met briefly once or twice - I'm a friend of David
    Levitt's.

    What's this about Sun acquiring the rights to the VPL patents, and
    Body Electric?
https://web.archive.org/web/20051217153424/http://www.advanc...

    Does Sun actually have the Body Electric source code? What version do
    they have?

    Does anybody at Sun even know how to compile a Mac program?

    Last time I worked there (admittedly a long time ago), they were too
    embarrassed to allow Macs in the building (which might make somebody
    realize how bad Unix sucks in comparison).

    Has anyone tried to rewrite it in Java?

    When I was working with David at Levity and Interval, I totally
    overhauled the source code to Bounce: porting it to the latest version
    of CodeWarrior and the PowerPC, cleaning up C code translated from
    Pascal, that no human had ever touched before, and just generally
    re-indenting and adding white space so it was pretty to look at.

    Then I implemented a new interface for plugging in DM's, based on
    ActiveX (yes, ActiveX runs on the Mac).  I added a new data type that
    you can flow along blue wires: a COM object (aka a plug-in ActiveX
    object in a shared library).  

    Then we made plug-ins modules that produced and consumed the new
    plug-in data types, like strings and polymorphic dictionaries.  With
    these new data types, we were able to model very complex simulations
    as nested trees of dictionaries, strings and numbers, treat
    dictionaries as high level objects, pass them all around on wires,
    reading and modifying them at will.

    The thing that the original Body Electric was missing is a way to
    dynamically model structured data like Lisp s-expressions and
    association lists, which is now possible in Bounce, using
    dictionaries. (The swivel3d trees just don't cut it for representing
    "knowledge".)

    Bounce still had the "M4" Director player rendering engine, but it
    didn't do everything we needed, so I implemented my own graphics
    library for drawing sprites, playing sounds, and stuff like that.  We
    used it to implement a simulation of Rush Limbaugh and Jesse Jackson
    watching TV and arguing over the closed captioning stream.

    A big fat housefly would skitter around the screen, land on their
    faces, and tickle them into waving their hands around and
    pontificating.

    I just got a 400 mhz G3 powerbook, and fired up a 2 year old copy of
    Bounce and the crazy Limbaugh/Jackson demo, and it still works,
    actually like a bat out of hell! I ran into David and John Szinger
    (another Bounce programmer from Interval) a few days ago at a 4th of
    July party, and they really got a kick out of seeing the old demo that
    we worked together on, still running!

    I live in Oakland just south of Berkeley. Drop me a line if you're in
    the bay area, and would like to see what Bounce has evolved into.

        -Don

    From: Jaron Lanier
    To: Don Hopkins
    Sent: Thursday, July 08, 1999 6:13 AM
    Subject: Re: Body Electric lives?

    Hey there, and thanks for writing!

    Yup, Sun owns VPL and Body Electric, and my guess is that if anyone
    looked very closely it would turn out Interval doesn't have rights to
    Bounce.  But no one is likely to look very closely, so let's forget
    about that.  (I don't think Sun is aware of Bounce or the work at
    Interval.)

    I had never asked what was done with Bounce at Interval- it's
    fascinating to hear what you were up to.  I live in NYC for the most
    part, but I'd love to see it sometime when I'm in the Bay Area.  Also
    Chuck Blanchard lives in SF and you and he might want to trade demos
    sometime.

    There IS a community of Body Electric users.  It is STILL building the
    most interactive 3D virtual worlds of any tool (though Alice, from
    Carnegie Mellon, is the other hot contender).  That's SHAMEFUL!  While
    BE sucks in every other way, all the more recent vr design tools,
    especially the vrml ones, simply avoid the problem of deep
    interactivity. How could the community be so whimpy, at this late
    date?

    On the Body Electric side of things, there has also been some work
    updating to CodeWarrior/PowerPC (shame we didn't share that work!), as
    well as support for Quickdraw3D, OpenGL on the Mac, OMS, and some
    other standards.

    There have been some changes to the interface, but not as ambitious as
    yours.  The neatest thing is a debugger/tracer tool that is a pleasure
    to use.  Since BE is used mostly in 3D domain, there are also some
    tools dealing with textures, lights, etc.

    I had always thought the swivel tree was ridiculous, of course, but on
    the other hand I liked the idea that the virtual world and the
    knowledge base were the same thing- that unity encourages the
    visibility and grabbability of the underlying concepts.  I think the
    brain works that way- there isn't some barrier behind which everything
    gets abstract- instead, it's user interface all the way to the bottom!
    What I think would be the coolest long-term destination of BE would be
    extending the scenegraph so that it was as powerful a knowledge base
    as you'd want...

    I'd LOVE to see a fancy BE release, in JAVA, or at least spitting JAVA
    out.  The question, of course, is where the money would come from.
    I've tried to talk Sun into an open source release so the community
    could hack on it voluntarily (the source was included in the patents
    that were granted, so it's actually already released by the US
    government anyway, though only on paper).  Unfortunately, Sun doesn't
    want to do that.  What they've said instead is that I can choose up to
    six sites with free hacking privileges at a given time.  Bizarre!
    With so few sites, I think there'd need to be money to make sure the
    sites stayed focused on it...  You'd have thought Sun would know
    better by now.

    The body electric community is surprisingly NOT entertainment
    oriented, though I and a few others still use it that way.  There are
    people hidden away who are still using it for ergonomic simulations,
    simple surgical planners (because the fancy systems are too rigid to
    model some situations), cognitive test rigs, and some work with kids.

    You should know there was some tension about BE/Bounce at one point.
    The problem was that Chuck Blanchard wasn't credited as the lead
    designer/programmer of BE/Bounce when David brought the program to
    Interval.  Chuck's name was reduced in stature in the "about" and he
    was not mentioned in some important semi-public demos at Interval.  He
    was also offered a pseudo-position to help with Bounce at Interval,
    but without health benefits - and Chuck has MS and absolutely NEEDS
    health insurance.  I at one point yelled at the guy who runs Interval
    (forgot his name..)  about their treatment of Chuck- and David is
    STILL mad at me for making a fuss about it.  But I felt I had to.

    So that's the scoop!  Both sides of the mystery revealed!

    All the best,

    Jaron

    Jaron on the web:
https://web.archive.org/web/20050212085044/http://www.advanc...



