
Ask HN: Why isn’t visual programming a bigger thing? - remolacha
Visual programming seems to unlock a ton of value. Difficult concepts can more easily be grokked in a visual form. Programming becomes more approachable to first-timers. Since text is difficult to manipulate without a physical keyboard, visual programming opens the doors to doing development on mobile devices. And yet it only seems to be mainstream in education (i.e. Scratch). Why?
======
barrkel
Symbols are pictures too; and they have denser meaning than diagrammatic
pictures.

It's not that difficult concepts are easier in visual form. It's that concepts
are verbosely described in visual form. Verbosity puts a ceiling on
abstraction and makes things explicit, which is why things seem simple for
people to whom everything is new (experts, on the other hand, find it harder
to see the wood for the collection of tall leafy deciduous plants).

When you need abstraction, you need to compress your representation. You need
to replace something big with something small. You replace a big diagram with
a reference. It has a name.

Gradually you reinvent symbolic representation in your visual domain.

Visual programming excels in domains that don't need that level of
abstraction, that can live with the complexity ceiling. The biggest win is
when the output is visual, when you can get WYSIWYG synergies to shorten the
conceptual distance between the visual program and the execution.

Visual programming is at its worst when you need to programmatically manipulate
the construction of programs. What's the use of building all your database
tables in a GUI when you need to construct tables programmatically - you can't
script your GUI to do that, you need to learn a new paradigm, whether it's SQL
DDL or some more structured construction. So now you've doubled what you need
to know, and things aren't so simple.
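The programmatic-construction point is easy to make concrete: a loop can emit DDL for any number of tables, which no point-and-click table designer scales to. A small Python/sqlite3 sketch (the table names are made up for illustration):

```python
import sqlite3

# Generate one table per year programmatically - the kind of scripting a
# GUI-only table designer can't do.
conn = sqlite3.connect(":memory:")
for year in range(2018, 2021):
    conn.execute(
        f"CREATE TABLE sales_{year} (id INTEGER PRIMARY KEY, amount REAL)"
    )

# List the tables we just created from SQLite's schema catalog.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['sales_2018', 'sales_2019', 'sales_2020']
```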

~~~
LargoLasskhyfv
Most visual programming is two-dimensional. _What if_ it were three-
dimensional instead, arranged in some sort of
[https://en.wikipedia.org/wiki/Axonometric_projection](https://en.wikipedia.org/wiki/Axonometric_projection)
with
[https://en.wikipedia.org/wiki/Platonic_solid](https://en.wikipedia.org/wiki/Platonic_solid)
's representing your modules/nodes, chosen according to the amount of
connections they have with other modules/nodes, the connections styled and
color coded, maybe even unobtrusive/discreetly animated to show the
direction/speed/volume of signals/dataflow, width of datatypes/buses, all in
an abstract fashion similar to the way large metro/rapid-transit systems are
mapped?

Able to encapsulate other modules/nodes, folding large graphs into one,
popping into an [https://en.wikipedia.org/wiki/Exploded-view_drawing](https://en.wikipedia.org/wiki/Exploded-view_drawing) only on demand?

Turnable/viewable from any direction you like, but usually snapping to some
virtual 3D-grid, in say steps of 45° ?

Zoomable?

This is even usable the _other_ way around for reasoning about stuff you have
_NO_ source code for. Like in
[https://www.youtube.com/watch?v=4bM3Gut1hIk](https://www.youtube.com/watch?v=4bM3Gut1hIk)
(skip to about 14m30sec, the _show_ starts there) and
[https://sites.google.com/site/xxcantorxdustxx](https://sites.google.com/site/xxcantorxdustxx)
(same guy) and others, but similar [https://codisec.com/binary-data-visualization](https://codisec.com/binary-data-visualization) and even more so
[https://arcan-fe.com](https://arcan-fe.com)

That looks messy, but that is because it is in reverse, with no known sources.

Now imagine how tidy it could look when done originally, from first
principles.

Your own cybernetic Zettelkasten/Memex/Dynabook/whatever in a world of common
and open (architecture neutral) data & document formats, automagically
tagging, indexing and sorting all your stuff.

 _OFFLINE_ if you like.

Usable on conventional desktops, tablets, large surfaces, overlaid onto some
augmented reality, VR, holographics, i don't care.

With the end goal of partially dynamic reconfiguration/JIT of some fpga-like
logic fabric, according to what you click, tap, swipe in that environment,
while hosting itself.

Without crashing. Formally verified at the same time.

~~~
wello
I was also thinking about this the other day. Essentially we should be able to
take advantage of the brain's superb ability to comprehend/remember locations
in our day to day programming experience. This would lower the cognitive load
of translating the codebase into a mental model, since locations are easier to
remember (memory palace) and reason about.

On a related note, the Unison language [1] and its data structure of the
codebase lends itself perfectly for this kind of visualization (since you get
the dependency graph between functions for free). Imagine being able to see
the whole codebase as some kind of large, coarsely detailed factory/assembly
line, linking all your higher-level functions together. Only when you "get"
sufficiently close can you make out the inner connections between the
functions.
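The content-addressing idea is easy to illustrate with a toy in Python. (Unison actually hashes the resolved syntax tree rather than raw source text, so this only gestures at the idea:)

```python
import hashlib

# Toy "codebase": definitions stored under the hash of their own content.
codebase = {}

def address(definition: str) -> str:
    """Name a definition by a hash of its content (first 12 hex chars)."""
    return hashlib.sha256(definition.encode()).hexdigest()[:12]

def store(definition: str) -> str:
    h = address(definition)
    codebase[h] = definition
    return h

# Identical code always gets the identical address, so a definition's
# identity never changes, and dependencies can be tracked as edges
# between hashes - the "dependency graph for free".
a = store("square x = x * x")
b = store("square x = x * x")
assert a == b and len(codebase) == 1
```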

[1] [https://www.unisonweb.org/](https://www.unisonweb.org/) - a programming language where the definitions themselves are content-addressed

~~~
LargoLasskhyfv
Hm. I remember reading about it, but it isn't bookmarked, so I probably missed
something. What I didn't write in my post was different symbology at
different levels of zoom. But you got the general idea, and I see what you
probably mean.

edit: Would maybe look nice, but not very useful, to _only_ have platonic
solids, however they are connected. I'm thinking of icons/thumbnails also,
like in airports, stations, symbolising data types, (line)encoding schemes,
combined with something like [https://en.wikipedia.org/wiki/Shunting-yard_algorithm](https://en.wikipedia.org/wiki/Shunting-yard_algorithm) /
[https://en.wikipedia.org/wiki/DRAKON](https://en.wikipedia.org/wiki/DRAKON)
depending on context/zoom level and simply _NOT_ fitting to each other, if you
try to couple them in incompatible/unsafe ways.
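For reference, the shunting-yard algorithm linked above is small enough to sketch in Python (simplified here to space-separated tokens and left-associative binary operators):

```python
# Minimal shunting-yard: convert an infix expression to postfix (RPN).
PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_postfix(tokens):
    output, stack = [], []
    for tok in tokens:
        if tok in PRECEDENCE:
            # Pop operators of greater-or-equal precedence first.
            while stack and stack[-1] in PRECEDENCE and \
                  PRECEDENCE[stack[-1]] >= PRECEDENCE[tok]:
                output.append(stack.pop())
            stack.append(tok)
        elif tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack[-1] != "(":
                output.append(stack.pop())
            stack.pop()  # discard the "("
        else:
            output.append(tok)  # operand
    while stack:
        output.append(stack.pop())
    return output

print(to_postfix("3 + 4 * 2".split()))  # ['3', '4', '2', '*', '+']
```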

2nd edit: the same for common algorithms, have some box/body, define the
range/width of possible inputs via some sliders/knobs, which change the
possible plugs, ready.

------
URSpider94
I have led the eviction of LabView from three different engineering
organizations now. It's seductive how easily you can get started with building a
bench top system. Then it grows from there and pretty soon people are doing
linear algebra and trying to automate tests by writing data into a database -
screen after screen of multiply nested windows. It’s not pretty, it’s not fast
(you need a much more powerful computer to run LabView than a comparable
Visual C++ or Matlab program). It’s also very hard to document it well, really
hard for a third party to bootstrap into your code base, doesn’t play well
with common version management tools, requires a full license for every
machine you run it on, and if your developer leaves, it can be hell to hire
someone to maintain it - LabView consultants cost a mint and are hard to find.

We had a bake-off once: once you have the instrument control DLLs and link
them into Matlab, you can code up a new experiment in Matlab in about 10% of
the time it takes to do the same thing in LabView. And from there, you can do
pretty much anything that needs doing in terms of data work-up. I’m sure these
days you could do the same in Python, which has become the new science and
engineering language.

~~~
exmadscientist
> LabView ... writing data into a database

You know, this is actually pretty easy to do if you know SQL and use LabView's
FFI to call into a DLL for accessing the database. I did this once nearly a
decade ago and was able to replace a system that took a (horrifyingly,
_horrifyingly_ incompetent — seriously, these people were so awful they needed
state-level protection) team a year (?) to develop in a single afternoon. Oh,
and mine didn't crash; theirs did. Anytime I think "this is the worst code
I've ever seen" I stop, remember that code, and shake my head... that
particular chunk of LabView just keeps winning (losing?). (The fact that they
somehow had some sort of custom C extension with networked I-don't-think-I-
ever-knew-but-it-sure-didn't- _work_ in there probably makes it unassailable.)

That said... kill LabView with fire. Kill it _dead_. There's a reason that
it's not listed on my resume and I'll deny any knowledge of it when people
come asking.

(Personally, my biggest issue with LabView is its complete resistance to any
form of version control. But there are so, so many other issues to choose from
that I won't be sad if you pick a different one to hate on. Just don't use
LabView and we're cool.)

~~~
w_t_payne
My personal mission is to produce a version-control-friendly and open-source
LabView/Simulink killer. Visual display for the managers, textual editing to
save the poor engineers who have to program the thing from mouse-hand RSI. Do
the whole thing in Python/C to make it both hip and fast.

~~~
dongping
Maybe you want to look at Modelica [1] for inspiration? It is an open modeling
language standard with open source implementations. It should do what you
want. maybe abit too declarative however (you can always use the algorithm
block instead of the equation block though).

[1] [https://www.modelica.org/](https://www.modelica.org/)

~~~
w_t_payne
I was aware of OpenModelica, but hadn't looked at the project in any great
depth. I think it's a _lot_ more ambitious than my project, which is
restricted (at present) to a single model of computation.

------
prewett
Having worked with LabVIEW a fair amount, the problems I have with visual
programming are:

1) takes up a lot of space

2) each subroutine (sub-VI) has a window (in the case of LabVIEW, two
windows), so you rapidly get windows spewed all over your screen. Maybe
they've improved that?

3) debugging is a pain. LabVIEW's trace is lovely if you have a simple
mathematical function or something, but the animation is slow and it's not
easy to check why the value at iteration 1582 is incorrect. Nor can you print
anything out, so you end up putting a debugging array output on the front
panel and scrolling through it.

4) debugging more than about three levels deep is painful: it's slow and
you're constantly moving between windows as you step through, and there's no
good way to figure out why the 20th value in the leaf node's array is wrong on
the 15th iteration, and you still can't print anything, but you can't use an
output array, either, because it's a sub-VI and it's going to take forever to
step through 15 calls through the hierarchy.

5) It gets challenging to think of good icons for each subroutine

6) If you have any desire for aesthetics, you'll be spending lots of time
moving wires around.

7) Heavy math is a pain visually (so you use the math node that has its own
mini-language, and then you're back to text)

~~~
technologic
As someone who's worked a lot with LabVIEW and automating some considerably
old instruments... I hate it, I hate it, I hate it. I hate it with the fire of
a thousand suns; nearly every paradigm I know is wrong in this bizarro-land,
Willy Wonka, very proprietary and very expensive IDE. Not a fan. Not worth the
opportunity cost of not learning other languages that are useful in society.
It's the school of thought it subscribes to: the way things are designed just
makes zero sense. Learning something that makes you unlearn the right way to
do things, in a way that only works with one company's MATLAB-priced software,
is no fun at all.

------
DangerousYams
It's pretty mainstream in the CG industry. Houdini is a great example of a
very interesting take on visual programming in a non-destructive procedural
workflow. Visual programming is a staple at Pixar and features heavily in all
their internal tools for shading, animation, set dressing etc. At Pixar they
push it to its limits, expressing all manners of development patterns through
these tools to artists who don't know any software development at all. I often
think of a modified version of your question which is "Why isn't visual
programming a bigger thing for trained programmers?" What needs to improve for
that to happen? It's already a big thing for visual artists in the CG
industry.

~~~
TeMPOraL
> _"Why isn't visual programming a bigger thing for trained programmers?"
> What needs to improve for that to happen?_

Because it's inefficient as hell. That _may_ be mostly a problem with existing
tooling, but it is a problem.

For context, I'm currently doing a side project with Unreal Engine 4, which
uses visual programming for scripting and defining shaders; I'm sticking
strictly to these tools, because I can't be arsed to set up yet another C++
dev environment on yet another machine. Because of that, the pros and cons of
visual programming are on the top of my mind right now. Here are some
thoughts:

Visual programming is fun while you assemble high-level blocks together. But
god forbid, you have to do any maths. You end up spending 10 minutes
assembling an unreadable mess of additions, multiplications, LERPs, etc.
taking half of your screen, to build an equivalent of three lines of code you
would've typed in 30 seconds in your editor.

(This presents an opportunity for improvement: why not give a generic "math"
node, where I could just type a math expression by hand, and have the free
variables show up as input pins? There's really no point in assembling things
like "lerp(A x B, C x D, alpha) * -beta" from half a dozen nodes.)
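Prototyping such a node is straightforward; a sketch in Python using the standard `ast` module, where `lerp` stands in for whatever functions the node's expression language would know about:

```python
import ast

def input_pins(expr: str) -> list[str]:
    """Parse a math expression and return its free variables - the input
    pins a hypothetical generic "math" node would expose."""
    tree = ast.parse(expr, mode="eval")
    # Names used as function calls are not inputs, just vocabulary.
    called = {n.func.id for n in ast.walk(tree)
              if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
    names = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    return sorted(names - called)

print(input_pins("lerp(A * B, C * D, alpha) * -beta"))
# ['A', 'B', 'C', 'D', 'alpha', 'beta']
```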

(EDIT: turns out this exists in UE4, see
[https://news.ycombinator.com/item?id=23256317](https://news.ycombinator.com/item?id=23256317).
I learned about it just now.)

In general, _all the dragging_ quickly becomes annoying. As a trained
programmer, you can type faster than you can move your mouse around. You have
an algorithm clear in your head, but by the time you've assembled it half-way
on the screen, you already want to give up and go do something else.

Visual programming is fun while your program fits on the screen. Because of
the graphical representation, you'll usually fit much less code this way than
you would in text. This, plus the fact that anything nontrivial won't form a
planar graph (i.e. edges will have to cross somewhere), makes you want to
abstract your code hard, otherwise it gets impossible to read. And here is
where Unreal Engine starts to fail you. Yes, you can make perfectly fine
functions - which are their own little subgraphs. But that also means they're
different "tabs"; it's hard to see the code using a function and the code of a
function simultaneously on the screen, it's hard to dig in and back out, jump
around the references.

As an improvement in that particular case, I'd love it if I could expand
individual blocks into their definitions "inline", as a popup semi-transparent
overlay, or something that otherwise pushes other nodes out while still
keeping them on the same sheet. But given the performance of the editor, that
would just burn my CPU down.

I think visual programming works well for artists, because when building
shaders, you mostly stay on a single level of abstraction - so a flowsheet you
can zoom around makes sense. But when you start coding logic and have to start
DRYing code, dividing it up into layers of abstraction, the ergonomics starts
to wear you down. Then again, I hear good things about LabVIEW, so maybe it
can be made to work.

~~~
gambiting
>>Visual programming is fun while you assemble high-level blocks together. But
god forbid, you have to do any maths

Meh, I've worked on two games made in Snowdrop, where most of the
game logic is made directly in the editor, all of the shaders, all of the
animations, all of the UI is made using a visual scripting language and it's
been more than fine. It meant that we had people who didn't know much about
programming "coding" entire logic for animations for instance.

>>You end up spending 10 minutes assembling an unreadable mess of additions,
multiplications, LERPs, etc. taking half of your screen, to build an
equivalent of three lines of code you would've typed in 30 seconds in your
editor.

Yes, which I appreciate is frustrating if you actually know how to type those
3 lines of code, and where to type them and how to compile the project and run
it from scratch (so on the projects I worked with, yes, it would take you 30
seconds to type in the code, but then 10 minutes to build and another 10 to
run the game, while making those changes in the editor-based visual script
you'd see the changes instantly). Our UI artists for example never even had
Visual Studio installed and in fact I don't think they even synced code since
they never had to - visual scripting was such a powerful tool for them that
there was no need.

~~~
TeMPOraL
> _so on the projects I worked with yes, it would take you 30 seconds to type
> in the code, but then 10 minutes to build and another 10 to run the game,
> while making those changes in the editor-based visual script you'd see the
> changes instantly_

That's orthogonal to visual programming, and just the symptom of most
programming languages being edit-compile-run. Compare that with UE4 (as far as
I know) not allowing you to modify blueprints in a running game. Contrast that
with different programming languages. I do some hobby gamedev in Common Lisp
from time to time, which is a proper REPL-driven language. There, I can change
any piece of the code, and with a single keystroke in my editor, compile it
and replace it on a living, running instance of my game.
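The difference is easy to demonstrate even outside Lisp. A minimal sketch in Python (late binding through a lookup table, not a real REPL image, so only an approximation of the workflow):

```python
def run(frames: int) -> list[str]:
    # The loop looks the handler up by name on every tick, so rebinding
    # the name retargets the already-running loop - no rebuild, no restart.
    handlers = {"tick": lambda f: f"idle {f}"}
    out = []
    for frame in range(frames):
        out.append(handlers["tick"](frame))   # late-bound call
        if frame == 1:                        # simulate typing a new definition
            handlers["tick"] = lambda f: f"jump {f}"
    return out

print(run(4))  # ['idle 0', 'idle 1', 'jump 2', 'jump 3']
```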

------
mimixco
This comes up in comp sci every so often since the 80s when it first gained
traction. I think the short answer is that boxes and wires actually become
harder to manage as the software increases in complexity. The "visual is
simpler" idea only seems to hold up if the product itself is simple (there are
exceptions.)

To my mind, this is analogous to textual writing itself vs drawing where text
is an excellent way to represent dense, concise information.

~~~
mgreenleaf
UE4 Blueprints are visual programming, and are done very well. For a lot of
things they are excellent. Everything has a very fine structure to it,
you can drag off pins and get context aware options, etc. You can also have
sub-functions that are their own graph, so it is cleanly separated. I really
like them, and use them for a lot of things.

The issue is that when you get into complex logic and number crunching, it
quickly becomes unwieldy. It is much easier to represent logic or mathematics
in a flat textual format, especially if you are working in something like K. A
single keystroke contains much more information than having to click around on
options, create blocks, and connect the blocks. Even in a well-designed
interface.

Tools have specific purposes and strengths. Use the right tool for the right
job. Some kind of hybrid approach works in a lot of use cases. Sometimes
visual scripting is great as an embedded DSL; and sometimes you just need all
of the great benefits of high-bandwidth keyboard text entry.

~~~
TeMPOraL
Exactly. Take as an example something from a side project of mine that I
recently did:

[https://blueprintue.com/blueprint/jt69wz7e/](https://blueprintue.com/blueprint/jt69wz7e/)

Compare that with this bit of pseudocode:

    
    
      function inputAxisHandleTeleportRotation(x, y, actingController, otherController) {
        if(actingController.isTeleporterActive) {
          // Deactivate teleporter on axis release, minding controller deadzone around 0
          if(taxicabDistance2D_(x, y, 0, 0) < thumbstickReleaseDeadzone) {
            sendEvent(new inputTeleportDeactivate(actingController, otherController));
          }
          else {
            actingController.teleportRotation = getRotationFromInput(x, y, actingController);
          }
        }
      }  
    

Which one is more readable for someone who's even a little bit experienced in
programming? Which one is faster to create and edit?

~~~
conistonwater
That nested if statement, in particular, looks especially awkward in the
blueprint.

------
fossuser
You’d probably like Bret Victor’s inventing on principle talk:
[https://m.youtube.com/watch?v=PUv66718DII](https://m.youtube.com/watch?v=PUv66718DII)

He demonstrates some good examples.

I’m not quite sure it’s what you mean by visual, but it seems obviously
valuable and tools that enabled some of what he shows would be really useful.

I’m not sure why we don’t have these things - I think Alan Kay would say it’s
because we stopped doing real research to improve what our tools can be. We’re
just relying on old paradigms (I just watched this earlier today:
[https://www.youtube.com/watch?v=NdSD07U5uBs](https://www.youtube.com/watch?v=NdSD07U5uBs))

~~~
optymizer
Bret's talk was great! Thank you for sharing it.

~~~
fossuser
His website also has some great stuff:
[http://worrydream.com/#!/LearnableProgramming](http://worrydream.com/#!/LearnableProgramming)

------
DarrisMackelroy
I discovered TouchDesigner [1] a few months back- it’s an incredibly powerful
visual programming tool used in the entertainment industry. It’s been around
for well over a decade and is stable enough to be used for live shows.
Deadmau5 uses it to control parts of his Cube rig [2]. I’ve seen a few art
installations based around it as well [3].

There are some really amazing tutorials and examples here:
[https://youtu.be/wubew8E4rZg](https://youtu.be/wubew8E4rZg)

[1] [https://derivative.ca](https://derivative.ca)

[2] [https://derivative.ca/community-post/made-love-
touchdesigner...](https://derivative.ca/community-post/made-love-
touchdesigner-v99-cusersdeadmau5)

[3] [https://derivative.ca/community-post/making-go-robots-
intera...](https://derivative.ca/community-post/making-go-robots-interactive-
motion-tracked-dance-installation)

~~~
bmitc
TouchDesigner is indeed super cool. And I correctly guessed what that tutorial
was going to be before I clicked on it. :) Mathew Ragan is an excellent
tutorial maker. He's also relaxing to listen to.

TouchDesigner really showcases the enabling nature of visual programming
languages. You can _see_ your program working and _inspect_ and _modify_ it
while it is working. These are very powerful ideas, and visual programming
languages are much better platforms for ideas like this.

People, i.e. traditional programmers, are really hard on and down on visual
programming languages. Meanwhile, people who use LabVIEW, TouchDesigner, vvvv,
Pure Data, Max, and Grasshopper for Rhino are all extremely effective and move
quickly. People using text-based environments cannot keep up with experts in
these environments when building the same application.

Text-based programming is limited in dimensionality. This can become very
constraining.

~~~
mjfisher
> Text-based programming is limited in dimensionality. This can become very
> constraining

In contrast, I've often thought that visual programming tools are much more
limited dimensionally, and that's why they can become difficult to manage
beyond a low level of complexity: you only have two dimensions to work with.

With the visual programming tools, the connections between components need
very careful management to prevent them becoming a tangled and overlapping
web. In a 2D tool (e.g. LabVIEW), you could make a lot of layouts simpler by
introducing a third dimension and lifting some components higher or lower to
detangle connections - but then you'd face similar hard restrictions in 3D.

Text based programs suffer from no such restrictions; the conceptual space
your abstractions can make use of is effectively unlimited, and you can manage
connections and information flow between pieces of code to maximize
readability and simplicity, rather than artificially minimizing the number of
dimensions.

~~~
bmitc
> the conceptual space your abstractions can make use of is effectively
> unlimited, and you can manage connections and information flow between
> pieces of code to maximize readability and simplicity, rather than
> artificially minimizing the number of dimensions.

How does this not apply to a visual language like LabVIEW? Just because you
draw the code on a 2D surface doesn't prevent abstraction and arbitrary
programs. The way I program LabVIEW and the way it executes is actually very
similar to Elixir/Erlang and OTP. Asynchronous execution and dataflow are core
to visual languages. You are not "bound" by wires.

When you write text-based code, you are also restricted to 2 dimensions, but
it's really more like 1.5 because there is a heavy directionality bias that's
like a waterfall, down and across. I cannot copy pictures or diagrams into a
text document. I cannot draw arrows between comments to the relevant code; I
have to embed the comment within the code because of this
dimensionality/directionality constraint. I cannot "touch" a variable (wire)
while the program is running to inspect its value. In LabVIEW, not only do I
have a 2D surface for drawing my program, I also get another 2D surface to
create user interfaces for any function if I need. In text-languages, you only
have colors and syntax to distinguish datatypes. In LabVIEW, you also have
shape. These are all additional dimensions of information.
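The Elixir/Erlang analogy above can be made concrete with a toy model: nodes as independent workers, wires as queues. This sketches the execution style, not how LabVIEW is actually implemented:

```python
import queue
import threading

# Toy dataflow: each "node" is an independent worker, each "wire" is a
# queue; a node fires as soon as input arrives on its wire.
def node(fn, inp, out):
    while True:
        x = inp.get()
        if x is None:        # sentinel: propagate shutdown downstream
            out.put(None)
            return
        out.put(fn(x))

w1, w2, w3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=node, args=(lambda x: x * 2, w1, w2)).start()
threading.Thread(target=node, args=(lambda x: x + 1, w2, w3)).start()

for v in [1, 2, 3]:
    w1.put(v)                # feed values down the first wire
w1.put(None)

results = []
while (v := w3.get()) is not None:
    results.append(v)
print(results)  # [3, 5, 7]
```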

~~~
sinker
> Text-based programming is limited in dimensionality. This can become very
> constraining.

> When you write text-based code, you are also restricted to 2 dimensions, but
> it's really more like 1.5 because there is a heavy directionality bias
> that's like a waterfall, down and across.

These are two really good points. Text-based code is just a constrained
version of a visual programming environment. So far most attempts at VP have
represented code in terms of nodes and lines (trees), but that does not
necessarily need to be the case.

The interesting thing about VP is that it presents a way to better map the
structure of programming as an abstract concept to how we physically
interface with our coding environments.

It's far from likely that character- and line-based editing is the mode of the
future. Line-editing maps to the reality of programming in, to my eyes, such a
limited way that it seems the potential for new interfaces and modes of
representation is wide open.

It's not that, as some people stubbornly say, there's no better alternative to
text-based programming, but I think we just haven't conceived of a better way
yet. We're biased to think in a certain way because most of us have programmed
in a certain way almost solely by text and most of our tools are built to work
with text. But that doesn't necessarily mean that the way we've done things is
the best way indefinitely.

There's so much unexplored territory. VR opens up new frontiers. What if the
concepts of files, lines, and workspaces were to map to something else more
elemental to programming as an abstract concept? What if we didn't think so
much in terms of spatial and physical delineations, and instead something
else? Blind programmers have a different idea of what an editor is.
Spreadsheet programs "think" in terms of cells in relation to one another.
What about different forms of input? Dark Souls can be beaten on a pair of
bongos. Smash Bros players mod their controllers because their default mode of
input isn't good enough at a high level. Aural and haptic interfaces are
unexplored. Guitars and pianos are different "interfaces" to music. Sheet
music is not a pure representation of music.

I think there's the mistaken belief that text == code, that text is the most
essential form of code. Lines, characters are not the essential form of code.
As soon as we've assigned something a variable name, we've already altered our
code into a form to assist our cognition. Same with newlines, comments,
filenames, the names of statements and keywords. When we program in terms of
text, we're already transforming our interpretation of code and programming;
we've already chosen and constrained ourselves to a particular palette.

The most essential form of code is (depending on the language, but generally)
data structures and data flow. So far, our best interpretation of this is in
the form of text, lines, and characters, inputted by keyboard onto a flat
screen - but this is still just one category of interpretation.

All this is to say is that text is not necessarily the one and only way, and
it's too soon to say that it's the best way.

~~~
bmitc
These are all excellent points, and I agree whole-heartedly. I'm glad someone
else gets it. :) I'm going to favorite this comment to keep it mind.

The way I see it is that we've had an evolution of how to program computers.
It's been:

circuits -> holes in cards -> text -> <plateau with mild "visual" improvements
to IDEs> -> __the future__

I think many programmers are just unable to see the forest for the trees and
weeds, but visual languages show a lot of power in specific domains like
art, architecture, circuit and FPGA design, modeling tools, and control
systems. I think this says something and also overlaps with what the Racket
folks call Language Oriented Programming, which says that programming
languages should adapt to the domain they are meant to solve problems in. Now
all these visual languages are separate things, but they are a portion of the
argument in that domain-specific problems require domain-specific solutions.

So what I believe we'll have in the future are hybrid approaches, of which
LabVIEW is one flavor. Take what we all see at the end of whiteboard sessions.
We see diagrams composed of text and icons that represent a broad swath of
conceptual meaning. There is no reason why we can't work in the same way with
programming languages and computers. We need more layers of abstraction in our
environments, but it will take work and people looking to break out of the
text=code thing. Many see text as the final form, but like I said above, I see
it as part of the evolution of abstraction and tools. Text editors and IDEs
are not gifted by the universe and are not inherent to programming; they were
built.

This has already happened before with things like machine code and assembly.
These languages do not offer the programmer enough tools to think more like a
human, so the human must cater to the languages and deal with lower-level
thought. I view most text-based programming languages similarly. There are just
too many details I have to worry about that are not applicable to my problem
and don't allow me to solve the problem in the way that I want. Languages that
do provide this (like Elixir, F#, and Racket) are a joy to use, but they begin
to push you to a visual paradigm. Look at Elixir: most of the time the first
thing you see in an Elixir design is the process and supervision tree. And
people rave about the pipe operator in these languages. Meanwhile, in LabVIEW,
I have pipes, plural, all going on at the same time. It was kind of funny as I
moved into text-based languages (I started in LabVIEW) to see the pipe
operator introduced as a new, cool thing.

In general, I have many, many complaints about LabVIEW, but they do not
overlap with the complaints of people unfamiliar with visual programming
languages, because I've actually built large systems with it. Many times, when
I go to other languages, especially something like Python, I feel I've gone
back in time to something more primitive.

~~~
sinker
When VP is mentioned I think people automatically assume hairy nests of nodes,
lines, trees. Slow and inefficient input schemes.

Text-based representations have tremendous upsides (granularity, easy to
input, work with existing tools, easy to parse), but they also have downsides
I think people tend to overlook. For example, reading and understanding code,
especially foreign code, is quite difficult and involved: a lot of
concentration, back and forth with API documentation, searching for and
following external library calls ad nauseam. Comments help, but only so much.
Code is just difficult to read and is expensive in terms of time and
attention.

> Text editors and IDEs are not gifted by the universe and are not inherent to
> programming; they were built.

Bret Victor has some good presentations that address this idea. One thing he
says is that in the early stages of personal computing, multitudes of ideas
flowered that may seem strange to us today. A lot of that was because we were
still exploring what computing actually was.

I don't dislike programming in vim/emacs/IDEs. Is it good enough? Yes, but...
is this the final form? I think it'll take a creative mind to create a
general-purpose representation to supersede text-based representations. I'm
excited. I don't really know of anyone working on this, but I also can't see
it not happening.

------
giantDinosaur
I'm curious as to which difficult concepts become easy to understand when
presented visually. To me, the difficulty of programming has never been in its
textual representation. Just as in mathematics, the real challenges have
always been related to conceptual understanding. Is there a good example of
visual programming making difficult concepts easy?

~~~
tiborsaas
A good example would be Max/MSP. It's used by a lot of musicians and creative
coders. The visual representation abstracts away the boring parts of
programming and enables much more rapid experimentation. Quickly rearranging
nodes is more intuitive than changing how you pass around objects.

------
woggy
I rarely see Grasshopper [1] mentioned in these threads. This is a very
successful visual programming tool used by designers and engineers, primarily
for generating geometry.

Where I work (structural engineering firm) some of the engineers do use it for
general purpose programming. I see the appeal of it, but keeping the layout
tidy and all the clicking is just too much effort. However for generating
geometry it's quite a useful tool.

[1]
[https://www.rhino3d.com/6/new/grasshopper](https://www.rhino3d.com/6/new/grasshopper)

------
c2the3rd
Written text IS a visual medium. It works because there is a finite alphabet
of characters that can be combined into millions of words. Any other "visual"
language needs a similar structure of primitives to be unambiguously
interpreted.

You say visual programming seems to unlock a ton of value. What can you do
with a visual language that is much easier than text? Difficult concepts might
be easier to understand once there is visual representation, but that does not
imply creating the visual representation is easier. And why should pictures be
more approachable than text? People might understand pictures before they can
read, but we still teach everyone to read.

~~~
Nevermark
The term “visual programming” generally refers to spatial diagrams (usually
2D, but 3D especially for 3D subject matter).

Think coordinates, graphs, nodes, edges, flows, and nested diagrams.

“Visual” is especially meaningful in that many relationships are shown
explicitly with connected lines or other means.

So yes, for many things a diagram, tree, or table structure that is actually
laid out in two dimensions to match what it represents is easier to
understand.

Surely you appreciate diagrams in educational material alongside the text.
Surely you have drawn graphs or other kinds of diagrams when you needed to
visualize (spatially) the relationships between parts of something you were
designing?

If not, you just have a different style of thinking than many other people.

That contrasts with text code where connections are primarily discovered by
proximity of code or common symbols.

Of course text is visual in that it’s a visible representation.

Spreadsheets are a good example of a combination of text “code” embedded in a
visual table representation.

------
tluyben2
I am working (with a grant) on 2 different concepts of visual programming: one
is the more traditional wires-and-boxes dataflow type, but with some novel
twists, and the other is something completely new. I have built many of these
in the past (some of them ran in production for years before getting phased
out in favour of a 'normal' PL) and it never felt right, but I keep trying.
Previously I started top-down: first the visual tooling, then, as an
afterthought, the storage of the language (XML for instance, yikes). Now I
started with the language in textual form (not meant to be programmed in
directly, but readable) and that works much better; so there is a language,
runtime, debugger and introspection tooling all done, and now we are putting 2
different visual frontends on top to see if it works. So far the results have
been good; our sponsor is looking to launch something as a product and it's
quite cool work.

~~~
divan
Lame question: how do you find such grants?

~~~
tluyben2
I am not very silent about what I think is broken about programming and
sometimes I find people with the same feeling. This one is from a company I
bumped into; they have a db/codeless type of thing and they want to make it
easier for people to use. It is specifically not an investment; maybe that
comes later if it works. It is money to play around with different ideas,
which they give out for this type of purpose.

~~~
divan
Got it, thanks! It just struck me that my project on visual programming
stalled precisely because I lacked a vision of how to find time/money for it,
and I'd never thought about the possibility of grants.

------
asfarley
In practice, most visual tools aren’t compatible or easily usable with things
like Git for diffing. This gets tough for large projects.

LabVIEW does have a visual-diff tool, but when I was using LabVIEW regularly
on a complex project, no one used the diff system. They just checked out the
entire file and compared it visually to another version.

Another thing: you can’t ctrl-f for control-flow structures. You end up
mousing around for everything.

Another problem: all major graphical languages I’ve used are proprietary
(LabVIEW, Simulink, the Losant workflow system).

~~~
negentropicdev
Successfully working in teams in LabVIEW means having experienced architect(s)
that can effectively divide up the design into something where developers
won't step on each other. When you inevitably have to merge efforts that do
collide you usually just punt, pick one of the two, and do some rework. There
are some options available in LabVIEW for making parts of how it works more
compatible with source control, but at the end of the day they're still binary
files, and the prescribed option can impede some other workflows that are
sometimes needed. If LabVIEW source were saved in some hierarchical structure
such as XML (that would be another layer to learn, so not really much of a
fix) it could play better with patch/merge. The diff and merge tools really
aren't terrible; I just think that people are so used to not needing to
configure external tools for this that they don't even know it's possible. I
can double-click a VI in a git log and see highlighted differences between
versions just like people can in any textual tool.

If you label structures (you can turn on a structure's label and then give it
some unique text) it becomes searchable. You can also create # bookmark
comments that link to structures/nodes/anything on the block diagram.

NI just released a completely free-for-personal-use version of LabVIEW (unlike
their former "cheap" home edition, which was watermarked and lacked features
like compiling executables), though my belief is that they don't have an
outwardly evident long-term strategy, which is only likely to reinforce the
majority opinion of the platform. Also, this ultimately doesn't help people
who may be interested in using it in a work or academic environment, or anyone
who's used to a less expensive hardware platform. Ladder logic programming
feels more akin to chiseling into stone by comparison, and the platform
support in the NI ecosystem can be quite nice for a lot of applications.

------
glopes
I've been developing a visual reactive programming language, Bonsai
([https://bonsai-rx.org/](https://bonsai-rx.org/)) for the past 8 years and
it's proven successful in neuroscience research. It is an algebra for reactive
streams and addresses some of the issues raised in this thread about GPVPLs:

a) it's amenable to version-control;

b) you can type/edit a program with the keyboard without the need for drag-
drop;

c) the auto-layout approach avoids readability issues with too much "artistic
freedom" for freestyle diagramming languages like LabView;

d) because it's mostly a pure algebraic language w/ type inference, there is
little complicated syntax to juggle;

e) the IDE is actually faster, even though Bonsai is compiled, because there
is no text parsing involved, so you are editing the abstract syntax tree
directly.

It definitely has its fair share of scalability problems, but most of them
seem to be issues with the IDE, rather than the approach to the language
itself. I've never probed the programming community about it, so would be
curious to hear any feedback about how bad it is.

------
TheOtherHobbes
Difficult concepts can't be represented in visual form. There's a hard limit
to the amount of complexity you can represent visually. On the far side of
that line everything starts to look like spaghetti.

But... this can be improved with good tooling and clear abstraction layers.
Some of the problems with visual systems are caused by poor navigation and
representation systems. Making/breaking links and handling large systems in
small windows makes for a painful level of overhead.

This also applies to text - jumping between files isn't fun - but visual
representations are less dense than text, so you have to do more level and
file hopping.

If someone invented a more tactile way of making/breaking links and you could
develop systems on very large screens (or their VR equivalent) so you could
see far more of the system at once, hybrid systems - with a visual
representation of the architecture, and small-view text editing of details -
might well be more productive than anything we have today.

~~~
negentropicdev
I love this idea and I'm already picturing a mashup of LabVIEW class
hierarchies/VI call hierarchies, but it's Dwarf Fortress with multiple z-level
view enabled, and you can mousewheel up and down through the pyramid of the
application architecture. I'm just lost as to how you'd relate/visualize the
dynamic instances of classes/modules so that you could have a unified
debugging environment, like you do with the regular block diagrams.

I've recently debugged LabVIEW code that caused the taskbar listing of window
names to overflow. Definitely less than optimal.

------
wrnr
To represent a program as a graph you need a hypergraph, that is, a graph
where every edge is also a node. For example:

    
    
      (1)--[2]-->(3)
      (2)--[3]-->(4)
    

There is no good way to draw this visually without losing the ability to be
able to draw all possible programs. The grammar of both natural and
programming languages use various strategies to get around this, and they are
actually very efficient at it (from an information theoretic point-of-view).
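The two lines above can be encoded directly; a minimal sketch, assuming edges share one id space with nodes so that an edge can itself be an endpoint:

```python
# Nodes and edges share one id space: edge 2 connects node 1 to node 3,
# and edge 3 connects *edge* 2 to node 4, mirroring the diagram above.
nodes = {1, 3, 4}
edges = {2: (1, 3), 3: (2, 4)}

endpoints = nodes | set(edges)  # edges are addressable just like nodes
assert all(s in endpoints and d in endpoints for s, d in edges.values())
```

Drawing this faithfully is the hard part: edge 3 would have to attach to the *line* for edge 2, not to a box.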

That's not to say one can't improve on the current culture of writing programs
in what is basically ASCII text, but visual programming is just not powerful
enough to describe arbitrary computations.

~~~
adrianN
Could you go into detail why you think that a hypergraph is needed? I think
every program can be represented by a directed graph of basic blocks.
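A minimal sketch of that claim: even branching code flattens into an ordinary directed graph once it's split into basic blocks, with plain node-to-node edges.

```python
# Basic-block CFG for:
#   if x > 0: y = 1
#   else:     y = -1
#   return y
cfg = {
    "entry": ["then", "else"],  # conditional branch
    "then":  ["exit"],          # y = 1
    "else":  ["exit"],          # y = -1
    "exit":  [],                # return y
}

# Every edge runs node -> node; nothing needs to target another edge.
edges = [(src, dst) for src, dsts in cfg.items() for dst in dsts]
assert ("entry", "then") in edges and ("else", "exit") in edges
```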

~~~
Ace17
Higher-order functions, maybe?

The following function has an obvious visual representation as a processing
block:

    
    
        int add(int a, int b);
    

But how do you represent a function that takes other functions in its
signature, e.g.:

    
    
        void transformInPlace(int* input, int (*func)(int));
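
For comparison, a Python sketch of the same higher-order shape; a visual block for it would need an input terminal that accepts a whole function rather than a value:

```python
def transform_in_place(values, func):
    """Apply func to every element in place, like the C transformInPlace."""
    for i, v in enumerate(values):
        values[i] = func(v)

data = [1, 2, 3]
transform_in_place(data, lambda x: x * 10)
assert data == [10, 20, 30]
```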

~~~
negentropicdev
In LabVIEW I wire in a reference to a VI that provides the functionality or I
use a design similar to the command pattern where there's an override method
and I specify the class instance as the argument. Depends on the granularity
of the design needed and how many different classes I want to define (usually
not that many).

------
techdragon
Just dropping in DRAKON to the mix.
[https://en.wikipedia.org/wiki/DRAKON](https://en.wikipedia.org/wiki/DRAKON)

It's actively designed to prevent several typical problems of visual
programming:

First, the overlapping-wire mess getting out of hand. It doesn't let you
overlap wires; if your program gets that complicated, it forces you to break
it down into smaller units. It takes some adjustment, but you get benefits
from it.

Second, the "can't use it for general-purpose stuff" problem is solved by the
fact that DRAKON can output to multiple different programming languages,
meaning you can use it to build libraries and other components where you want,
without forcing you to use it for everything.

------
PopeDotNinja
When I first got into programming, I asked myself the same thing. The low-
hanging-fruit task is creating something like a CRUD app. Trivial drag & drop
CRUD forms are pretty achievable.

Let's say you whip up something cool using visual programming, and then you
have a business requirement that requires something you can't easily squeeze
into your CRUD app: maybe a database join, a query that doesn't cleanly fit
into your access patterns, or you just wanna make a certain thing faster.

Then you design a scripting console, and now you have something that lets you
build custom solutions. Well, at that point you're basically implementing non-
visual programming. And at a certain point you reach the limits of what you
can script in a hacky way, or you become more comfortable with the console
than the UI, and you just chuck the visual programming altogether.

As I'm writing this, I'm thinking that I actually do visual programming,
except I'm doing it IN MY MIND. Who needs a body-brain interface, when the
goal of using your hands is to get it into your brain, but it's already in my
brain?!
Well, it'd be nice to get some stuff out of my brain cuz cognitive load. And
as much as I'd like to develop tech that makes it easier to get stuff out of
my head, I'm mired down in trying to get the latest feature from the product
team to work at all :)

~~~
tartoran
How about visual representations at different levels, while sticking to non-
visual programming? I find that a static visual helps here and there, but
full-on visual programming would be a pain in my use cases.

------
vectorboost
I will just add some practical experience: we work with the WebMethods flow
language (used for business integration). It looks cool because you can make
"code" which is more readable for non-IT people. The problem is that some
tasks which would require one or two lines of code need several graphical
"nodes" and just too many clicks. The more complex the diagram gets, the
easier it is to make a mistake, and eventually some things have to be
implemented in Perl, Python, or Java, because the flow language also has its
limitations. I would say it is great for simple or moderately complex
solutions, but very complex solutions tend to be messy, and developers tend to
avoid them. They say it is easier to iterate and skim through complex text
code, where you can search and use IDE features, than to expand and click
through the whole graphical diagram. The graphical notation also does not show
all the information, so you cannot get it by scanning the code; instead you
need to manually click and open the nodes to get, e.g., the name of the
connecting interface. The graphical notation needs to abstract away some
information, otherwise it would be messy and hard to read.
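The "one or two lines of code versus several nodes" point in a toy Python example (hypothetical data); in a flow tool this single expression would typically be separate filter, map, and join nodes:

```python
orders = [
    {"customer": "Acme", "total": 250},
    {"customer": "Bob's Bits", "total": 40},
    {"customer": "Cyberdyne", "total": 900},
]

# Filter, map, and join in one line of text code.
big_spenders = ", ".join(o["customer"] for o in orders if o["total"] > 100)
assert big_spenders == "Acme, Cyberdyne"
```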

------
asdfman123
Try making a complicated program in a visual programming language and you'll
see why pretty quickly.

My company used Azure ML Studio, which is a great program for making quick ML
predictions. But making any kind of reasonably complicated data processing
pipeline takes a lot of effort. I switched to writing code to process and run
my predictions and my life became much simpler.

Language is extremely expressive and you can pack a lot of meaning into a
small space.

------
toomim
Let me refer you to a 2018 post called "Visual Programming — Why It's a Bad
Idea":

[https://news.ycombinator.com/item?id=18495094](https://news.ycombinator.com/item?id=18495094)

------
Mandatum
As soon as anything gets beyond basics, and you require a "power user" to
either comprehend, fix or add features - suddenly visual programming becomes a
fucking pain in the arse.

And when you're required to go into code / the "source" representation or deep
"configuration" of the visual elements, which just takes 10x longer than
writing code in the first place, suddenly the last mile takes months to get
right.

------
gentleman11
Unreal Engine has a visual scripting system called Blueprints. It’s very
strange to follow the logic, the work is mouse-heavy, code review and merging
are hard, and it feels very convoluted compared to ordinary programming.
However, the visual scripting for materials/shaders, particles in Unity, and
AI with behaviour trees is quite nice.

Visual scripting is growing, but it’s better for some things than others.

~~~
sosodev
Making shaders with Blender's visual scripting is satisfyingly easy to learn.
Domain-specific stuff seems to work quite well that way.

------
zyl1n
At work, I am required to use SPSS Modeler, which has a visual programming
model, and I mainly dislike it because there's no way to easily diff and find
out what has changed.

That aside, I think most of us actually code in a visual programming style,
but all the "visuals" are constructed in our heads on the fly as we read the
code text. So how good you are at coding may be a function of how well you can
represent these structures and how long you can maintain them in your head.
Maybe an external tool that does it for us produces a representation that
doesn't mesh well with the internal representation of programmers experienced
in text-based programming.

------
tester34
Show HN:

[https://www.luna-lang.org/](https://www.luna-lang.org/)

> Luna is a data processing and visualization environment built on a principle
> that people need an immediate connection to what they are building. It
> provides an ever-growing library of highly tailored, domain specific
> components and an extensible framework for building new ones.

~~~
mkl
Looks like you're new here. Is that your project? If not, you have
misunderstood what "Show HN" generally means. Click "Show" at the top of the
screen, then "rules".

We have covered Luna here quite a few times:
[https://hn.algolia.com/?q=luna+lang](https://hn.algolia.com/?q=luna+lang)

~~~
tester34
It's not mine, but I wanted to post it here because the project is quite
impressive in itself, and it also fits this topic.

------
jasonhansel
Saying "Visual programming will replace text-based programming" is like saying
"Flowcharts will replace novels." People tend to prefer text over diagrams for
anything longer than a page, partly because text follows the structure of
natural language.

Another issue: it's hard to pretty print visual programs or to put them in any
sort of canonical form. This makes it harder to read them--there's no way to
enforce a consistent style. It also makes various processing tasks (e.g.
diffing, merging) much harder.
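For text, a canonical form is cheap: parse, then pretty-print from the AST, so formatting differences vanish. A small illustration with Python's stdlib `ast` module:

```python
import ast

# Two differently formatted but semantically identical programs.
a = "x=1+2"
b = "x  =  (1 + 2)"

# Parsing discards layout, so the trees compare equal...
assert ast.dump(ast.parse(a)) == ast.dump(ast.parse(b))
# ...and unparsing yields one canonical rendering for both.
assert ast.unparse(ast.parse(a)) == ast.unparse(ast.parse(b)) == "x = 1 + 2"
```

A visual program has no grammar-given canonical layout, which is exactly what makes diffing and style enforcement harder.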

------
cammil
I think in some ways spreadsheets are visual programming tools. You have data,
functions and layout / presentation, all working together in one space.

I think this is one of the draws of spreadsheets for simple "programs" and
non-programmers. And of course, spreadsheets are ubiquitous.

------
aasasd
Aside from everything else, visual programming as it's usually implemented is
inherently slower than typing text when the latter is not ‘hunt-and-peck’.
Because, after some experience with the keyboard, your brain knows exactly how
to jerk a finger to get a particular character and can do that very fast
(touch typing particularly excels at this since the normal positions of the
fingers are fixed—and it's aptly called ‘blind typing’ in some languages).
Meanwhile, to lay stuff out on the screen you need to 1) find with your eyes
where to grab a block and 1b) where to put it; 2) move the mouse to grab the
block and 2b) to drop it, and sometimes also do that with connectors too. Both
of these kinds of operations—visual and manual—are way slower than mechanical
jerking of the limbs, especially due to them being prone to error. Even worse
if you have to drag-and-drop instead of just clicking. All this fiddling
requires you to pay close attention to things on the screen, while typing
mostly allows the visual system to coast along or tune out altogether.
Multiply this by hundreds of words a day, and you'll see that visual
programming is in large part an exercise in mouse-brandishing amid visual
load, in the vein of FPSes. And usually you have to regularly move the hands
to the keyboard anyway, to name your blocks, variables and whatnot.

On phones, the difference shrinks since typing relies on just two thumbs with
no fixed position and no physical keys—while onscreen manipulation gets more
immediate compared to a mouse. However, phones suffer from short supply of
screen area where you pick and place the blocks. In my experience, it would
still make sense to choose blocks with typing (by filtering in real-time from
the entire list), and to minimize the work of placement—like in Scratch or
Tasker.

Visual programming _might_ have distinct value if it allowed you to manipulate
higher-level concepts compared to text-based coding. However, it turns out
that the current building blocks of code work pretty well in providing both
conciseness and flexibility, so visual blocks tend to just clone them. Again,
the situation is better if you can ride on some APIs specific to your problem
domain—like movement commands in Scratch or phone automation in Tasker and
‘Automate’. Similarly, laying out higher-level concepts like in UML or
database diagrams has its own benefit by making the structure and connections
more prominent.


------
gapo
You are completely ignoring the vast swathe of the 'engineering' programming
market that is covered by Simulink, LabVIEW, etc.

~~~
rurban
Not 'engineering' programming, but real engineering programming. I did a lot
of that. Automotive, aerospace, space shuttle, power stations and such.

No syntax or type bugs, just logical or more like physical bugs. Because you
are modelling physics, and sometimes the model is just not good enough. Still
vastly better than traditional C++ models.

Problems: No diff tool. You can hardly see what changed. That is like shipping
updated lisp images or binaries without source code to the devs. You also get
a lot of windows, like 30 for a typical small model.

------
gen3
In college I had to use LabVIEW, a visual language normally used for
automation. I found it significantly harder to work with than programming the
robot in C. Part of it had to do with my familiarity with the language and
learning what the shapes meant, but another part of it was trying to juggle
the program's layout. Eventually, everything became a big mess and was hard to
maintain.

Using LabVIEW over C did have some benefits. Streamlined concurrency seemed
like a major advantage.

~~~
Psychlist
Multiple 4k monitors help, but IMO the limitation really is "all the code can
be displayed at the same time".

~~~
bmitc
> IMO the limitation really is "all the code can be displayed at the same
> time"

You don't have to have all your code inside a single diagram. Any professional
LabVIEW programmer has a rule that a single VI (basically function or method)
should only require a single modestly sized monitor to view it, aside from a
few exceptions. This is akin to text-based languages having a sweet spot of
500-1,000 lines of code per file and keeping a function within a single screen
without needing to scroll. Anything above starts to become unwieldy.

The size of a LabVIEW diagram isn't a limitation of the medium just like the
size of a text-based programming language's file isn't a limitation of the
medium. It all boils down to the programmer needing to modularize
appropriately.

------
danielscrubs
It was a running joke that every year at our school another student would
start their master's thesis journey on "visual programming for the masses, but
that works!".

Can you start by using a WYSIWYG HTML editor to make a really good webpage,
to see the benefits and drawbacks?

~~~
jcelerier
So, what do you make of the fact that most electronic / avant-garde music
composition students learn Max/MSP and are in general quite successful at it,
without even a computer science background?

~~~
danielscrubs
"Computer Science is no more about computers than astronomy is about
telescopes.". I think you meant: programming-background.

I didn't mean to be condescending, it was just a very common master thesis
subject.

Go ahead and write your visual programming language, maybe I'll learn a thing
or two.

~~~
jcelerier
> "Computer Science is no more about computers than astronomy is about
> telescopes.".

I don't understand the relationship between that sentence and what I said. You
can do visual programming on paper without any computers involved, and I've
seen a fair share of artists actually do that. TBF in my native tongue there's
only one word for anything related to computer science, programming, etc.,
which is 'informatique', so that may bias things a bit.

> Go ahead and write your visual programming language, maybe I'll learn a
> thing or two.

I actually went ahead five years ago or so as part of a team :-)
[https://hal.inria.fr/hal-01364702/document](https://hal.inria.fr/hal-01364702/document)
\- currently instantiated as
[https://github.com/OSSIA/score](https://github.com/OSSIA/score)

~~~
danielscrubs
Looks really polished for such a small team! Great work. :)

------
onedognight
Is there a visual programming language that is self-hosting? I.e., one where
the entire system is written in its own visual language including the compiler
and any runtime VM?

~~~
unnouinceput
Plenty are. I'd say almost all visual programming languages are like this, at
least the ones that have native code for their OS of choice.

~~~
mkl
That seems incorrect. The most commonly mentioned ones in this thread are:

LabVIEW (C, C++, C#):
[https://en.wikipedia.org/wiki/LabVIEW](https://en.wikipedia.org/wiki/LabVIEW)

Unreal's Blueprints seem to produce C++ code? I can't find any mention of it
being used to implement itself: [https://game-ace.com/blog/unreal-engine-
blueprints/](https://game-ace.com/blog/unreal-engine-blueprints/)

Max/MSP (C++):
[https://en.wikipedia.org/wiki/Max_(software)](https://en.wikipedia.org/wiki/Max_\(software\))

Please name a few of the "almost all" visual programming languages that are
self-hosting.

~~~
unnouinceput
Delphi, Lazarus, Visual Studio - to name just a few

~~~
mkl
None of those are visual programming languages. A graphical UI editor is
entirely different from what we're talking about, which is graphical _code_.

~~~
unnouinceput
oh, then based on that argument you're not human, just a bunch of atoms

------
evilotto
I think at least part of it is image. "Real" programmers dismiss visual
languages, especially ones aimed at kids like Scratch or Snap, as toys, not
the kind of thing Real Programmers use. (Insert references to all the Real
Programmer humor.)

Aside from that, there is the issue of tooling (source control, etc), editing
large blocks, etc. which the visual languages I've used are not great at.

But it should be recognized that some things are better visually and some
things are better textually. Typing "a = b + c" is way simpler than dragging a
bunch of blocks around to describe the same thing. But visual tools are
superior for understanding relationships - a connects to b, which connects to
c makes a lot more sense when you see it as "[a] -> [b] -> [c]", and an ascii
diagram like that quickly becomes unwieldy while graphical boxes still work.

An interesting comparison is drawing diagrams with a diagramming tool (e.g.,
Lucidchart) versus with a textual description language (PlantUML). I find the
textual language far easier for quickly producing diagrams, but Lucidchart is
superior for tweaking the exact dimensions and alignments of things.

All of which is to say, both approaches have cases where they work better, and
others not so much.

~~~
negentropicdev
100% agree about the difficulty of calculation expressions in visual
languages. Stuff like math and binary communications over serial, TCP, etc.
just feels incredibly tedious compared to what I know how to do in one or two
lines of C.

What I definitely appreciate in my professional life is the ability to
directly map the high-level design of an application or module into nodes on a
diagram and then descend, filling in the implementation. It's not really much
different from developing in text languages, except that there's a physical
layout that nearly directly matches the documented UML, user stories, flow
charts, and sometimes state diagrams of the design documentation. When you're
doing combined architecture and implementation work, it can nearly eliminate
the mental-load difference between the two, or at least it does for me, having
now worked in this visual environment for 8 years. (I grokked it many years
ago.)

------
foreigner
I'd love to see an editor for textual programming languages that can display
the code visually. For example a tool which can show the code within a single
JavaScript function as a flow chart. I don't see why that shouldn't be
possible but I've never found one.

Note I'm not talking about class diagrams, I want to see a flow chart of the
actual imperative code (for loops, if/then, etc...) of an existing popular
text-based programming language.

~~~
thelazydogsback
They have been around for a while, esp. about 10-20 years ago -- I've seen
these plug-ins wane in popularity. I think the reason is the same as the
arguments here against VP: although a flowchart is nice _in principle_, it
becomes unwieldy and quickly loses any value once something doesn't all fit
on the screen -- now you're "navigating" rather than reading -- there is no
gestalt to grok.

------
jchw
Visual programming, imo, is not popular for general purpose programming
largely because there doesn’t presently exist, and it’s unclear if there ever
will exist, general purpose visual programming tools that work well and
provide a notable benefit over text-based programming.

When you add some constraints in, like for example, the limitations of a
spreadsheet, visual programming can work exceedingly well. It works great for
these domain specific usages. But honestly, a text document of textual
statements is a pretty good way to represent a general purpose programming
language made up of procedural statements. You could make a UI like Scratch
for other programming languages, but:

\- the interface would be cluttered and likely not nearly as efficient as just
typing

\- other than virtually eliminating syntax errors, it's unclear what you are
accomplishing - it's not easier to break down problems or think procedurally.

\- you could probably get similar benefits with a hybrid approach, like an
editor that is aware of the language AST and works in tokens.

So my view is that visual programming is perfectly mainstream and just has not
been demonstrated to have substantial benefits for typical general purpose
programming languages.

------
watt
I worked with a team using a visual programming tool once. The tool involved
connecting programming blocks with arrows. The complexity of the finished
program was such that it looked like a motherboard layout, full of arrows
(traces) - and the idea that you could follow the control flow from that was
laughable.

It does not end well. The results are not pretty. Stick to a text
representation for any control flow.

------
perl4ever
Well, so you're using the term "visual" as though it were a commonly
understood term, but implicitly excluding all of Microsoft's products with
Visual in the name?

I've been trying to implement something with Power Automate, and presumably
that's "mainstream", but it strikes me as falling into the classic pattern of
appealing to buyers rather than users. I feel 10-100 times less productive
than with, say, VBA, for no advantage.

One thing that is particularly frustrating to me is that it's so slow and
buggy I am afraid of losing my work at any moment. You can't save your work
unless it passes preliminary validation, but sometimes reopening it _makes it
invalid_ by removing quotes or whatever. Copying something out and pasting it
back often fails to validate too, as the transformations are not inverses like
they should be. Sometimes it just gets corrupted entirely. I'm not aware of
any way to manage versions, or undo beyond a few trivial actions.

But the more fundamental reason I hate this is because it seems not to be
designed to let you take a chunk of logic and use it in a modular way. At
least this style of "visual programming" seems to apply the disadvantages of
physically building things out of blocks, where it's entirely unnecessary.
You've got some chain of actions A->B->C, but the stuff _inside_ those actions
is on a different level; you can't take that chunk of stuff and use it as a
tool to do more sophisticated things. As far as I can tell. I keep thinking
"it can't be as simplistic as it seems" and thinking I'm about to find a way
to create general functions.

See: [https://flow.microsoft.com/en-us/](https://flow.microsoft.com/en-us/)

~~~
prewett
Visual Studio : visual programming :: VI (VIsual editor) : Blender

~~~
perl4ever
That's one possible reason, but anyway, that's implicitly why I brought up
Power Automate, to determine if that was the reason. Would you call it
(formerly Microsoft Flow) visual programming? Because it certainly is
frustrating to me in a way that traditional programming is not. Anything this
awful _must_ qualify as visual programming...

Today I was trying to figure out if I could work around some of my problems by
converting everything to XML and using XPath to manipulate it but I didn't get
far and apparently Microsoft only does XPath 1.0.

------
sea-shore
My assumption, based on what I have heard so far, is this: I've been amazed
for a good while now by the oN-Line System (NLS) from Engelbart, Sketchpad by
Sutherland, Smalltalk from Xerox PARC, Nelson's Xanadu, and later Bret
Victor's demos. They were/are visually and philosophically strong, and
seemingly inspired countless weaker systems that in turn somehow got picked
up as the "industry standard".

Compromises were made, quick-fixes on quick-fixes made text interfaces just
usable enough, sunk costs grew, and habits formed. The visual programming I
see in game engines now carries those habits with it, because to build a
language of nodes you first have to learn the ways of ASCII code.

And from what I understand, hardware is optimised for whatever software is
popular enough to sell, so even if the software changed, the hardware would
take longer. It takes an awesome goal to justify starting over on a truly
visual interaction path when there is a system that almost, kinda works. And
what-ifs are not in the budget.

------
simonsarris
It's been a dream concept for a very, very long time. My company (long before
I joined) started in 1995 making a visual programming language. We keep the
site up for posterity:
[http://www.sanscript.net/index.htm](http://www.sanscript.net/index.htm)

Lots of examples on Progopedia:
[http://progopedia.com/version/sanscript-2.2/](http://progopedia.com/version/sanscript-2.2/)

It turned out, people weren't really interested. However people _were_
interested in the diagramming library created to make the language, so by
virtue of already having thought really hard about what goes into good
diagramming tools, my company started selling that. First as C++, then Java,
then .NET, now as a JavaScript and TypeScript library called GoJS:
[https://gojs.net](https://gojs.net) (Go = Graphical Object).

------
thefz
Because complex programs ramp up in difficulty of reading and consumption
(i.e., "looking at" them) way faster than text does.

~~~
mapcars
I would say it's the opposite, visual programs have more ways for you to zoom
in & out and trace what is happening. From what I saw in Mendix it makes code
navigation and understanding way easier.

~~~
thefz
It might also be related to how we learn in different ways. Some need to view,
others to reconstruct and visualize in their heads.

------
truckerbill
What I would like to see is a hybrid approach - text input on one side, and a
visual graph on the other. I'd like to be able to live-edit a graph by typing,
and see how the data flows between objects. Likewise I can rewire the graph
and have the references in the code update.

It's one of those ideas I have no time to implement sadly, at least for now.

~~~
akavel
You might like to explore the Luna language: [https://luna-
lang.org/](https://luna-lang.org/)

------
macklemoreshair
Control flow is hard to describe visually. Think about how often we write
conditions and loops.

That said - working with data is an area that lends itself well to visual
programming. Data pipelines don’t have branching control flow, so you’ll see
some really successful companies in this space.

Alteryx has an $8B market cap. Excel is visual programming as well.

~~~
bmitc
Aren't conditionals and loops _easier_ in visual languages? If you need
something to iterate, you just draw a for loop around it. If you need two
while loops each doing something concurrently, you just draw two parallel
while loops. If you need to conditionally do something, just draw a
conditional structure and put code in each condition.

One type of control structure I have not seen a good implementation of is
pattern matching. But that doesn't mean it can't exist, and it's also
something most text-based languages don't do anyway.

~~~
zozbot234
> If you need two while loops each doing something concurrently, you just draw
> two parallel while loops.

Not quite. You'd need to draw two parallel boxes, each of which is strictly
single-entry/single-exit, and draw a while loop in each box. This is because a
while loop

    
    
         +-------------------+ 
         v         +-->[fn]--^
      -->*-->«cond?»           +-->
                   +-----------^
    

depicts parallel flows that do _not_ represent stuff being done concurrently!
Once you acknowledge that, pattern matching actually becomes easy: just start
with a "control flow" pattern and include a conditional choice node with
multiple flowlines going out of it, one for each choice in the match
statement. You're drawing control flow so it's easy to see that the multiple
flowlines represent dispatch, not something happening in parallel.

~~~
bmitc
Here's a picture of what I was talking about:

[https://i.imgur.com/AgmF87b.png](https://i.imgur.com/AgmF87b.png)

Now, the two while loops, as shown here, have no dependencies between each
other and are indeed processing in parallel. However, there are various
mechanisms in LabVIEW to exchange data between the two loops, the most common
being queues, in which case they process concurrently.

You can also have a for loop iterating on an array.

[https://i.imgur.com/nRgyckx.png](https://i.imgur.com/nRgyckx.png)

In LabVIEW, it's nice because it's trivial to configure the loop iterations to
occur in parallel (if there are no dependencies between iterations), using all
available cores in the computer.
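The text-based analogue of such a parallel for loop can be sketched with
Python's stdlib (an illustration only; `work` is a made-up stand-in for an
independent loop body):

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    return x * x  # stand-in for an independent loop iteration

data = list(range(8))

# Plain sequential for loop ...
sequential = [work(x) for x in data]

# ... and the same loop with iterations run concurrently -- valid
# only because the iterations don't depend on each other.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(work, data))

print(parallel == sequential)  # True
```

As in the LabVIEW case, the independence of iterations is what makes the
switch from sequential to concurrent execution safe.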

And by pattern matching, I meant something like the pattern matching and type
destructuring you find in SML, OCaml, F#, and Elixir.

~~~
zozbot234
Yes, that's really just abstracting away the control flow I was depicting in
my simple diagram. It's treating "while" as a higher order function of sorts,
with its own input(s) of type "data flow diagram connecting types Xs and Ys".
That's the best you can do if your language only really deals in dataflow, as
with LabVIEW. And that's really what leads to the difficulty you mention with
pattern matching. Pattern matching on a variant record is inherently a
control-flow step, even though destructuring variables in a single variant
should clearly be depicted as data flow.

------
analog31
I used LabVIEW for a while. I noticed a couple things. First, it is actually
physically laborious to create and edit code. I got really severe eyestrain
headaches and wrist fatigue from it. Also, programs (including one written by
a certified LabView consultant) bigger than one screen become very difficult
to read and maintain. While LabVIEW programs can be refactored like any other
language, the physical labor involved discourages it from actually happening.

I think another issue is that it's costly to create a visual language,
discouraging experimentation with new languages. With a text based language,
all of the editing tools are already there -- a text editor -- on any decent
computer. You can focus on refining your language, and getting it out there
for others to try.

------
mikhailfranco
Here is an early example of visual programming for scientific visualization
from 1989:

[http://vis.cs.brown.edu/docs/pdf/Upson-1989-AVS.pdf](http://vis.cs.brown.edu/docs/pdf/Upson-1989-AVS.pdf)

The idea spawned many imitators (VTK, IBM DX, SGI Iris Explorer). The product
was spun out of Stellar shortly afterwards, and the company is still in
existence:

[https://www.avs.com/avs-express/](https://www.avs.com/avs-express/)

------
thelazydogsback
This is slightly tangential but I think is related in some way to the broader
question of the effectiveness of VP. I can say from my experience with my son
and trying to teach other kids programming, that visual programming does not
seem to promote actual understanding, especially concepts like composition, re-use,
etc. Kids can write animation sequences or even simple games that "work", but
then have absolutely no idea how to generalize -- every visual program is a
one-off. As soon as you start teaching them Python (or whatever) then they
start to understand what's going on. This is why I don't like the
"gamification" of learning in general.

Another way to look at it is the 7 +/- 2 rule of short-term memory attention
-- when you look at something and try to "grok" it (a gestalt experience) you
really need a limited amount of information in your visual field. To do this
you need to move to higher and higher levels of abstraction, which is a
linguistic operation - assigning meaning to symbols. Even in visual
programming, you end up with a box that has a text field that holds the name
of the exact operation - so you may as well cut out the middleman and stick
with the linguistic abstractions.

Now, if a program is "embarrassingly visual" -- dataflow operations in signal
processing, etc. -- then visual DSLs do seem appropriate.

------
lmilcin
I think it is because programming is inherently about working with
abstractions, while visual programming typically removes abstraction and
makes the details of your program visible before you.

Part of learning to program is learning to work with abstractions, especially
if you have never been exposed to something similar (mathematics, physics,
engineering, etc.). Things go out of sight, but you still need to train your
brain to manage them.

This is a bit like playing chess. A good, experienced player will be able to
plan long in advance because his brain has been trained to spot and ignore
irrelevant moves efficiently. If you imagine training that would let the
player learn to recognize good moves but not how to efficiently search the
solution space, you would be training a brute-force computer that would not
be a very good player.

I think visual programming is a different thing from regular programming. I
think compromises like Logo
[https://en.wikipedia.org/wiki/Logo_(programming_language)](https://en.wikipedia.org/wiki/Logo_\(programming_language\))
are much better teaching tools. You still program a language with syntax but
the syntax is very simplified and the results (but not the program) are given
in a graphical form that lets you relatively easily understand and reason
about your program.

------
badrabbit
It's not efficient for large scale things. It's like communicating with memes,
you can't exactly write a news article with just memes.

It's useful for allowing more "citizen devs" (regular folk with little
exposure) to come up with prototype, high-level proof-of-concept apps,
including UX design. It is a big deal in the corporate arenas I've had
exposure to, but I think widespread adoption is still years away.

You will always need non-visual languages to do things in a featureful and
scalable way.

~~~
bmitc
> It's not efficient for large scale things.

> You will always need non-visual languages to do things in a featureful and
> scalable way.

What is an example of a system that you have developed where things broke down
with a visual programming language?

~~~
badrabbit
I needed to make an API request to a cloud service provider, but the tool
supported only one provider and, even then, not the API auth (OAuth2) I
needed. I couldn't even begin to figure out how to implement the API myself
or patch in OAuth2 support just with the visual language's facilities.

~~~
bmitc
How is that a limitation of the visual programming paradigm and not a library
problem? And that doesn't have much to do with scaling to a large program or
system.

~~~
badrabbit
The library isn't in a visual language. You can't do things with it if an
interface/lib to do that thing hasn't been implemented by a non-visual
language.

~~~
bmitc
That is again a limitation of the particular language you were using (which
one?) and not a limitation of the paradigm. There's nothing there that's an
inherent problem of visual programming languages, which is my point in asking.

For example, in LabVIEW, you have TCP/IP, HTTP, UDP, CAN, serial, MODBUS, and
more protocols and can build things out of them. If there's a missing
protocol, then you can write your own, call a C/C++ DLL, .NET assembly, Python
script, or a command line application, just like any other language (actually
more than most languages).

------
RNeff
I have not seen any solution for tracking changes, differences, displaying
versions, etc. - i.e., git for pictures. Some visual languages can turn an
area into a 'subroutine', but I have not seen any solution for building
libraries of reusable 'subroutines'. I used to draw flowcharts on size D
(22.0 x 34.0 in) and size E (34.0 x 44.0 in) sheets of paper. I wish I had a
monitor of either of those sizes.

Visual programming works very well for data flow problems.

~~~
bmitc
> I have not seen any solution for tracking changes, differences, displaying
> versions, etc., i.e., git for pictures.

LabVIEW has a diff capability, and while working for National Instruments, my
team actually had a quite capable custom code-review system built out of this
diff. This is an area that is brought up often, but text-based diff tools
weren't magically found in the universe. They were built, and some of them are
good and some of them not. It's not a paradigm's fault that tools like git
were built around the idea that code must be text.

I do agree that better tools need to exist, but there isn't a reason why they
can't. They just need to be built. There are hard and interesting problems in
that space, both technically and design-wise.

> Some visual languages can turn an area into a 'subroutine', I have not seen
> any solution to build libraries of reusable 'subroutines'.

LabVIEW has a third-party developed package manager in the JKI VIPM. And
LabVIEW has features where you can put your VIs and classes into various
LabVIEW specific containers, most usually source libraries, that can then be
referenced and re-used in projects. Just treat the libraries as modules like
you would in any other language. They should contain classes and functions
that have a shared or particular purpose.

~~~
steerablesafe
> It's not a paradigm's fault that tools like git were built around the idea
> that code must be text.

FWIW you can have custom diff and merge drivers in git. One could probably
hook up LabVIEW's diff tool as a git diff driver, I don't know about merge.
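For reference, hooking an external tool into git's diff machinery is mostly
configuration (a sketch; `vi-to-text` is a hypothetical script that renders a
binary .vi file as text):

```shell
# .gitattributes: route LabVIEW files through a custom diff driver.
echo '*.vi diff=labview' >> .gitattributes

# textconv tells git how to turn the binary file into text before
# diffing; "vi-to-text" stands in for any converter that emits a
# textual rendering of the file.
git config diff.labview.textconv vi-to-text

# Plain `git diff` now shows textual output for .vi files.
git diff HEAD~1 -- instrument.vi
```

Merge drivers are configured similarly (`merge=...` in .gitattributes), but
writing a correct three-way merge for a binary format is the genuinely hard
part.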

~~~
bmitc
Yes, but it isn't easy. Some people have done so, and I should probably
revisit it myself since I know use GitHub in my later jobs and not Perforce. I
mentioned this in another comment, but we had some tools integrated into
Perforce's workflow to do diffs.

Merge is harder in LabVIEW and isn't where it should be. However, that doesn't
mean it can't exist.

------
aSockPuppeteer
It is pretty big in engineering: LabVIEW or VEE. TL;DR: the more complex the
program becomes, the worse it is to use.

I used it to easily connect to a piece of lab equipment, reset it, set
whatever settings I want, run a test, and then log the output to a file. I
could set up a test, then walk away and return to data. Doing the tests manually
would take many months.

Both have labels as remarks/comments, and you can easily put in a switch
statement to test new code, or use highlighting to see exactly where the
program is running, albeit slowly.

One of the fun things to do was circle a repetitive task then turn it into a
function. A large program requires a large screen to see it all. Widescreens
are terrible for it.

After basic settings and availability in libraries, it is better to move to a
text language. Visual programming is a quick and dirty solution.

~~~
Psychlist
We have a Labview program for our circuit board tester and it is a bit of a
nightmare. At least with visual coding it is obvious, literal "spaghetti
code".

My only experience with Labview was in another job writing a DLL that it could
import so the poor sucker working on the code could do a bunch of complex
state machine stuff without having to drag "wires" all over the place in
Labview. That ended up with a design pattern that was "route inputs to DLL
function, route output to next stage" that turned out to be much easier to
maintain. Partly because it enforced modularisation, and partly just because
a series of if statements and function calls is easier to read than a diagram.

[https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000...](https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019Ls1SAE)

Also, Lego have a whole series of graphical languages for their electronics,
and those are very cool but extremely limited. Once you get past "when this
switch is pressed make this motor go" it is easier to hit the textual
language.

~~~
MH15
Simulink has the ability to make Blocks out of Matlab (or any FFI supporting
language) code and run them in the simulation loop. We used this for state
machines at our lab.

------
k00b
I haven't used a visual programming language, but it's likely a lot harder to
build a good visual programming language than a nonvisual one at a given level
of expressiveness.

I suspect visual programming is more common than we realize though. I had an
acquaintance at Workday who claimed a lot of work was done there in a visual
programming language.

Also, arguably website/app builders are a visual programming "language" and
they are extremely common.

------
runxel
Visual programming is really hard to debug, for the same reason people tend
to advise limiting a procedure/method to one page: you start to lose control.
You can only keep up with so much. Where does this line go in? You just don't
know. Collaboration is nearly impossible. Opening projects from others will
be much worse than looking at pure code.

Also, it is too verbose. The 'functions' take up a lot of screen real estate
(this is most obvious for mathematical stuff). If you start on just a bit
more than something you could have achieved in 50 LoC, it tends to get really
messy.

VP lacks referencing as well, at least in most of the environments I know.
Declared a variable? Too bad, you have to connect its node to everywhere you
need it. Sigh.

Reusing components is possible, and in some environments it is implemented,
but mostly just _per file_ , not in general. E.g., you can make a custom
component inside one project, and if you edit it, all instances get updated,
but you can't save it as a standalone component you can XREF in other
files/projects, which makes it hard to build a custom library of functions.

But it depends on the industry. As others already mentioned, the more an
industry is led by visuals in the first place, the more common it is to
actually utilize visual programming (or rather scripting); it's also quite
useful in real-time contexts – which is where its strengths lie. The CG
industry is one of those, but so are architecture and design in general
(think of McNeel's Rhinoceros with Grasshopper, THE most-used visual
scripting environment today, especially in a professional setting).

Conclusion: VP has its merits and is used extensively, just maybe not the
places you expected/hoped for.

------
tylerlarson
Visual programming languages are often high level and can only take the user
so far. If you were to make a programming editor mode that visually shows how
programs are structured, and that understood standard programming languages,
it would be easier to understand the structure of these programs, but the
actual edits would likely still be done in the native language. Higher-level
edits might be very powerful, but how often would you be able to do this
safely? From a documentation point of view I agree with your assessment: the
ability to understand how things work is easier visually at higher levels,
but we also have other tools to do this, like UML. There are also editors
like [http://staruml.io/](http://staruml.io/) that enable you to convert UML
into code, but I find this only works when projects are starting, when you
are trying to find the right high-level abstractions. After this is set, it
is usually best to keep the high-level structures that you have in place.

------
iamwil
You might be interested in
[https://futureofcoding.org/](https://futureofcoding.org/)

------
todd8
Around 1977-1978, I tutored a friend who was a fellow engineer. I could not
get him to write structured programs. He insisted on creating large flowcharts
with lines going in all directions. He had been introduced to this "visual"
form of programming, and he kept going back to it each time he tried to
construct a program.

His programs ended up with state distributed all over the place and control
flow that was impossible to keep track of.

Around the same time, I was intrigued by articles promising easy visual
construction of programs; it seemed to be in vogue then. It took me several
years to realize that the nice examples in journal articles were just that:
nice examples. Visual programming is appealing, like flowcharts were to my
friend, but it suffers from a lack of good support for building abstractions
and an inefficient method of building visual programs and keeping track of
changes.

------
mbar84
If you're a bit frank and you're dealing with a person you don't particularly
care for, you might use the expression "Do you want me to draw you a picture?
How dense are you?"

The thing is that it's extra effort for the author to draw pictures and think
of image layout for what is ultimately the manipulation of symbols. If you're
a proficient touch typist with a powerful editor, including macros, jump to
definition, symbol search, multi cursor editing, you can spend a few minutes
at a time without your hands ever moving far from the home row, let alone
touching the mouse. We have very powerful tools for text manipulation and the
highest bandwidth input device we have is a keyboard, so it's not at all
surprising to me that text is still king for programming and probably will be
until we have some kind of direct brain -> machine interface.

------
thom
I think at best visual programming is good for tasks you might otherwise solve
with a DSL. But for general purpose programming, you will end up with a visual
grammar as complex as text, but harder for people to understand and compose
(because we just happen to have a certain proficiency with written languages).

------
JaumeGreen
Having searched for a long time for a viable option to practice programming
on my phone while commuting (not seated, seldom having both hands available),
I'd say that visual programming is not that useful in that medium, at least
if it's Scratch-like. Maybe a language with better abstractions would be more
useful.

I tried a Scratch-like environment for Android and did the first couple of
days of Advent of Code a couple years ago. It was tiring (too many
instructions to drag), mildly infuriating (when something didn't fall where
it should), and hard to refactor (when experimenting).

That's why that year I ended up transitioning to lisp, writing in a text
editor and copying to a web-based lisp interpreter.

The local maximum I found was this last year with J's Android client. With
their terseness, array languages can be used quite effectively within mobile
constraints.

------
solomatov
I think the main problem is that it doesn't simplify things. Programming is
not easy, no matter which representation you use. Visual programming systems
might be easier for some categories of people - for example, electrical
engineers - though, simply because they are used to working with diagrams.

------
gfxgirl
I recently had to fix some copy-and-pasted code across 11 files. So I wrote a
regex, something like

    
    
        /(.*?)somefunc\((.*?)\) {\n( *?)return a + b;/$1somefunc($2) {\n$3return a * b;/
    

And then searched and replaced across the 11 files. I have no idea how I'd do
that in a visual programming environment. I actually needed to do that about
7 times with different regular expressions to do all the refactoring.

I also did this yesterday: I `ls` some folder. It gives me a list of 15 .jpg
files. I copy and paste those 15 filenames into my editor, then put a cursor
on each line and transform each line into code and HTML (I did both). Again,
I'm not sure how I'd do that in a visual programming environment.
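The first of those workflows, expressed as a throwaway Python script (a
sketch; the `src` directory and `*.js` glob are assumptions not stated in the
comment):

```python
import re
from pathlib import Path

# The substitution from above, as a Python regex: rewrite
# "return a + b;" to "return a * b;" inside somefunc.
pattern = re.compile(r"(.*?)somefunc\((.*?)\) \{\n( *?)return a \+ b;")
replacement = r"\1somefunc(\2) {\n\3return a * b;"

# Apply it across the files with the copy-pasted code.
for path in Path("src").glob("*.js"):
    path.write_text(pattern.sub(replacement, path.read_text()))
```

The point stands either way: a text representation makes this kind of bulk,
programmatic refactoring a one-liner, with no obvious visual equivalent.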

~~~
Firadeoclus
You could do that in pretty much the same way, i.e. you create a visual regex.
The one difference is that regexes simply match characters, no matter the
structure of the code, whereas in a visual language you'd make a distinction
between structure and content.

------
globular-toast
One point that I think people are missing is that language is the most
powerful tool available for our brains. Language is what makes us human.
Language causes our brains to grow and helps us to think about problems.
Pictures don't do that. A picture is just a picture. If you try to make it
more than just a picture then what you're making is a language.

While visualisation is (sometimes) useful for grokking difficult concepts,
writing a program is a completely different kettle of fish. I could draw some
pictures that might convince you that you understand a Fourier transform, but
you'd be no closer to being able to efficiently implement a Fourier transform
in a computer.

------
robenkleene
Visual programming languages aren’t plain text and therefore are harder to do
version control for and harder to share. Whatever benefits visual programming
languages have, and they are many, version control and easy sharing are more
important.

------
gisborne
Most of the problems described in other posts concern trying to convert text
programming paradigms into a visual representation.

Let me suggest two ways visual programming might be a big part of the future:

1. New paradigms, such as constraint-based programming, might well lend
themselves better to a visual presentation; and

2. VR. Visual programming is indeed much less visually dense than text, but
if you start over with the assumption you’re doing it in VR, that suddenly is
if anything a virtue.

Imagine something that was part Excel, part Access, with visual, animated 3D
representations of other major programming abstractions also, and you start to
see that VP might really be the future.

------
kazinator
> _Since text is difficult to manipulate without a physical keyboard, visual
> programming opens the doors to doing development on mobile devices._

All content creation is difficult on mobile devices, other than passive
content creation like shooting videos and taking pictures.

Visual programming will be way easier on a desktop machine with a proper
mouse, and a real keyboard (for the UI shortcuts you will end up using).

But, about the main question: visual programming has no value beyond being
friendly to newbies.

This is like asking why don't Fisher-Price plastic tools for kids take
carpentry by storm. They are so light and easy to hold, why don't we frame
houses with them?

------
aldanor
It's pretty big in some narrow areas - see, for instance, Max/MSP for audio
design
([https://cycling74.com/products/max](https://cycling74.com/products/max))

------
noodlesUK
I’m not sure it’s possible to keep the complexity managed (at least with
current tools). If you’ve ever tried to build a complicated PureData patch or
similar, you’ll notice that the complexity of visual programs just explodes.

------
pezo1919
I think the problem is with more complex algorithms - where complex means
dynamic in nature.

Creating (new) or destroying (existing) actors at runtime is the hard part in
programming, because the complexity dimension explodes through it. One more
thing in the system means many new possible runtime flows in theory, even if
it's not a de facto new flow; you have to prove whether it is and handle it
accordingly.

You can show it in a visualisation, but to be able to do that it must be an
animation; time matters.

I think a higher-level language like Idris will be able to generate these
animations from the code, to make it easier to absorb existing codebases.

------
kweinber
Salesforce is a visual programming environment that has huge adoption and
reach.

They are almost universal at large companies and have a large ecosystem of
visually programmed/configured partner software companies and components.

------
sethammons
> Emoji-only text seems to unlock a ton of value. Difficult concepts can be
> more easily grokked in visual form. Writing and Reading becomes more
> approachable to first timers.

I can't imagine being able to write maintainable, well tested, scalable
software (cough, software engineering, cough) with some version of drag and
drop. I'd love a visual element added for helping navigate code. I like system
diagrams, flowcharts, etc. But I'd like these to be generated by my code, not
generate my code. This feels like trying to write a book with only emoji
and/or gifs.

------
gitgud
Well, text is an abstraction for ideas. But text is a flexible abstraction,
which is easy to store, change, diff, and read.

Visual programming with nodes/blocks is another abstraction for ideas. But
blocks and nodes are much, much less flexible. So these abstractions have to
be much more precise... Which leads to problems.

A good analogy is Lego vs clay.

With Lego, you can make anything with the right bricks. The problem is each
brick is precisely crafted, and you're limited to the blocks you have.

With clay, you have the freedom to mold anything to whatever precision you
need... but it might take you longer.

------
pnako
There are successful visual programming languages, like GRAFCET. But it
doesn't scale beyond simple problems. As others have pointed out, it's about
information density; what can be shown on a single screen, and also inputting
data with a keyboard as opposed to a mouse.

In principle, though, nothing prevents you from writing very complex
programs using visual languages. It doesn't really make things simpler;
once you reach that point, as said above, it's more efficient to combine
words than to draw shapes with a mouse.

------
iulian131
There are a lot of solutions for doing programming in a visual way but most of
them are not well known.

For example, for building backend infrastructures take a look at
[https://www.mockless.com/](https://www.mockless.com/).

They provide an easy-to-use interface where you can set up the data model and
even complex functional flows. In the background the tool creates the source
code, and it can connect to your Git repository and commit whenever you make
changes, exactly like a developer would.

------
drbojingle
If visual {x} unlocks a ton of value, you'd expect someone to have done
visual math or visual novels by now and the market to have eaten it up.
They may exist, but they didn't take over the market for literature or
science. The written word is still king. I'm not sure offhand what the
advantage is, but there does seem to be 'something' keeping writing on top
of a visual-centric approach across the board, not just in programming.
Even on Imgur, where images are king, the comments are mostly text.

~~~
zozbot234
Visual novels are definitely a thing.

~~~
drbojingle
I never said they weren't. I said they didn't overtake the written word in
the marketplace.

------
collyw
I have seen various promises of visual programming over the years (Java
Beans was going to be the next big thing when I was learning). The fact is it
gains you very little (in my opinion). The only thing it solves is syntax,
and that's actually a relatively easy part of programming. MS Access had an
OK visual query builder, but by the time you needed to do anything moderately
advanced it was as easy to switch to text SQL (plus text SQL is a more
standard way of doing things).

------
ioseph
I feel certain domains lend themselves better to visual programming. Reaktor
is a great example in the audio space, but I'd hate to use a tool like that to
write a csv parser.

------
slightwinder
> Visual programming seems to unlock a ton of value.

It gives some value, and sacrifices other values. So far, tooling has not
reached the point where the sacrificed values are small enough to justify the
added ones.

> Programming becomes more approachable to first-timers.

How is that relevant? In no industry should first-timers be given any real
responsibility. And textual code is easy enough to grok after a short while.
The problems people have with textual code after that would still remain with
visual code. So no value added.

------
lazyjones
It's big enough to be available to the "mainstream"; it just doesn't have the
decades of tech debt C++ and other languages bring to the table.

I've been programming for about 37 years now and recently, not wanting to mess
with Swift for that, built a "quick action" command (for Finder) that
converts/shrinks HEIC images to .jpg suitable for e-mail. Took something like
2 minutes with nearly no experience using Automator. It's not a niche
technology.

------
cttet
It is harder to get a GUI right; UX in text/CLI is much simpler. In
addition, developer UX is not very profitable, so the business side does not
quite work out.

------
tmaly
I think visual programming is just getting started. I have had good luck with
teaching Scratch 3 to kids this past year.

What sets it apart from previous versions of Scratch is that it can run in
the browser. That makes it much more accessible to a wider audience.

It's my opinion that with this browser-based interface and the growth of
instructional videos, we will see visual programming become more mainstream.

------
bor100003
Visual tools seem to be harder to manage and add a lot of overhead.

Have you heard of Informatica PowerCenter? It creates a mapping instead of
writing down a SQL query. The problem is that you must manage inconsistent
interfaces, resize windows, and type into small text boxes.

Of course it has its benefits, but in most cases it doesn't help much in
removing complexity while adding its own.

------
AnimalMuppet
> Difficult concepts can more easily be grokked in a visual form.

 _Some_ difficult concepts can more easily be grokked in a visual form. I'm
not sure that they all can. In fact, I suspect that it might be about even (as
many easier in text as are easier in pictures). We just notice the ones that
would be easier in pictures, because we're working in text.

------
rini17
I think it is because mainstream languages are all prose-first and don't
expose the underlying graph structure in a way that a visual editor could
usefully manipulate. The closest I know are:

* paredit/parinfer for Lisps are actually tree editors in disguise.

* DRAKON. Having put critical business logic in it, I found it was a real boon for quickly understanding the code when returning to the codebase later.

------
tarsinge
My guess is that in the majority of areas of programming the work is to
express rules, and here text is simply more powerful. Are laws written using
graphics? Visual programming can be useful for workflow (3D, movies or music
DAWs). But otherwise for expressing rules (which is a good part of
programming) visual is too limiting.

~~~
jrockway
Personally I hate visual editors for CAD, 3D, etc., and I think they are
inefficient. People are scared of programming, so people go out of their way
to build complicated UIs for them, which are much harder than programming.
But, at least there's no text to type! (Look at professional video editors,
that try to make video editing like programming by hooking up two keyboards to
their computer and using one of them to execute AutoHotKey macros in their
video editor.)

I often want to describe some sort of exact operation based upon exact
numbers, and code is a good way of doing this. Obviously, providing input to
the computer in program form does not preclude real-time display, and this is
essential for things like 3D modelling. You wouldn't just "sketch" an object
in your text editor with no visual reference. Fortunately, a lot of tools
cater to that use case -- Blender lets you define scenes and animations as
Python code (and when you mouse over something in the UI it tells you the
Python function that implements that button!), and there are some CAD packages
that let you describe your object as a computer program. (Unfortunately, I
tried these and didn't like them. I do like Fusion 360's sketch-and-extrude
model, but didn't like OpenSCAD's model. What I want to do is draw the rough
shape of something to get a template, then turn that into code that I can edit
to put in exact constraints and dimensions.)
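A toy sketch of that point (the function and numbers are made up, not any real CAD package's API): exact dimensions live in code, and changing one number regenerates the whole layout.

```python
# Hypothetical parametric sketch: exact operations from exact numbers.
# Not any real CAD package's API -- purely illustrative.

def mounting_holes(width, height, margin):
    """Return the four hole centres of a rectangular plate, inset by margin."""
    return [(x, y)
            for x in (margin, width - margin)
            for y in (margin, height - margin)]

print(mounting_holes(100.0, 60.0, 5.0))
# Changing one number regenerates the whole layout -- the kind of exact,
# editable constraint that is awkward to maintain by pure mouse work.
```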

I am also looking for a text-based schematic capture application if anyone has
any suggestions. I would much prefer typing the names of the edges of my
netlist graph to pointing and clicking them.

~~~
zenhack
Yeah, I think there's a lot of potential in hybrid models like this. You
actually do see these things out in the wild -- GUI builders come to mind --
but they're not a replacement for a proper programming language.

As it turns out, the written word has some genuine advantages over other
media? But on that point, the wall of text you see in published prose is an
artifact of the printing press -- prior to that it was unusual to have long
texts that didn't include little sketches, diagrams, illustrations...

------
gabrielsroka
Maybe slightly off topic, but has anyone seen Microsoft MakeCode?
[https://arcade.makecode.com](https://arcade.makecode.com)

It can convert Visual to/from TypeScript/JavaScript. And it works on mobile.

It's for games, but you'd think the technology would be applicable for any
kind of program.

------
MH15
If you're a programmer that works on mechanical/fluid/aero systems there's a
good chance you will use Simulink at some point in your career. It's great,
but it's domain specific. Most everyone who knows the field will make a better
simulation quicker in Simulink than in Matlab or C.

------
fxtentacle
I can probably type 3 lines of source code in the time that it takes me to
drag a block from the sidebar onto my document and then connect the inputs and
outputs to other nodes.

As such, I think Blender handled this very well. You can set up the material
node graph both visually and through a scripting command line.

~~~
mapcars
Except programming isn't blind typing, and typing is less than 40% of a
programmer's work, I would say.

------
decompiled_dev
I feel like this is covered already. Before I start coding I will always
draw out diagrams of the components visually, whereas the code is the detail
that makes it actually work.

Using box and arrow diagrams for documentation can give you a lot of these
benefits without needing to adopt a radically new paradigm.

------
jki275
Great teaching tool for hello world type stuff, and UML is nice for design
documents that present data models and such.

Beyond that, for real engineering? I've worked with EEs who write massive
applications in Labview -- their codebases are all impossible to maintain
masses of pain and suffering.

~~~
tonyarkles
> I've worked with EEs who write massive applications (...) their codebases
> are all impossible to maintain masses of pain and suffering.

Fixed that for you :). EE/CS dual here. Some EEs can code, some CS folks can
design circuits, but if I had to bet... I wouldn’t expect great work to come
from either discipline working on the other side of the software/hardware
line. I’m decent at both, but I pretty much squeezed two degrees into a five
year programme and have put a lot of effort into maintaining competency in
both disciplines throughout my career. These days I’m mostly doing embedded
and am loving life :D

~~~
jki275
Yeah, that's a fair point in many cases.

Though I have seen some very nice C/C++ from some of them -- those that want
to write good code.

And as a CS guy, I know damn well I can't do circuit design to save my life.

------
GoblinSlayer
>Programming becomes more approachable to first-timers.

As you said, Scratch is used in education for exactly this purpose. Visual
programming fills the niche of high level scripting, when you have a system
and want to script it in Excel style. On lower levels text is easier to deal
with.

------
ripvanwinkle
It has not hit the ergonomic standards necessary. It's like when speech
recognition accuracy was 90% - that was high but still not ergonomic enough.

I bet there will be a breakthrough in interfaces (likely aided by AI) though
that will make visual programming a lot more widespread

------
scoot_718
Programming in Scratch or Access or whatever is an absolutely painful
experience. That's why.

Beyond a small limit you'll start wanting abstraction, so goodbye to seeing
everything. Connections will become a mess, and moving between
mouse-and-keyboard and keyboard alone is annoying.

------
mpweiher
_Maybe Visual Programming is The Answer. Maybe Not._

[https://blog.metaobject.com/2020/04/maybe-visual-
programming...](https://blog.metaobject.com/2020/04/maybe-visual-programming-
is-answer.html)

------
divan
I wrote my thoughts on exactly this question here (long read, sorry):
[https://divan.dev/posts/visual_programming_go/](https://divan.dev/posts/visual_programming_go/)

------
koonsolo
Next to what other people already said, I want to add that textual programming
has a strong visual component. It's not just sentences separated by dots.

A quick glance at source code reveals a ton of information, thanks to
indentation and code blocks.

------
jasonlhy
I used Visual Programming in my previous job. It is pretty good for
waterfall logic and certain workflows. However, it does not scale well to
custom patterns, custom architectures, or async and functional programming
paradigms.

~~~
bmitc
> it does not scale well to custom patterns, custom architectures, or async
> and functional programming paradigms

It is in fact useful for these things. For example, LabVIEW is wonderful for
asynchronous code, is automatically multi-threaded, and supports the actor
model. Dataflow languages are excellent for functional programming, and this
area is relatively unexplored.

~~~
barrkel
You're confusing the semantics of the language you're manipulating with its
form. In fact you're doing it repeatedly in your comments here.

The advantages you advocate come from the semantics, not the form.

~~~
bmitc
Could you clarify what you mean?

~~~
barrkel
Data flow languages are close to my heart, I've implemented several,
particularly in the context of data binding. They are a declarative way of
describing a computation, and the computation can be analyzed to enable things
like multi-threading, caching, incremental & lazy evaluation.

These properties come from the constructed data flow graph, which can be
analyzed by an execution environment and potentially prepared by a compiler.
They don't come from visual vs symbolic representation. They come from
modelling a computation as data flow.
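A minimal sketch of that claim, with an entirely invented API: once the computation is explicit dataflow data, the runtime can cache node results and invalidate only what an input change affects, regardless of whether the graph was drawn or typed.

```python
# Hypothetical dataflow runtime -- illustrative names, not any real system.
# The caching and incremental re-evaluation come from the explicit
# dependency graph, not from how the graph is represented on screen.

class Node:
    def __init__(self, name, fn, deps=()):
        self.name, self.fn, self.deps = name, fn, tuple(deps)

class Graph:
    def __init__(self, nodes):
        self.nodes = {n.name: n for n in nodes}
        self.cache = {}

    def value(self, name):
        # Pull-based evaluation with memoisation.
        if name not in self.cache:
            node = self.nodes[name]
            args = [self.value(d) for d in node.deps]
            self.cache[name] = node.fn(*args)
        return self.cache[name]

    def set_input(self, name, v):
        # Replace the input node, then invalidate everything downstream.
        self.nodes[name] = Node(name, lambda: v)
        dirty, changed = {name}, True
        while changed:
            changed = False
            for n in self.nodes.values():
                if n.name not in dirty and any(d in dirty for d in n.deps):
                    dirty.add(n.name)
                    changed = True
        for n in dirty:
            self.cache.pop(n, None)

g = Graph([
    Node("a", lambda: 2),
    Node("b", lambda: 3),
    Node("sum", lambda a, b: a + b, deps=("a", "b")),
    Node("double", lambda s: 2 * s, deps=("sum",)),
])
print(g.value("double"))  # 10
g.set_input("a", 5)       # only "a", "sum", "double" are recomputed
print(g.value("double"))  # 16
```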

~~~
bmitc
Thanks for the clarification.

And yes, I have implicitly bundled the semantics with the syntax here. I
suppose in doing so, I am arguing both that dataflow is a powerful and
interesting way of modeling computations that allows for a lot of neat features
_and_ that visual representations are more natural and give additional useful
benefits over text-based ones. I have not seen a compelling text-based
dataflow representation, but that doesn't mean one doesn't or can't exist.
LabVIEW actually compiles its block diagrams to what's known as DFIR, dataflow
intermediate representation, which is then compiled to LLVM IR. The DFIR is
helpfully visualized as a graph, although the dataflow graph is represented by
text underneath.

I first learned to program FPGAs with LabVIEW. Now I am learning HDLs, and
it's a bit painful. Understanding VHDL is not hard, but I find it to be a poor
representation of what it is describing. The text-based representation hides
the nature of the actual computation it's representing.

If you have any references to dataflow or any links to things you've done, I'd
love to take a look. :)

~~~
barrkel
Nothing public.

I wrote a server-side framework for data entry AJAX apps in 2004-2006 time
frame, canonical application was entering details for an insurance quote. The
UI was data bound on the back end, and a minimal update over the wire could be
calculated from the bindings.

The bindings were stored in an XML document pulled from a metadata server
referenced by id from the session state, meaning that we could do live
rollover of new versions. Previous pages with existing sessions would use the
old logic, whereas new logins would use the new logic.

This gave rise to a number of nifty properties. The session state was
serialized - in fact, it was never deserialized, it was a byte array that was
traversed as various attributes were accessed and updated. If there was a bug,
you could rehydrate the session state, and display all session variables etc.
and evaluate the current value of all data bindings, meaning that you could
figure out what the user was seeing on screen. A lot like Smalltalk or Lisp
images, but much smaller, because the code was static and stored externally.

Everything from disable state on controls to tab order around the screen was
controlled via data binding, dynamically updating in response to values
entered. The UI logic was completely declarative.

The framework was called Topoix, there are fragmentary references to it on the
interwebs. I wrote an HTTP server & debugger for it too.

The data binding language was called Gravity, structurally it was a subset of
C# in .NET 2 era, but it was declarative because it was not merely executed,
but analyzed for data flow purposes.

The experience of creating a language, amongst other things, led me to Borland,
where I worked on the Delphi compiler, and partook in initial design of the
new version of the data binding for controls feature. I can't claim any credit
or otherwise for it, I was focused on other things, like rich reflection,
which in itself helps data binding work well with a native language.

More recently, amongst many, many other things, I designed the web side data
binding approach for the company I'm currently at, Duco. We're gradually
replacing it all with React now, but it was useful to make pages reactive and
automatically updating in response to new data at a time when people were
doing jQuery from their event handlers.

An interesting academic paper at the time for me was "Functional Programming
with Bananas, Lenses, Envelopes and Barbed Wire" by Meijer et al. He's a
decent chap to follow if you're interested in that seam:
[https://www.researchgate.net/profile/Erik_Meijer](https://www.researchgate.net/profile/Erik_Meijer)

I picked up one of my favourite jargon terms from his usage, "bijective", e.g.
with respect to lenses. In a data flow context this is reversible computation,
a nice thing to have if you're doing data binding against an editable control
(rather than something like a label). Not only does the value you're binding
to change in response to upstream changes in the model, but edits to the value
can be propagated back to the model. Bijective is the technical term for a
function which is mappable both ways, and lens is the CS term for the
abstraction which encapsulates the data flow transforms required for a
scenario like data binding.
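As a rough illustration of the lens idea (names invented, nothing to do with the Gravity/Topoix internals described above): a lens pairs a `get` with a `put`, so a model change can propagate to the view and an edit in the view can propagate back.

```python
# Illustrative lens for two-way data binding -- hypothetical names.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Lens:
    get: callable   # model -> view value
    put: callable   # (model, new view value) -> updated model

# Model: an insurance quote; view: a control showing the premium in cents.
@dataclass(frozen=True)
class Quote:
    premium_eur: float

premium_cents = Lens(
    get=lambda q: round(q.premium_eur * 100),
    put=lambda q, cents: replace(q, premium_eur=cents / 100),
)

q = Quote(premium_eur=12.5)
print(premium_cents.get(q))     # 1250 -- model change propagates to the view
q2 = premium_cents.put(q, 999)  # an edit in the control propagates back
print(q2.premium_eur)           # 9.99
```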

------
mwerty
[https://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/E...](https://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EWD696.html)

------
brainless
OK just think outside the whole context of programming. Would you write a
medium-sized story (God forbid a novel) as a flow chart or something visual
like that?

Why/Why not?

I am sure the first paragraph of that story as a flow chart looks awesome,
and then...

~~~
brainless
"Since text is difficult to manipulate without a physical keyboard" \- do
novels get written on phones much?

If I want to build a house, I doubt I will start without right tools. Tools
matter and the keyboard is an excellent tool.

------
zubairq
At [https://yazz.com](https://yazz.com) we use visual programming to lay out
forms, which are then scripted with code. Used in large enterprises. Is that
not visual?

------
7leafer
Try programming 7 Billion Humans and you'll see why.
[https://tomorrowcorporation.com/7billionhumans](https://tomorrowcorporation.com/7billionhumans)

------
JoeAltmaier
It takes too much room. And you run out of symbols for detailed meaning. Try
writing the Gettysburg Address in emoticons.

It's essentially returning to ideograms like ancient Chinese or Egyptian
writing. Not a good idea.

------
thrower123
Because it is terrible. People keep trying to make it a thing, and bog down in
the inevitable complexity where it becomes unworkably clunky to manipulate
flowchart logic.

------
grandinj
It does not scale beyond rather simple programs. Text has much higher
information density for the specific problem domain of representing programs.

~~~
Firadeoclus
Code as text can be taken as a subset at one extreme of the continuum of
visual programming. And if you take it that way, it would seem always
possible to find a denser representation than, say, 11pt monospaced text.

------
keithnoizu
I'd be happy with just being able to color-code files, hyperlink between
them, and insert a few images alongside the code.

------
w_t_payne
Visual programming has the potential to be really useful ... but I haven't
seen anyone get the UX quite right just yet (although a few have come close).

I spent a few years programming mainly with Simulink, and more recently have
experimented a little with some graph-based game engine UIs. (Unity shader
graph etc..)

Now, as far as Simulink is concerned, I feel that it was (and possibly still
is) an ergonomic disaster zone. There's just too much mousing around and
adjusting routing between nodes. Also, merging is really difficult because of
the save file format. This is a significant problem for many engineering
organisations.

For any visual programming tool, the value of a visual graph diminishes as the
graph becomes more complex. It is most valuable when it is kept simple and
illustrates e.g. high level components only.

Now ... none of these problems is insurmountable, and having a relatively
simple high-level graph to show the relationship between the major components
of a system is an incredibly valuable communications tool -- but users do need
to resist the temptation to use the graph for everything down to every if
statement and for loop. These things are best used to explain the high level
relationship between major system components and the overall flow of data
through the system. Fine grained algorithms are better represented textually.

So, over the years I've developed a configuration-driven framework that's
designed to (eventually) be driven from a visual UI.

Computationally, the framework implements a Kahn process network. I.e. it's a
dataflow model, where the nodes represent sequential processes, and the edges
have queue semantics. (The queue implementation can be replaced with
implicitly synchronised shared memory in some situations, so it can be quite
efficient).
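The execution model described above can be sketched roughly like this (illustrative only, not the author's actual framework): nodes are sequential processes, edges are blocking queues, and the deterministic behaviour comes from reads being blocking.

```python
# Toy Kahn process network: three sequential processes connected by
# queues with blocking reads. Because reads block, the output is
# deterministic regardless of scheduling. Names are made up.
import queue
import threading

def producer(out_q):
    for i in range(5):
        out_q.put(i)
    out_q.put(None)                  # end-of-stream marker

def doubler(in_q, out_q):
    while (item := in_q.get()) is not None:
        out_q.put(item * 2)
    out_q.put(None)

def collect(in_q, results):
    while (item := in_q.get()) is not None:
        results.append(item)

a, b = queue.Queue(), queue.Queue()  # edges with queue semantics
results = []
threads = [
    threading.Thread(target=producer, args=(a,)),
    threading.Thread(target=doubler, args=(a, b)),
    threading.Thread(target=collect, args=(b, results)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)                       # [0, 2, 4, 6, 8]
```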

This enables me to e.g. implement intelligent caching of data flows, support
record-and-replay style debugging, co-simulation, and automatic component-test
generation.

The nodes can be arbitrary unix-philosophy binaries, Python functions, or
native shared libraries. (Eventually I plan to support deploying nodes to FPGA
as well.) Because of the Kahn process network semantics, they behave in the
same way irrespective of how the nodes are distributed across computers on the
network. This makes it an ideal rapid-prototyping tool for quickly integrating
existing software components together. E.g. throwing machine learning
components together that have been written in different languages or using
different frameworks. It's a bit like ROS in this regard.

The dataflow graph itself is represented by a set of YAML files (or data
structures that can be generated/modified at runtime), with different aspects
(connectivity, node definitions, layout) separated to make textual diffs and
merges easy to understand, and enabling teams to work more effectively
together.

Also, because the graph is explicitly represented as a design-time data
structure rather than being a runtime construct, it's easy to use GraphViz to
generate diagrams, allowing you to have documentation that's correct by
definition without spending ages adjusting untidy connections and layouts.
(Although I do want to spend some time improving the layout algorithms and
providing some mechanism for layout hinting).
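As a sketch of how cheap that diagram generation can be when the graph is design-time data (a plain dict stands in for the YAML files here; all node names are invented):

```python
# Emitting GraphViz DOT from a graph held as plain data. The pipeline
# and its node names are hypothetical, for illustration only.
graph = {
    "nodes": ["camera", "detector", "tracker", "logger"],
    "edges": [("camera", "detector"), ("detector", "tracker"),
              ("tracker", "logger")],
}

def to_dot(g):
    lines = ["digraph pipeline {", "  rankdir=LR;"]
    lines += [f'  "{n}";' for n in g["nodes"]]
    lines += [f'  "{a}" -> "{b}";' for a, b in g["edges"]]
    lines.append("}")
    return "\n".join(lines)

print(to_dot(graph))  # feed to `dot -Tpng` for documentation diagrams
```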

Right now, you can only generate the visual graph from the textual
description. In future I want to make it possible to edit the graph visually,
so you can (for example), drag and drop compute nodes onto different computers
really easily; stop, rewind and replay history, and simulate the effects of
e.g. network contention and experiment with moving nodes around the available
hardware to optimise performance.

I'm also experimenting with hybrid symbolic/statistical AI techniques that use
the data flow graph to help with fault finding, automatic safety-case
generation, etc.

------
narrationbox
Story time (and a post-mortem of sorts): a long time ago I built a startup
using Visual Programming for algorithmic trading.

[https://hackernoon.com/on-kloudtrader-and-visual-
programming...](https://hackernoon.com/on-kloudtrader-and-visual-
programming-5161e1e2b89f)

It wasn't the most novel idea or anything but most existing systems were
either clunky, expensive, or had UIs from the previous century. Ours was a hip
SaaS inspired by Robin Hood, Bubble.is, and all the new Bloomberg terminal
clones. We built an initial prototype using Google Blockly, did a ton of UI/UX
research (studied every visual programming system from Sutherland's Sketchpad,
to the Soviet DRAKON, to modern day MindStorms, Scratch, and the various PLC
control systems and LabView derivatives) and slowly built out the rest of the
algorithmic trading stack. It was tremendously difficult, mainly because of
a lack of labor; our small startup had only 3 people. We were essentially
translating by hand entire trading frameworks, backtesting tooling, and
blotters into virtual Lego bricks. It was my first startup and I was
inexperienced. We were fresh-faced, and between trying to raise funding,
sales and marketing, and product development, progress was slow. Patio11's
(Kalzumeus) blog posts on Stockfighter were highly inspirational; we saw
what they managed with only a small team and tried to replicate it. But
between Patrick
and the Ptaceks, they had several decades more engineering and business
experience than us, something we completely discounted. The tooling around
Google's visual programming system was like early Android development, works
in theory but tremendously difficult to use. Microsoft's Makecode (which is
also built on Blockly) had a magnitude more engineering manpower than us.
Visual programming was not easy to build quickly — a production quality system
wasn't something that you can clone in a weekend. We looked towards
automation, around the same time, a code synthesis YC company called Optic
appeared and we strongly considered leveraging them to allow us to build out
faster.

[https://news.ycombinator.com/item?id=17560059](https://news.ycombinator.com/item?id=17560059)

However, a couple months later, YC funded a similar company called Mudrex who
had a prettier UI and a founding team with a stronger fintech track record.

[https://news.ycombinator.com/item?id=19347443](https://news.ycombinator.com/item?id=19347443)

At that point we crossed the Rubicon and pivoted to DevOps/PaaS, launching a
Heroku-style product.

[https://KloudTrader.com/Narwhal](https://KloudTrader.com/Narwhal)

Did the whole tour of Docker Swarm, Kubernetes, KVM etc. Built out our own
cloud almost from scratch. Signed a contract with a broker to offer
commission-free trading and everything. But it was a difficult product to
sell in a
crowded enterprise market with only a few (but big) customers and we were
playing catch-up with companies like Alpaca, our product was being eaten from
the corners by new features launched by companies like Quantopian and
Quantconnect. Quantopian was where I cut my teeth on computational finance and
automated trading, it was what inspired me to build a fintech startup in the
first place, so in many aspects, our product being displaced was a validation
of product-market fit if nothing else. In retrospect, at that time we should
have switched to the ML Ops market instead, which is booming right now.
Algorithmic trading, or at least the consumer focused variety that we were
trying to sell, had a stack that is very similar to your usual ML stacks. In
those two years I learnt about enterprise sales and the various shenanigans
involved in FINRA and SEC compliance, which was tremendously valuable in
terms of growth.

These days however we are mainly doing productivity software for voiceovers
and transcriptions with a bit of ML thrown in (voice cloning research). Fast
growth, easy traction, great market. Not as lucrative as fintech, or at least
a lower ceiling, but at our current rate of growth I am certainly not
complaining. It is hosted mainly on Google Cloud, AWS, and other providers
(after having to build our own Heroku, we have had enough of DevOps).

[https://narrationbox.com](https://narrationbox.com)

[https://twitter.com/narrationbox](https://twitter.com/narrationbox)

If you are looking for advice or suggestions on building your own visual
programming systems, I am available for consulting services.

(On second thought I should probably turn this comment into a Medium post)

~~~
AlexDanger
Certainly a medium post would be good - I'm impressed by your pivots trying to
find a fit with the visual programming paradigm.

What is your opinion on visual programming and diagramming libraries? Would
you recommend particular libraries/services for visual programming if looking
to build this type of UX/UI into a new product?

I'm interested in building a service that utilises a basic visual programming
interface but I have no interest in building out a complex visual programming
/ diagramming UI toolset myself.

Do you consider the visual programming aspect a commodity or a selling
feature? Or is that dependent on the market you are selling into?

~~~
narrationbox
There is a range of what goes for "visual programming" these days. If you can
tell me what market you are trying to target I can give you a better idea of
things (or email me if you don't want to post it here). A lot of visual
programming-esque products are not obvious visual programming systems but use
the same
underlying designs and patterns. E.g. take Lobe AI (acquired by Microsoft and
their features are now in Azure Machine Learning), their neural network
pipelines are quite far from your LabView style imperative control structures
but the underlying UI pattern of data flowing across nodes linked by splines
is something that has been replicated across generations of game engines and
other design software. Zapier, GitHub Actions, Retool. Many a unicorn has
visual programming as a central part of their product. However, they are not
advertised as such, you do not see the same marketing language as Scratch or
Simulink. If you look at Trigger Finance or IFTTT, the word programming never
comes up. From a business/marketing point of view, it is probably for the best not
to use the word "programming" to describe your product or UI (unless it is
educational software of sorts). Excel is fairly successful but most users do
not really consider it to be "programming" even though it essentially is.

Building a visual programming language in 2020 is fairly trivial and
straightforward if your environment is the browser. There are a ton of
libraries and open source reference projects out there: Xod, n8n, Makecode,
Rete.js, Blockly. On the engineering side, as long as you do not try to map a
mainstream programming language with its associated frameworks directly into
visual programming, one-to-one on the syntax level, but instead build wrappers
and simplified interfaces around it then it should work out pretty well.

------
golergka
Unreal blueprints and Unity playmaker are doing great, by the way.

------
aylmao
I've thought about this, and I think I have a general sense as to why.

First, let's look at the size of the keyboard in relation to the screen: it's
huge. In most laptops, it is about half the size of the screen. Keys are
easy to use: they all work the same way, and we all grow up knowing the
letters. You can press several at once, press them in quick succession, plus
they are huge and haptic. You don't have to be precise with how you press one,
and you feel it when it presses: your eyes, fingers and ears are all telling
you that the information you're transmitting is making it to the screen.

You effectively have a very big, very good, dedicated toolbar and the rest of
your screen is either your canvas (the text area) or can be used for other
tools to augment your programming.

With visual programming languages, you have to reserve part of this screen
real estate, for the input. It's like having to put your keyboard on screen:
leaves you less space for the canvas or additional tools. Moreover, these UI
elements are often smaller than the keys in a keyboard, and hovering/clicking
something with a mouse doesn't provide haptic feedback. All this means a
little more mental effort when composing with the mouse, and doing all the
aiming, clicking, dragging and dropping. It's more "precise" and "delicate"
movements that require more "attention", if you will.

On top of that, consider that no visual programming environment is ever
"fully" visual; there's always typing involved at some point or another.
You have to enter a number, a string, the name of a variable, etc. All this
switching between the keyboard and the mouse is even more movement, more
attention, more cognitive load to the layout of ideas. There is a reason all
these modern IDEs have a "VIM mode"— you'd think our new tools would replace
an older, more complicated way of doing things, but VIM has managed to survive
in the hearts of experienced programmers because it allows them to type
without reaching to the mouse.

Let's delve into this. For an inexperienced programmer, "wording" the idea
(remembering the syntax, etc) probably takes enough time that laying it out is
not the bottleneck. For an experienced programmer, finding the idea is the
bottleneck, but once found, wording it is quick, and so laying it down becomes
the bottleneck. Being able to express things quickly becomes important.

Moreover, revisiting this "finding the idea" and "wording the idea": because
wording the idea is fast, an experienced programmer might write while they're
still looking for the right idea, as a way of "thinking out loud". They will
type something, delete it, type something else, backspace repeatedly, etc.
Seeing it in front of them helps the idea materialize, kind of how a musician
might play notes while thinking of a melody, or how digital tools allow
artists to paint strokes and undo as they draw [1: notice in this link how a
digital artist works. They're constantly drawing and erasing. It's the first
video I found when searching "painting digital"].

This is harder and slower to do with current visual tools.

[1]:
[https://youtu.be/1lemi11CgLs?t=1200](https://youtu.be/1lemi11CgLs?t=1200)

~~~
aylmao
To follow up on my last two paragraphs, I think this is why visual programming
has proven successful in some fields. In computer graphics, a collection of
nodes with visual controls is perhaps a faster way to "think out loud" than
jumping across different lines in a file to change number values or colors.

In audio, an interface mimicking a rack [1] where you can, again, turn knobs
or connect different signals visually is faster to "think out loud" in than
typing it out.

Overall, it comes down to enabling the creative process in the most
frictionless way, whether or not it's the most approachable way at first.

[1]: [https://youtu.be/tMh3ZEFyEmo?t=50](https://youtu.be/tMh3ZEFyEmo?t=50)

------
toxicFork
It's very early days.

------
anotheryou
2D space is quite limited in arrangement I think (and with 3D you always have
occlusion).

There are very established graphical programming languages in the art-scene.
They are easy to get in to, but become messy for complex projects.

For the art stuff, I think they are a bit like Jupyter notebooks, but even
better: everything compiles in real time and you can see it working. On the
other hand, it quickly becomes a spaghetti mess..

For everything else you need things with a limited scope: a single state to
track (workflows, conversations) or really basic logic and strong abstractions
(mangling two APIs together). Beyond that you need a programmer anyway, and
their gain from a GUI is limited.

Generally there are a few things hard to represent, e.g. abstracting and
recycling code (writing functions), parallel processes, state and highly
interconnected things.

A few examples for the curious:

For Visuals:

\- quartz composer (old by now) [http://www.mactricksandtips.com/wp-
content/uploads/2008/03/n...](http://www.mactricksandtips.com/wp-
content/uploads/2008/03/networked-quartz-composer-patches.png)

\- touch designer (modern, very nice nesting, you can zoom in to groups):
[https://youtu.be/hbZjgHSCAPI?t=49](https://youtu.be/hbZjgHSCAPI?t=49)

Music:

Pure Data and Max/MSP (not strictly just for music):
[https://youtu.be/rTQgfhsQ7xo](https://youtu.be/rTQgfhsQ7xo)

Bitwig Grid (very skeuomorphic, yet one of the modern "modular" ones. I dig the
look of the droopy bezier curves though):
[https://youtu.be/dNdhbHGeHPw](https://youtu.be/dNdhbHGeHPw)

More recent, and really interesting to me, are the "no-code" environments that
are now gaining traction.

Business logic:

BPMN + Camunda (you still need to code everything in text, but you can shuffle
the flow around afterwards):
[https://youtu.be/HxtZf5VD6lQ?t=625](https://youtu.be/HxtZf5VD6lQ?t=625)

No-code API plugging:

Appmixer: [https://uploads-
ssl.webflow.com/5a9d00dba5e9fa00010cb403/5c8...](https://uploads-
ssl.webflow.com/5a9d00dba5e9fa00010cb403/5c8b82a6e9c13e6c33e03b7a_AM-2019-Screenshot-2.png)

AI-assisted chatbots:

Cognigy: [https://youtu.be/QSJ-nTwjn-c?t=1525](https://youtu.be/QSJ-
nTwjn-c?t=1525)

------
ajuc
I worked on a warehouse management system where we moved from C++/PLSQL to a
J2EE solution with the jBPM 3 library (a Java library that implements a
graphical language - it looked like this [1]). It was a very good fit for our
system, but it still had lots of problems.

The tooling was pretty bad (it was a plugin for an older version of Eclipse
and had pretty bad bugs). We added our own functionality to that plugin, and
those were big usability wins, but they also made the plugin even less stable.

Sometimes the graphical editor would "desynchronize", and if you hadn't
noticed you could be editing the process in the graphical editor for 2 hours
while the XML file remained unchanged (or even broken).

So developers became paranoid, switching to the XML view every few seconds to
check, or they just edited the XML directly all the time :)

We had problems with several things that are solved in regular languages. For
example, there were subprocesses (the analog of functions) - you could invoke
a subprocess, pass arguments, wait for it to end, and receive return values
from it. That solved the problem of duplicating common functionality, but it
also required us to support pretty complex mappings when invoking
subprocesses.
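Those subprocess variable mappings are essentially dictionaries copied on the way in and out. A rough sketch of the shape (all names invented; this is not the actual jBPM API):

```python
# Sketch of jBPM-style subprocess variable mapping (names invented).
# Outer process variables are copied into the subprocess scope on invoke,
# and selected subprocess results are copied back out on completion.
def invoke_subprocess(outer_vars, in_mapping, subprocess, out_mapping):
    # in_mapping: outer variable name -> inner variable name
    inner_vars = {inner: outer_vars[outer] for outer, inner in in_mapping.items()}
    result = subprocess(inner_vars)
    # out_mapping: inner variable name -> outer variable name
    for inner, outer in out_mapping.items():
        outer_vars[outer] = result[inner]
    return outer_vars

def check_stock(vars):
    # Stand-in for a subprocess: returns its own variable scope.
    return {"available": vars["qty"] <= 10}

outer = {"order_qty": 3}
invoke_subprocess(outer,
                  in_mapping={"order_qty": "qty"},
                  subprocess=check_stock,
                  out_mapping={"available": "in_stock"})
print(outer)  # → {'order_qty': 3, 'in_stock': True}
```

Once logic starts accumulating in mappings like these, the graph view no longer shows who sets which variable; you have to open every node to find out.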

For example, we had a node that runs an HQL query and shows the results in a
table on the user terminal. These queries were parametrized, and some
parameters were constant within each process but varied between processes. So
we added a way to specify these parameters. Then there were some processes
where it would be useful to specify additional query conditions in the
graphical editor. So we added that.

Then it became useful to pass those conditions around between subprocesses,
filling some parameters in some subprocesses while leaving others to be
filled by the app server.

It was much too complex (I am mostly to blame - I was straight out of
university and was amazed by this cool technology and wanted to add all the
bells and whistles I could think of :) ).

What we should have done instead is just duplicate the subprocesses with
different conditions, or construct those conditions in Java code on the app
server and call that from the graphical designer.

Eventually about half of all our business logic was encoded in the mappings
between inner and outer process variables on the nodes that invoked
subprocesses.

The other half was mostly in the conditions on transitions between the nodes.

Neither of these things was visible when you looked at the whole graph -
transitions only showed labels (which were sometimes outdated versus the REAL
condition code), and the subprocess-invoking nodes showed nothing until you
clicked on them.

So when you had a big process with 40 nodes you had to click on each node (and
scroll through sometimes 30 mappings) to see "who set that variable".

Same with transitions - you had to click on each transition to see the
conditions.

We tried to show the conditions in the main view but it wasn't easily changed
in that plugin.

Overall I think the graph language was great, and the automatic transaction
and persistence support were the best things about it, but the visual
programming aspect was rarely useful, and very often problematic.

I would now go the other way - have the graph language be text-first (with
some nice DSL), and render it to a nice graphical view on demand.
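A text-first version could be as small as this sketch (an invented toy DSL, nothing to do with jBPM's actual format): the process lives as plain text under version control, and the diagram is derived from it, for example by emitting Graphviz DOT.

```python
# Sketch of a text-first workflow DSL rendered to Graphviz DOT on demand.
# The text is the source of truth, so an editor can never desynchronize
# the diagram from the underlying definition.
def parse(dsl):
    """Parse lines like 'a -> b [condition]' into (src, dst, label) edges."""
    edges = []
    for line in dsl.strip().splitlines():
        head, _, label = line.partition("[")
        src, _, dst = head.partition("->")
        edges.append((src.strip(), dst.strip(), label.rstrip("] ").strip()))
    return edges

def to_dot(edges):
    """Render edges as a Graphviz digraph; conditions become edge labels."""
    lines = ["digraph process {"]
    for src, dst, label in edges:
        attr = f' [label="{label}"]' if label else ""
        lines.append(f'  "{src}" -> "{dst}"{attr};')
    lines.append("}")
    return "\n".join(lines)

dsl = """
receive_order -> check_stock
check_stock -> ship [in_stock]
check_stock -> backorder [not in_stock]
"""
print(to_dot(parse(dsl)))
```

Because the transition condition is literally the edge label in the text, the rendered diagram can't drift out of sync with the real condition code the way the plugin's labels did.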

Better tooling could probably help, but we spent maybe 3 man-years on fixing
and improving that plugin and it never worked great.

[1]
[https://docs.jboss.org/tools/whatsnew/jbpm/images/multiple-p...](https://docs.jboss.org/tools/whatsnew/jbpm/images/multiple-
processes-in-folder.png)

