
Visual Programming Doesn’t Suck - mathgenius
https://blog.statebox.org/why-visual-programming-doesnt-suck-2c1ece2a414e
======
gpm
Grab bag of miscellaneous thoughts on this topic:

- Visual computing is pretty heavily used: Unreal Engine, Blender, Houdini,
etc. all have a very similar node-based visual programming system. It seems to
work pretty well (better than text) for most of what they use it for, I think
because of the ease of jumping into the middle and making small, understandable
changes.

- Many programming languages today have a format that is like <tree of files,
each containing> <set of items, each containing> <list of expressions>. It
would be nice if that <set of items> step was treated as an unordered set
instead of an ordered one, with editors having a better understanding of how
to bring relevant elements of the set onto your screen at the same time.
Split-pane editors, "peeking" at code definitions in editors, etc. hint at how
this should work, but I don't feel like they do it as well as possible.

- A small amount of "visual augmentation" might benefit most programming
languages. I'm not sure I can explain this better than linking to a few images
of what the Emacs AUCTeX package does to LaTeX:
http://lh6.ggpht.com/_egN-3IJO0Xg/SpIj6AtHOTI/AAAAAAAABj8/O1aKgTb-Njw/preview-latex.png
https://upload.wikimedia.org/wikipedia/commons/4/42/Emacs%2BAucTeX.png

~~~
_yawn
I feel like a lot of users on HN are producers rather than users of software,
and haven't used node-based systems like those in Blender or Nuke. They are
extremely productive, and end up being very similar to functional programming
while being super easy to pick up. It's a great "in-between" representation for
non-programmers that really need to do domain-specific programming.

~~~
mattnewport
Experience with tools like Blender or Nuke and particularly with visual
programming in games engines is actually where a lot of the better informed
dislike of visual programming comes from.

The biggest problem with these tools is scalability and maintainability. You
will hear many stories in the games and VFX industries of nightmare literal
spaghetti code created with visual programming tools that is impossible to
debug, refactor or optimize.

Visual programming seems easy for very small examples but it doesn't scale. It
has no effective means of versioning, diffing or merging and usually lacks
good mechanisms for abstraction, reuse and hierarchical structure. It doesn't
have tooling for refactoring and typically lacks tooling for performance
profiling.

Some of these problems seem to be more fundamental and others like they could
potentially be addressed with better tooling but that tooling never seems to
emerge.

I've got a lot of experience with shader programming and have never found node
based shader editors to be better than text over the long term, although there
are some nice visual debugging features which are rarely implemented in text
based IDEs (though I have seen it done). I've also found visual scripting to
all too frequently get out of hand and have to be replaced with code due to
being unmaintainable, undebuggable or unoptimizable.

I think there is possibly fertile terrain to explore in trying to get some of
the benefits of visual programming approaches while avoiding all these
downsides but many of us have been burned enough to be very skeptical of the
majority of visual programming systems that don't even try to fix the worst
problems.

~~~
candiodari
You shouldn't generalize like this. Even in massive projects visual
programming can be enormously helpful. Nobody seriously tries to script game
levels in an IDE. Nobody tries to match sprite parameters, which eventually
become code, to image files in an IDE. Nobody tries to design GUIs in IDEs.

The right tool for the right job. For 50% of a game's code visual programming
is absolutely the right tool. For some other parts it probably isn't.

~~~
mattnewport
You seem to be using a different definition of visual programming than the
normal one. The article and my comment are talking about visual programming as
a graphical representation of code / logic rather than a textual one. Examples
would be things like Unreal's Blueprint visual scripting, node based shader
editors (Unreal has one of those too) or the Petri nets described in the
article. Visual / GUI tools like sprite editors, level editors or GUI builders
are not what is usually meant by visual programming.

Unity is a very popular game engine that doesn't have an official visual
programming solution (they're previewing one in the very latest version).
Unity has a powerful level editor that is used to lay out the levels in a GUI
tool but no visual scripting / programming tools. The majority of Unity games
that currently exist therefore do all level _scripting_ in C# code. Many other
games engines have no visual scripting solution and all level scripting is
done in either a scripting language like Lua or in some cases in C++ code.
Unity has sprite editors, visual GUI builder tools etc. but those are not what
is generally meant by "visual programming". The closest Unity has had until
recently was its graphical animation state machine editor.

~~~
wires
awesome comments, really cool to read all of this.

anyway, I would argue both are valid examples of graphical programming, but
they happen at different levels.

the "node based" tools usually define some sort of function or system, ie. a
"type"; for this you need category theory to describe how the diagrams look,
and this is not what any editor I know does, but it makes a whole world of
difference.

And the map editors are for defining "terms of a type": given a definition of
a "map datatype", there is a graphical way to edit it.

when we talk about graphical programming we are initially focusing on the
first: well-defined graphical protocol definitions. you can think of it as
type-checked event sourcing, where the "behaviour" or "type" is described by a
(sort of) graph representing a (sort of) state machine.

but we have relatively clear ideas to extend this to the second case as well.

The difference with other (older) approaches is that in the last 20 years a
lot of mathematics appeared dealing with formal (categorical) diagrams, proof
nets, etc. that we leverage. I claim we (the world) now finally really
understand how to build visual languages that do not suck.

hence statebox :-)

~~~
mattnewport
Things like a GUI designer or level editor map a 2D or 3D domain to a 2D or
3D-projected-to-2D space. A 3D animation editor maps a 4D domain to a 2D
projection of a 3D representation plus a timeline representing the 4th time
dimension. These mappings are natural, intuitive and work well generally.

Visual programming tools attempt to map logic to a (usually) 2D domain where
there is no natural or intuitive general mapping. The representation has both
too many degrees of freedom (arbitrary positions of nodes in 2D space that are
not meaningful in the problem domain) and too few (connections between nodes
end up crossing in 2D adding visual confusion due to constraints of the
representation that don't exist in the problem domain).

I've been exploring colored Petri nets for our product and they do seem to
have promise for certain use cases though so I do think it's an interesting
area to explore.
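For readers unfamiliar with the formalism, here's a minimal sketch of how an (uncolored) Petri net fires; the names and the toy vending-machine net are my own illustration, not anything from Statebox:

```python
# Minimal (uncolored) Petri net sketch: places hold token counts, and a
# transition fires when every one of its input places has a token.

def can_fire(marking, transition):
    """A transition is enabled when each input place has at least one token."""
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    """Consume one token from each input place, produce one in each output."""
    inputs, outputs = transition
    if not can_fire(marking, transition):
        raise ValueError("transition not enabled")
    new = dict(marking)
    for p in inputs:
        new[p] -= 1
    for p in outputs:
        new[p] = new.get(p, 0) + 1
    return new

# A toy vending-machine net: insert a coin, then dispense.
marking = {"idle": 1, "coin_inserted": 0, "dispensed": 0}
insert_coin = (["idle"], ["coin_inserted"])
dispense = (["coin_inserted"], ["dispensed"])

marking = fire(marking, insert_coin)
marking = fire(marking, dispense)
print(marking)  # {'idle': 0, 'coin_inserted': 0, 'dispensed': 1}
```

Colored nets extend this by attaching typed data to the tokens, which is where the use cases mentioned above get interesting.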

~~~
Nigredo
> Visual programming tools attempt to map logic to a (usually) 2D domain where
> there is no natural or intuitive general mapping. The representation has
> both too many degrees of freedom (arbitrary positions of nodes in 2D space
> that are not meaningful in the problem domain) and too few (connections
> between nodes end up crossing in 2D adding visual confusion due to
> constraints of the representation that don't exist in the problem domain).

In general this is true, but the diagrams we use at Statebox are different in
the sense that there is a completeness theorem between the diagrammatic
language and an underlying mathematical structure (a category). In this case
the mapping is sound by definition.

Also, it is worth stressing that our diagrammatic calculus is topologically
invariant, meaning that the position of diagrams in space is meaningless; all
that matters is connectivity. This is also the approach originally used by
Coecke and Abramsky in the field of Categorical Quantum Mechanics, which has
seen huge success in defining quantum protocols :)
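As an illustration of what "only connectivity matters" means in practice, here's a toy sketch (my own simplified encoding, not Statebox's actual one) where two layouts with different canvas positions compare equal:

```python
# Sketch of "topological invariance": a diagram compares equal by its
# connectivity alone, ignoring where boxes happen to sit on the canvas.
# Wires are treated as unordered pairs here, since direction is already
# implied by the ".out" / ".in" port names.

def connectivity(diagram):
    """Strip layout: keep only the set of wires between named ports."""
    return frozenset(frozenset(wire) for wire in diagram["wires"])

layout_a = {
    "positions": {"f": (0, 0), "g": (100, 50)},    # arbitrary canvas coords
    "wires": [("f.out", "g.in")],
}
layout_b = {
    "positions": {"f": (300, 300), "g": (10, 10)}, # boxes moved around
    "wires": [("g.in", "f.out")],                  # same wire, listed reversed
}

assert connectivity(layout_a) == connectivity(layout_b)
print("same diagram up to layout")
```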

------
mimixco
This idea comes back around every few years. It was most popular in the 80's
when it was called CASE, Computer Aided Software Engineering. Since it's been
around for a while, we have to ask why it hasn't taken off in a more
mainstream way.

I think the best answer is that text, being more dense, is actually the
_simpler_ way to represent a _complex_ program. Big applications written in
diagrams tend to wind up being harder to read than the equivalent in text.
Visual diagrams are also difficult to search, scan, or replace
programmatically.

~~~
coliveira
I don't agree that text is the simplest way to represent a program. In fact,
from experience in other areas of knowledge, the opposite is true. Math and
physics have evolved from the use of pure text to the use of diagrams and non-
textual symbols. The problem with CS is that we don't have a shared, simple
way to represent symbols and images. We feel that text files are simpler
because they have become the universal way to represent computer code, and
practically all the tools we have are designed to work with textual
representations.

~~~
maxk42
Well, perhaps it's mathematics that could benefit from an update to its
representation. Mathematical symbols have evolved over hundreds of years and
aren't really suited to modern systems of representation. When it comes to
computers, is it really easier to look up the unicode symbol '∩' or its LaTeX
representation when you're trying to write 'A ∩ B' -- or would it be better
to begin noting mathematics online in a portable way such as: (intersection A
B)

~~~
zozbot123
One of the reasons for infix notation in math is actually that it provides a
'visual' reminder of useful properties such as associativity, and possibly
others e.g. commutativity or distributivity. If all we ever used was a strict
LISP-like, function-based notation, such a reminder would be lost and
understanding or manipulating non-trivial expressions would be quite a bit
harder. The effort in OP is actually a way of generalizing this idea to
broader settings, where one is dealing with something more complex than a
single domain of number-like values, and a handful of operations on them. This
is arguably how one should think of "graphical linear algebra" as well: the
'diagrams' one's dealing with there can be thought of as generalized
expressions, so there's nothing overly strange in being able to manipulate
those formally according to well-defined rules of some sort.
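To make the point concrete, here's a quick demonstration of what a flat infix chain silently encodes (plain Python, with `operator.add`/`operator.sub` standing in for the LISP-style notation):

```python
# Infix "a + b + c" can drop parentheses because + is associative; a
# strictly prefix notation has to commit to one nesting up front.
from operator import add, sub

a, b, c = 2, 3, 4

assert (a + b) + c == a + (b + c)           # associativity: grouping is irrelevant

# Prefix/LISP-style forms of the same expression -- the nesting is explicit:
assert add(add(a, b), c) == add(a, add(b, c))

# Subtraction is NOT associative, so the flat infix chain "a - b - c"
# silently encodes left-association:
assert (a - b) - c != a - (b - c)
assert a - b - c == sub(sub(a, b), c)
```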

~~~
kazinator
> _' visual' reminder of useful properties such as associativity, and possibly
> others e.g. commutativity or distributivity_

These are semantic properties that most binary operators do _not_ in fact
have.

E.g. C syntax:

      A << B
      A = B
      A - B
      A / B
      A % B
      A < B

------
analog31
I used visual programming many years ago, in the form of LabVIEW, and
encountered the following issues:

1. The sheer physical labor involved in creating and maintaining programs. I
was going home each evening with severe eyestrain and wrist fatigue, due to
the fine mouse work and clicking through menus.

2. Programs are more readable until they get bigger than one screen, then all
hell breaks loose. You can arrange things in sub-programs, and use the
equivalent of subroutines / classes. These are good techniques in any
language, but they compound the physical trauma problem exponentially.

On a separate note, I wonder if text based languages persist because it's just
easier to create them. As a result, people are more likely to experiment with
new languages, libraries, and so forth, if the format of a program, and its
inputs and outputs, are text. If you want to invent a new graphical language,
you have to create a full blown graphical manipulation package, and make it
work on multiple platforms, just to get started. That's a huge amount of work,
and it doesn't necessarily attract the same people who are interested in
language development. The result is a more vibrant pace of development in
languages if you're willing to give up graphical representation.

~~~
Scarblac
I also spent four years long ago with LabVIEW.

Also, 3: you really miss source control tools that are smart enough to merge
changes by different people in different branches. And usable diffs.

------
lmm
Doesn't it?

The Fibonacci diagram isn't any clearer than the code. The Petri net animation
seems as likely to obscure as it is to enlighten.

I think there's space for making better use of graphical environments, and
modern IDEs are already stepping up this kind of capability - code folding,
mouseover hints, small automatic parameter annotations. I still haven't seen
any case for visual programming.

~~~
radiorental
> I still haven't seen any case for visual programming.

Aerospace/defense, automotive, etc. all use some form of visual programming.
Controls engineers rarely write code; the systems are way too complex. E.g. I
highly recommend watching this video from JPL to give you an understanding of
where such tools excel. It's about simulating, iterating and then having
scientists and engineers autogenerate the code they couldn't possibly write or
test:

[https://www.youtube.com/watch?v=Ki_Af_o9Q9s](https://www.youtube.com/watch?v=Ki_Af_o9Q9s)

~~~
lioeters
I'm curious to see what you mean, but the link goes to "The Challenges of
Getting to Mars" which doesn't show any example of visual programming by
controls engineers?

~~~
radiorental
I don't have access to JPL's models obviously, but here's a simple student
project 'reverse engineering' the rovers in some of the same tools JPL uses.
Note the control diagrams towards the end. These are the very same languages
used to design & program all but a handful of cars, and used by every plane
manufacturer, to name a few applications:

http://www.multibody.net/teaching/msms/students-projects-2012/mars-rover/

E.g. cruise control

[http://www.cds.caltech.edu/~murray/amwiki/index.php/Cruise_c...](http://www.cds.caltech.edu/~murray/amwiki/index.php/Cruise_control)

------
mschaef
Many years ago, I spent a fair amount of time coding in LabView. This is a
graphical programming environment initially written to allow (presumably
electrical) engineers to write code to drive various data acquisition and
control tasks. (I'm almost entirely a software guy, so I'm not the target
audience.)

The general approach LV takes is to model computation as a data flow graph.
Constructions like iteration, selection, etc. are (were?) all modeled as
rectangular regions within the graph where portions of a graph can be swapped
out for others or run multiple times. Graphs can also be nested to provide a
means of abstraction. Execution has gone through changes over the years, but
it's efficient: compiled to machine code with LLVM, and there are also
versions that compile LV code to run directly on FPGA's (on some of the
hardware products sold by the same company). It also takes advantage of the
implicit parallelism that sometimes crops up in data flow graphs.
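To illustrate the model (this is my own toy sketch, not how LV actually executes), a dataflow graph can be evaluated by repeatedly running every node whose inputs are ready; each such batch is exactly the implicit parallelism mentioned above:

```python
# A toy dataflow evaluator in the spirit of LV's model: each node lists its
# inputs, and nodes whose inputs are all ready are mutually independent.

def evaluate(graph, inputs):
    """graph: name -> (fn, [input names]); inputs: name -> value."""
    values = dict(inputs)
    pending = dict(graph)
    while pending:
        # Every node in `ready` depends only on already-computed values, so
        # this whole batch could run concurrently (implicit parallelism).
        ready = [n for n, (_, deps) in pending.items()
                 if all(d in values for d in deps)]
        if not ready:
            raise ValueError("cycle or missing input")
        for n in ready:
            fn, deps = pending.pop(n)
            values[n] = fn(*(values[d] for d in deps))
    return values

# (x + y) * (x - y): the add and subtract nodes are independent of each other.
graph = {
    "sum":  (lambda a, b: a + b, ["x", "y"]),
    "diff": (lambda a, b: a - b, ["x", "y"]),
    "prod": (lambda a, b: a * b, ["sum", "diff"]),
}
result = evaluate(graph, {"x": 7, "y": 3})
print(result["prod"])  # 40
```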

All in all, LV is theoretically quite impressive. (And since it's been sold
for I think over thirty years, commercially quite impressive too.)

As a software engineer, though, I never fully acclimated to the way it worked.
If laying out textual code is a challenge, laying out a 2D graph is much
worse. The same thing goes for defining sub-graphs - 'naming a function'
becomes the much worse problem of 'drawing an icon', or maybe even drawing a
family of icons with a common theme. (Although I think LV's been extended with
a nicer icon editor to help with this.) Input is similarly a challenge...
textual tasks that can be split across two hands and ten fingers become
focused on a single hand/finger. (I had to rethink my input devices both
during and after the time I was using LV to avoid RSI issues.) And there are
also issues with source code control. LV has some stuff baked in, but there
are many years of industry-wide experience managing textual representations of
code and some good tools for doing it. Switching to a different representation
for code means, necessarily, deviating from that base of wisdom and practice.

So, while I think it's a powerful tool (and something more engineers should be
familiar with), it's nothing I'd want to do my daily work in.

~~~
aDfbrtVt
As someone who is part of the target audience of LV, every experience I've had
with the software has been terrible. I guess it's mostly aimed at industrial
automation applications, but the development environment is buggy and coding
with rectangular regions gets old fast.

I anecdotally see people moving towards Matlab and Python for automation these
days, though it's harder without the incredible amount of hardware support
provided by NI.

~~~
mschaef
> I anecdotally see people moving towards Matlab and Python for automation
> these days,

For a while, NI provided tools in that space too. There was a product called
LabWindows, which was centered around C, and a product called Hi-Q that I
remember as being similar to Matlab. I assume that the non-LabView story these
days is mostly a public API combined with other people's development tools.
(At least that's what I'd hope it would be, given the expense of developing
programming language tooling.)

> though its harder without the incredible amount hardware support provided by
> NI

Agreed... the hardware offerings are rather amazing and growing every day
(even into some fairly specialized and high-end domains).

------
gilbetron
Maybe we should spend a few thousand years coming up with a compact,
information-dense manner of expressing thoughts. We could call these
individual units something like "glyphers" or something, and we could combine
them into more meaningful expressions.

Text is a graphical, visual representation. While there are sometimes
alternate ways of expressing things, this idea that text is not visual is
weird. "Non-textual representations" is a better term, because we already have
a rich, complex capability in good ole symbols.

~~~
m_mueller
while text is 2D, together with rich formatting options, program code is only
1D. have a look at the 'subtext' programming language concept that combines
tables and graphs together with text based procedures, finally getting away
from the constraints of program code that is designed around teletypes.

[https://en.wikipedia.org/wiki/Subtext_(programming_language)](https://en.wikipedia.org/wiki/Subtext_\(programming_language\))

~~~
taeric
This seems wrong. A view of your program is 1d. Conceptualizing a program is
often n-dimensional. This is a large part of the difficulty. And is why some
visualizations work. They effectively act as dimensional reductions, and draw
on known visual metaphors.

~~~
m_mueller
That was exactly the point I was trying to make. If you have a 2D problem and
you want to represent it in today's commonly used code you either

1) flatten it down to 1D (e.g. a table becomes a JSON array of objects)

2) move it into a higher dimensional structure like a database. Now you have
two problems.
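Option 1) in concrete terms, assuming a plain Python/JSON rendering: the table survives as data, but its two-dimensional layout collapses into a one-dimensional stream of characters:

```python
# A 2D table flattened into a JSON array of objects. The rows round-trip
# fine, but the serialized form is a 1D sequence of characters.
import json

table = [
    {"name": "ammeter",   "channel": 1, "unit": "A"},
    {"name": "voltmeter", "channel": 2, "unit": "V"},
]
flat = json.dumps(table)

assert json.loads(flat) == table  # data preserved
print(type(flat).__name__)        # str -- the tabular layout is gone
```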

If your programming paradigm supports higher dimensions to tackle your
problems, it just gives you a higher level platform to start tackling your
problems. Before you could maybe deal with 4 dimensions at the same time at
most, now you can deal with up to 5 or even 6 - we don't yet know what new
solutions to problems smart people could come up with when being given such
tools.

Just as an example, how often do you see binary logic problems in the form of
complex if-then-else procedural structures - what if you could represent two
decision factors in a tabular form and let the IDE work out the missing cases
for you? That's one of the ideas behind subtext.
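A rough sketch of that idea (my own rendering, not subtext's actual mechanism): once the two decision factors live in a table, the missing cases can be enumerated mechanically instead of hiding in nested if-then-else:

```python
# Two boolean decision factors (in stock? paid?) as a decision table.
# An IDE-style check can enumerate the combinations the table forgot.
from itertools import product

rules = {
    (True,  True):  "ship",
    (True,  False): "await payment",
    (False, True):  "back-order",
    # (False, False) intentionally missing
}

def missing_cases(rules, factors=2):
    """Return every combination of factor values the table fails to cover."""
    return [combo for combo in product([True, False], repeat=factors)
            if combo not in rules]

print(missing_cases(rules))  # [(False, False)]
```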

Point is I think we agree - if you think I'm fundamentally wrong I'd like to
know more exactly where.

~~~
taeric
I guess I just don't agree that those are your only two options. Consider: a
nested JSON structure is essentially N-dimensional. Even something as simple
as a list of people is effectively multidimensional. You have the dimension of
the list, and then you have the dimension of the structure representing the
people. Which, itself, may have multiple dimensions.

Consider: a large part of the machine learning literature talks about
hypercubes and the consequences of most distance/volume measures in them.
(Quickly searching,
https://datascience.stackexchange.com/questions/27388/what-does-it-mean-when-we-say-most-of-the-points-in-a-hypercube-are-at-the-bound
talks about this.)

------
Waterluvian
When I was learning how to use a debugger, most of my "uhhh what?" moments
were discovering the next line the debugger would jump to when I clicked "step
into", "step out", "step over" etc.

I remember thinking, "this can probably be visualized as a little graph in a
frame in the corner."

Now that I'm far more comfortable, I doubt I'd use one. But I think visual
tools are a brilliant idea for spanning the gap between beginner and expert.
At some point you just stop using tools and you disable them in your editor.

Bonus anecdote: I vividly remember making a leap at around 9 or 10 years old
from the LEGO Mindstorms visual programmer to writing nonsense AppleScripts
and having one of those emotional floods of epiphany about my power over my
computer.

------
pjc50
People are (rightly) skeptical of this, but I think the first paragraph has
the right idea:

> diagrammatic reasoning in particular, is a formidable tool-set if used the
> right way. That is, it only seems to work well if based on a solid
> foundation rooted in mathematics and computer science

Code is _formal_ : the symbols have a nearly-unambiguous interpretation as a
running program. If there are to be diagrams they must be formal and
systematic in their use of notation.

In particular, the author is right that some control systems and interactions
are best thought of as state machines, and statecharts are the traditional
tool for reasoning about these.
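For instance, a control interaction like the article's ATM reduces to a small transition table; the sketch below (illustrative names, not the article's actual net) makes the easy-to-forget timeout/cancel arcs explicit:

```python
# A minimal explicit state machine for an ATM-like interaction, including
# the error/timeout transitions a complete diagram would need.

TRANSITIONS = {
    ("idle",          "insert_card"): "card_inserted",
    ("card_inserted", "enter_pin"):   "authorized",
    ("card_inserted", "timeout"):     "idle",   # the easy-to-forget arcs
    ("authorized",    "dispense"):    "idle",
    ("authorized",    "cancel"):      "idle",
}

def step(state, event):
    """Follow one transition; unknown (state, event) pairs are hard errors."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition for {event!r} in state {state!r}")

s = "idle"
for event in ["insert_card", "enter_pin", "dispense"]:
    s = step(s, event)
print(s)  # idle
```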

However, the bitcoin ATM example gif in the middle also shows what the
weakness is: that's not a complete diagram! It lacks all the error and early
return states, timeouts etc that would be needed in a real system. Admittedly
this plagues traditional programming as well - qv. exceptions vs. Go style
mandatory error checking.

Trying to do a whole program this way also seems like a bad idea; that's the
hell of UML that was briefly a fad (qv Rational Rose). Maybe we can just
sprinkle a bit in the right places?

Having the diagram represent "control flow" seems to be an anti-pattern, since
it gets bogged down in detail. Either dataflow, event flow or state transition
seems to work better.

I sometimes wonder if the right way forward would be something a bit like the
old "pic" language:
https://www.oreilly.com/library/view/unix-text-processing/9780810462915/Chapter10.html#ch10
; SVG is too complex and too XML to cleanly interleave in a program, but pic
might work well.

~~~
arianvanp
The idea of Statebox (from glancing over the paper on their homepage) is that
you draw architectural diagrams. And then these Petri nets have some kind of
category-theoretical encoding. Which allows us to interpret the high level
architectural diagram to a low level program, by telling what exactly the
nodes and arrows do. These interpretations then fill in the gaps on whatever
the architectural diagram leaves open (like error handling, UI drawing, whatever).

This is similar to free monads, where the choice of specific interpretation of
the DSL is delayed, and even multiple interpreters can exist
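A loose sketch of that delayed-interpretation idea (hedged: this is an ordinary interpreter pattern in Python, not a faithful free monad): the program is plain data, and two different interpreters give it two different meanings:

```python
# The "program" is just data -- a list of instructions with arguments.
program = [
    ("greet", "operator"),
    ("dispense", 50),
]

def run_live(program):
    """One interpretation: actually perform (here: render) each effect."""
    out = []
    for op, arg in program:
        if op == "greet":
            out.append(f"hello, {arg}")
        elif op == "dispense":
            out.append(f"dispensing {arg}")
    return out

def run_trace(program):
    """Another interpretation over the SAME program: just record the ops."""
    return [op for op, _ in program]

print(run_live(program))   # ['hello, operator', 'dispensing 50']
print(run_trace(program))  # ['greet', 'dispense']
```

The choice of interpreter is made last, which is the property the comment above attributes to free monads.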

~~~
zozbot123
> ...you draw architectural diagrams. ...Which allows us to interpret the high
> level architectural diagram to a low level program, by telling what exactly
> the nodes and arrows do

This is not a new idea at all - I'm strongly reminded of the whole Model
Driven Architecture brouhaha back in the 1990s and early 2000s. Needless to
say, it didn't work. You _can't_ do "code generation" from a high-level
architectural diagram and expect to end up with a functioning system! _At
best_, if you _really_ do it right, the high-level design might enable you to
specify _type_-like properties that constrain the low-level implementation in
a broadly helpful way[1]. But you still have to write all the low-level code!

[1] And this is in actuality incredibly optimistic - MDA and UML didn't even
manage _that_! Instead, all the pretty diagrams (1) were inherently fuzzy, so
they did not embody any actual constraints on the implementation, and (2) even
in the best of cases, got immediately out of sync with the ground-level truth
of the actual implementation.

~~~
wires
yep UML and such don't have what we think of as sensible semantics.

> if you really do it right, the high-level design might enable you to specify
> type-like properties that constrain the low-level implementation in a
> broadly helpful way

so this is exactly what we do. we have a general way to specify boxes and
wires, and if you give me some sort of type system and a functor, voila, we
can produce some well-behaved code. nothing is hand-wavy about it, or
"complex", like specialized flags or properties of boxes, just some simple
mappings

------
jakeinspace
My experience with simulink and labview suggests that there can be a place for
visual programming at the architectural level. Taking Matlab modules and
linking them together into some sort of feedback loop pipeline can be very
useful for explaining to non-technical team members, and I find that little
bit of 2d visualization latches itself into my brain much differently than if
it were text. With simulink/Matlab, I think this is partially because Matlab
is such a bad language from a software engineering perspective (it's really
not meant for large programs).

Maybe there's a place for text and visual languages together, I'm not
sure. Sometimes I have the feeling while staring at an editor with 3 text
files on screen that a huge chunk of my brain is sitting on the wayside
collecting dust. Yes, there is some amount of visual/spatial thought involved
in navigating text, and maybe even mapping that text/project onto an abstract
visual space, but I doubt this really takes full advantage of our extremely
powerful visual cortex.

------
mark-r
Yes it does suck. Not for writing but for maintenance.

I once got dumped a BI project that was written in a visual tool. Simple
things like tracing how a field was derived were impossible, because there was
no search function. I could search the XML file it produced, but that was
impenetrable.

~~~
ken
It sounds like you're judging all of visual programming based on one project,
written in one language. You don't even say what language it was.

Am I allowed to condemn all textual programming based on the last textual
program a company asked me to maintain? It had a single function over 1500
lines long, which took over a dozen parameters, had well over a dozen exit
points, re-used variable names and values (sometimes unintentionally) between
otherwise independent sections, and had zero documentation. And I had to fix a
bug in it. "Impenetrable" does not begin to describe it. Text-based
programming is a disaster.

~~~
IshKebab
That is how it is in every visual language though. It's too much hassle to add
comments, so nobody does, and it's too hard to organise things nicely, so
nobody does that either. You end up with literal spaghetti code.

[https://www.google.com/search?q=labview+spaghetti+code](https://www.google.com/search?q=labview+spaghetti+code)

Yes you can make horrible code in text too, but it is a lot easier to not do
that.

------
stared
On the topic of diagrams, I did write a review, "Simple diagrams of convoluted
neural networks":
https://medium.com/inbrowserai/simple-diagrams-of-convoluted-neural-networks-39c097d2925b.

While most people don't write networks visually, diagrams are often the most
effective tools to communicate one's results (whether it is a neural network
architecture or just a single block).

Moreover, in particle physics people use Feynman diagrams a lot. And they are
nothing more or less than a graphical representation of summations and
integrations over many variables.

When it comes to languages, while there are some interesting approaches (e.g.
https://www.luna-lang.org/), the only one I actually used was LabView (in an
optics laboratory, where it is (or at least: was) a mainstream approach). For
some reason, even https://noflojs.org/ didn't gain enough traction.

------
api
I wonder if part of the problem with visual programming isn't I/O?

Coding uses the keyboard. Once you get good at using it a keyboard is a much
faster input device than a mouse or a touch screen, especially for complex
highly structured input like text or code.

Visual programming relies on the mouse or the touch screen so you spend an
inordinate amount of time clicking, tapping, dragging, positioning, etc., all
of which is irrelevant to what you're trying to achieve.

Maybe a visual programming system that used readily learnable keyboard input
or even some novel form of touch panel or mouse input that eliminates the need
to futz around with dragging and dropping and positioning would be the way to
go?

Or... [https://neuralink.com](https://neuralink.com) ??

~~~
cdtwoaway
If you are a very experienced programmer, you program LabVIEW (one of the
major visual languages) almost exclusively with the keyboard (QuickDrop).

Let me show you an example (gif): I press "Ctrl + space" to open QuickDrop,
type "irf" (a shortcut I defined myself) and Enter, and this automatically
drops a code snippet that creates a data structure for an image and reads an
image file.

https://github.com/b-ploetzeneder/MachineVisionCodeSnippets/blob/master/docs/assets/images/quickdrop/3.gif

If you are efficient at this type of input, the
"dragging/dropping/rearranging" is similar to refactoring that you would do in
an IDE.

The only difference is that there is something called secondary notation in
many visual languages (people are not aware of it; I'm only familiar with it
because I've done research on the topic - it is how the code is visually
arranged). How code is arranged is kind of a quality parameter for visual code
(google examples of "spaghetti code"). There are typical patterns that are
instantly recognizable to an experienced user, ways of using distance and
direction to group connected parts..

I actually played with alternative forms of input for LabVIEW, mainly gesture
control and "drawing" on tablets. It sounds like a fantastic idea, but only
for 5 minutes. After that, your hands start to hurt. The reality is that
keyboard and mouse are heavily optimized tools for input (minimal movement of
fingers, and we have lots of muscle memory), and don't restrict you. It's like
saying "I can type xxx words per minute" and thinking that typing faster would
help you code faster..

~~~
wires
I totally agree with you & thanks for that nice GIF! I used LabVIEW a lot and
always enjoyed it... so anyway, we have keyboard input for the diagrams /
blocks, it's very important. also a minimum of wires, they are annoying to draw

------
snidane
You have two camps of people in data processing.

The first camp is concerned with solving problems (domain experts). They don't
really know much about programming, but they do know how to solve problems.
Once in a while someone clever creates a visual tool for them and they become
magically super productive relative to their peers. Be it business process
automation (BPMN, workflow automation), signal processing, simulation
programs, web scraping, even Excel. However, as these people get proficient in
their top-down learning of programming, they start to hit the limits of the
tool. Then you see the typical spaghetti code, because the visual tool lacks
basic programming constructs like loops, functions and conditionals which
would nicely compose the mess away. Additionally it can't scale beyond RAM,
and it is hard to put into version control, because the users are not in
control of the text representation of the objects they work with, even though
the software uses one under the hood.

The second group of people are programmers. They start learning bottom-up,
i.e. from conditionals, loops, functions, threads, etc. to actual problem solving.
They know all the stuff about proper branching, version control, how to
structure the code, programming paradigms, etc. They don't get stuck in
spaghetti code, because they have super composable functional languages, where
any pattern or duplication can be abstracted away as a function.

There is a huge gap between the problem-solvers and program-creators.

Anything which can be represented in a visual language can also be represented
as text. Unfortunately we don't have textual programming languages powerful
or intuitive enough to cater to the top-down folks.

I would go as far as to suggest that our current formalisms are insufficient
for this task. Lambda calculus, for example, is a very bad abstraction for
working with time and asynchronous processes. Workflow automation, where 99%
of CPU time is spent just waiting for real-world tasks, doesn't map to lambda
calculus well. Other formalisms like pi calculus or Petri nets are much better
suited for this, and unsurprisingly the visual programming tools often
resemble a Petri net.

~~~
davidkpiano
> They don't get stuck in spaghetti code

This is quite the assumption!

Bottom-up text-based programming leads to much greater complexity because most
programmers don't properly model their software (e.g., with state machines,
statecharts, Petri nets, activity diagrams, etc.).

But it's not entirely their fault -- code is inherently linear. Mental models
are not - they're graph-based (i.e., directed graphs, potentially
hierarchical). Text-based code is merely trying to shoehorn graph-based mental
models of what the code should do into a linear format, which makes it less
intuitive to understand than a visual approach.

~~~
snidane
> This is quite the assumption!

Correct, it was an exaggeration. Bottom-up programmers are supposed to have
the tooling not to end up with convoluted code, but they somehow manage to do
it anyway.

> Bottom-up text-based programming leads to much greater complexity because
> most programmers don't properly model their software (e.g., with state
> machines, statecharts, Petri nets, activity diagrams, etc.).

I'd argue that these text-based programming languages and computation models
don't correspond to human intuition about solving a problem, and that is the
main problem.

> But it's not entirely their fault -- code is inherently linear. Mental
> models are not - they're graph-based (i.e., directed graphs, potentially
> hierarchical). Text-based code is merely trying to shoehorn graph-based
> mental models of what the code should do into a linear format, which makes
> it less intuitive to understand than a visual approach.

This is one of the limitations of bottom-up code. It is easy to represent
linear program flow. As you point out, though, that is not sufficient:
problems in the real world are graph-based in general.

On the other hand, not all code is linear. E.g. a loop is a typical cycle in a
computational graph, or in a Petri net when you represent a data flow graph.

    
    
      init --> loop body --> end
      |                       |
      \_________<_____________/
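
To make the cycle concrete, here is a small sketch (my own toy illustration, not from the thread) that encodes the diagram above as a directed graph and checks that the back edge really makes the control flow cyclic rather than linear:

```python
# The ASCII loop diagram as a directed graph. The "end" -> "loop_body"
# entry below is my reading of where the back edge points.
graph = {
    "init": ["loop_body"],
    "loop_body": ["end"],
    "end": ["loop_body"],  # the back edge drawn in the diagram
}

def has_cycle(g):
    """Detect a cycle in a directed graph with depth-first search."""
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True           # back edge found: cycle
        if node in done:
            return False
        visiting.add(node)
        found = any(dfs(nxt) for nxt in g[node])
        visiting.discard(node)
        done.add(node)
        return found

    return any(dfs(n) for n in g)

print(has_cycle(graph))  # True: the back edge makes the flow cyclic
```

A visual editor gets this structure for free; a text language has to encode the same cycle with a loop keyword.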

~~~
zozbot123
I'd describe the parent's diagram as a control flow graph, not a data flow
graph. Control flow makes clear the interpretation of that cycle as an
iterative "loop". In data flow, the cycle shown in your parent comment would
instead represent an arbitrary fixpoint: the output of 'end' would be some
value _x_ = end(loop_body(init( _x_ ))). This inherent ambiguity, where the
same constructs are given different semantics, is actually one reason why
visual representations can sometimes be confusing.

The same applies to parallelism - does it represent divergent choice, or a
fork/join structure where independent computations can be active at the same
time? You can't make _both_ choices simultaneously within the same portion of
a diagram! Of course you could have well-defined "sub-diagrams" where a
different interpretation is chosen, but since the _only_ shared semantics
between the 'data flow' and 'control flow' cases is simple pipelining that's
so limited that it isn't even meaningfully described as "visual", it's hard to
see the case for that.
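
The two readings can be sketched in code (my own toy illustration; the function bodies are arbitrary stand-ins for "init", "loop body" and "end"):

```python
# Control-flow reading: the cycle is an iterative loop; the back edge
# means "repeat the loop body until the exit condition holds".
def run_as_control_flow(x):
    x = x + 1                  # "init"
    while x < 10:              # the back edge: keep looping
        x = x * 2              # "loop body"
    return x                   # "end"

# Data-flow reading: the same cycle denotes a fixpoint x = f(x),
# here approximated by naive iteration from an initial guess.
def run_as_fixpoint(f, x0, steps=100):
    x = x0
    for _ in range(steps):
        nxt = f(x)
        if nxt == x:           # converged: f(x) == x
            return x
        x = nxt
    return x

print(run_as_control_flow(1))  # 16
# Newton's step for sqrt(2) has sqrt(2) as its fixpoint:
print(run_as_fixpoint(lambda x: (x + 2 / x) / 2, 1.0))  # ~1.41421
```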

------
dugluak
Regex is one of the things I prefer to visualize

/^[a-zA-Z0-9.!#$%&’*+/=?^_`{|}~-]+@[a-zA-Z0-9-]+(?:\.[a-zA-Z0-9-]+)*$/

Vs.

[https://regexper.com/#%2F%5E%5Ba-
zA-Z0-9.!%23%24%25%26%E2%80...](https://regexper.com/#%2F%5E%5Ba-
zA-Z0-9.!%23%24%25%26%E2%80%99*%2B%2F%3D%3F%5E_%60%7B%7C%7D~-%5D%2B%40%5Ba-
zA-Z0-9-%5D%2B%28%3F%3A%5C.%5Ba-zA-Z0-9-%5D%2B%29*%24%2F)
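
For comparison, Python's re.VERBOSE flag offers a third option between the flat string and the diagram: the same pattern laid out with whitespace and comments (the curly quote in the original is written here as a plain apostrophe):

```python
import re

# The email pattern above in verbose form. In re.VERBOSE mode,
# whitespace outside character classes is ignored and # starts a comment.
EMAIL = re.compile(r"""
    ^
    [a-zA-Z0-9.!#$%&'*+/=?^_`{|}~-]+    # local part
    @
    [a-zA-Z0-9-]+                       # first domain label
    (?: \. [a-zA-Z0-9-]+ )*             # further dot-separated labels
    $
""", re.VERBOSE)

print(bool(EMAIL.match("user.name+tag@example.com")))  # True
print(bool(EMAIL.match("no spaces allowed@example")))  # False
```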

~~~
wvenable
I had slightly more difficulty understanding the diagram than the regex
string. But then I'm pretty fluent in regex and that visual representation was
new to me.

The big advantage I can see is isolating the visual noise. I'm now surprised
that we don't have syntax coloring for regex built into our editors...

~~~
Retra
Syntax coloring for regex adds noise. When you're reading the non-regex code,
having rainbow strings interspersed is not helpful. The best way to approach
this problem is to build your regexes in a dedicated regex-building
environment and then move them into your code as text. Integrating them inline
is not great. (But some people love IDEs, so maybe if you click on a string it
could open up a regex editor...)

~~~
wvenable
The issue with that regex and others I've used is that you might be matching
on characters that are also regex control characters. In that case, it would
be nice to know whether you're looking at a control character or a properly
escaped value. Two colors would be sufficient.

But I agree a full regex builder sort of makes that entirely moot.
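
A toy sketch of that two-color pass (my own illustration, not an existing editor feature; real escape sequences like \d are themselves metacharacters and would need extra cases):

```python
# Tag each regex token as a metacharacter or a literal. Backslash
# escapes are treated as literals here, which is only approximately
# right -- good enough to show the two-color idea.
META = set("^$.|?*+()[]{}")

def classify(pattern):
    tokens, i = [], 0
    while i < len(pattern):
        ch = pattern[i]
        if ch == "\\" and i + 1 < len(pattern):
            tokens.append((pattern[i:i + 2], "literal"))  # e.g. \. matches a dot
            i += 2
        else:
            tokens.append((ch, "meta" if ch in META else "literal"))
            i += 1
    return tokens

print(classify(r"a\.b+"))
# [('a', 'literal'), ('\\.', 'literal'), ('b', 'literal'), ('+', 'meta')]
```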

------
azhenley
I think visual languages have been vastly underexplored. Sure, textual
languages do have some great features (e.g., they can be edited by many, many
different tools).

In academic research, visual languages have largely been used to study non-
programmers performing programming-like tasks (e.g., [1]), but I think that is
just the tip of the iceberg!

[1]
[http://eusesconsortium.org/projects.php](http://eusesconsortium.org/projects.php)

~~~
collyw
Under explored?

When I started out there was talk of Java Beans that could be used to program
visually. Since then I have encountered a number of visual programming tools,
and none of them have been much good and generally don't make things any
easier.

The best visual programming tool I have used was MS Access, in that you could
actually produce something useful without touching a lot of code. But you did
need to understand database design to do anything useful with it.

The Access query builder is a perfect example: it took an understanding of
relational databases to use it. Instead of SQL you would drag and drop the
tables together and set up the conditions in dropdown menus. So basically
swapping typing for dragging and dropping. Overall it was about as much
cognitive overhead as SQL (maybe a little less, for not needing to remember
the exact syntax for a left join).
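
For reference, this is the kind of query the builder generates by dragging tables together, written as the SQL you would otherwise type. A minimal sketch with made-up table names, using Python's sqlite3 so it runs standalone:

```python
import sqlite3

# Two toy tables and the LEFT JOIN syntax the comment mentions:
# keep every customer, even those with no matching order.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 9.5);
""")

rows = con.execute("""
    SELECT c.name, o.total
    FROM customers AS c
    LEFT JOIN orders AS o ON o.customer_id = c.id
    ORDER BY c.name
""").fetchall()
print(rows)  # [('Ada', 9.5), ('Grace', None)]
```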

~~~
azhenley
Plenty of [bad] visual programming tools have existed. They are often tacked
on top of some existing system.

I'm saying that there has been little work that investigates how to design
effective visual programming tools. The visual component has to be a high
priority in the system's design, not just an afterthought.

~~~
arethuza
I once had a demo from an architect at a <well known technology company> where
he freely admitted that the "visual" part of their product was only there as a
sales tool so that they could claim it "didn't require developers" when
demonstrating it to non technical executives.

~~~
collyw
The famous promise of "no developers needed". What non-techies don't realize
is that code syntax is actually a relatively easy part of software development.

~~~
nmjohn
> What non-techies don't realize is that code syntax is actually a relatively
> easy part of software development.

I don't disagree, but it absolutely is the most overwhelming part to newbies,
and is responsible for keeping more people out of the field than pretty much
any other aspect of software development.

------
chasing
It's interesting that a lot of sound programming environments are primarily
visual -- I'm thinking pd, Max/MSP, and Reaktor. Probably because many audio
synthesis concepts trace back to when people literally had to wire hardware
modules together with patch cords.

Anyway, I don't think the problem is that the idea of visual programming
hasn't been around. It's all over the place, if you look. And has been for
ages. The problem is likely that it doesn't solve a problem that developers
who use text-based programming languages have.

My main concern would simply be: What do I lose by going visual?

With text-based code I can read every single character and make sure it's all
precisely how I want it. Would that be easier or harder in a visual
environment?

Or: What if I need to do something not explicitly planned for in the visual
environment? Am I stuck cracking open the visual modules, anyway, and writing
in hacky Javascript or something that then becomes difficult to see because
it's stuffed awkwardly inside of some visual element? (This happens fairly
often in the above-mentioned sound programming environments.)

If the gains can't definitively offset those sorts of costs, then I'm not
going to be very interested.

~~~
jcelerier
> It's interesting that a lot of sound programming environments are primarily
> visual -- I'm thinking pd, Max/MSP, and Reaktor. Probably because many audio
> synthesis concepts trace back to when people literally had to wire hardware
> modules together with patch cords.

Visual sound environments are also tools that are taught to music students in
conservatories, who don't have any programming ability. These are tools to
make sound / art, and people who make sound / art aren't necessarily
programmers.

> With text-based code I can read every single character and make sure it's
> all precisely how I want it.

and when you make art you aren't necessarily looking for precision. Some
people just throw random objects on their patches and wire them almost
randomly until it sounds good; writing the patch is entirely part of the
artistic process. That's a completely different mindset from "client wants
feature X in Y time, what's the easiest way for me to achieve it".

~~~
chasing
> and when you make art you aren't necessarily looking for precision. Some
> people just throw random objects on their patches and wire them almost
> randomly until it sounds good; writing the patch is entirely part of the
> artistic process. That's a completely different mindset from "client wants
> feature X in Y time, what's the easiest way for me to achieve it".

You're right in that art is guided by a different set of goals than client
work -- it's inherently more exploratory.

But it's inaccurate (and unfair) to call it imprecise. (Also unfair to assume
artists work "randomly.") If you're an artist who cares about your work, you
put a tremendous amount of effort into achieving your vision just as you see
it. If a tool fails to work as you need it to, you'll abandon the tool,
whether it's a paintbrush, a chisel, or Max/MSP.

The fact that two major commercial sound programming environments are visual
doesn't necessarily mean people who use them don't understand computers and
are just randomly throwing crap at the wall: It means they work best for the
professionals who use them to get their creative work done. They are, after
all, relatively expensive pieces of software.

(It's also inaccurate to assume artists and musicians don't ever have
programming ability. I'll point to myself as an example.)

~~~
pessimizer
I don't think that art and precision, in the sense the comment you're
responding to means, are compatible. I haven't experienced any artistic
situations that haven't involved throwing random impressionistic shit at the
wall, then precisely shaping, reducing, and exaggerating aspects of that
random shit to make something new.

Precision comes in after the randomness, in the craftsmanship; that's what
differentiates a skilled person from an unskilled person; both can do the
first part.

When you're programming, or participating in any craft that doesn't prioritize
uniqueness or expression, the precision starts from the beginning, though. The
only randomness, sometimes, is _where_ you start, not _what_ you start with.

~~~
wires
also, ideally the precision in the statebox kernel is hidden from the user..
nobody needs to really know about profunctors or monoidal categories, unless
you want to work on the language tooling itself.

anyway, interesting discussion

------
monetus
Pure Data, VVVV - they have shown that data-flow programming works. They even
occasionally look like circuits. Max/MSP, Reaktor, luna-lang - they show there
is real demand for it.

I hope more people bring those languages into the rest of the programming
communities.

~~~
wires
totally what statebox is trying to do, we take some theory to put the diagrams
in the right place and take it from there! thanks for mentioning these tools,
they are all awesome to use

------
vnorilo
As a linear medium, text is good for chronology. Visuals are multidimensional
and much better for topology. It's no accident that many successful visual
DSLs mentioned in the comments are dataflow langs. I also think some of the
pessimism regarding visual languages is due to how much better text editor
tooling is. Still, it would be a good idea to avoid convincing ourselves that
our current local minimum is the deepest of 'em all.

~~~
bsaul
Thanks for putting it in such clear words. I thought about "micro" vs "macro",
but chronology vs topology is much, much better. There are micro algorithms
that I wouldn't try to put in diagrams, and huge data mappings that I would
perfectly understand with simple arrows and boxes.

Are you working in that particular field?

~~~
vnorilo
Thanks for the kind words! My doctorate was about a DSL for musical signal
processors [1]. My current postdoc is about programming interfaces - I hope to
Show HN in a couple of months!

1:
[https://www.researchgate.net/publication/289495172_Kronos_A_...](https://www.researchgate.net/publication/289495172_Kronos_A_Declarative_Metaprogramming_Language_for_Digital_Signal_Processing)

------
empthought
There is a long history of research that the author would benefit from
reviewing and citing, e.g. this article from 1995: "Why Looking Isn't Always
Seeing: Readership Skills and Graphical Programming"

[http://www.cs.toronto.edu/~chechik/courses18/csc2125/paper12...](http://www.cs.toronto.edu/~chechik/courses18/csc2125/paper12.pdf)

------
taneq
Visual programming doesn't suck, but it doesn't really help either. It's still
programming. Non-toy examples are going to be just as tricky to understand and
debug as text-based programs.

I think text scales slightly better, but it could just be that we don't have
the tools to scale graphical programs yet. In the end, though, the hard bit is
figuring out what you want the computer to do and then explaining it correctly
to the computer. The exact symbols you use to explain it are much less important.

~~~
bhargav
I agree. Regarding tooling (and the lack thereof): Unreal Engine has a visual
programming language called Blueprint [1], and part of why it is popular is
that it reduces the initial friction for non-developers and has good tooling
(auto-complete and other IDE-like functionality). So I do think that more
tooling might pave the way for visual programming. Additionally, Blueprints
are conducive to prototyping, so I think there is market value in visual
programming, though I'm still not sure how it "scales", as you mentioned.
There is also a consideration of "focus": I think the Unreal Engine team is
focusing more on Blueprints, which has made some people feel like it's hard to
GSD [2] with C++ as the support/documentation is not as good.

Edit: Added more points

[1]: [https://docs.unrealengine.com/en-
us/Engine/Blueprints](https://docs.unrealengine.com/en-us/Engine/Blueprints)

[2]: [https://forums.unrealengine.com/community/general-
discussion...](https://forums.unrealengine.com/community/general-
discussion/117749-blueprints-vs-c-for-experienced-c-dev)

~~~
codetrotter
I think UE4 and its Blueprints are very nice and cool but with regards to how
stuff might or might not scale I feel compelled to link this site called _UE4
Blueprints from Hell_ haha
[https://blueprintsfromhell.tumblr.com/](https://blueprintsfromhell.tumblr.com/)

Of course anyone can make spaghetti in any language, text-based or visual, so
don’t think of that site as “proof” or anything. Just a little bit of
entertainment that’s all.

~~~
ModernMech
I don't know blueprint, but I'm wondering if after some time using it you
begin to pick up patterns from looking at zoomed out code. One thing that
strikes me is that each of these "nightmare" examples has a very definitive
and recognizable form. There are certain structures that one can pick out and
recognize between diagrams. It reminds me of being able to discern nested
loops and long switch statements in traditional text code minimaps. It might
be the case that these code maps look like a spaghetti mess to the untrained
eye, but the variety of structure in the code allows skilled developers to
zoom in on a bit of code the way you can zoom in on your house in Google Earth
without any help.

------
wvenable
Why is this a text article and not a video? The reason this _isn't_ a video is
the same reason that visual programming does, in fact, suck.

~~~
kodablah
Disagree. While there are reasons, the comparison isn't apt. Videos suck
because of their linear format and the inability to search/reference them,
which doesn't plague visual programming. I could just as easily write "Why are
there images and not just text in these articles? That same reason is why
visual programming can, in fact, not suck".

~~~
wvenable
Visual programming suffers from the same low information density as video.
I've actually had to work in visual programming environments, and anything of
moderate complexity (let's say the length of an average source file) is simply
visually untenable. A simple class you can page through in one minute becomes
20 screens in each direction when represented visually.

~~~
kodablah
I think that's more of an indictment on the specific environment/software you
used than visual programming in general. One could argue thinking in terms of
classes is invalid. Visual programming has value at a higher level of
abstraction e.g. workflow management/stitching of components.

~~~
wvenable
I used a class as an example, but the product was for workflow management and
stitching of components, and it was, frankly, ridiculous. Nothing was wrong
with the product itself; it was sound. It is the entire concept of
representing working software visually that is unmanageable.

Honestly, people can read and process text much more easily than they can
follow active visual nodes and lines. And I'm not talking about an abstract
diagram; we are talking about visually designing something detailed enough to
be executable. I think most people imagine visual programming in terms of
pretty high-level abstractions, but that's not the reality of programming.

Anything even moderately complex is too big to see all at once, but could
easily fit in a few screens of text. Making changes is even worse. I can
easily move to the top of this paragraph and add another paragraph (which I
just did). Have you ever tried moving around 20 visual nodes? Almost
impossible to do easily. I just rewrote a few sentences; that would easily be
six clicks and retyping in a visual environment.

~~~
wires
You are talking about deficits in the tools for working with graphical
representations, and a lack of language support for containing behaviour in a
single box (like a decent type system and separation of effects from pure
computation).

I am the first person to admit that visual programming goes wrong about 95% of
the time, but it doesn't have to... also nobody is expecting everyone to swap
out their tools for statebox haha

------
king_magic
None of the examples he shows are clearer than text code. None of the examples
are as quickly and easily modifiable - call it malleable, if you will - as
text code is for debugging.

Sure, visual programming might “not suck”, but it sure as heck isn’t a viable
tool for serious software development outside of a few specialized cases.

------
spitfire
This thread is a little perplexing. There's some _great_ commentary going on
here, and some fantastic references.

However, why does it have to be binary: text OR visualization? Code gets
loaded from a file into an AST, and from there you can transform between the
textual representation and other visual representations.

I expect the right answer in the end will be both AST based code views and
visual tools to show flow, time/space requirements, interfaces/coupling and
such.

Imagine writing code, then using the scroll wheel to reference the overarching
project graph. Perhaps with:

    
    
    - edge size/colour representing function call frequency
    - vertex size/colour representing time/space complexity
    - some other cue to represent datatypes
    - another view to represent and inspect datatypes in your application and query them, à la Quantrix or Lotus Improv "spreadsheets"
    - another view showing state changes and functional code
    

Things don't always need to be binary decisions. Instead of saying "visual
code doesn't work", perhaps we should try "Great artists steal!".
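
The first step of that pipeline is easy to sketch today. Here is a toy example (mine, not an existing tool) that parses source with Python's ast module and extracts caller -> callee edges, the raw material for the kind of project graph described above:

```python
import ast

# A tiny module to analyze, as a string.
SRC = """
def load(path):
    return parse(read(path))

def parse(text):
    return text.split()
"""

def call_edges(source):
    """Return (caller, callee) pairs for direct name calls in `source`."""
    tree = ast.parse(source)
    edges = []
    for fn in ast.walk(tree):
        if isinstance(fn, ast.FunctionDef):
            for node in ast.walk(fn):
                # Only plain-name calls; attribute calls like x.split() are skipped.
                if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                    edges.append((fn.name, node.func.id))
    return edges

print(call_edges(SRC))  # [('load', 'parse'), ('load', 'read')]
```

Feeding these edges, weighted by call frequency from a profiler, into a graph layout would give the scroll-wheel overview view.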

~~~
goodmachine
Luna attempts this:

Dual syntax representation

"Luna is the world’s first programming language featuring two equivalent
syntax representations, visual and textual. Changing one instantly affects the
other. The visual representation reveals incredible amounts of invaluable
information and allows capturing the big picture with ease, while the code is
irreplaceable when developing low-level algorithms."

[https://www.luna-lang.org/](https://www.luna-lang.org/)

------
vincent-toups
It might not suck, but the cost-benefit analysis ends up washing out any value
it might have.

Text is glorious and portable. I can run emacs in a terminal. I need a
software stack just to look at my code like I need a hole in my head.

The thing I really hate about visual programming is that programs are
fundamentally tree structured and 2d space introduces extra degrees of freedom
which, for me, impose additional cognitive load.

~~~
mbrock
Christopher Alexander (the architect) wrote a paper called _A City Is Not A
Tree._ I don't know of any paper called _A Program Is Not A Tree_ , but one
could be written. Programs are fundamentally messy tangles of references, as
far as I can tell! Compile-time structures are utterly different from run-time
structures. Some languages have both dynamic and lexical scoping. Etc.

~~~
beaconstudios
You can reduce an individual function to a directed graph (not a tree). But
once you have more than one function? That's graphs referencing graphs. That
definitely doesn't reduce to a tree.

The only parts of a program that reduce to a tree are incidental to its
functionality: the nesting-block structure of procedural, C-derived languages,
and the tree structure of the filesystem.

------
Pamar
Disclaimer: I haven't yet read the article.

My experience: a few years of (intermittent) work with webMethods Flow
Services (which is "Visual").

Conclusions: text-based programming allows you to:

\- grep for all the files where you used "Inc x".

\- use diff to compare two or more versions of the same module.

\- integrate comments easily with the "code".

None of this is true for Visual programming.

Also, in my experience, even if you are "visually arranging blocks" and
"connecting them with arrows", in reality a lot of the properties and
parameters of the blocks themselves are specified by manipulating strings and
numbers.

Except those cannot reliably be grepped or diffed...

~~~
vnorilo
Well, I'll just say that a graph is not a bad data structure for diffing and
versioning [1]. The other problems may be true for now, but they also seem
like pretty low-hanging fruit?

1:
[https://en.m.wikibooks.org/wiki/Understanding_Darcs/Patch_th...](https://en.m.wikibooks.org/wiki/Understanding_Darcs/Patch_theory)

~~~
Pamar
It is not clear to me whether you have actual practical experience with
medium-complexity projects in visual languages.

Also, webMethods has been around for more than 10 years now. I am not using it
anymore, but I have colleagues who do. The "low-hanging fruit" stays unpicked,
so maybe it is not so easy to provide the things I listed?

~~~
vnorilo
From the end-user perspective, you are likely correct. I was musing with my
PL-designer hat on. The difficulty is the huge amount of work that has gone
into the entire Unix way of developing software. That's the seductive local
minimum.

------
tanilama
I think visual programming is OK when your control flow is linear and doesn't
have too many branches. However, when your control flow contains primitives
like break/continue/try-catch, it can get pretty messy very quickly.

~~~
AnIdiotOnTheNet
On the other hand, I think visual programming is much better suited to tasks
where your dataflow is multidimensional. For instance, linear pipelines are
simple to construct and read in command-line form, but try reading a
multidimensional dataflow on the command line.

------
openfuture
Lots of people seem to think that visual vs. textual programming is an
either/or. The idea behind statebox is to build a bijection between a
(structured text) editor and a Petri net that shows the program
diagrammatically; editing either one will also change the other, so you can
pick the perspective that best fits the change you'd like to make.

The original paper defining Petri nets was intended as a new foundation for
information systems, based on communicating automata, so that the fallacious
"global state" idea is not a prerequisite. It makes a lot of sense for there
to be a programming language based on this formal system for expressing
computation.

There has also been a lot of progress in understanding the mathematical
structure of Petri nets, which is something like a symmetric monoidal
category, but I'm no expert, so fact-check me.
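
For readers unfamiliar with the formalism, here is a minimal Petri net sketch (my own illustration of the textbook firing rule, not Statebox code): places hold tokens, and a transition may fire only when every one of its input places has a token.

```python
class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)        # place -> token count
        self.transitions = transitions      # name -> (input places, output places)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        """Consume one token from each input place, produce one in each output."""
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy workflow: one task moves from "todo" to "done".
net = PetriNet({"todo": 1, "done": 0}, {"finish": (["todo"], ["done"])})
net.fire("finish")
print(net.marking)  # {'todo': 0, 'done': 1}
```

Because enabling is purely local to a transition's input places, there is no global program counter, which is the "no global state" point made above.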

~~~
zozbot123
That's nice, but all I see on the statebox.org website is vaporware. Where's
the MVP? Even just having one component available (structured text editor _or_
diagrammatic editor for the underlying Petri net) would help a lot.

~~~
wires
we are working on the MVP, getting there, this wasn't supposed to blow up yet,
don't want to give any wrong impressions.

currently a lot of our efforts are going into a core component of statebox,
[https://github.com/typedefs](https://github.com/typedefs) - it is a library
similar to protocol buffers, but it fits well with proof assistants /
functional languages / category theory

------
nargella
I agree. I've been working for a few years on my own exploration of whether
it's feasible to have a visual system aid in creating real program output.
Here is an example that I'm still refining:
[https://github.com/kyleparisi/pagination-
layout#pagination-l...](https://github.com/kyleparisi/pagination-
layout#pagination-layout).

The translation of the above project really caused me to think hard. My hope
was that visual programming would lighten the cognitive load. That it didn't
might be because I had not yet determined the subtle rules required to write a
flow. I need more practice, but I think it will be a different flavor of
thinking. Not better, not worse, different.

------
neurocline
I appreciated reading all of this; it's a topic that obviously brings lots of
opinions to the table. Here are my thoughts on the matter, honed over many
years of using visual programming languages and systems, starting in the
mid-1980s.

Consider this: the visual system in animals has been in development for
hundreds of millions of years. And yet, demonstrably, most animals don't think
at a high level.

The biggest jump in human cognition is tied to the invention of speech. Speech
is a fairly slow mechanism, and serial. Communication could be multi-channel
and use position, tone, color, odor and movement. And yet, communication
through speech dwarfs all of that. It's what let us bootstrap ourselves above
other hominids and other animals. While multi-channel communication holds out
the promise of high bandwidth, it's also incredibly imprecise. What exactly is
being communicated in non-speech channels? That's up to far more
interpretation and guessing than through the apparently more limited mechanism
of speech.

Then, sometime around 10,000 to 5,000 years ago, writing was invented. Writing
is even more constrained - most people can speak faster than they can write,
and yet the increase in pace of development of human abilities is tied to the
ability to express and manipulate thoughts in writing. It's not just that
writing can be one to many (far more so than speech). It's that writing is
more precise, and in writing we can build up far bigger thoughts than we could
otherwise comprehend. It's likely that analogies are bootstrapped through use, first in
speech, and then in writing.

So, in this framework, visual programming is doomed. It's a throwback. Despite
the very large number of neurons devoted to visual processing, the amount of
summarization and guessing in the visual system employed to reduce the flood
of data to something manageable is also part of its weakness when it comes to
forming precise and complex thoughts, and to manipulating them.

We will always visualize things in order to help understand them, because we
use more of our brain when we do that. But it's the very limited and narrow
mechanism of speech (and writing started as "record that speech") that makes
it far more powerful when it comes to complex thoughts. If you look at all the
visual programming systems that have been developed, they only work in narrow
and prescribed modes. They are not open-ended, and they literally fall apart
at moderate levels of complexity.

Without text (speech being the system that jump-started text), we would not
really be thinking animals.

------
ummonk
Every visual programming system I’ve seen makes easy things even easier and
hard things much harder if not impossible.

Visual programming makes simple logic easier to code, as well as making it
quicker and more intuitive to compose plug and play modules / functions. But
that isn’t the hard part in ordinary programming. The hard part is adapting
libraries that don’t just plug and play with your system out of the box, or
dealing with asynchronous calls / services, or trying to transform complex
data formats into what you need. That is where most coding work goes, and
visual programming systems offer nothing of value.

~~~
hinkley
I've said a lot about the struggle with diffing visual code, but I think one of
the other beefs I have with visual coding is ultimately the same reason I hate
whiteboard coding:

If you try to build code top-down, even happy path first, or iteratively (over
a long enough time horizon, all code is developed iteratively), the whiteboard
is completely unforgiving about squeezing more code into the middle of a block
of existing code. This is basically a non-issue within a text editor.

How do you avoid invalidating your 2d layout of visual code while adding new
conditional behaviors to the middle of an inner block of code?

And on a related topic, any tool that fights 'extract method' is not for me.
Refactoring is painful enough as it is without adding speed bumps along the
way.

I think some of these problems are solved within game AI path finding
heuristics, but I know of only a couple of visualization tools that had an
automatic layout algorithm that wasn't worse than nothing (dbviz as I recall
had a fairly decent heuristic for 'flattening' a graph so that very few lines
crossed).

------
SonOfThePlower
Used to work with Rational Rhapsody. Big project with a lot of fancy state
charts - that was a nightmare. Visual programming does suck (unless it's
Simulink).

~~~
humanrebar
Same.

Some of what the author advocates for was incorporated in Rhapsody and other
tools (LabView) a decade ago or more.

The response to critics is basically:

> ...tooling they’re accustomed to can’t be used properly.

I think that understates the pain of peer reviewing WYSIWYG diagram changes
(the closed diamond is now an open circle!) that affect production software.

------
rezmason
Look, I love text a whole lot. But I want to program without a desk and
without sitting. In 2019, programming should be as portable as guitar playing.
The thing holding us back is reading and manipulating text.

We should take the things that text does well, and find an alternative that
offers the same functionality in non-desktop contexts.

~~~
heywire
You captured my thoughts perfectly. Many times I've been sitting outside with
my phone or tablet in hand, wishing there was a natural way for me to get
application logic from my head to the computer without a keyboard.

~~~
jerf
Well, I can guarantee that's not going to happen anytime soon. Phones &
tablets _fundamentally_ have lower bandwidth into the device. You need
something that doesn't cost much, or even increases, the bandwidth while still
being portable. A touch-screen-only interface isn't going to be it.

Note I'm making an _information theoretic_ argument here, not a wishy-washy
humanish discussion of "information". You literally have fewer bits/second you
can input meaningfully on a touchscreen. This is something objective, not just
an opinion. The fundamental sloppiness of the inputs, which is the fundamental
reason for things like mobile interfaces having larger buttons than non-mobile
interfaces, for instance, contributes to the lower bit rate.

(I would remind people who wish to dispute this of at least the following
things: Information is a log-based number, so for instance "I have four types
of swipes" does less than you might think, and you need to account for _real_
information that can go in by a human, not just what theoretically something
else could do. You are not a robot individually manipulating every capacitance
sensor on the screen, nor does your finger teleport around and tap ten
distinct selections per second at an accuracy of less than two pixels. Nor can
you build an interface where a swipe in _this_ direction does _that_ , but if
it's two degrees farther counterclockwise, it does something else entirely,
nor are you going to be using an interface where every slight variation of a
curve is meaningful. Swype-like tech is pretty much the upper limit of what
you can count on, and honestly, at times that pushes it.)
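The log-based point above can be made concrete with a rough back-of-envelope sketch. Every number below is invented for illustration, not measured, and real entropy rates are lower on both sides because human input is far from uniform; the comparison is the point:

```python
import math

# Rough numbers, all invented for illustration: a touch typist at 60 wpm
# (~5 keystrokes/word) choosing among ~64 keys, vs. ~2 taps/second among
# ~30 distinguishable on-screen targets.
def bits_per_second(events_per_second, alternatives):
    # information per event is log2(number of distinguishable choices)
    return events_per_second * math.log2(alternatives)

keyboard = bits_per_second(60 * 5 / 60, 64)  # ~5 keystrokes/second
touch = bits_per_second(2, 30)

print(f"keyboard ~{keyboard:.0f} bits/s, touchscreen ~{touch:.0f} bits/s")
```

Even with generous assumptions for the touchscreen, the keyboard comes out roughly 3x ahead.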

~~~
corysama
I wish I could remember the name, but there was a demo of an experimental IDE
for phones, building up blocks of odd shapes and directions that fit
together like a stone wall. It wasn't Scratch-based. I think it was
declarative rather than procedural.

Meanwhile, I also hold out hope for coding in VR.
[http://primitive.io](http://primitive.io) is currently on top of
[https://www.reddit.com/r/HMDprogramming/](https://www.reddit.com/r/HMDprogramming/)
But, that subreddit doesn’t get much action.

------
gabriel34
In ETL there are some tools to visually manipulate the data flow from one end
to another. Some ETL software even allow you to visualize the effect of the
changes on the fly, much like the Mario game. Solutions developed like this
are easily understandable and maintainable by others even with minimal
documentation (but require understanding of the business problem). But, much
like normal programming, once you want to do something that exceeds the
capabilities of the ETL software you are using, or when performance is an
issue, you have to understand how the underlying software works under the
hood. You can become really good at solving one set of problems with a given
piece of ETL software once you have mastered it, but this is limited to one domain of
problems. Likewise, specialized software allowing easy visualization and
manipulation is usually very domain specific.

You can tackle complexity in programming by hiding it behind libraries and
databases that do the heavy lifting while the programmer integrates the pieces
and accounts for particularities of the problem he is solving. I could
envision representing library functions as black boxes and connecting arrows
to integrate them, having input validation and strong typing or automatic
typecasting. Still, when you get too far from the machine, you miss the edge
cases, the things you can't imagine when you visualize whatever you are
creating in your head, the problems that only arise when you externalize and
codify the knowledge. I think this is at the core of the issue; in order to
instruct the computer to do something, you have to externalize tacit
knowledge. In doing so you come across problems you just can't see from too
high up.
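The "black boxes connected by arrows, with strong typing on the connections" idea sketches naturally in a few lines. All the names and types below are illustrative inventions, not any existing tool's API:

```python
# A sketch of "library functions as black boxes with typed arrows": each
# node declares its input/output types, and wiring two nodes validates the
# connection before any data flows.
class Node:
    def __init__(self, func, in_type, out_type):
        self.func, self.in_type, self.out_type = func, in_type, out_type

    def connect(self, other):
        # refuse the arrow when the types at its two ends don't line up
        if self.out_type is not other.in_type:
            raise TypeError(f"cannot wire {self.out_type.__name__} output "
                            f"into {other.in_type.__name__} input")
        return Node(lambda x: other.func(self.func(x)),
                    self.in_type, other.out_type)

parse = Node(int, str, int)               # str -> int
double = Node(lambda n: n * 2, int, int)  # int -> int
pipeline = parse.connect(double)          # str -> int, checked at wiring time
print(pipeline.func("21"))                # 42
```

The wiring check catches type mismatches when the diagram is drawn, which is exactly what you lose when you're "too far from the machine" only at run time.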

------
bitL
It probably depends on whether you are a visual type (40%), emotional type (40%)
or audio-textual type (20%, dominating academia). I am clearly a visual type
with perfect color vision, and my brain can do a "search" on the screen in
literally one "frame", i.e. localizing the term I am searching for instantly
instead of going line by line. So YMMV, and whatever works for you instead of
"there is only one true way and everybody needs to follow it".

~~~
zozbot123
It's interesting to explore what an "emotional type"-optimized programming
paradigm might look like. Perhaps something like word problems in school math,
where you target understanding by rephrasing what's originally a mathematical
statement into a "social" problem, involving real-world agents (such as people
or firms) who might interact with one another in some well-defined way? I
assume this is an underexplored area, though social scientists have no doubt
started addressing it with things like "game semantics" and the like.

------
devonkim
Visual programming systems I've used seem to have problems that are still
present in traditional text-based languages. Most of the value seems to be
entirely built around helping non-programmers contribute to programming-related
output:

\- Modularity / reuse is tough

\- Tracing is difficult to do as a system increases in complexity.

\- Distributed systems including primitives like locking semaphores are
difficult to express visually and are thus still difficult to author. How
would you demonstrate to anyone that you don’t have deadlocks and race
conditions across modules?
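For a flavor of why deadlock-freedom is hard to demonstrate in any representation, visual or textual: two modules that grab the same pair of locks in opposite orders can deadlock, and the usual fix is a discipline (such as a global lock ordering) that neither a diagram nor a listing shows directly. A minimal sketch of that discipline, with invented names:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def with_both(locks, action):
    # acquire in one canonical (id-sorted) order so that two modules that
    # both need these locks can never wait on each other in a cycle
    ordered = sorted(locks, key=id)
    for lock in ordered:
        lock.acquire()
    try:
        return action()
    finally:
        for lock in reversed(ordered):
            lock.release()

# callers may list the locks in any order; the discipline normalizes it
result = with_both([lock_b, lock_a], lambda: "no deadlock here")
print(result)
```

The guarantee lives in the `sorted(...)` convention, not in anything you could point at in a node graph, which is the difficulty being described.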

Data flow and perhaps control systems (AI scripts and GUIs built around
something declarative and event-driven come to mind) indeed seem to be the
only commercially successful examples of visual programming thus far.

If these visual programming environments serialize to normal languages then
the tooling to validate them is pretty straightforward. Instead, most visual
programming systems seem to be entirely domain-specific and are made custom
for a specific set of needs so formats tend to be proprietary and not
applicable across different domains. It’d be cool to see a visual programming
language based upon something like Squeak but this approach seems to have not
caught on.

~~~
hiker
> Data flow and perhaps control systems (AI scripts and GUIs built around
> something declarative and event-driven come to mind) indeed seem to be the
> only commercially successful examples of visual programming thus far.

I strongly disagree. Here's an incomplete list of commercial software in VFX
for visual "programming" for artists:

Houdini by SideFX

Katana and Nuke by The Foundry

Massive
[http://www.massivesoftware.com/massiveprime.html](http://www.massivesoftware.com/massiveprime.html)

Gaffer
[https://github.com/GafferHQ/gaffer](https://github.com/GafferHQ/gaffer)

~~~
al2o3cr
IMO "GUIs built around something declarative and event-driven" covers systems
like Houdini and Massive - they're "declarative" in the sense that the scene
graph is defined beforehand, and "event-driven" in that they react to external
inputs with everything from behaviors in Massive to demand-driven lazy
calculation in Katana.

------
Eli_P
I know a rookie who has been playing a game called _While True: learn()_ on
Steam. It looks like a circuit board simulation; the player is supposed to
direct colored blocks to specific destinations depending on what the exercise
is about. It appeared to be more of a railroad station simulator to me. When I
was making my first steps, in DOS, all programming for rookies was about chess
boards. Almost every programming environment for beginners was a derivative of
a chessboard: we moved pieces, found paths, backtracked here, beeped with 0x07
there... Today, as the chess generation retires, maybe the new waist-deep-
into-networks generation is more lenient about how developer tools should
look. I don't believe they will substitute actual coding with grandma's
embroidery, but a handful of daredevils who create more powerful tools like
the DDD debugger are surely welcomed with open arms.

------
Pxtl
To me the thing that visual programming excels at is parallelism. Traditional
procedural languages are fundamentally one-dimensional, and parallelism is
difficult to view in 1D.

The biggest challenge is how much of our tech stack is based on raw text.
With anything that isn't text-based, you can't use diff, git, grep, vi, etc.

~~~
sebastos
Though it's important to remember that this doesn't necessarily preclude the
existence of a "visual git", for instance.

------
kome
I am sure that visual programming is perfect for specific applications; I can
see it working very well in data analysis, for example.

But I have never heard of a general-purpose visual programming language...

~~~
optevo
[https://www.outsystems.com/home/GetStartedForFree.aspx](https://www.outsystems.com/home/GetStartedForFree.aspx)

~~~
Chetan496
OutSystems is good. I have programmed in it for 2 years, and we also built a
product with some very dense UI. But in my experience it is still not
perfect. It lacks some mainstream features and has limitations. It does not
model web apps as SPAs, though it does for mobile apps. Granted, we can mix in
JS, but that only complicates things.

How do I do multithreading in OutSystems? I know there is BPT - but BPT and
timers have their own limitations.

I could go on... also, OutSystems these days only keeps C# as the backend
stack. I should have the choice of Java as a backend, with multiple other
runtimes.

Yes, it's not vendor lock-in, but it is technology lock-in.

I had great hopes for OutSystems, and to an extent it hits the sweet spot. But
I can't recommend it for modern, complex distributed web apps.

~~~
optevo
Hi Chetan,

Disclaimer: I work for OutSystems :)

On the SPA front, I believe there will be some movement this year.
Look out for "Modern Web Apps" as a new application category.

For multithreading, I take your point and agree. A heavily multithreaded
component is not the best use case for OutSystems and if you want to use one,
it is probably best written as an extension in C#.

As for dropping Java support, here's the rationale: trying to support .Net and
Java was costing engineering resources and also leading to inconsistent user
experiences. Additionally, there are lots of other languages/platforms out
there beyond C# and Java, so the approach going forward for heterogeneous
languages/platforms is to integrate with them using containers, which is
supported from version 11.

As for lock-in, that's true of any proprietary product. However, it's worth
noting that if you terminate your subscription you get all the source code and
can run it independently of OutSystems. Of course, you might argue that
it's not as easy to change, which is true - but if you want to do lots of
changes quickly, why not just stay with OutSystems? :)

So finally for complex web apps, I would argue you can do it (and you can
check lots of references of places that have) but the tech is old and this
will be improved starting this year. You can also build hybrid (i.e. Cordova
based) mobile apps easily too - almost a third of new apps built on the
platform are the latter.

------
moron4hire
Visual Basic 6 was a real sweet spot of low-barrier-to-entry visual tools and
simple scripting with enough of an escape hatch through COM to do really crazy
stuff if you needed to. Complete pain in the ass to manage large projects, but
then, it wasn't that much easier in Visual C++ 6 at the time.

~~~
mschaef
Visual Basic is one of those tools that looks very different when it's mostly
in the past than it did when it was mostly in the future. For its time, VB was
a huge, amazing, incredible simplification of what had formerly been about
that much of a pain. This is part of why it tends to be reviled today, but it
opened up the door for a large number of people to address a large number of
problems that were heretofore mostly unreachable for them.

That type of tool... the tool that lowers the barrier to entry and makes more
problems addressable by more people... that's a good thing.

(But, FWIW, VB is not really a visual programming language... more of a visual
development environment built around an almost completely textual language.)

~~~
moron4hire
I think that was the point of the original article: that we can use more
visual tools to augment code, rather than trying to reinvent programming from
scratch.

Have you played with Racket at all? There is another concept from the DrRacket
tool that I really enjoy. Images have their own print mode, i.e. getting
displayed in the REPL just by referencing them. Similarly, Matlab and Octave
make printing matrices and graphs just a natural part of the experience.

There are probably a lot of good reasons for storing code as text, not the
least of which are diffing as well as transmitting to/from other authors with
their own toolsets. Merge visualization is a type of visual task related to
programming.

Unix and its shell concept were supposed to, kinda, be this. But I think it
got lost in the need to build production server environments. Who is the
modern Symbolics with their LISP machines?

~~~
mschaef
> Have you played with Racket at all? There is another concept from the
> DrRacket tool that I really enjoy. Images have their own print mode, i.e.
> getting displayed in the REPL just by referencing them.

I haven't played much with Racket, but I have worked with other systems that
do this. (I've also developed a feature like this myself, many years
ago, in a system I was building as a small-scale data analysis tool.) And yes,
I agree that the combination of a REPL that both uses the full capabilities of
the output device and renders objects that 'know what they are' is
transformative. It makes entirely different classes of work that much more
interactive.

In addition to the graphics, it still seems like a miss that I can't type 'ls'
or 'find' at a terminal prompt and drag a filename from the list into a finder
window to move or copy the file. (But getting to that point is work on several
different levels... ls would need to produce output that somehow annotates
each output filename with the fact that it's a file with a specific full path.
Then the terminal would need to be able to intelligently work with those sorts
of annotations.)

> There are probably a lot of good reasons for storing code as text, not the
> least of which are diffing as well as transmitting to/from other authors
> with their own toolsets.

I think part of it is history... both inside and outside the field of
computing. It was technical limitations at the beginning of the field that
made text more or less the only choice for early programming systems. (Unless
you count the even earlier plugboard machines.)

Outside the field, images alone don't tend to be used to represent the
sequences of events that are so common in programming. There's always a
convention for imposing a sequence layered atop the visuals. This text, by
convention, reads left to right. Pictographic languages have an ordering by
convention. Comic books have both a convention to the ordering of frames and
(mostly) the text within the frames. I mentioned LabView elsewhere in this
thread... it's a visual programming language, but it has an explicit sequence
construct (that models a sequence visually as frames of film that can be paged
through).

I guess my point is that textual representations have a lot going for them out
of the box... including an historically natural modeling of the notion of
sequence and a huge volume of tooling and process optimized for the purpose of
manipulating text. For a visual language to succeed, it has to offer
compelling enough advantages to outweigh the cost of walking away from those
things.

------
wires
Hi, Statebox founder here :-) Awesome to see this on the HN front page. Will
read your comments and give feedback; meanwhile, feel free to ask me things!

~~~
mathgenius
Thanks for linking to graphicallinearalgebra.net . I finally sat down and read
it today, and (loveheart-emoji). This stuff _looks_ more complicated than just
writing a bunch of matrices, but it really does get to the depths of linear
algebra.

~~~
wires
Awesome stuff right? Pawel is great at explaining things!

It is certainly more complicated in the beginning, partly because it is a
paradigm shift, and partly because of its abstract nature. But then again,
linear algebra isn't particularly natural when you see it for the first time,
I'd say.

But graphical linear algebra gives other powerful insights, such as the
meaning of division by zero. By the way, we use a very similar calculus.

------
crankylinuxuser
Visual Programming also isn't just for "programming". The tool that Matt
Keeter made, called Antimony, is a node -and- graphical interface for making
3d designs.
[https://www.mattkeeter.com/projects/antimony/3/](https://www.mattkeeter.com/projects/antimony/3/)

One really cool effect this tool has is that it shows you the direct
dependencies between geometries and how shapes interact with each other.

So far, it brings me to 3 tools I use on a weekly basis that are node and flow
based: Node-Red, Apache NiFi, and Antimony CAD.

------
inglor
We do visual programming at Testim.io for automation and the feedback is
pretty positive.

I think there is a lot of merit to the approach if it is applied to a smaller,
more constrained problem and there is always an escape hatch.

------
jeffreyrogers
Visualizing how the process you want to express in software works is a great
technique for clarifying what you want to accomplish, but I'm skeptical that
it is a good way to actually do programming. The simple reason why is that error
conditions and corner cases are hard to express visually. That said, I think
drawing out what you're trying to accomplish is helpful. It's basically a form
of lightweight specification.

~~~
wires
It is in fact very helpful to model out your error states, especially if you
can see them on a screen in some sort of flow. This is why people do
modelling: to understand better what you want to code.

But you are right, there are many things that are difficult to express
graphically (at the moment), though I think this can be solved in the near
future.

------
scoutt
If "Visual Programming" is the possibility of generating code from a
"Diagrammatic Reasoning" design, then I (and I suppose everyone here) do it
all the time. Like drawing sketches, boxes and arrows in paper before/during
programming.

But... I doubt there would be a system able to pick any kind of design and
generate code/programs.

~~~
wires
Stay tuned! It is not any kind of design, but for specific diagrams and a
certain large class of programs we can do it. It might take a while before
we've implemented all the needed runtimes, but JS should be released sometime
this year.

------
fizixer
It does suck. It totally does. Because when you sell VP, you sell VOP (visual-
only programming).

If you offer a visual interface as an add-on to my favorite programming
language, be my guest. I don't have a problem with that and I might even try
it on a nice sunny weekend afternoon.

Just don't try to lock me in on a VOP platform and expect me to sing praises
for it.

~~~
wires
The diagrams are just different ways to draw expressions; you are actually
editing text, it's really the same data. One diagram implies many different
but behaviourally equivalent expressions, so the diagram is actually the more
efficient way to encode the thing.

This is ignoring all the UX issues, of course. But we are quite confident that
eventually everyone will use such diagrams as at least sidekicks to their
text-based code.

------
tim333
Cache
[https://webcache.googleusercontent.com/search?q=cache:shDDXu...](https://webcache.googleusercontent.com/search?q=cache:shDDXuk62tkJ:https://blog.statebox.org/why-
visual-programming-doesnt-suck-2c1ece2a414e+&cd=3&hl=en&ct=clnk&gl=uk)

------
hugs
Sometimes I wonder if a hybrid approach could work. Specifically, a language
that generally looks like Python or JavaScript, but also allows emoji as
variable or function names. You could keep the advantages of text-based
languages, since you can still search and diff, but also use the
higher-level visual abstractions that images provide. I've started
experimenting with something like this, but it's still not ready for
real-world use:
[https://github.com/hugs/wildcard/blob/master/example/konami....](https://github.com/hugs/wildcard/blob/master/example/konami.wild)
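Since Python and JavaScript identifiers can't actually be emoji, such a hybrid needs a desugaring step somewhere. A naive sketch of that step, with an invented emoji-to-name table (not how the linked project works):

```python
# Rewrite emoji "identifiers" into plain ASCII names so the result is
# valid Python. The emoji-to-name table is invented for illustration, and
# a real tool would rewrite tokens, not raw text (this version would also
# rewrite emoji inside string literals).
EMOJI_NAMES = {"🎮": "controller", "⭐": "score"}

def desugar(source: str) -> str:
    for emoji, name in EMOJI_NAMES.items():
        source = source.replace(emoji, name)
    return source

program = "🎮 = len('⭐⭐⭐')"
print(desugar(program))  # controller = len('scorescorescore')
```

Because the stored form stays plain text, search and diff keep working; only the editor's display layer needs to know about the emoji.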

------
rcarmo
I’m a bit late to the discussion, but having adopted Node-RED for home
automation and some bot prototyping work, I concur that it’s very quick to get
some stuff done, but that modularity and versioning are a constant pain.

I don’t see visual programming as sucking, but I think the idea of having
diagrams as a quick panacea for the mental overhead of maintaining complex
processes just doesn’t scale (I’ve had a number of discussions about this kind
of thing since the rise of UML, for instance, and never saw UML docs being
maintained after their creator left a project...)

------
fenwick67
One place where visual programming works really well is in Blender,
specifically the shader editor comes to mind. Connecting and adjusting shader
parameters could be done in code, but the UI works very well.

------
keyle
Just look at Unreal Engine's blueprints. You can code an entire game in visual
programming without writing a single line of C++. And it will perform really
well as it gets compiled down in production.

------
aidenn0
> It’s easier for non-technical people to contribute to the modelling of the
> process in a meaningful way

The author presents this as an advantage, while in truth it is more of a
mixed-bag.

~~~
wires
FWIW, I found that we can develop a "business process" with a client and get
very valuable feedback from the domain expert, who might know very little
about computers. I don't really see myself live coding with a non-technical
client.

------
mehh
Frameworks don't suck if they meet your requirements either; however, you
often don't find out about a killer requirement until you're up to your neck
in a framework!

------
Nigredo
Hey everyone, we published a blog post today where we try to answer to some
questions and remarks that popped out in this thread. Check it out!
[https://blog.statebox.org/visual-programming-what-went-
wrong...](https://blog.statebox.org/visual-programming-what-went-wrong-and-is-
there-room-for-improvement-86fdf06f74f7)

------
sevensor
I think the Petri net is a great example, but I disagree with the conclusion
that we should express something like a Petri net primarily as a diagram. The
ideal model is still textual, but as a DSL that compiles both to something
like DOT format and to a programming language of your choice. This gives you
all the many benefits of textual formats, along with the visual economy of a
diagram.
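A minimal sketch of that textual-DSL-to-DOT idea in Python; the class and method names here are invented for illustration, not any existing tool's API:

```python
# Declare a Petri net's places and transitions in code, then emit Graphviz
# DOT for the diagram: the text stays the source of truth, the picture is
# a compiled artifact.
class PetriNet:
    def __init__(self):
        self.places, self.transitions, self.arcs = [], [], []

    def place(self, name):
        self.places.append(name)
        return name

    def transition(self, name, inputs, outputs):
        self.transitions.append(name)
        self.arcs += [(p, name) for p in inputs]
        self.arcs += [(name, p) for p in outputs]

    def to_dot(self):
        # Petri net convention: places are circles, transitions are boxes
        lines = ["digraph net {"]
        lines += [f'  "{p}" [shape=circle];' for p in self.places]
        lines += [f'  "{t}" [shape=box];' for t in self.transitions]
        lines += [f'  "{a}" -> "{b}";' for a, b in self.arcs]
        lines.append("}")
        return "\n".join(lines)

net = PetriNet()
net.place("ready")
net.place("done")
net.transition("work", inputs=["ready"], outputs=["done"])
print(net.to_dot())
```

The same in-memory net could just as well emit a second backend (say, code in a target language), which is the "compiles both to DOT and to a programming language" shape described above.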

~~~
wires
I think the conclusion is that because one diagram has many different
syntactical expressions, and the theory tells us they all behave the same, we
can just pick the most efficient, most compact, whatever. i.e. the diagram is
the best representation, not the syntax.

So we can go: diagram -> expression -> code of your choice. From an expression
we can go back to the diagram, but from code back to an expression not
necessarily.

The problem now is that if the picture is the leading representation, then we
need to lift all the tooling to the diagram realm: comments, higher-order
diagrams, grouping, closures, variables, etc.

------
awkward
I think that this type of tool can produce a much more readable program
without a large change to how writeable the program is. Tools like state
diagrams can be similar - by using an automaton of some kind you generate an
artifact that makes your logic very easy to explain to a nonprogrammer, but
the restrictions of the format aren't easily understood.

------
jhallenworld
No, it sucks - but I make money rewriting LabVIEW programs in real languages
(most recently in Python/ROS for a machine controller).

------
devj
Can Excel be categorised under Visual Programming?

------
syoc
Not sure if I would call it programming, but Apache NiFi can do some cool
stuff using drag and drop.

~~~
mehh
I like NiFi, but in much of my experience it would have been a lot easier for
me to write code to achieve the same results. I also ended up with the typical
problem: how do I get something it doesn't have a plugin for to work... oh, I
need to go understand their plugin architecture and invest in a load of stuff
which quite frankly isn't really of interest to me.

It's well worth using though, because at the other end I've seen all sorts of
hand-cranked code and maintenance for something that I know NiFi could achieve
(however, I only know that from using NiFi and learning where it doesn't fit).

It's the above pain which makes many people reticent to try out such tools.

------
NicoJuicy
Any good libraries for building a node-based visual programming editor?

Something like node-red, rete, ...

