Hacker News
Ask HN: Why isn’t visual programming a bigger thing?
185 points by remolacha 9 days ago | 294 comments
Visual programming seems to unlock a ton of value. Difficult concepts can more easily be grokked in a visual form. Programming becomes more approachable to first-timers. Since text is difficult to manipulate without a physical keyboard, visual programming opens the doors to doing development on mobile devices. And yet it only seems to be mainstream in education (i.e. Scratch). Why?

Symbols are pictures too, and they carry denser meaning than diagrammatic pictures.

It's not that difficult concepts are easier in visual form. It's that concepts are verbosely described in visual form. Verbosity puts a ceiling on abstraction and makes things explicit, which is why things seem simple for people to whom everything is new (experts, on the other hand, find it harder to see the wood for the collection of tall leafy deciduous plants).

When you need abstraction, you need to compress your representation. You need to replace something big with something small. You replace a big diagram with a reference. It has a name.

Gradually you reinvent symbolic representation in your visual domain.

Visual programming excels in domains that don't need that level of abstraction, that can live with the complexity ceiling. The biggest win is when the output is visual, when you can get WYSIWYG synergies to shorten the conceptual distance between the visual program and the execution.

Visual programming is at its worst when you need to programmatically manipulate the construction of programs. What's the use of building all your database tables in a GUI when you need to construct tables programmatically? You can't script your GUI to do that; you need to learn a new paradigm, whether it's SQL DDL or some more structured construction. So now you've doubled what you need to know, and things aren't so simple.
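To make that concrete: the "programmatic construction" a GUI can't give you is often just a loop over text DDL. A minimal sketch (the table names are hypothetical):

```shell
# Emitting one CREATE TABLE statement per year: trivial as text,
# but pure clicking-and-dragging in a GUI table designer.
for t in events_2022 events_2023 events_2024; do
  echo "CREATE TABLE $t (id INTEGER PRIMARY KEY, payload TEXT);"
done
```

Pipe that into your database client and you're done; there's no equivalent way to "pipe into" a visual designer.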

Yeah, I've been using LabView for almost twenty years. I use it because I can hand it to a biologist and have a modest chance they can understand it; plus, for me, it's easier to make a UI with it, since I just never learned to do that in text.

But trying to do much in the way of complex logic, unless it's already a built-in function, is horrible. Imagine writing a math equation in a visual programming language out of three-terminal visual functions for "+". It's hard to be sure you even have the order of operations right! Of course they band-aid this by offering inline equations (which are clunky, but I use them sometimes) and some support for incorporating textual code for functions, but after twenty years I've never tried the latter, because it is more complex than an inline code editor.

It's great for people just above the Excel level who need to do something custom and don't know a real language, or who need a quick GUI. But I guarantee they will screw it up badly, since the clean visuals hide a lot of behavior that will trip you up. I usually write 98% of it for them, since the design is hard to guess a good paradigm for, and let them practice on modifying the remaining 2% as they change their mind about what they want it to do.

I think this hits the nail on the head but I wanted to add a few examples of what I believe to be this effect in practice:

* Roman numerals vs Hindu-Arabic numeral system [1] (aka base 10 number system)

The Roman numeral system is essentially a 'base-1' number system with other symbols attached to represent larger numbers. There are only a finite number of these larger numbers (V, X, C, etc.) so, in effect, the representation of a number grows linearly with the number itself, compared with base-10 which grows logarithmically.

* Logogram [2] vs. Alphabet languages

That is, pictographic languages vs. alphabet-like languages. CJK languages have upwards of 3k [3] pictographs to memorize, compared with fewer than 100 characters for English and other alphabet-based languages.

* "Unix vs. Windows"

Maybe a weak comparison, but Unix systems prefer small tools that compose well with each other via pipes using standard text. Unix's focus is on the command line and lends itself to automation. Windows environments focus on single person usability with pictograms, visual aids, etc. Obviously automation can be done in a Windows-like environment (including X) but, as the old adage says, "those who don't understand Unix are condemned to reinvent it, poorly." [4]
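A minimal sketch of that composition style, with made-up input and only standard tools:

```shell
# Frequency-count words: sort groups identical lines, uniq -c counts each
# group, sort -rn ranks by count, head keeps the top entries. Each tool
# does one job; the pipe composes them.
printf '%s\n' apple banana apple cherry apple banana \
  | sort | uniq -c | sort -rn | head -3
```

None of the four tools knows anything about the others; plain text is the only contract between them.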

[1] https://en.wikipedia.org/wiki/Hindu%E2%80%93Arabic_numeral_s...

[2] https://en.wikipedia.org/wiki/Logogram

[3] https://en.wikipedia.org/wiki/Chinese_characters#Number_of_c...

[4] https://en.wikiquote.org/wiki/Unix

Your overall post is very insightful, but the specific example of Windows would always have been wrong, and now it's spectacularly outdated.

Windows for decades has had a concept of "object oriented" composability, first through simple message passing, and later through things like the Component Object Model (COM), DCOM, and Object Linking and Embedding (OLE). Among other things, this allows quite high-level composability, such as Excel graphs in Word documents. This includes system management APIs such as Windows Management Instrumentation, which is inherently object oriented and has been available for over a decade through the VB or JS shells, and even CMD/Batch shells using "wmic.exe".

Now, UNIX people will argue that this isn't the same as the typical GNU toolset and a shell like bash. They would be right. In Windows, there's a gap between "I can drag and drop an Excel spreadsheet into Word" and "I can use C# to automate COM APIs". It's a big gap. I crossed it, I've seen commercial products that also do various forms of COM automation too, but I can count on one hand people that I have met "in the field" who have ever automated Windows in this manner.

In other words, this gap was wide enough that many people -- especially those most familiar with UNIX -- would often claim that there's nothing on the other side. But this just isn't true.

These days, everyone uses PowerShell, because it bridges that gap. It wraps all of those OO APIs that have existed for a long time in a unified platform. It can natively call WMI APIs, COM, WS-MAN, .NET, and has wrappers for the legacy DCE/RPC management APIs too.

If I can leave you with one takeaway sentence, it's this: PowerShell is more UNIX than UNIX.

Seriously. In UNIX, if you want to sort the output of "ps"... that's hard. Sure, "ps" has some built-in sorting capabilities, but they're not the "sort" command; they're random add-ons it has accumulated over time. It can order its output by some fields, but not others. It can't do complex sorts, such as "sort by A ascending, then by B descending". To do that, you'd have to resort to parsing its text output and feeding that into an external tool. Ugh.

Heaven help you if you want to sort the output of several different tools by matching parameters. Some may not have built-in sort capability. Some may. They might have different notions of collations or internationalisation.
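To spell out what that external-tool route looks like: the multi-key sort is expressible over plain text, but only via sort(1)'s own field mini-language, which you must relearn per tool. A sketch over hypothetical ps-like rows (pid, memory, name):

```shell
# Sort by field 2 (memory, numeric, descending), then field 3 (name, ascending).
# The -k KEYDEF syntax is load-bearing, and no other tool shares it.
printf '%s\n' '10 2048 nginx' '11 1024 bash' '12 2048 cron' \
  | sort -k2,2nr -k3,3
# -> 12 2048 cron / 10 2048 nginx / 11 1024 bash
```

And this only works while every field is whitespace-free; the collation caveats above still apply.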

In PowerShell, no command has built in sort, except for "Sort-Object". There are practically none that do built in grouping, except for "Group-Object". Formatting is external too, with "Format-Table", "Format-List", etc...

So in PowerShell, sorting processes by name is simply:

    ps | sort ProcessName
And never some one-character parameter, as in UNIX, where every command has different characters for the same concept, depending on who wrote it, when, what order they added features, what conflicting letters they came across, etc...

UNIX commands are more an accident of history than a cohesive, coherent, composable design. PowerShell was designed. It was designed by one person, in one go, and it is beautiful.

The acid test I give UNIX people, to see if they really understand how weak the classic bash tools they use are, is this:

Write me a script that takes a CSV file as an input, finds processes being executed by users given their account names and process names from the input file, and then terminates those processes. Export a report of what processes were terminated, with ISO format dates of when the processes were started and how much memory they used into a CSV sorted by memory usage.

Oh, there's a user called "bash", and some of the CSV input fields may contain multiple lines and the comma character. (correctly stored in a quoted string, of course!)

This kind of thing is trivial in PowerShell. See if you can implement it correctly in bash, such that you never kill a process that isn't in the input list.

Give it a go.

Thanks for this, a very interesting insight into the Windows ecosystem.

I won't be able to speak with any authority on Windows systems and I certainly don't want to get into a language/OS war but I would like to point out a few issues that raise red flags for me:

* "Powershell is more UNIX than UNIX"

Again, I don't have enough knowledge to really speak on PowerShell specifically, but I will point out that the shell is only one aspect of a Unix system, even if it's one of the bigger ones. The concepts of files, the filesystem, and memory, in addition to the ecosystem of tools, are all part of Unix. Obviously Windows has solved these aspects in some way, shape, or form but, at least from my perspective on the outside looking in, Microsoft has re-invented a lot of the Unix philosophy in order to make a Windows box behave more like a Unix box.

* The mess of 'simple' tools vs. a cohesive philosophy in Windows

While it's true that a lot of the tools in the Unix ecosystem have arbitrary names and non-uniform output syntax, the diversity of tools allows for a large range of solutions to a given problem. As you point out, the hodge-podge nature is most likely a historical artifact of Unix's evolution, but I would argue this is a natural by-product of systems that are heavily used and improved.

I have no doubt that there are some cleaner interfaces in PowerShell but my concern is the reason for the clean interface is because it's a walled garden with Microsoft as the gate keeper. I think this gets into the old conversation about quality of code for FOSS projects vs. commercial projects.

It's my belief that, past a certain threshold, a large quantity of software to choose from leads to better software overall than a smaller pool with a higher proportion of quality. I think this might be a contentious view, but my opinion is that it's better to let many "amateurs" experiment, with the best tools from those experiments filtering to the top, than to rely on a few experts to solve the same problems.

Maybe there's a stronger economic argument as it relates to FOSS, I don't know.

I will admit that I have a lot of sunk-cost/first-language-bias that makes me stick with Unix. If Microsoft ever open sources their operating system, I might reconsider that position though.

You can't say that the "hodge-podge nature is most likely a historical artifact of the Unix evolution" and then say that it is "quality". You can't have it both ways. Random, unpredictable, historical quirks do not add up to quality. They add up to a mess.

It's like pointing to a shanty town and saying it is "quality housing" because it has a high population density.

I'm reminded of a recent interview with Jim Keller, who's famous as one of the architects of the x86-64 instruction set, Apple's ARM CPUs, AMD's Zen architecture, and Tesla's AI chip: https://www.youtube.com/watch?v=Nb2tebYAaOA

An interesting throwaway quote, but actually a deep insight, is that he thinks a lot of things should be "redesigned every 5 years or so", but unfortunately are redesigned only every decade or longer.

I totally agree. PowerShell was a clean-slate design, and was called "Monad Shell" originally. It's elegant, and is based on a cohesive concept. It's now 10 years old and starting to accumulate... inconsistencies. Warts. It's in need of a clean-slate design again.

Bash and the GNU tools are the most random mess imaginable that can still function. Its components can trace their roots back to the 1960s. Ancient code that should have been rewritten from scratch a dozen times over. Like you said, it's "evolution", but evolution gave us the trigeminal nerve and the inverted retina. I want a designed system that doesn't take the scenic route and have things back to front because it was "always done that way and is too hard to change now".

If you got sat down and told to come up with a set of composable command-line tools for a shell, the end result would have nothing at all in common with the bash/GNU tools. There is nothing there worth repeating.

I've seen that interview before and that quote in particular resonates with me but at the same time there are basic economics to consider.

I want to be clear: what I meant by 'quality' above is functioning systems. If a hodge-podge solution gives you a 10x improvement at half the productivity, versus an "elegant" framework that gives you a 2x improvement at a tenth of the productivity because it requires re-inventing all the tools, the hodge-podge solution wins.

To me, this is the same basic argument of Dvorak vs qwerty keyboards. Dvorak might be a 1-2x improvement over qwerty but qwerty wins because of cultural momentum. The improvement isn't worth uprooting the infrastructure already in place.

I think programming languages suffer the same fate, where the size of code base, libraries, etc. secures it as the lingua franca even if there are other languages that are marginally better.

As a rule of thumb, I think a replacement needs to be at 10x the improvement before it has a chance of uprooting entrenched systems.

The Unix system might be chaotic but it's had a lot of time to adapt to different needs. There's also a culture of freedom, sharing and experimentation that isn't present in the Windows world. These all have real world implications on what tools are available for experimentation, use and improvement.

> If you got sat down and told to come up with a set of composable command-line tools for a shell, the end result would have nothing at all in common with the bash/GNU tools. There is nothing there worth repeating.

Sorry, no. This sounds like an emotional plea rather than anything rooted in critical thinking. Sorting text files isn't worth repeating? Searching text files isn't worth repeating? Piping text from one process to another isn't worth repeating? I've tried to be even keeled in my responses but this is ridiculous.

> I have no doubt that there are some cleaner interfaces in PowerShell but my concern is the reason for the clean interface is because it's a walled garden with Microsoft as the gate keeper

This is simply not true. Many third parties have PowerShell modules to interact with their products. Anyone can write them.

> UNIX commands are more an accident of history than a cohesive, coherent, composable design. PowerShell was designed. It was designed by one person, in one go, and it is beautiful.

I'm curious about that design and would love to learn about it.

Note that I'm not interested in learning how to actually use PowerShell as a developer; I want to understand the abstraction and design choices that went into it.

Every time I tried to learn about that, I'd find recommendations for books with 1000+ pages and similarly time-consuming resources.

Do you have a recommended read for understanding the design choices that went into PowerShell, without learning all the details that an everyday Windows user would need?

Things I care about:
- How do lambdas work?
- Arrays are objects. How exactly?
- The type system.
- Are there native wrappers for the kernel APIs, and how do they fit into the object model?
- Since, to my understanding, no short flags exist, how is verbosity avoided?
- How tightly are interfaces defined? If a tool provides a new interface in a new version, can old tools still work with it?

There are no good references other than a handful of blogs from over a decade ago. Google "Monad Shell", which was the original name during which the formative ideas were still congealing.

Generally, PowerShell simply embraces the UNIX concept, but with a clean slate. Being based on the .NET runtime, it inherits much of its low-level functionality, such as being able to call into WMI, COM+, and Win32 APIs.

But what it's really about is keeping things in strongly typed objects until the last second, when they are formatted for display. The original UNIX pipes formatted output immediately, because C is not object oriented. So everything ends up as a byte/char stream, and then to glue commands together you often have to parse & split the output. There are ways around this, but it's inconsistent and not at all general.
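A tiny illustration of that parse-and-split fragility, using hypothetical ps-like text (user, pid, command):

```shell
# Gluing text commands means splitting on whitespace, which silently breaks
# the moment a field itself contains a space, as with the user "Bob Ash".
printf '%s\n' 'alice 1234 python' 'Bob Ash 5678 bash' \
  | awk '{print $1}'
# prints "alice" then "Bob": the two-word user name has been torn in half
```

Nothing errors out; the pipeline just quietly produces wrong data, which is exactly the failure mode structured objects avoid.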

So whereas in bash you would see only 2 or 3 commands in a row with a pipe symbol in between them, in PowerShell you would often be able to chain 10 or even 20 commands in a row in a single pipe. You build up pipelines almost like SQL queries, which is easy because the data passing between each step is structured.

The other big win that I see over legacy shells that got built up over decades is the discoverability and consistency. The naming is consistent, and all the commands publish detailed metadata that integrates with both the help system and tab complete.

Issues like verbosity are simply sidestepped. PowerShell is verbose to read, which makes it readable. Typing it is fast, because you can tab complete not just the command names, but the parameter names, and even the parameter values in most cases. No other shell can do this. It has aliases, so if you prefer 'ps', you can use that instead of the canonical 'Get-Process' command name.

Generally all commands simply extend a .NET class called "Cmdlet", which defines functions such as Begin(), Process(), and End() that handle pipeline input. Declarative parameters can be defined, which the shell engine hooks up for you automatically. For calling external APIs, you can do whatever .NET can do, which is pretty much everything in Windows and Linux with only a handful of exceptions (no inline assembly language).

In terms of interfaces, it's a little bit too weakly typed for my taste. When binding parameters or pipelines, the engine goes through a complex process of figuring out what you meant. It can go wrong. The classic example is that some search or query commands can either return a single object or an array, and making this consistent is a bit of a pain.

It supports lambdas, they're simply a block of code with squiggly brackets around them, e.g.: { echo "foo" }. You can do all sorts of fun things with them, like capture a closure with simply: { ... }.GetNewClosure()

Install PowerShell 7 on Linux, type in {}. and press tab to cycle through the list of functions you can call on an expression block. I like toying around with things like .Ast. The engine exposes some crazy powerful things directly, such as being able to generate code for remote RPC wrappers automatically, etc...

Yes, it's not particularly difficult either, as long as you have the appropriate tools installed.

    csvjson path/to/infile.csv \
        | jq -c ".[]" \
        | while read -r LINE; do
            UNAME="$(echo "$LINE" | jq -r ".uname")";
            CNAME="$(echo "$LINE" | jq -r ".cname")";
            if id -u "$UNAME" &> /dev/null; then
                # Find processes with this command name belonging to this user
                comm -12 <(ps -C "$CNAME" -o "pid=" | sort) <(ps -U "$UNAME" -o "pid=" | sort) | \
                    while read -r PID; do
                        # rss=Resident Set Size, i.e. the amount of memory this process is currently
                        # using, not including swap. You might want vsize instead, depending on your needs.
                        RSS="$(ps -o "rss=" "$PID")";
                        STARTISODATE="$(date --date="$(ps -o "lstart=" "$PID")" --iso-8601="seconds")";
                        # If you actually plan to use this you need to make sure pid isn't anything important
                        kill "$PID";
                        jq -n -c \
                            --argjson "pid" "$PID" \
                            --argjson "rss" "$RSS" \
                            --arg "start" "$STARTISODATE" \
                            --arg "command" "$CNAME" \
                            --arg "username" "$UNAME" \
                            '{pid: $pid, memory_kb: $rss, start: $start, command: $command, username: $username}';
                    done;
            else
                # Print a warning to stderr if the user does not exist
                echo "warning: user $UNAME does not exist" >&2;
            fi;
        done |
        # sort_by(-.memory_kb) if you want it in descending order
        jq -s 'sort_by(.memory_kb)' |
        in2csv --format=json \
        > path/to/outfile.csv

I'm guessing you meant "parse the CSV correctly using only bash", and I agree that's difficult, which is why you should not do that. Instead, you should use a tool that has been specifically designed to do that one task, and do it well (in this case, that task is "convert a csv into a format that's easier to work with, namely json"). This is the unix way.

How extensible is PowerShell? With bash, you can write a new command that takes text input in a certain format and produces text output in a certain format, in whatever language you want, and plug it in in the appropriate place in the pipeline. Is the same true of PowerShell?
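In bash terms, that "new command" is just any program that honors the stdin-to-stdout contract; even an inline awk one-liner qualifies as a pipeline stage (toy numeric input here):

```shell
# A drop-in pipeline stage: reads lines of numbers, emits each doubled.
printf '%s\n' 1 2 3 | awk '{print $1 * 2}'
# -> 2, 4, 6
```

Anything that reads stdin and writes stdout, in any language, can be spliced into the middle of a pipeline like this.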

I am properly impressed. Wow!

Most people I give this challenge to fail to account for the complexities of even a simple format like CSV, and typically just use "grep" to find processes, which would cause havoc with user names like "Bob Ash" matching the process name "bash".
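The naive CSV failure mode is just as easy to reproduce: splitting on raw commas tears quoted fields apart. A sketch, with a hypothetical input record:

```shell
# cut knows nothing about CSV quoting, so the quoted second field of this
# record is split at its embedded comma and comes out mangled.
echo '"Bob Ash","bash, the user",42' | cut -d, -f2
# -> "bash
```

This is why the parent solution routes the data through a real CSV parser before touching it with line-oriented tools.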

For comparison, the equivalent in PowerShell is this:

    #Requires -RunAsAdministrator
    # Paste the input rows (header plus data) between the here-string markers:
    $i = ConvertFrom-Csv @'
    UserName,ProcessName
    '@
    $ps = Get-Process -IncludeUserName
    Compare-Object $ps $i -PassThru -Property UserName,ProcessName -ExcludeDifferent -IncludeEqual `
        | Stop-Process -PassThru `
        | Sort-Object PeakWorkingSet64 -Descending `
        | Select-Object `
            ProcessName, `
            @{ Name='StartDate'; Expression={ '{0:o}' -f $_.StartTime } } `
        | ConvertTo-Csv
Note that it's easy to substitute "Import-Csv" and "Export-Csv" to deal with CSV files instead of CSV formatted text.

It's also a trivial extension to convert this snippet into a "ps1" script that can take formal parameters as input from anything that has those two columns, not just CSVs. And those parameters can be supplied manually, validated by the engine (not your code!), and so forth.

What I hope this PowerShell version shows off is just how "neat" it is compared to bash. It's just a pipeline of little processing commands: "get,compare,stop,sort,select,convert". As impressive as your bash solution is, it's... messy. You were forced to write manual loops, use JSON for processing CSV, and do all sorts of admittedly pretty fancy trickery. Ask yourself: Could a junior tech make changes to your script and not break it? Could they even read it? I know I certainly couldn't, and I know more than a little bash!

As to how extensible PowerShell is: More than bash!

The "shell API" is just a byte stream in, and a bunch of byte streams out. That's it.

The PowerShell API is very rich, but it's also very easy to get started. You can simply write scripts or write a "binary module" in any .NET language such as C#. Simply derive from the System.Management.Automation.Cmdlet class, override the processing methods as required, and start writing code. The PowerShell engine will take care of things for you like hooking up input parameters, matching wild cards, expanding file paths, validating inputs, generating error messages in the user's OS language, etc...

In my experience, I can write a "proper" command line tool with all the bells & whistles using C/C++ in about a few thousand lines of code. With modern command-line parsing libraries, it's about 500 lines if I'm lucky. Meanwhile, it's possible to build the equivalent using PowerShell and C# in about 50 lines, of which 40 is just generic C# class boilerplate.

Yeah, the bash ecosystem looks fairly different with regards to dealing with structured data since jq became popular. Basically jq means that as long as you can safely get your input from whatever custom format into json, and from json to whatever output format, you can safely work with structured data.

Incidentally, I think it would have been possible to use only `csvkit` commands and skip the format munging to and from json, but I have an allergy to working with csvs since the format is a bit underspecified and there exist "csv" files that break even the few rules the format is supposed to have. You would be horrified at some of the things I have seen that called themselves "csv".

> In my experience, I can write a "proper" command line tool with all the bells & whistles using C/C++ in about a few thousand lines of code

Yeah, in my experience trying to write command line tools in low level languages is an exercise in pain. I usually write new tools in Python, where similar to your experience writing a simple command is ~20 lines of code, ~10 of which is boilerplate that is the same for every command.

That said, PowerShell does look pretty cool. And I see that it's possible to install it on Linux as well [1], and as far as I can tell even to write my commands in Python [2], so it almost looks like "a unix-style shell but with types", which would be useful.

[1] https://docs.microsoft.com/en-us/powershell/scripting/instal... [2] http://pythonnet.github.io/

Most visual programming is two-dimensional. What if it were three-dimensional instead? Imagine modules/nodes arranged in some sort of https://en.wikipedia.org/wiki/Axonometric_projection, with https://en.wikipedia.org/wiki/Platonic_solid s representing them, chosen according to the number of connections they have with other modules/nodes; the connections styled and color-coded, maybe even unobtrusively/discreetly animated to show the direction/speed/volume of signals/dataflow and the width of datatypes/buses, all in an abstract fashion similar to the way large metro/rapid-transit systems are mapped?

Able to encapsulate other modules/nodes, folding large graphs into one, popping into an https://en.wikipedia.org/wiki/Exploded-view_drawing only on demand?

Turnable/viewable from any direction you like, but usually snapping to some virtual 3D-grid, in say steps of 45° ?


This is even usable the other way around for reasoning about stuff you have NO source code for. Like in https://www.youtube.com/watch?v=4bM3Gut1hIk (skip to about 14m30sec, the show starts there) and https://sites.google.com/site/xxcantorxdustxx (same guy) and others, but similar https://codisec.com/binary-data-visualization and even more so https://arcan-fe.com

That looks messy, but that is because it is in reverse, with no known sources.

Now imagine how tidy it could look when done originally, from first principles.

Your own cybernetic Zettelkasten/Memex/Dynabook/whatever in a world of common and open (architecture neutral) data & document formats, automagically tagging, indexing and sorting all your stuff.

OFFLINE if you like to.

Usable on conventional desktops, tablets, large surfaces, overlaid onto some augmented reality, VR, holographics, I don't care.

With the end goal of partially dynamic reconfiguration/JIT of some fpga-like logic fabric, according to what you click, tap, swipe in that environment, while hosting itself.

Without crashing. Formally verified at the same time.

I was also thinking about this the other day. Essentially we should be able to take advantage of the brain's superb ability to comprehend/remember locations in our day to day programming experience. This would lower the cognitive load of translating the codebase into a mental model, since locations are easier to remember (memory palace) and reason about.

On a related note, the Unison language [1] and its data structure of the codebase lends itself perfectly for this kind of visualization (since you get the dependency graph between functions for free). Imagine being able to see the whole codebase as some kind of large, coarsely detailed factory/assembly line, linking all your higher-level functions together. Only when you "get" sufficiently close, you can make out the inner connections between the functions.

[1] https://www.unisonweb.org/ - a programming language where the definitions themselves are content-addressed

Hm. I remember reading about it, but it isn't bookmarked, so I probably missed something. What I didn't write in my post was different symbology at different levels of zoom. But you got the general idea, and I see what you probably mean.

edit: It would maybe look nice, but not be very useful, to have only Platonic solids, however they are connected. I'm thinking of icons/thumbnails also, like in airports and stations, symbolising data types and (line) encoding schemes, combined with something like https://en.wikipedia.org/wiki/Shunting-yard_algorithm / https://en.wikipedia.org/wiki/DRAKON depending on context/zoom level, and simply NOT fitting to each other if you try to couple them in incompatible/unsafe ways.

2nd edit: the same for common algorithms, have some box/body, define the range/width of possible inputs via some sliders/knobs, which change the possible plugs, ready.

I had dreams about this ~2006, FWIW.

I think there's something to be done there, even if it's only for visualizing existing textual code, via some kind of VR, using human spatial memory and intuitions to better effect than tabs and treeviews and back buttons for navigating files and code structure.

I have led the eviction of LabView from three different engineering organizations now. It's seductive how easily you can get started with building a bench-top system. Then it grows from there, and pretty soon people are doing linear algebra and trying to automate tests by writing data into a database - screen after screen of multiply nested windows. It's not pretty, it's not fast (you need a much more powerful computer to run LabView than a comparable Visual C++ or Matlab program). It's also very hard to document well, really hard for a third party to bootstrap into your code base, doesn't play well with common version management tools, requires a full license for every machine you run it on, and if your developer leaves, it can be hell to hire someone to maintain it - LabView consultants cost a mint and are hard to find.

We had a bake-off once: once you have the instrument control DLLs and link them into Matlab, you can code up a new experiment in Matlab in about 10% of the time it takes to do the same thing in LabView. And from there, you can do pretty much anything that needs doing in terms of data work-up. I'm sure these days you could do the same in Python, which has become the new science and engineering language.

When I started my PhD in 2009 I was greeted with a really complex experiment control system written in LabView. Being the new guy, I at first tried to figure it out and build my experiments on top of it, but I found it so frustrating that I eventually started rewriting it in Python. My colleagues & advisors did not approve of this and thought I would have to spend years getting the same amount of functionality they had realized in LabView. So I did it secretly in the beginning, while continuing to use the old system as well.

After 6 months I had completely replaced the old system with a new one based on Python and C++ (for time-critical data analysis tasks that numpy was too slow for). I implemented a graphical IDE in Qt, similar to Matlab, where you could edit code snippets in multiple windows and execute them interactively. It was similar to Jupyter Lab (which didn't exist back then). I also wrote instrument front panels in Qt, class-based abstractions for all instruments in our setup, and a management layer that took care of initializing instruments and logging all relevant parameters.

It was great fun and my first contact with Python; I'm still amazed that the language allowed me to become productive in it so fast. The system is still in use and has spread to multiple labs by now, and my colleagues have mostly acknowledged that Python is actually a good way to control measurement equipment and run experiments (though they still use LabView for simple measurements).

> LabView ... writing data into a database

You know, this is actually pretty easy to do if you know SQL and use LabView's FFI to call into a DLL for accessing the database. I did this once nearly a decade ago and was able to replace, in a single afternoon, a system that took a (horrifyingly, horrifyingly incompetent - seriously, these people were so awful they needed state-level protection) team a year (?) to develop. Oh, and mine didn't crash; theirs did. Anytime I think "this is the worst code I've ever seen" I stop, remember that code, and shake my head... that particular chunk of LabView just keeps winning (losing?). (The fact that they somehow had some sort of custom C extension with networked I-don't-think-I-ever-knew-but-it-sure-didn't-work in there probably makes it unassailable.)
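For scale, here's roughly what the whole "write measurement data into a database" task amounts to in plain Python with the bundled sqlite3 module (the table and column names are invented for illustration):

```python
import sqlite3

# An in-memory database for the sketch; a real test log would use a file path.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (t REAL, channel TEXT, value REAL)")

# Log a batch of (timestamp, channel, value) measurements.
con.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                [(0.0, "ch1", 1.23), (0.1, "ch1", 1.25)])

(count,) = con.execute("SELECT COUNT(*) FROM readings").fetchone()
print(count)  # 2
```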

That said... kill LabView with fire. Kill it dead. There's a reason that it's not listed on my resume and I'll deny any knowledge of it when people come asking.

(Personally, my biggest issue with LabView is its complete resistance to any form of version control. But there are so, so many other issues to choose from that I won't be sad if you pick a different one to hate on. Just don't use LabView and we're cool.)

My personal mission is to produce a version-control-friendly and open-source LabView/Simulink killer. Visual display for the managers, textual editing to save the poor engineers who have to program the thing from mouse-hand RSI. Do the whole thing in Python/C to make it both hip and fast.

Maybe you want to look at Modelica [1] for inspiration? It is an open modeling language standard with open source implementations. It should do what you want - maybe a bit too declarative, however (you can always use the algorithm block instead of the equation block, though).

[1] https://www.modelica.org/

I was aware of OpenModelica, but hadn't looked at the project in any great depth. I think it's a lot more ambitious than my project, which is restricted (at present) to a single model of computation.

I remember as a kid that there was a thing called Visual C++ which sounds like what you're describing. It let me draw a GUI and connect functions to button actions or entry field contents.

Now I just use emacs and gcc because I write embedded code mostly but I'm trying to understand why your idea is different? Being open source?

Honestly .. I would love for this to happen.

I was recently involved in a LabView eviction. I was shocked to discover that one of the modules which looked superficially complicated was actually "short circuited" and most of the logic was not contributing to the final output. At a glance it looked like everything was wired up, but looking closely there was a single missing link that completely changed what the module did. That was enough to convince me that it's a toy and nothing serious should be attempted with it.
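That failure mode - one missing wire silently disconnecting most of a module from its output - can be stated as a simple reverse-reachability check over the dataflow graph, the kind of thing a linter could run. A sketch with an invented graph:

```python
# A dataflow graph as {node: [upstream nodes it reads from]}.
# Any node the output can't reach walking upstream contributes nothing.
def dead_nodes(edges, output):
    live, stack = set(), [output]
    while stack:
        n = stack.pop()
        if n not in live:
            live.add(n)
            stack.extend(edges.get(n, []))
    return set(edges) - live

# "display" should read "fft", but the wire is missing: it reads "sensor".
graph = {"sensor": [], "filter": ["sensor"], "fft": ["filter"],
         "display": ["sensor"]}
print(sorted(dead_nodes(graph, "display")))  # ['fft', 'filter']
```

At a glance the graph looks fully wired; the check reveals that the filter and FFT stages never influence the output.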

How is that different, though, from forgetting to pass an argument to a variadic function, or having a spurious early return? To the untrained eye, both those scenarios would present the same.

Any compiler or linter would likely complain about unused variables or code in such a situation.

The labview "linter" is of course weirder but I think this is basically one.


PS. Please don't take this as an endorsement of LabView. It has its place in non-CS fields with high turnover, where it's really bad if only the group's current programmer understands the setup. Talking academia here - I probably wouldn't use it in a more stable environment.

Not if a part of the variable is used

I just had this problem in a GUI. There was an edit view, correctly created and initialized with a default value. Then the user could enter a value, but whatever the user entered would be ignored.

How did you manage that?

TLDR -- Event structures are my favorite new feature. Try them out and see if you still hate LabView nearly as much. I also suggest not actually using any of your GUI variables. Treat these like declarations and use local variables everywhere else. Otherwise the wires are impossible to trace and buggy to change.

You've probably fixed it, but you can configure it to execute a code block when a value changes with Event structures.

This structure is a little new, but I started LabView before it existed so I only started using it in the last year or two.

It lets you define code to execute based on interrupt style events like "mouse release" on a button whereas in older versions I would wrap all my button checking logic inside of a big timed or while loop because otherwise as you've noted it won't update the value when doing the logic check.

So my old standard style would be that there's a timed loop around each case structure that is checking buttons at 1kHz against a local variable version of the button's boolean, with a sequence inside the case structure that always resets the button state to zero at the end. Otherwise it doesn't check for button presses if one has already been pressed and is running something, or if you have no while/timed loop it doesn't even check at all since it's a run-once program.

LabView patterns are really not very clear, and my least favorite bit is that there are at least ten ways to do anything, and only one or two are good, but the others all logically seem like they should work until you look really closely.

Edit: Or maybe my least favorite part is that despite being a system designed for instrumentation and controls and data acquisition, my god is it a pain to plot anything.

> How did you manage that?

I probably started coding, and then forgot to finish it ...

(not in LabView to be clear)

Indentation errors in Python can similarly short circuit control flow and are easy to miss for less trained eyes. Disclaimer: I actually love Python.
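A minimal illustration of the kind of slip meant here: one extra level of indentation turns a working membership test into one that only ever inspects the first element, and both versions look plausible at a glance.

```python
def contains_ok(xs, target):
    for x in xs:
        if x == target:
            return True
    return False  # only after checking every element

def contains_buggy(xs, target):
    for x in xs:
        if x == target:
            return True
        return False  # indented one level too deep: bails after xs[0]

print(contains_ok([1, 2, 3], 2))     # True
print(contains_buggy([1, 2, 3], 2))  # False
```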

I guess I'll give an opposing view of LabVIEW since I've worked on and off with LabVIEW over the past 7 years. I don't think it's too different from other text-based languages where it can be turned into a huge mess if the code is not organized properly. This is probably compounded by LabVIEW being targeted towards people who don't start with software backgrounds and don't understand best practices of software design. Using design methods and frameworks like the actor framework (comes with LabVIEW) can go pretty far to develop clean code that can rival or exceed the best examples of text-based source code.

LabVIEW definitely has its strong areas, though. Anything to do with more complex UIs is probably not something you want to use LabVIEW for. I'd probably say LabVIEW is strongest when paired with National Instruments' hardware (CompactRIO), which gives you access to data faster than any other system I've seen. It's also super easy to develop FPGA applications, which is a big plus when working with high-frequency data processing.

Overall I think it fills its niche quite well and don't think it's dissimilar to any other language where you can also make a mess of things if you don't know how to best structure the code.

Yep, LabVIEW is just a gateway drug for NI hardware. The integration with their hardware stack is top notch (as you would expect), and you can go from scratch to sampling a signal _very_ quickly.

National Instruments does not make (tons of) money with LabVIEW licenses. The real dollars are in their hardware offerings, which are very good, but also very expensive.

The hardware isn't even that good. What you're really paying for is the idiot proofing (i.e., input protection) and the warranties. Which -- even speaking as someone who has designed competing hardware -- is often a good tradeoff.

Interesting. Are you talking about the cRIO line or the PXI one?

My experience with the PXI line was that it was absolutely top notch. I’ve seen some really cool things done with that. Most electrical folks I’ve worked with loved the hardware but absolutely despised the price gouging. They were constantly looking for alternatives yet NI hw remained for most things.

The cRIO line.... it’s getting better and it has improved a ton, but in the early beginnings it was extremely buggy, both hw and software.

I'm mostly referring to the analog sections of a few PXI cards I've looked at. This was in the context of "turn this benchtop prototype built with PXI into a form-factor product". I was surprised at how simple most of it is. There are a couple TI/BB highly-integrated PGA/ADCs that NI gets a lot of mileage out of.

But even if it's not as complex or interesting as, say, a Keithley electrometer, NI's execution really is top notch. Technical brilliance alone does not a product make.

I see. My experience was with medium test stands (a couple of thousand channels) in a more industrial setting; we didn't really need high accuracy or specialized ducers (there were maybe one or two of those, and we usually wrote custom software for talking to them). Reliable, accurate enough, and with strong platform support was killer for us. I don't think anyone beats NI at that level of integration.

I have done the same. Labview is confusing and destroys computers, old and new.

What I recall from the 90's was using DOS LabView to make a control UI, and it was pretty remarkable for its time: clean and lightweight and easy to implement.

When I saw what it had turned into a couple years ago my jaw dropped.

Sadly, I think Python isn't really useful in any ecosystem sort of way, last I checked. You can probably FFI the DLLs into it, but that's it. In my undergrad days I once did a "UI", then interned at a company with some horrible LabView applications... and to this day I am horrified that you can sell this, considering you need ages to set up data processing/IO. 10 lines of Python for doing the same with a webservice/any other data provider, 100 in C - 10 screens with LabView...

It’s really easy to control most lab equipment using Python. There is good support for GPIO (which most simple instruments use), and it is also easy to use ctypes to call into external DLLs if you have to. During my PhD I controlled more than 30 different instruments in real time using Python, so it’s definitely possible. The learning curve is much steeper than for LabView, of course, if you don’t have experience as a programmer, but in most scientific fields programming is becoming more important anyway, so most students learn it.
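A sketch of what that instrument control looks like in practice. The usual transport from Python is pyvisa (e.g. `rm.open_resource("GPIB0::12::INSTR")`); the fake instrument below answers a couple of standard SCPI queries so the request/response logic is runnable without hardware (the class and its replies are invented):

```python
class FakeScope:
    """Stands in for a pyvisa resource; answers two standard SCPI queries."""
    def query(self, cmd):
        return {"*IDN?": "ACME,MODEL1,0,1.0",
                "MEAS:VOLT:DC?": "1.234"}.get(cmd.strip(), "")

def read_dc_volts(inst):
    # Same call shape works against a real pyvisa instrument handle.
    return float(inst.query("MEAS:VOLT:DC?"))

print(read_dc_volts(FakeScope()))  # 1.234
```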

I use Python extensively in the lab. In addition to what you say, Python plus Arduino is a killer combination. You can use an "advanced" Arduino compatible microcontroller as a GPIO on steroids with real time functionality, then talk to it using a Python program.
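A hedged sketch of that combination: the Arduino runs a small command loop over USB serial, and Python drives it with pyserial. The one-line reply format below is invented for illustration; the pure-Python parsing runs anywhere, and the commented lines show the real pyserial calls (port name is hypothetical):

```python
def parse_reading(line: bytes):
    # The Arduino replies with e.g. b"A0:512\n" -> ("A0", 512).
    pin, raw = line.strip().decode().split(":")
    return pin, int(raw)

# With real hardware:
#   import serial
#   ser = serial.Serial("/dev/ttyACM0", 115200, timeout=1)
#   ser.write(b"READ A0\n")
#   pin, value = parse_reading(ser.readline())

print(parse_reading(b"A0:512\n"))  # ('A0', 512)
```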

While it might be slightly more awkward at first, having to hook up to external DLLs yourself, it really begins to shine when you can integrate it with all of the scientific programming tools, such as data analysis and graphing.

don't you mean GPIB?

Yes that was a typo.

Having worked with LabVIEW a fair amount, the problems I have with visual programming are:

1) takes up a lot of space

2) each subroutine (sub-VI) has a window (in the case of LabVIEW, two windows), so you rapidly get windows spewed all over your screen. Maybe they've improved that?

3) debugging is a pain. LabVIEW's trace is lovely if you have a simple mathematical function or something, but the animation is slow and it's not easy to check why the value at iteration 1582 is incorrect. Nor can you print anything out, so you end up putting a debugging array output on the front panel and scrolling through it.

4) debugging more than about three levels deep is painful: it's slow and you're constantly moving between windows as you step through, and there's no good way to figure out why the 20th value in the leaf node's array is wrong on the 15th iteration, and you still can't print anything, but you can't use an output array, either, because it's a sub-VI and it's going to take forever to step through 15 calls through the hierarchy.

5) It gets challenging to think of good icons for each subroutine

6) If you have any desire for aesthetics, you'll be spending lots of time moving wires around.

7) Heavy math is a pain visually (so you use the math node that has its own mini-language, and then you're back to text)

As someone who's worked a lot with LabVIEW and automating some considerably old instruments... I hate it, I hate it, I hate it. I hate it with the fire of a thousand suns; almost every paradigm I know is wrong in this bizarro-land, Willy-Wonka, very proprietary and very expensive IDE. Not a fan. Not worth the opportunity cost of not learning other languages that are useful in society. It's the school of thought it subscribes to - the way things are designed just makes zero sense. Learning something that makes you unlearn the right way to do things, in a way that only works with one company's MATLAB-priced software, is no fun at all.

Came here to mention space. I have tried to build "mobile" graphical programming. Looking at the factorial example at flowgrid.org, it should be easy to see how a one-liner takes up the whole screen...

> the problems I have with visual programming are

The problems you have with LabVIEW are. This seems LabVIEW specific, not visual programming specific. Let me contrast your experience with LabVIEW to my experience with Max/MSP:

> takes up a lot of space

Yeah, but my Max code only took up a bit more space than my C++ code ever did. Sure, I now use Clojure, which is a lot denser, but I didn't find the space it took up to be a problem when I used Max. I did aggressively factor code into subroutines, though.

> each subroutine has a window

Agreed, but I think there is room for experimentation here. Synthmaker/Flowstone has a visual map thing at the top where you can see zoomed in/out (iirc) views, and subroutines opened in the main window, which you then navigated via this map. That is, subroutines didn't open in new windows, and the map let you quickly navigate up and down the abstractions. It worked well in my brief time with Synthmaker, so I could envision LabVIEW or Max doing something similar.

> debugging is a pain

I found debugging in Max, where you add probes to connections and can see the data that flows through them, much nicer than debugging C++, Java or Python (three languages I've used extensively).

> debugging more than about three levels deep

The trace/probe window let me see all probes, no matter the depth, so I didn't have this problem.

> It gets challenging to think of good icons for each subroutine

Max used words, but I didn't have to give them names if I didn't yet know what they would do: I could write the code without subroutines to experiment, then select any areas I wanted to abstract (to simplify, or for DRY) and it would put them in a subroutine, which I could then name or not as I felt necessary.

> If you have any desire for aesthetics

I did, but I didn't find I spent much more time on it than I do aligning text - maybe a little. I factored things into subroutines aggressively and generally just kept things aligned for nice straight lines.

> Heavy math is a pain visually

True, I agree with this point, except that I don't think using the math/expression node is a problem, or that going back to text for that is a problem. Right tool for the job: visual for dataflow, text for math expressions. I thought it worked really well.

I'm not saying visual languages are perfect, but I do think that there is lots of room for improvements and just because someone has had a bad experience with one visual language, doesn't mean the whole paradigm has those problems. That's like trying Haskell and complaining that all textual languages are hard because thinking in monads was challenging.

For an interesting take on something I'd love to see explored more: https://vimeo.com/140738254 (Jonathan Edwards' Subtext demo). There are many ideas left to explore, and not enough visual languages exploring them, compared to the many textual languages we see that try new ideas.

It's pretty mainstream in the CG industry. Houdini is a great example of a very interesting take on visual programming in a non-destructive procedural workflow. Visual programming is a staple at Pixar and features heavily in all their internal tools for shading, animation, set dressing, etc. At Pixar they push it to its limits, expressing all manner of development patterns through these tools to artists who don't know any software development at all. I often think of a modified version of your question, which is "Why isn't visual programming a bigger thing for trained programmers?" What needs to improve for that to happen? It's already a big thing for visual artists in the CG industry.

Just adding a bit to how pervasive this is: Blender, Cinema 4D, Max/Pd, Reaktor, Foundry's Nuke, Quartz Composer/Origami and Logic's environment are all examples of creative software that use some form of visual programming language. In fact, I'd even go as far to say that, outside of software engineering, it's the default approach (scripting interfaces are more common but less used, e.g., I'd guesstimate that over 90% Blender users use the node editor, while less than 1% of Photoshop users use its scripting interface).

From my analysis, most of the evidence points to visual programming languages actually being better. But.... there's just one problem, it's not plain text. The advantages of plain text are so powerful, and so pervasive, that they're like oxygen: We're so accustomed to them that it's hard to imagine a world without it, where we'd all be walking around without helmets on. Why is plain text so important? Well for starters it powers version control and cut and pasting to share code, which are the basis of collaboration, and collaboration is how we're able to construct such complex systems. So why then don't any of the other apps use plain text if it's so useful? Well 100% of those apps have already given up the advantages of plain text for tangential reasons, e.g., turning knobs on a synth, building a model, or editing a photo are all terrible tasks for plain text.

Visual programming works great when it all ends up in a set scene, a stable form that you just want to use code to create.

Once the task is to loop over time and do a bunch of processing, all the visual tools break down. They are two separate categories, two separate functions that shouldn't really be treated by the same tool just because they both use 'code'.

Don't think it is that simple.

For example the visual programming languages for music and multimedia Max/MSP and Pure Data.

While you could use them to create a stable form, they are in my opinion much more useful and interesting for rapid prototyping, live coding, creating or connecting interactive systems, etc.

Sure, they are a bit niche, but certainly don't break down as soon as you loop over time and do a bunch of processing.

I had a quick look at Max and PD. I see what you mean, you can constantly run those programs, produce new outputs and edit it on the fly.

My meaning is more narrow I guess. Desktop apps and video games are built to run themselves post delivery. Music/film/3d models/textures end up as arranged bytes that are used as content for the apps and games to display. I guess textual programming languages work best when building the hierarchies and branches of loops that runtime apps have to self-navigate. Visual programming languages work well at symbolically representing many many many different rabbit holes one can draw from to produce an arranged set of bytes.

The visual stuff breaks down when you have to bake in navigating hierarchy after the product is out of your hands. The text stuff sucks at building 3d models vert-by-vert or with pattern matching. I haven't thought it all the way through.

I think we agree.

As I wrote in another comment, where visual programming shines, in my opinion, is as a way to interact with an underlying complex system - an alternative to scripting languages.

To take the Max example further: Ableton Live integrated it deeply into their digital audio workstation. Although Ableton itself - not even talking about all the third-party plugins - provides enough tools to make music with for a lifetime, having Max integrated opened up a whole new world of exploration beyond what the UIs created by the host and plugin vendors offer, and it allowed non-technical people to create and share their own extensions to the software.

I think even for trained programmers there is maybe more potential for visual programming. Maybe not for the core software we are writing, but to at the same time interact with and visualize the software and the ecosystem. I sometimes feel we are a bit stuck to have to either use cli and textual tools or graphical tools with limited and vendor specific UI.

I don't know what this would look like, since I'm a person who almost always defaults to CLI and textual tools, because I know I can get the job done with them. But having used visual programming quite a bit outside of work, I'm intrigued to give it more attention.

Visual for-loops work just fine for iterating over geometry chunks, and can even be parallelized with "compile block begin" nodes in Houdini.

High-level iteration is less intuitive, eg. iterating over random seeds for a sim and rendering preview animations, but the process is still easier than switching the entire thing to text input. The toolkit for this is called PDG or TOP networks.

It's not surprising that visual artists feel at home with visual programming.

There are more examples. Some people have mentioned Matlab/Simulink. In fact, many procedures are defined using some kind of visual programming. For example, in CAD and modeling software (like SolidWorks), you have a tree of operations that is just a visual program. The way effects are handled in PowerPoint is visual programming too. It could even be argued that, when playing a game, you are just doing visual programming.

Mandatory mention of Grasshopper for product design and architecture! Amazingly powerful, can easily compete with most C++ geometry processing libraries

> "Why isn't visual programming a bigger thing for trained programmers?" What needs to improve for that to happen?

Because it's inefficient as hell. That may be mostly a problem with existing tooling, but it is a problem.

For context, I'm currently doing a side project with Unreal Engine 4, which uses visual programming for scripting and defining shaders; I'm sticking strictly to these tools, because I can't be arsed to set up yet another C++ dev environment on yet another machine. Because of that, the pros and cons of visual programming are on the top of my mind right now. Here are some thoughts:

Visual programming is fun while you assemble high-level blocks together. But god forbid, you have to do any maths. You end up spending 10 minutes assembling an unreadable mess of additions, multiplications, LERPs, etc. taking half of your screen, to build an equivalent of three lines of code you would've typed in 30 seconds in your editor.

(This presents an opportunity for improvement: why not give a generic "math" node, where I could just type a math expression by hand, and have the free variables show up as input pins? There's really no point in assembling things like "lerp(A x B, C x D, alpha) * -beta" from half a dozen nodes.)
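That generic math node is easy to prototype: parse the expression and treat every unbound name as an input pin. A sketch using Python's ast module (the list of builtin functions is an assumption about what such a node would provide):

```python
import ast

def input_pins(expr, builtins=("lerp", "abs", "min", "max")):
    # Collect every bare name in the expression; names that aren't
    # builtin functions become the node's input pins.
    names = {n.id for n in ast.walk(ast.parse(expr, mode="eval"))
             if isinstance(n, ast.Name)}
    return sorted(names - set(builtins))

print(input_pins("lerp(A * B, C * D, alpha) * -beta"))
# ['A', 'B', 'C', 'D', 'alpha', 'beta']
```

From there, the node editor would just render one pin per returned name and evaluate the expression against the wired-in values.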

(EDIT: turns out this exists in UE4, see https://news.ycombinator.com/item?id=23256317. I learned about it just now.)

In general, all the dragging quickly becomes annoying. As a trained programmer, you can type faster than you can move your mouse around. You have an algorithm clear in your head, but by the time you've assembled it half-way on the screen, you already want to give up and go do something else.

Visual programming is fun while your program fits on the screen. Because of the graphical representation, you'll usually fit much less code this way than you would in text. This, plus the fact that anything nontrivial won't form a planar graph (i.e. edges will have to cross somewhere), makes you want to abstract your code hard, otherwise it gets impossible to read. And here is where Unreal Engine starts to fail you. Yes, you can make perfectly fine functions - which are their own little subgraphs. But that also means they're different "tabs"; it's hard to see the code using a function and the code of a function simultaneously on the screen, it's hard to dig in and back out, jump around the references.

As an improvement in that particular case, I'd love it if I could expand individual blocks into their definitions "inline", as a popup semi-transparent overlay, or something that otherwise pushes other nodes out while still keeping them on the same sheet. But given the performance of the editor, that would just burn my CPU down.

I think visual programming works well for artists, because when building shaders, you mostly stay on a single level of abstraction - so a flowsheet you can zoom around makes sense. But when you start coding logic and have to start DRYing code, dividing it up into layers of abstraction, the ergonomics starts to wear you down. Then again, I hear good things about LabVIEW, so maybe it can be made to work.

It doesn't have to be 100% visual programming or not, the same as a program does not need to be entirely written in one programming language.

Where I think visual programming shines is as a way to interact with an underlying complex system, like the artists you mention: an intermediate step between a rigid and often limiting classical user interface (forms, knobs, buttons) that someone needs to define and implement, and the "real" code.

And I think trained programmers as well can have use cases for this. For example ETL tasks, continuous integration, etc. For me the important part is, that there is an escape hatch, and I can drop down to writing code if the blocks get in the way.

>>Visual programming is fun while you assemble high-level blocks together. But god forbid, you have to do any maths

Meh, I've worked on two games made in Snowdrop, where pretty much most of the game logic is made directly in the editor, all of the shaders, all of the animations, all of the UI is made using a visual scripting language and it's been more than fine. It meant that we had people who didn't know much about programming "coding" entire logic for animations for instance.

>>You end up spending 10 minutes assembling an unreadable mess of additions, multiplications, LERPs, etc. taking half of your screen, to build an equivalent of three lines of code you would've typed in 30 seconds in your editor.

Yes, which I appreciate is frustrating if you actually know how to type those 3 lines of code, and where to type them, and how to compile the project and run it from scratch. (On the projects I worked on, yes, it would take you 30 seconds to type in the code, but then 10 minutes to build and another 10 to run the game, whereas making those changes in the editor-based visual script you'd see instantly.) Our UI artists, for example, never even had Visual Studio installed, and in fact I don't think they even synced code, since they never had to - visual scripting was such a powerful tool for them that there was no need.

> so on the projects I worked with yes, it would take you 30 seconds to type in the code, but then 10 minutes to build and another 10 to run the game, while making those changes in the editor-based visual script you'd see the changes instantly

That's orthogonal to visual programming, and just the symptom of most programming languages being edit-compile-run. Compare that with UE4 (as far as I know) not allowing you to modify blueprints in a running game. Contrast that with different programming languages. I do some hobby gamedev in Common Lisp from time to time, which is a proper REPL-driven language. There, I can change any piece of the code, and with a single keystroke in my editor, compile it and replace it on a living, running instance of my game.

Visual programming is fun while you assemble high-level blocks together. But god forbid, you have to do any maths

I don't know whether spreadsheets count as "visual programming", but they are a much more accessible way for non-programmers to manage mathematical concepts.

Indeed, spreadsheets can be thought of as a kind of visual functional programming.
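A tiny model of that view: each cell is either a constant or a pure function of other cells, and evaluation is just memoized recursion over the dependency graph (the cell layout here is invented):

```python
def evaluate(sheet):
    # Each cell is a constant or a pure function of other cells;
    # get() memoizes so every cell is computed at most once.
    values = {}
    def get(cell):
        if cell not in values:
            v = sheet[cell]
            values[cell] = v(get) if callable(v) else v
        return values[cell]
    return {c: get(c) for c in sheet}

sheet = {"A1": 2, "A2": 3,
         "B1": lambda get: get("A1") * get("A2"),
         "C1": lambda get: get("B1") + 1}
print(evaluate(sheet))  # {'A1': 2, 'A2': 3, 'B1': 6, 'C1': 7}
```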

Another field where it's very common is with systems lab tests and simulations, notably with Labview and Simulink respectively.

This has come up in comp sci every so often since the 80s, when visual programming first gained traction. I think the short answer is that boxes and wires actually become harder to manage as the software increases in complexity. The "visual is simpler" idea only seems to hold up if the product itself is simple (there are exceptions).

To my mind, this is analogous to textual writing itself vs drawing where text is an excellent way to represent dense, concise information.

UE4 Blueprints are visual programming, and are done very well. For a lot of things they are excellent. Everything has a very fine structure to it, you can drag off pins and get context-aware options, etc. You can also have sub-functions that are their own graph, so it is cleanly separated. I really like them, and use them for a lot of things.

The issue is that when you get into complex logic and number crunching, it quickly becomes unwieldy. It is much easier to represent logic or mathematics in a flat textual format, especially if you are working in something like K. A single keystroke contains much more information than having to click around on options, create blocks, and connect the blocks. Even in a well-designed interface.

Tools have specific purposes and strengths. Use the right tool for the right job. Some kind of hybrid approach works in a lot of use cases. Sometimes visual scripting is great as an embedded DSL; and sometimes you just need all of the great benefits of high-bandwidth keyboard text entry.

Exactly. Take as an example something from a side project of mine that I recently did:


Compare that with this bit of pseudocode:

  function inputAxisHandleTeleportRotation(x, y, actingController, otherController) {
    if (actingController.isTeleporterActive) {
      // Deactivate teleporter on axis release, minding controller deadzone around 0
      if (taxicabDistance2D_(x, y, 0, 0) < thumbstickReleaseDeadzone) {
        sendEvent(new inputTeleportDeactivate(actingController, otherController));
      } else {
        actingController.teleportRotation = getRotationFromInput(x, y, actingController);
      }
    }
  }
Which one is more readable for someone who's even a little bit experienced in programming? Which one is faster to create and edit?

That nested if statement, in particular, looks especially awkward in the blueprint.

Blueprints gain so little from adding spatial movement to code blocks. A normal language has a clear hierarchy of flow from top to bottom, with embedded branches that give a stable backbone to what otherwise becomes a mess of wires. I think a DSL with a proper visual editor, like JASS in war3, starcraft2, or a visual Lua editor, works better in the long run because it "fails gracefully" into learning a programming language, which ultimately should be the growth curve & workflow for using a visual scripter.

Blueprints are great for the material editor where everything is going to terminate in a specially designed exit node and there is little branching logic, but even for basic level scripting it is more messy than it used to be in the industry.

I agree about using the right tool for the job. And I know UE4 Blueprints are popular so they must be doing something right. Personally I tried using them for a while and gave up. My own experience was:

1) It took me longer to create blueprints than it would have taken to write the equivalent code. I kept finding myself thinking "I could do this in 5 seconds with a couple of lines of code"

2) The blueprints quickly became unwieldy. A tangle of boxes and wires that I couldn't decipher. I find code easier to read, as a rule.

3) I didn't find it any simpler than writing code. A Blueprint is basically a parse tree. It maps pretty much 1:1 to code; it's just a visual representation of code.

> And I know UE4 Blueprints are popular so they must be doing something right.

I'd argue that 90% of it is just that they work out of the box. You download the engine bundle and you can start doing blueprints. Being able to write normal code requires you to compile UE from source, which is a non-trivial thing for any large C++ project.

I'm pretty sure if they embedded Lua or Lisp directly into the engine, and provided a half-decent editing environment in-editor, that it would meet with success just as well.

I use Unreal Engine Blueprint visual scripting a lot, and I like it too.

Regarding your point: complex logic and number crunching can be alleviated by using a 'Math Expression' node, which allows math to be written in textual form inside a node.

Holy hell, I didn't know about Math Expression node. Thanks! Now I have a bunch of blueprints to clean up from half a screen of hand-dragged math each.

To come full circle, hardware today is mostly described by textual descriptions instead of diagrams with boxes and wires exactly because at high enough complexity the schematics get totally unreadable. And even on the board/component level the schematics for almost any piece of non-trivial hardware made after 1990 consist mostly of boxes with textual net labels.

I don't think it is an issue of complexity since in both text and graphics you can encapsulate parts into a hierarchy of sub-blocks. Having to break things up from a single level into little pieces because of the screen size and letter/A4 paper is the problem.

In the 1970s and 1980s I did many projects with huge schematics that you would spread out on large tables and focus on details while seeing the whole thing at the same time. It is an experience that can't be captured at all by a series of small pages, each with one to four boxes and lots of dangling labeled wires going to the other sheets. At that point you might as well just have a textual netlist, which made structural VHDL/Verilog become popular replacements for schematics.

Perhaps VR/AR will bring back visual hardware design?

I think the complexity could be reduced in the same ways that we reduce complexity in text based code. We no longer write programs as one giant file after all.

That is exactly right. I don't understand the dichotomy that arises from text-based programmers.

I have written and maintained LabVIEW applications that exceed 1,000 VIs. I would argue that such well-architected applications are actually easier to maintain for all the reasons that people extol functional languages. The reason is that LabVIEW is a dataflow language, and so all the benefits of immutability apply. Most data in LabVIEW is immutable by default (to the developer/user). So the reasons why people prefer languages like F#, Elixir, Erlang, Clojure, Haskell, etc. overlap with visual languages like LabVIEW. I can adjust one portion of the program without worry of side effects because I'm only concerned with the data flowing in and then out of the function I am editing.
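That dataflow/immutability argument can be sketched in ordinary Python (an analogy only; the node names here are illustrative, not LabVIEW):

```python
# Each "node" is a pure function: data flows in, new data flows out.
# Nothing mutates shared state, so any node can be edited in isolation,
# which is the property the comment above attributes to LabVIEW wiring.

def scale(samples, gain):
    # Returns a new list; the input list is left untouched (immutability).
    return [s * gain for s in samples]

def clip(samples, limit):
    return [max(-limit, min(limit, s)) for s in samples]

def mean(samples):
    return sum(samples) / len(samples)

raw = [0.2, -1.5, 0.9, 2.4]
# "Wiring" the nodes together: values pass through, never change in place.
result = mean(clip(scale(raw, 2.0), 1.0))
```

The same pipeline drawn as boxes and wires in a dataflow language carries identical semantics; the text form just serializes the wiring.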

Somehow people form the opinion that once you start programming in a visual language that you're suddenly forced, by some unknown force, to start throwing everything into a single diagram without realizing that they separate their text-based programs into 10s, 100s, and even 1000s of files.

Poorly modularized and architected code is just that, no matter the paradigm. And yes, there are a lot of bad LabVIEW programs out there written by people new to the language or undisciplined in their craft, but the same holds true for stuff like Python or anything else that has a low barrier to entry.

> Poorly modularized and architected code is just that, no matter the paradigm. And yes, there are a lot of bad LabVIEW programs out there written by people new to the language or undisciplined in their craft, but the same holds true for stuff like Python or anything else that has a low barrier to entry.

That’s very insightful, and I think nails part of my bias against specifically LabVIEW, as well as other novice-friendly languages. My few experiences with LabVIEW were early on in my EE undergrad. At that point I had been programming for about a decade and had learned a ton about modularity, clean code, etc. The LabVIEW files were provided to us by our professors and, still being a naive undergrad, I assumed our profs would be giving us pristine examples of what these tools could do; instead, to my programmer brain, what we had was an unstoppable dumpster fire of unmanageable “visual code” that made very little sense and had no logical flow. Turns out it’s just because our profs might be subject matter experts on a particular topic, and that topic wasn’t “clean long-term LabVIEW code”. Later on I ran into similar problems with MATLAB code that our profs handed out, but by that time I had clued into my realization. At one point I was accused by my Digicom prof of hardcoding answers because there’s no way my assignment should be able to run as quickly as it did. (I had converted a bunch of triply-nested loops into matrix multiplications and let the vectorization engine calculate them in milliseconds instead of minutes)

Just like LabVIEW, my bias against PHP comes from the same place: it’s obviously possible to write nice clean PHP, but every PHP project I’ve had to work on in my career has been a dumpster fire that I’ve been asked to try to fix. (I fully admit that I haven’t tried to do a greenfield PHP project since about 2001 or so and I’m told the ecosystem has improved some...)

I lucked out with Python and started using it “in anger” in 2004, when it was still pretty niche and there were large bodies of excellent code to learn from, starting with the intro tutorials. Most of the e.g. PHP tutorials from that era were riddled with things like SQL injections, and even the official docs had comment sections on each page filled with bad solutions.

What did the peer review process look like? This is part of the complaint about maintainability.

It looked liked any other code review process. We used Perforce. So a custom tool was integrated into Perforce's visual tool such that you could right-click a changelist and submit it for code review. The changelist would be shelved, and then LabVIEW's diff tool (lvcompare.exe) would be used to create screenshots of all the changes (actually, some custom tools may have done this in tandem with or as a replacement of the diff tool). These screenshots, with a before and after comparison, were uploaded to a code review web server (I forgot the tool used), where comments could be made on the code. You could even annotate the screenshots with little rectangles that highlighted what a comment was referring to. Once the comments were resolved, the code would be submitted and the changelist number logged with the review. This is based off of memory, so some details may be wrong.

This is important because it shows that such things can exist. So the common complaint is more about people forgetting that text-based code review tools originally didn't exist and were built. It's just that the visual ones need to be built and/or improved. Perforce makes this easier than git in my opinion because it is more scriptable and has a nice API. Perforce is also built to handle binary files, which is also better than git's design which is built around the assumption that everything is text.

I think there's a lot of nice features to be had in visual programming languages with visual compares. Like I said in another comment, I view text-based programming as a sort of 1.5 dimensional problem, and I think that makes diff tools rather limited. If you change things in a certain way, many diff tools will just say you completely removed a block and then added a new one, when all you did was re-arrange some stuff, and there's actually a lot of shared, unchanged code between the before and after blocks. So it's not like text-based tools don't have issues.
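The text-diff limitation described above is easy to reproduce with Python's standard difflib (the snippet is illustrative):

```python
import difflib

# Moving a block of lines is reported as a wholesale delete plus add,
# even though the lines themselves are unchanged.
before = ["def a():", "    pass", "", "def b():", "    pass", ""]
after  = ["def b():", "    pass", "", "def a():", "    pass", ""]

diff = list(difflib.unified_diff(before, after, lineterm=""))
removed = [l for l in diff if l.startswith("-") and not l.startswith("---")]
added   = [l for l in diff if l.startswith("+") and not l.startswith("+++")]
```

The moved `def b()` block shows up as a full removal plus a full addition, even though its lines are untouched: the diff has no concept of "this code moved."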

It may be that I'm just not a visual person, but I'm currently working on a project that has a large visual component in Pentaho Data Integrator (a visual ETL tool). The top level is a pretty simple picture of six boxes in a pipeline, but as you drill down into the components the complexity just explodes, and it's really easy to get lost. If you have a good 3-D spatial awareness it might be better, but I've started printing screenshots and laying them out on the floor. I'm really not a visual person though...

In my experience with visual programming discussions, people tend to point at some example code, typically written in a visual programming tool not aimed at professional software engineers (tools aimed at musicians, artists or non-programmer scientists usually), which generally are badly factored and don't follow software engineering principles (like abstraction, DRY, etc), because the non-programmers in question never learned software engineering principles, and use that as an example of visual programming being bad. They tend to forget that the exact same problems exist (sometimes much worse, IMHO) when these same non-programmers use textual languages.

We're just used to applying our software engineering skills to our textual languages, so we take it for granted, but there exists plenty of terribly written textual code that is just as bad or sometimes worse than the terribly written visual code. That's why sites like thedailywtf.com exist!

You know, I've encountered various systems in my life that I thought were a silly way to do something.

And then I set out to replace them with something "better". And during the process of replacing, I came to understand and form a mental map of the original system in my mind. And at some point I realized that the original system was actually workable. And it was easier to just use the old system now that I understood it enough.

yes the flexibility of visual programming gets complex and hard to understand when you try to scale up and have more people involved

> I think the short answer is that boxes and wires actually become harder to manage as the software increases in complexity. The "visual is simpler" idea only seems to hold up if the product itself is simple

Why? Can you give concrete examples of this? I see this sentiment a lot but never any particular examples.

When you start off with a prototype in a text-based language for something "simple", you don't extend it by continually adding in more and more input arguments and more and more complex outputs. You breakout the functionality and organize it into modules, datatypes, and functions.

Same thing goes in a visual programming language like LabVIEW. When you start adding in more functionality to move beyond simple designs, you need to modularize. That reduces the wires and boxes and keeps things simple. In fact, I liken well-done LabVIEW to well-done functional-code like in Racket, Scheme, F#, SML, etc., where you keep things relatively granular and build things up. Each function takes in some input, does one thing and returns outputs. State in LabVIEW is managed by utilizing classes.

You’d probably like Bret Victor’s inventing on principle talk: https://m.youtube.com/watch?v=PUv66718DII

He demonstrates some good examples.

I’m not quite sure it’s what you mean by visual, but it seems obviously valuable and tools that enabled some of what he shows would be really useful.

I’m not sure why we don’t have these things - I think Alan Kay would say it’s because we stopped doing real research to improve what our tools can be. We’re just relying on old paradigms (I just watched this earlier today: https://www.youtube.com/watch?v=NdSD07U5uBs)

Thanks for the Alan Kay talk, I really enjoyed that.

Bret's talk was great! Thank you for sharing it.

His website also has some great stuff: http://worrydream.com/#!/LearnableProgramming

I discovered TouchDesigner [1] a few months back- it’s an incredibly powerful visual programming tool used in the entertainment industry. It’s been around for well over a decade and is stable enough to be used for live shows. Deadmau5 uses it to control parts of his Cube rig [2]. I’ve seen a few art installations based around it as well [3].

There are some really amazing tutorials and examples here: https://youtu.be/wubew8E4rZg

[1] https://derivative.ca

[2] https://derivative.ca/community-post/made-love-touchdesigner...

[3] https://derivative.ca/community-post/making-go-robots-intera...

TouchDesigner is indeed super cool. And I correctly guessed what that tutorial was going to be before I clicked on it. :) Mathew Ragan is an excellent tutorial maker. He's also relaxing to listen to.

TouchDesigner really showcases the enabling nature of visual programming languages. You can see your program working and inspect and modify it while it is working. These are very powerful ideas, and visual programming languages are much better platforms for ideas like this.

People, i.e. traditional programmers, are really hard and down on visual programming languages. Meanwhile, people who use LabVIEW, TouchDesigner, vvvv, Pure Data, Max, and Grasshopper for Rhino are all extremely effective and move quickly. Experts in these environments cannot be kept up with by people using text-based environments building the same application.

Text-based programming is limited in dimensionality. This can become very constraining.

> Text-based programming is limited in dimensionality. This can become very constraining

In contrast, I've often thought that visual programming tools are much more limited dimensionally, and that's why they can become difficult to manage beyond a low level of complexity: you only have two dimensions to work with.

With the visual programming tools, the connections between components need very careful management to prevent them becoming a tangled and overlapping web. In a 2D tool (e.g. LabVIEW), you could make a lot of layouts simpler by introducing a third dimension and lifting some components higher or lower to detangle connections - but then you'd face similar hard restrictions in 3D.

Text based programs suffer from no such restrictions; the conceptual space your abstractions can make use of is effectively unlimited, and you can manage connections and information flow between pieces of code to maximize readability and simplicity, rather than artificially minimizing the number of dimensions.

> the conceptual space your abstractions can make use of is effectively unlimited, and you can manage connections and information flow between pieces of code to maximize readability and simplicity, rather than artificially minimizing the number of dimensions.

How does this not apply to a visual language like LabVIEW? Just because you draw the code on a 2D surface doesn't prevent abstraction and arbitrary programs. The way I program LabVIEW and the way it executes is actually very similar to Elixir/Erlang and OTP. Asynchronous execution and dataflow are core to visual languages. You are not "bound" by wires.

When you write text-based code, you are also restricted to 2 dimensions, but it's really more like 1.5 because there is a heavy directionality bias that's like a waterfall, down and across. I cannot copy pictures or diagrams into a text document. I cannot draw arrows between comments to the relevant code; I have to embed the comment within the code because of this dimensionality/directionality constraint. I cannot "touch" a variable (wire) while the program is running to inspect its value. In LabVIEW, not only do I have a 2D surface for drawing my program, I also get another 2D surface to create user interfaces for any function if I need. In text-languages, you only have colors and syntax to distinguish datatypes. In LabVIEW, you also have shape. These are all additional dimensions of information.

> Text-based programming is limited in dimensionality. This can become very constraining.

> When you write text-based code, you are also restricted to 2 dimensions, but it's really more like 1.5 because there is a heavy directionality bias that's like a waterfall, down and across.

These are two really good points. Text-based code is just a constrained version of a visual programming environment. So far most attempts at VP have attempted to represent code in terms of nodes and lines (trees), but that does not necessarily need to be the case.

The interesting thing about VP is that it presents a way to better map the concepts and structure of programming to how we physically interface with our coding environments.

It's far from likely that character and line-based editing is the mode of the future. Line-editing maps to the reality of programming in, to my eyes, such a limited way that it seems the potential for new interfaces and modes of representation is wide open.

It's not that, as some people stubbornly say, there's no better alternative to text-based programming, but I think we just haven't conceived of a better way yet. We're biased to think in a certain way because most of us have programmed in a certain way almost solely by text and most of our tools are built to work with text. But that doesn't necessarily mean that the way we've done things is the best way indefinitely.

There's so much unexplored territory. VR opens up new frontiers. What if the concepts of files, lines, workspaces were to map to something else more elemental to programming as an abstract concept. What if we didn't think so much in terms of spatial and physical delineations, and instead something else? Blind programmers have a different idea of what an editor is. Spreadsheet programs "think" in terms of cells in relation to one another. What about different forms of input? Dark Souls can be beaten on a pair of bongos. Smash Bros players mod their controllers because their default mode of input isn't good enough at a high level. Aural and haptic interfaces are unexplored. Guitars and pianos are different "interfaces" to music. Sheet music is not a pure representation of music.

I think there's the mistaken belief that text == code, that text is the most essential form of code. Lines, characters are not the essential form of code. As soon as we've assigned something a variable name, we've already altered our code into a form to assist our cognition. Same with newlines, comments, filenames, the names of statements and keywords. When we program in terms of text, we're already transforming our interpretation of code and programming; we've already chosen and contrained ourselves to a particular palette.

The most essential form of code is (depending on the language, but generally) data structures and data flow. So far, our best interpretation of this is in the form of text, lines, characters, inputted by keyboard onto a flat screen - but this is still just one category of interpretation.

All this is to say is that text is not necessarily the one and only way, and it's too soon to say that it's the best way.

These are all excellent points, and I agree whole-heartedly. I'm glad someone else gets it. :) I'm going to favorite this comment to keep it mind.

The way I see it is that we've had an evolution of how to program computers. It's been:

circuits -> holes in cards -> text -> <plateau with mild "visual" improvements to IDEs> -> __the future__

I think many programmers are just unable to see the forest for the trees and weeds, but visual languages show a lot of power in specific domains like art, architecture, circuit and FPGA design, modeling tools, and control systems. I think this says something and also overlaps with what the Racket folks call Language Oriented Programming, which says that programming languages should adapt to the domain they are meant to solve problems in. Now all these visual languages are separate things, but they are a portion of the argument in that domain-specific problems require domain-specific solutions.

So what I believe we'll have in the future are hybrid approaches, of which LabVIEW is one flavor. Take what we all see at the end of whiteboard sessions. We see diagrams composed of text and icons that represent a broad swath of conceptual meaning. There is no reason why we can't work in the same way with programming languages and computers. We need more layers of abstraction in our environments, but it will take work and people looking to break out of the text=code thing. Many see text as the final form, but like I said above, I see it as part of the evolution of abstraction and tools. Text editors and IDEs are not gifted by the universe and are not inherent to programming; they were built.

This has already happened before with things like machine code and assembly. These languages do not offer the programmer enough tools to think more like a human, so the human must cater to the languages and deal with lower-level thought. I view most text-based programming languages similarly. There's just too many details I have to worry about that are not applicable to my problem and don't allow me to solve the problem in the way that I want. Languages that do provide this (like Elixir, F#, and Racket) are a joy to use, but they begin to push you to a visual paradigm. Look at Elixir, most of the time the first thing you see in an Elixir design is the process and supervision tree. And people rave about the pipe operator in these languages. Meanwhile, in LabVIEW, I have pipes, plural, all going on at the same time. It was kind of funny as I moved into text-based languages (I started in LabVIEW) to see the pipe operator introduced as a new, cool thing.
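For readers unfamiliar with it, the pipe operator mentioned above can be approximated in Python with a small helper (a sketch; `pipe` is a hypothetical name, not a library function):

```python
from functools import reduce

# Thread a value through a sequence of functions, left to right --
# the textual analogue of one wire running through a chain of nodes.
def pipe(value, *funcs):
    return reduce(lambda acc, f: f(acc), funcs, value)

# Equivalent to Elixir's  5 |> double() |> inc() |> to_string()
result = pipe(5, lambda x: x * 2, lambda x: x + 1, str)
# result == "11"
```

In a dataflow diagram this chaining is the default; a second, parallel chain is just another wire, which is the "pipes, plural" point above.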

In general, I have many, many complaints about LabVIEW, but they do not overlap with the complaints of people unfamiliar with visual programming languages, because I've actually built large systems with it. Many times, when I go to other languages, especially something like Python, I feel I've gone back in time to something more primitive.

When VP is mentioned I think people automatically assume hairy nests of nodes, lines, trees. Slow and inefficient input schemes.

Text-based representations have tremendous upsides (granularity, easy to input, work with existing tools, easy to parse), but they also have downsides I think people tend to overlook. For example, reading and understanding code, especially foreign code, is quite difficult and involved: a lot of concentration, back and forth with API documentation, searching for and following external library calls ad nauseam. Comments help, but only so much. Code is just difficult to read and is expensive in terms of time and attention.

> Text editors and IDEs are not gifted by the universe and are not inherent to programming; they were built.

Bret Victor has some good presentations that addresses this idea. One thing he says is that in the early stages of personal computing, multitudes of ideas flowered that may seem strange to us today. A lot of that was because we were still exploring what computing actually was.

I don't dislike programming in vim/emacs/ides. Is it good enough? Yes, but... is this the final form? I think it'll take a creative mind to create a general-purpose representation to supersede text-based representations. I'm excited. I don't really know of anyone working on this, but I also can't see it not happening.

I'd propose that LabVIEW is just as dimensional as any text language while still presenting a 2nd dimension of visual reference.

LabVIEW has subVIs, which are effectively no different from any subroutine or method in another language. LabVIEW has dynamic dispatch so it can run code with heterogeneous ancestry. You can launch code asynchronously in the background, which isn't even necessary to accomplish multi-threaded execution in LabVIEW (though there are plenty of other gotchas for those used to manual control of threading, along with a couple of sticky single-threaded processes that might get in your way when trying to write high-level reusable code). You can even implement by-reference architectures, adding yet another way to break out of the 2D-ness of its diagrams. Perhaps a new development for most here will be that LabVIEW is now completely free for personal use (non-commercial & non-academic). Still, like some have pointed out, LabVIEW really shines with its hardware integration. It's the Apple of the physical I/O world. The only reason I avoid it for anything larger than small projects is its need for a not-tiny run-time engine, which isn't any different from .NET distributables, just more... niche?

With text you get top to bottom lines of text (a single dimension) and any additional dimensionality has to be conceptualized completely in your mind... Or in design tools like UML... which display relations in a 2D manner. SQL design tools these days provide 2D visualizations in a graph-like manner to relate all the linkages between tables. User stories, process flow, and state diagrams are (or at least should be) mapped out in 2D in a design document before putting down code. How does the execution order of functions and the usage of variables provide any more freedom?

All I want to establish is that LabVIEW is another tool in the toolbox. People used to text are used to SEEING a single dimension and thinking about all the others in their head or outside the language. LabVIEW places two dimensions in front of you, which changes how you can/have to think about the other dimensions of the software. With skilled architecture of a LabVIEW program the application will already resemble a UML, flow, or state chart. I do agree that some stuff that feels much simpler in text languages, such as calculations, is much more of a bear in LabVIEW; tasks that are inherently single-dimensional in their text expression suddenly fan out into something more resembling the CMOS gate-logic traffic light circuit I made at uni.

I do embedded uC development with C/C++, I do industrial control systems and automated test in LabVIEW, and I even subject myself to the iron maiden of kludging together hundreds of libraries known as configuration file editing with a smattering of glue logic AKA modern web development (only partially sarcastic, if I never have to look at a webpack config file again I'll die happy). I (obviously by now) have the inverse view of most in this thread. For simple stuff I use C#. Microcontroller based projects I use C/C++. For larger projects I'll use LabVIEW.

Then, when something has to run in a browser I stick my head in the toilet and smash the seat down against my head repeatedly. Then I'll begin to search google for the 30 tabs I'll need to open to relearn how to get an environment setup, figure out which packages are available for what I'm trying to do, learn how to interact with the security of the backend framework I'm using, learn the de facto templating engine for said framework, decide which of the 4 available database relation packages I want to use for said backend, spend a week starting over because I realize one of the packages I based the architecture around was super easy to start with but is out of date; has expired documentation; conflicts with this other newer library I was planning on using for some other feature... Now I need a cold shower and a drink.

Cheers mates!

P.S. I do find a lot of modern web development fun, but the mind-load on top of all my other projects and professional work can be a bit much. I'm sure someone that started out in webdev has the same exact vomitous reaction to something like LabVIEW.

I'm curious as to which difficult concepts become easy to understand when presented visually. To me, the difficulty of programming has never been in its textual representation. Just as in mathematics, the real challenges have always been related to conceptual understanding. Is there a good example of visual programming making difficult concepts easy?

A good example would be Max/Msp. It's used by a lot of musicians and creative coders. Visual representation abstracts away the boring parts of programming and enables much more rapid experimentation. Quickly rearranging nodes is more intuitive than changing how you pass around objects.

For me, personally, I think in boxes and lines and when I program, I spend a lot of times scribbling on paper to map relationships between data, to show the data flow between modules or concepts, to show the pipeline stages, that kind of thing. I do it mainly as a way to organise or solidify my thoughts, less to document them.

When I used Max/MSP some years ago, I had the interesting experience that I found I didn't need to do this on paper because doing it in Max code was enough. I've wished for a Max-like language more geared towards general purpose programming[1] ever since, but alas, I've yet to find one.

[1] Richer data structures (I think it may have got these since actually), tools for unit testing, stuff like that.

> Is there a good example of visual programming making difficult concepts easy?

Analog computers with patches and plugs, ancestors of Max/Msp and similar software, were successful because they familiarized users with differential equations much more intimately than other tools, like symbol manipulation. Also, they were often more powerful, as analytic methods were limited to, say, solving linear equations.

After the fall of electric analog computers and rise of digitals, there were simulators of analog computers, but in text form. So you would trick the digital computer to perform continuous calculations, but you still had to write code. This code would represent a graph where nodes were analog functions, and edges were 1 dimensional values. (tensorflow is somewhat similar).

In regular modern programming, you don't usually try to make a graph of value transformations, so fitting different concepts to visual space does not make them more understandable. That's my guess for why visual programming did not succeed in wide areas.
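As a concrete sketch of that "graph of value transformations" idea, here is a minimal digital simulation of an analog computer solving dx/dt = -x with Euler steps (the node names in the comments are illustrative):

```python
# One feedback loop: a summing node computes the derivative, an
# integrator node accumulates it. On a patch-and-plug analog machine
# this would literally be one wire from the integrator's output back
# into its (inverted) input.
def simulate(x0, dt, steps):
    x = x0
    for _ in range(steps):
        dxdt = -x          # the "summing amplifier" node
        x += dxdt * dt     # the "integrator" node
    return x

# Exact solution is x(t) = e^(-t); after t = 1.0 we expect roughly 0.368
approx = simulate(1.0, 0.001, 1000)
```

The textual version obscures the wiring that the patch panel made physically obvious, which is arguably why these simulators felt like a step backward.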

If there was anything that would fit visual programming, my bet is analog computing and control theory, as they were traditionally very connected. I've been working on a little toy graph editor for a while; event handling gets complicated in JavaScript very quickly, and there are SVG issues, etc.

Instead of having to compute the code in your head and stepping through the changes mentally it’d be easier to see it visually.

This doesn’t mean replacing the code since text is an efficient way to write. I’d want the code to generate a real time representation of its flow.

Imagine writing a graph and running Dijkstra’s algorithm and seeing the nodes appear as you write them, seeing edges appear and seeing the traversal next to the code.
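A sketch of what such a tool would need from the code: a Dijkstra implementation that records each settled node as an event a visualizer could animate alongside the source (a generic textbook version, not any particular tool's API):

```python
import heapq

# Shortest paths from a source; `visited` is the event stream a live
# visualizer could replay, highlighting each node as it is settled.
def dijkstra(graph, source):
    dist = {source: 0}
    visited = []
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; node already settled cheaper
        visited.append(node)  # a visual tool would highlight this node now
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist, visited

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
dist, order = dijkstra(graph, "a")
# dist == {"a": 0, "b": 1, "c": 3}; order == ["a", "b", "c"]
```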

We do this anyway in our heads or poorly with debuggers. We do this on whiteboards when trying to understand the problem.

If this was built into our tools we could offload the computing of our code itself to the computer and get immediate feedback. It’d be a lot easier to reason about.

Tree structures, recursion, etc. In the end the intuitive understanding of code good programmers have would come, but it could come a lot faster and easier. As a bonus you could see non obvious patterns you might otherwise miss.

Showing the "big picture", the data flows, the relations, etc. are all prime candidates where visual presentation can work better than text. A well-made diagram can be more informative than any amount of text.

On top of that, having your code in a structured source, you can easily manipulate the structure itself and get alternative views. There is literate programming, which kind of goes in this direction; there you detach the source-code organisation from its technical organisation. Though this is of course not directly linked to "visual" programming, it works from the same premise.

I rarely see Grasshopper [1] mentioned in these threads. This is a very successful visual programming tool used by designers and engineers, primarily for generating geometry.

Where I work (structural engineering firm) some of the engineers do use it for general purpose programming. I see the appeal of it, but keeping the layout tidy and all the clicking is just too much effort. However for generating geometry it's quite a useful tool.

[1] https://www.rhino3d.com/6/new/grasshopper

Written text IS a visual medium. It works because there is a finite alphabet of characters that can be combined into millions of words. Any other "visual" language needs a similar structure of primitives to be unambiguously interpreted.

You say visual programming seems to unlock a ton of value. What can you do with a visual language that is much easier than text? Difficult concepts might be easier to understand once there is visual representation, but that does not imply creating the visual representation is easier. And why should pictures be more approachable than text? People might understand pictures before they can read, but we still teach everyone to read.

The term “visual programming“ generally refers to spatial diagrams (usually 2D, but 3D especially for 3D subject matter).

Think coordinates, graphs, nodes, edges, flows, and nested diagrams.

“Visual” is especially meaningful in that many relationships are shown explicitly with connected lines or other means.

So yes, for many things a diagram, tree, or table structure actually laid out in two dimensions to match what it represents is easier to understand.

Surely you appreciate diagrams in educational material despite the text. Surely you have drawn graphs or other kinds of diagrams when you need to visualize (spatially) relationships between parts of something you are designing?

If not, you just have a different style of thinking than many other people.

That contrasts with text code where connections are primarily discovered by proximity of code or common symbols.

Of course text is visual in that it’s a visible representation.

Spreadsheets are a good example of a combination of text “code” embedded in a visual table representation.

So writers are painters too because they have eyes? In a sense yes, but it's not what most people would accept.

> What can you do with a visual language that is much easier than text?

Experiment an order of magnitude faster than you can with text. What might take me 2s with a UI might take you 20s in text or more. You also don't have to care about coding style or naming a ton of variables (just your nodes), so it removes lots of boilerplate.

Visual programming is usually domain specific, so the UI is fine-tuned toward a certain problem. So comparing C++, for example, to Max/MSP is missing the point. Visual programming is about solving domain-specific problems while text-based programming is general (to an extent... don't write device drivers in PHP).

I am working (with a grant) on 2 different concepts of visual programming; one is kind of more traditional wires/boxes dataflow type but with some novel twists and the other is something completely new. I have built many of these in the past (some of them ran production for years before getting phased out in favour of a 'normal' PL) and it never felt right, but I keep trying. Normally I started top-down; first visual tooling then, as an afterthought, the storage of the language (xml for instance, jikes). Now I started with the language in textual form (not meant to program in directly but readable) and that works much better; so there is a language, runtime, debugger and introspection tooling all done and now we are putting 2 different visual frontends on top to see if it works. So far the results have been good; our sponsor is looking to launch something as a product and it's quite cool work.

Lame question: how do you find such grants?

I am not very silent about what I think is broken about programming and sometimes I find people with the same feeling. This one is from a company I bumped into; they have a db/codeless type of thing and they want to make it easier for people to use. It is particularly not an investment; maybe that comes after if it works; it is money to play around with different ideas which they give out for this type of purpose.

Got it, thanks! It just struck me that my visual programming project has stalled exactly because I lacked a vision of how to find time/money for it, and I never thought about the possibility of grants.

If you don't mind sharing some of your papers/opinions here, what do you think is broken about programming?

Is there any demo to look at ?

Not yet, unfortunately; (and again) I went for money so I'm under NDA etc. But there will be before the summer ends (deadline is end of August).

I'm taking a very very similar approach -- starting off with a computational model driven by an easy-to-diff data structure that can be serialised to YAML/JSON/XML and then looking for ways of building a UI on top of that. I'd love to share experiences with you -- NDA permitting, of course.

Can you drop me an email ? It's in my HN profile. It sounds we are indeed doing similar things.

In practice, most visual tools aren’t compatible or easily usable with things like Git for diffing. This gets tough for large projects.

Labview does have a visual-diff tool, but when I was using labview regularly on a complex project, no-one used the diff system. They just checked out the entire file and compared it visually to another version.

Another thing: you can’t ctrl-f for control-flow structures. You end up mousing around for everything.

Another problem, all major graphical languages I’ve used are proprietary (labview, simulink, Losant workflow system).

Successfully working in teams in LabVIEW means having experienced architect(s) who can effectively divide up the design into something where developers won't step on each other. When you inevitably have to merge efforts that do collide, you usually just punt, pick one of the two, and do some rework. There are some options available in LabVIEW for making parts of how it works more compatible with source control, but at the end of the day they're still binary files, and the prescribed option can impede some other workflows that are sometimes needed. If LabVIEW source were saved in some hierarchical structure such as XML (that would be another layer to learn, so not really much of a fix) it could play better with patch/merge. The diff and merge tools really aren't terrible; I just think that people are so used to not needing to configure external tools for this that they don't even know it's possible. I can double-click a VI in a git log and see highlighted differences between versions just like people can in any textual tool.

If you label structures (you can turn on a structure's label and then give it some unique text) it becomes searchable. You can also create # bookmark comments that link to structures/nodes/anything on the block diagram.

NI just released a completely free for personal use version of LabVIEW (unlike their former "cheap" home edition that was watermarked and lacked features like compiling executables) though my belief is that they don't have an outwardly evident plan of long term strategy which is only likely to further the majority opinion of the platform. Also this ultimately doesn't help people that may be interested in using it in a work / academic environment or anyone that's used to a less expensive hardware platform. Ladder logic programming feels more akin to chiseling into stone, and the platform support in the NI ecosystem can be quite nice for a lot of applications.

I've been developing a visual reactive programming language, Bonsai (https://bonsai-rx.org/) for the past 8 years and it's proven successful in neuroscience research. It is an algebra for reactive streams and addresses some of the issues raised in this thread about GPVPLs:

a) it's amenable to version-control;

b) you can type/edit a program with the keyboard without the need for drag-drop;

c) the auto-layout approach avoids readability issues with too much "artistic freedom" for freestyle diagramming languages like LabView;

d) because its mostly a pure algebraic language w/ type inference there is little complicated syntax to juggle;

e) the IDE is actually faster even though Bonsai is compiled, because there is no text parsing involved; you are editing the abstract syntax tree directly.

It definitely has its fair share of scalability problems, but most of them seem to be issues with the IDE, rather than the approach to the language itself. I've never probed the programming community about it, so would be curious to hear any feedback about how bad it is.

To represent a program as a graph you need a hyper-graph. That is a graph where every edge is also a node, for example:

There is no good way to draw this visually without losing the ability to draw all possible programs. The grammars of both natural and programming languages use various strategies to get around this, and they are actually very efficient at it (from an information-theoretic point of view).

That's not to say one can't improve on the current culture of writing programs in what is basically ASCII text, but visual programming is just not powerful enough to describe arbitrary computations.

Could you go into detail why you think that a hypergraph is needed? I think every program can be represented by a directed graph of basic blocks.

Higher-order functions, maybe?

The following function has an obvious visual representation as a processing block:

    int add(int a, int b);
But how do you represent a function allowing other functions in its signature, e.g.:

    void transformInPlace(int* input, int (*func)(int));

The same way the processor does. The function argument becomes a pointer you jump to when you want to call the function.

In LabVIEW I wire in a reference to a VI that provides the functionality or I use a design similar to the command pattern where there's an override method and I specify the class instance as the argument. Depends on the granularity of the design needed and how many different classes I want to define (usually not that many).

I don't think you are wrong about the DAGs, though in programming language parlance they are called abstract syntax trees (AST). Sure, you can do like Scratch, which is basically an AST editor instead of a code editor. Not a bad idea in and of itself, but what if you start writing those basic blocks in your language of choice? Keep looking up the function definition in your IDE and at some point you end up at some assembly with long jumps all over the place.

The point I wanted to make was that no matter what design choice you make as a language author to make this DAG/AST palatable for the end user, be it with an imperative, functional, logic, or concatenative style of programming, a price is always paid in terms of complexity at some level (time, space, ergonomics), no matter what you do.

I'm not thinking of the AST, but of a graph of https://en.wikipedia.org/wiki/Basic_block

You didn't answer the question. ASTs are not DAGs, they are just trees. They also do not represent the computation, but the language syntax.

Many visual programming languages are based around DAGs and they do represent the computation itself. The edges are usually not nodes either, so those DAGs are not hypergraphs.

hi wrnr,

can you explain your graphic? what is the difference between (2) and [2]?

It is a bit like neo4j's cypher, (2) is an edge, like a circle, [2] is a label, like in a property graph.

Sort of like that, cypher does not support this type of expression though.

Just dropping DRAKON into the mix. https://en.wikipedia.org/wiki/DRAKON

It's actively designed to prevent several typical problems of visual programming:

First, the overlapping wire mess getting out of hand. It doesn't let you overlap wires, if your program gets this complicated it forces you to break it down into a smaller unit. It takes some adjustment, but you get benefits from it.

Second, the "can't use it for general purpose stuff" problem is solved by the fact that DRAKON can output to multiple different programming languages, meaning you can use it to build libraries and other components where you want without forcing you to use it for everything.

Difficult concepts can't be represented in visual form. There's a hard limit to the amount of complexity you can represent visually. On the far side of that line everything starts to look like spaghetti.

But... this can be improved with good tooling and clear abstraction layers. Some of the problems with visual systems are caused by poor navigation and representation systems. Making/breaking links and handling large systems in small windows makes for a painful level of overhead.

This also applies to text - jumping between files isn't fun - but visual representations are less dense than text, so you have to do more level and file hopping.

If someone invented a more tactile way of making/breaking links and you could develop systems on very large screens (or their VR equivalent) so you could see far more of the system at once, hybrid systems - with a visual representation of the architecture, and small-view text editing of details - might well be more productive than anything we have today.

I love this idea and I'm already picturing a mashup of LabVIEW class hierarchies/VI call hierarchies, but it's Dwarf Fortress with multiple z-level view enabled and you can mousewheel up and down through the pyramid of the application architecture. I'm just lost as to how you'd relate/visualize the dynamic instances of classes/modules so that you could have a unified debugging environment, like you do with the regular block diagrams.

I've recently debugged LabVIEW code that resulted in the taskbar listing of windows names to overflow. Definitely less than optimal.

When I first got into programming, I asked myself the same thing. The low-hanging-fruit task is creating something like a CRUD app. Trivial drag & drop CRUD forms are pretty achievable.

Let's say you whip up something cool using visual programming, and then you have a business requirement that requires something you can't easily squeeze into your CRUD app: maybe a database join, a query that doesn't cleanly fit into your access patterns, or you just wanna make a certain thing faster.

Then you design a scripting console, and now you have something that lets you build custom solutions. Well, at that point you're basically implementing non-visual programming. And at a certain point you reach the limits of what you can script in a hacky way, or you become more comfortable with the console than the UI, and you just chuck the visual programming altogether.

As I'm writing this, I'm thinking that I actually do visual programming, except I'm doing it IN MY MIND. Who needs a body-brain interface when the goal of using your hands is to get it into your brain, but it's already in my brain?! Well, it'd be nice to get some stuff out of my brain cuz cognitive load. And as much as I'd like to develop tech that makes it easier to get stuff out of my head, I'm mired down in trying to get the latest feature from the product team to work at all :)

How about visual representations at different levels, while sticking to non-visual programming? I find that a static visual helps here and there, but full-on visual programming would be a pain to use in my use cases.

Try making a complicated program in a visual programming language and you'll see why pretty quickly.

My company used Azure ML Studio, which is a great program for making quick ML predictions. But making any kind of reasonably complicated data processing pipeline takes a lot of effort. I switched to writing code to process and run my predictions and my life became much simpler.

Language is extremely expressive and you can pack a lot of meaning into a small space.

I will just add some practical experience: we work with the WebMethods flow language (used for business integration). It looks cool because you can make "code" which is more readable for non-IT people. The problem is that some tasks which would require one or two lines of code need several graphical "nodes" and just too many clicks. The more complex the diagram gets, the easier it is to make mistakes, and eventually some things have to be implemented in Perl, Python or Java because the flow language also has its limitations. I would say it is great for simple or medium-complexity solutions, but very complex solutions tend to be messy and developers tend to avoid them. They say it is easier to iterate and skim through complex text code, where you can search and use IDE features, than to expand and click through the whole graphical diagram. The graphical notation also does not show all the information, so you cannot get it by scanning the code; instead you need to manually click and open the nodes to get e.g. the connecting interface name. The graphical notation needs to abstract away some information, otherwise it would be messy and hard to read.

Let me refer you to a 2018 post called "Visual Programming — Why It's a Bad Idea":


As soon as anything gets beyond basics, and you require a "power user" to either comprehend, fix or add features - suddenly visual programming becomes a fucking pain in the arse.

And then you're required to go into code / the "source" representation or deep "configuration" of the visual elements, which just takes 10x longer than writing code in the first place, and suddenly the last mile takes months to get right.

Unreal Engine has a visual scripting system called Blueprints. It's very strange to follow the logic, the work is mouse-heavy, code review and merging are hard, and it feels very convoluted compared to actual programming. However, the visual scripting for materials/shaders and particles in Unity, and AI with behaviour trees, is quite nice.

Visual scripting is growing but it’s better for some things than others

Making shaders with Blender's visual scripting is satisfyingly easy to learn. Domain-specific stuff seems to work quite well in that way.

Show HN:


> Luna is a data processing and visualization environment built on a principle that people need an immediate connection to what they are building. It provides an ever-growing library of highly tailored, domain specific components and an extensible framework for building new ones.

Looks like you're new here. Is that your project? If not, you have misunderstood what "Show HN" generally means. Click "Show" at the top of the screen, then "rules".

We have covered Luna here quite a few times: https://hn.algolia.com/?q=luna+lang

It's not mine, but I wanted to post it here because the project is quite impressive in itself, and it also fits this topic.

At work, I am required to use SPSS Modeler, which has a visual programming model, and I mainly dislike it because there is no way to easily diff and find out what has changed.

That aside, I think most of us actually code in a visual programming style, but all the "visuals" are constructed in our heads on the fly as we read the code text. So how good you are at coding may be a function of how well you can represent these structures and how long you can maintain them in your head. Maybe an external tool that does it for us produces a representation that doesn't mesh well with the internal representation of programmers experienced in text-based programming.

Saying "Visual programming will replace text-based programming" is like saying "Flowcharts will replace novels." People tend to prefer text over diagrams for anything longer than a page, partly because text follows the structure of natural language.

Another issue: it's hard to pretty print visual programs or to put them in any sort of canonical form. This makes it harder to read them--there's no way to enforce a consistent style. It also makes various processing tasks (e.g. diffing, merging) much harder.

Aside from everything else, visual programming as it's usually implemented is inherently slower than typing text when the latter is not ‘hunt-and-peck’. Because, after some experience with the keyboard, your brain knows exactly how to jerk a finger to get a particular character and can do that very fast (touch typing particularly excels at this since the normal positions of the fingers are fixed—and it's aptly called ‘blind typing’ in some languages). Meanwhile, to lay stuff out on the screen you need to 1) find with your eyes where to grab a block and 1b) where to put it; 2) move the mouse to grab the block and 2b) to drop it, and sometimes also do that with connectors too. Both of these kinds of operations—visual and manual—are way slower than mechanical jerking of the limbs, especially due to them being prone to error. Even worse if you have to drag-and-drop instead of just clicking. All this fiddling requires you to pay close attention to things on the screen, while typing mostly allows the visual system to coast along or tune out altogether. Multiply this by hundreds of words a day, and you'll see that visual programming is in large part an exercise in mouse-brandishing amid visual load, in the vein of FPSes. And usually you have to regularly move the hands to the keyboard anyway, to name your blocks, variables and whatnot.

On phones, the difference shrinks since typing relies on just two thumbs with no fixed position and no physical keys—while onscreen manipulation gets more immediate compared to a mouse. However, phones suffer from short supply of screen area where you pick and place the blocks. In my experience, it would still make sense to choose blocks with typing (by filtering in real-time from the entire list), and to minimize the work of placement—like in Scratch or Tasker.

Visual programming might have distinct value if it allowed to manipulate higher-level concepts compared to text-based coding. However, it turns out that the current building blocks of code work pretty well in providing both conciseness and flexibility, so visual blocks tend to just clone them. Again, the situation is better is you can ride on some APIs specific to your problem domain—like movement commands in Scratch or phone automation in Tasker and ‘Automate’. Similarly, laying out higher-level concepts like in UML or database diagrams has its own benefit by making the structure and connections more prominent.

A typo: that should've been ‘if you can ride on APIs’.

I think in some ways spreadsheets are visual programming tools. You have data, functions and layout / presentation, all working together in one space.

I think this is one of the draws of spreadsheets for simple "programs" and non-programmers. And of course, spreadsheets are ubiquitous.

It was a running joke that every year in our school another student would start their master's thesis journey in "visual programming for the masses, but that works!".

Can you start by using a WYSIWYG HTML editor and making a really good webpage, to see the benefits and drawbacks?

So, what do you make of the fact that most electronic / avant-garde music composition students learn Max/MSP and are in general quite successful at it without even a computer science background?

"Computer Science is no more about computers than astronomy is about telescopes.". I think you meant: programming-background.

I didn't mean to be condescending, it was just a very common master thesis subject.

Go ahead and write your visual programming language, maybe I'll learn a thing or two.

> "Computer Science is no more about computers than astronomy is about telescopes.".

I don't understand the relationship between that sentence and what I said. You can do visual programming on paper without any computers involved, and I've seen a fair share of artists actually do that. TBF in my native tongue there's only one word for anything related to computer science, programming, etc which is 'informatique', so that may bias things a bit.

> Go ahead and write your visual programming language, maybe I'll learn a thing or two.

I actually went ahead five years ago or so as part of a team :-) https://hal.inria.fr/hal-01364702/document - currently instantiated as https://github.com/OSSIA/score

Looks really polished for such a small team! Great work. :)

Is there a visual programming language that is self hosting? i.e. one where the entire system is written in its own visual language including the compiler and any runtime VM?

Plenty are. I'd say almost all visual programming languages are like this, at least the ones that have native code for their OS of choice.

That seems incorrect. The most commonly mentioned ones in this thread are:

LabVIEW (C, C++, C#): https://en.wikipedia.org/wiki/LabVIEW

Unreal's Blueprints seem to produce C++ code? I can't find any mention of it being used to implement itself: https://game-ace.com/blog/unreal-engine-blueprints/

Max/MSP (C++): https://en.wikipedia.org/wiki/Max_(software)

Please name a few of the "almost all" visual programming languages that are self-hosting.

Delphi, Lazarus, Visual Studio - to name just a few

None of those are visual programming languages. A graphical UI editor is entirely different from what we're talking about, which is graphical code.

oh, then based on that argument you're not human, just a bunch of atoms

Visual Studio is as visual as C++ is.

I think at least part of it is image. "real" programmers dismiss visual languages, especially ones aimed at kids like Scratch or Snap as toys, not the kind of thing Real Programmers use. (insert references to all the Real Programmer humor)

Aside from that, there is the issue of tooling (source control, etc), editing large blocks, etc. which the visual languages I've used are not great at.

But it should be recognized that some things are better visually and some things are better textually. Typing "a = b + c" is way simpler than dragging a bunch of blocks around to describe the same thing. But visual tools are superior for understanding relationships - a connects to b, which connects to c makes a lot more sense when you see it as "[a] -> [b] -> [c]", and an ascii diagram like that quickly becomes unwieldy while graphical boxes still work.

There's an interesting comparison between drawing diagrams with a diagramming tool (e.g., Lucidchart) and with a textual description language (PlantUML). I find the textual language far easier for quickly producing diagrams, but Lucidchart is superior for tweaking the exact dimensions and alignments of things.

All of which is to say, both approaches have cases where they work better, and others not so much.

100% agree about the difficulty of calculation expressions in visual languages. Stuff like math and binary communications over serial, TCP, etc. just feels incredibly tedious compared to what I know how to do in one or two lines of C.

What I definitely appreciate in my professional life is the ability to directly map the high-level design of an application or module into nodes on a diagram and then descend down, filling in the implementation. Not really much of a difference from developing in text languages, except that there's a physical layout that matches, nearly directly, the documented UML, user stories, flow charts, and sometimes state diagrams of the design documentation. When you're doing combined architecture and implementation work it can nearly eliminate the mental-load difference between the two, or at least it does for me, having now worked in this visual environment for 8 years. (I grokked it many years ago)

You are completely ignoring the vast swathe of the 'engineering' programming market that is covered by Simulink, LabVIEW, etc.

Not 'engineering' programming, but real engineering programming. I did a lot of that. Automotive, aerospace, space shuttle, power stations and such.

No syntax or type bugs, just logical or more like physical bugs. Because you are modelling physics, and sometimes the model is just not good enough. Still vastly better than traditional C++ models.

Problems: No diff tool. You can hardly see what changed. That is like shipping updated lisp images or binaries without source code to the devs. You also get a lot of windows, like 30 for a typical small model.

In college I had to use LabVIEW, a visual language normally used for automation. I found it significantly harder to work with compared to programming the robot in C. Part of it had to do with my familiarity with the language / learning what the shapes meant, but another part was trying to juggle the program's layout. Eventually, everything became a big mess and was hard to maintain.

Using LabVIEW over C did have some benefits. Streamlined concurrency seemed like a major advantage.

Multiple 4k monitors help, but IMO the limitation really is "all the code can be displayed at the same time".

> IMO the limitation really is "all the code can be displayed at the same time"

You don't have to have all your code inside a single diagram. Any professional LabVIEW programmer has a rule that a single VI (basically function or method) should only require a single modestly sized monitor to view it, aside from a few exceptions. This is akin to text-based languages having a sweet spot of 500-1,000 lines of code per file and keeping a function within a single screen without needing to scroll. Anything above starts to become unwieldy.

The size of a LabVIEW diagram isn't a limitation of the medium just like the size of a text-based programming language's file isn't a limitation of the medium. It all boils down to the programmer needing to modularize appropriately.

I interned in a research lab where half the PhDs had their teams use some outdated version of LabVIEW and the other half all used Simulink. There was no real winner in terms of feature parity (both tools have very well-developed ecosystems), but the LabVIEW version they used couldn't zoom in or out.

Simulink was much preferred.

> Eventually, everything became a big mess and was hard to maintain.

You need to architect your application using classes and functions just like in any other language with whatever appropriate data abstractions present.

I'd love to see an editor for textual programming languages that can display the code visually. For example a tool which can show the code within a single JavaScript function as a flow chart. I don't see why that shouldn't be possible but I've never found one.

Note I'm not talking about class diagrams, I want to see a flow chart of the actual imperative code (for loops, if/then, etc...) of an existing popular text-based programming language.

They have been around for a while, especially 10-20 years ago -- I've seen these plug-ins wane in popularity. I think the reason is the same as the arguments here about VP -- although a flowchart is nice in principle, it becomes unwieldy and quickly loses any value once it doesn't all fit on the screen -- now you're "navigating" rather than reading -- there is no gestalt to grok.

I'd love to see that as well. I might try to give it a go for Python code - try to generate the control flow graph, data dependency graph and syntax tree, and find a way of flipping between them.

Visual programming, imo, is not popular for general purpose programming largely because there doesn’t presently exist, and it’s unclear if there ever will exist, general purpose visual programming tools that work well and provide a notable benefit over text-based programming.

When you add some constraints in, like for example, the limitations of a spreadsheet, visual programming can work exceedingly well. It works great for these domain specific usages. But honestly, a text document of textual statements is a pretty good way to represent a general purpose programming language made up of procedural statements. You could make a UI like Scratch for other programming languages, but:

- the interface would be cluttered and likely not nearly as efficient as just typing

- other than virtually eliminating syntax errors, it's unclear what you are accomplishing - it's not easier to break down problems or think procedurally.

- You could probably get similar benefits with a hybrid approach, like an editor that is aware of the language AST and works in tokens.
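For what it's worth, the token-aware half of that hybrid idea is easy to sketch with Python's stdlib tokenize module; the rename helper below is a hypothetical illustration of an edit that can't introduce the errors a raw text replace could:

```python
import io
import tokenize

def rename_identifier(source, old, new):
    # Rewrite only NAME tokens, so occurrences of `old` inside
    # strings and comments are left untouched.
    toks = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and tok.string == old:
            tok = tok._replace(string=new)
        toks.append(tok)
    return tokenize.untokenize(toks)

src = 'total = 0  # keep "total" in this comment\nprint(total)\n'
print(rename_identifier(src, "total", "subtotal"))
```

A naive string replace would have mangled the comment; working in tokens (or full AST nodes) is what buys you the "virtually no syntax errors" property without a Scratch-style UI.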

So my view is that visual programming is perfectly mainstream and just has not been demonstrated to have substantial benefits for typical general purpose programming languages.

I worked with a team using a visual programming tool once. The tool connected programming blocks with arrows. The complexity of the finished program was such that it looked like a motherboard layout, full of arrows (traces) - and the idea that you could follow the control flow from that was laughable.

It does not end well. The results are not pretty. Stick to text representation of any control flows.

My take, based on what I have heard so far: I've been amazed for a good while now by the oN-Line System (NLS) from Engelbart, Sketchpad by Sutherland, Smalltalk from Xerox PARC, Nelson's Xanadu and later Bret Victor's demos. They were/are visually and philosophically strong, and seemingly inspired countless weaker systems that in turn somehow got picked up as the "industry standard".

Compromises were made, quick-fixes on quick-fixes made text interfaces just usable enough, sunk costs grew and habits formed. The visual programming I see in game engines now carries those habits with it, because to build a language of nodes you first have to learn the ways of ASCII code.

And from what I understand, hardware is optimised for whatever software is popular enough to sell, so even if the software changed, the hardware would take longer. It takes an awesome goal to justify starting over on a truly visual interaction path when there is a system that almost, kinda works. And what-ifs are not in the budget.

Well, so you're using the term "visual" as though it were a commonly understood term, but implicitly excluding all of Microsoft's products with Visual in the name?

I've been trying to implement something with Power Automate, and presumably that's "mainstream", but it strikes me as falling into the classic pattern of appealing to buyers rather than users. I feel 10-100 times less productive than with, say, VBA, for no advantage.

One thing that is particularly frustrating to me is that it's so slow and buggy I am afraid of losing my work at any moment. You can't save your work unless it passes preliminary validation, but sometimes reopening it makes it invalid by removing quotes or whatever. Copying something out and pasting it back often fails to validate too, as the transformations are not inverses like they should be. Sometimes it just gets corrupted entirely. I'm not aware of any way to manage versions, or undo beyond a few trivial actions.

But the more fundamental reason I hate this is because it seems not to be designed to let you take a chunk of logic and use it in a modular way. At least this style of "visual programming" seems to apply the disadvantages of physically building things out of blocks, where it's entirely unnecessary. You've got some chain of actions A->B->C, but the stuff inside those actions is on a different level; you can't take that chunk of stuff and use it as a tool to do more sophisticated things. As far as I can tell. I keep thinking "it can't be as simplistic as it seems" and thinking I'm about to find a way to create general functions.

See: https://flow.microsoft.com/en-us/

Visual Studio : visual programming :: VI (VIsual editor) : Blender

That's one possible reason, but anyway, that's implicitly why I brought up Power Automate, to determine if that was the reason. Would you call it (formerly Microsoft Flow) visual programming? Because it certainly is frustrating to me in a way that traditional programming is not. Anything this awful must qualify as visual programming...

Today I was trying to figure out if I could work around some of my problems by converting everything to XML and using XPath to manipulate it but I didn't get far and apparently Microsoft only does XPath 1.0.

It's been a dream concept for a very, very long time. My company (long before I joined) started in 1995 making a visual programming language. We keep the site up for posterity: http://www.sanscript.net/index.htm

Lots of examples on Progopedia: http://progopedia.com/version/sanscript-2.2/

It turned out, people weren't really interested. However, people were interested in the diagramming library created to make the language, so by virtue of already having thought really hard about what goes into good diagramming tools, my company started selling that. First as C++, then Java, then .NET, now as a JavaScript and TypeScript library called GoJS: https://gojs.net (Go = Graphical Object).

What I would like to see is a hybrid approach - text input on one side, and a visual graph on the other. I'd like to be able to live-edit a graph by typing, and see how the data flows between objects. Likewise I can rewire the graph and have the references in the code update.

It's one of those ideas I have no time to implement sadly, at least for now.

You might like to explore the Luna language: https://luna-lang.org/

Because complex programs ramp up in difficulty of reading and consumption (i.e "looking at") way faster than text does.

I would say it's the opposite: visual programs give you more ways to zoom in and out and trace what is happening. From what I saw in Mendix, it makes code navigation and understanding way easier.

It might also be related to how we learn in different ways. Some need to view, others to reconstruct and visualize in their heads.

Control flow is hard to describe visually. Think about how often we write conditions and loops.

That said - working with data is an area that lends itself well to visual programming. Data pipelines don't have branching control flow, and so you'll see some really successful companies in this space.

Alteryx has an $8B market cap. Excel is visual programming as well.

Aren't conditionals and loops easier in visual languages? If you need something to iterate, you just draw a for loop around it. If you need two while loops each doing something concurrently, you just draw two parallel while loops. If you need to conditionally do something, just draw a conditional structure and put code in each condition.

One type of control structure I have not seen a good implementation of is pattern matching. But that doesn't mean it can't exist, and it's also something most text-based languages don't do anyway.

> If you need two while loops each doing something concurrently, you just draw two parallel while loops.

Not quite. You'd need to draw two parallel boxes, each of which is strictly single-entry/single-exit, and draw a while loop in each box. This is because a while loop

      +--------------------+
      v                    |
  --> * --> «cond?» --> [fn]
              |
              +-->
depicts parallel flows that do not represent stuff being done concurrently! Once you acknowledge that, pattern matching actually becomes easy: just start with a "control flow" pattern and include a conditional choice node with multiple flowlines going out of it, one for each choice in the match statement. You're drawing control flow so it's easy to see that the multiple flowlines represent dispatch, not something happening in parallel.

Here's a picture of what I was talking about:


Now, the two while loops, as shown here, have no dependencies between each other and are indeed processing in parallel. However, there are various mechanisms in LabVIEW to exchange data between the two loops, the most common being queues, in which case they process concurrently.

You can also have a for loop iterating on an array.


In LabVIEW, it's nice because it's trivial to configure the loop iterations to occur in parallel (if there are no dependencies between iterations), using all available cores in the computer.

And by pattern matching I meant something like the pattern matching and type destructuring you find in SML, Ocaml, F#, and Elixir.

Yes, that's really just abstracting away the control flow I was depicting in my simple diagram. It's treating "while" as a higher-order function of sorts, with its own input(s) of type "data flow diagram connecting types Xs and Ys". That's the best you can do if your language only really deals in dataflow, as with LabVIEW. And that's really what leads to the difficulty you mention with pattern matching. Pattern matching on a variant record is inherently a control-flow step, even though destructuring variables in a single variant should clearly be depicted as data flow.

Not sure it's that hard - what about Σ and 𝚷? Branching conditionals are also easy to represent graphically.

One of the unappreciated facets of visual languages is precisely the dichotomy between easy dataflow vs easy control flow. Everyone can agree that

  --> [A] --> [B] -->        ------>
represents (1) a simple pipeline (function composition) and (2) a sort of local no-op, but what about more complex representations? Does parallel composition of arrows and boxes represent multiple data inputs/outputs/computations occurring concurrently, or entry/exit points and alternative choices in a sequential process? Is there a natural "split" of flowlines to represent duplication of data, or instead a natural "merge" for converging control flows after a choice? Do looping diagrams represent variable unification and inference of a fixpoint, or the simpler case of a computation recursing on itself, with control jumping back to an earlier point in the program with updated data?

Either of these is a consistent choice in itself, but they don't play well together as part of the same diagram unless rigorously separated.

A different point is that some high-level abstractions do have a fairly natural visual representation. This even includes the "functors" and "monads" that are well-known in FP. In general, one should not be surprised if category-theoretical abstractions turn out to be representable in this way. (Many visual languages actually turn this into a source of confusion, by conflating operations on an atomic piece of data with operations on a stream or sequence of singular data points. A stream is a kind of functor, which is why it (with its peculiar operations) is endowed with a simple representation in dataflow. But there are other functors of interest.)
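A minimal illustration of that conflation, in hypothetical Python rather than any visual language: the same function can be lifted into a list context or an optional context, and a diagram that draws `f` and `map(f)` as the same box is blurring exactly this distinction.

```python
# Two everyday functors: a list of values and an optional value.
# "fmap" lifts an ordinary single-value function into each context.
def fmap_list(f, xs):
    return [f(x) for x in xs]           # apply f to every element

def fmap_optional(f, x):
    return None if x is None else f(x)  # apply f only if a value exists

def inc(n):
    return n + 1

print(fmap_list(inc, [1, 2, 3]))  # [2, 3, 4]
print(fmap_optional(inc, None))   # None
print(fmap_optional(inc, 41))     # 42
```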

I've always thought that the best solution for disagreement is to simply try everything and then figure out after the fact what works and what doesn't. I don't think there should just be one visual language, we don't just have one programming language and if we did you could bet it would be something terrible. The biggest hurdle is the implementation of the UI, it's hard to make it usable and a lot harder than just putting characters on lines.

The implementation of the UI could be made generic. Make a UI that allows for playing with boxes and arrows in the usual visual-language-y way, and ultimately outputs a serialized, optimized representation that can be fed to a compiler-like pass. Then the semantics, and the pre-defined components to go with it, can be defined separately.

I used LabVIEW for a while. I noticed a couple things. First, it is actually physically laborious to create and edit code. I got really severe eyestrain headaches and wrist fatigue from it. Also, programs (including one written by a certified LabView consultant) bigger than one screen become very difficult to read and maintain. While LabVIEW programs can be refactored like any other language, the physical labor involved discourages it from actually happening.

I think another issue is that it's costly to create a visual language, discouraging experimentation with new languages. With a text based language, all of the editing tools are already there -- a text editor -- on any decent computer. You can focus on refining your language, and getting it out there for others to try.

I think it is because programming is inherently about working with abstractions and visual programming is typically removing abstraction and making details of your program visible before you.

Part of learning to program is learning to work with abstractions, especially if you have never been exposed to something similar (mathematics, physics, engineering, etc.). Things go out of sight but you still need to train your brain to manage them.

This is a bit like playing chess. A good, experienced player will be able to plan long in advance because his brain has been trained to spot and ignore irrelevant moves efficiently. If you imagine training that would let the player learn to recognize good moves but not learn how to efficiently search the solution space, you would be training a brute-force computer that would not be a very good player.

I think visual programming is a different thing from regular programming. I think compromises like Logo https://en.wikipedia.org/wiki/Logo_(programming_language) are much better teaching tools. You still program a language with syntax but the syntax is very simplified and the results (but not the program) are given in a graphical form that lets you relatively easily understand and reason about your program.

This is slightly tangential but I think is related in some way to the broader question of the effectiveness of VP. I can say from my experience with my son and trying to teach other kids programming, that visual programming does not seem to promote actual understanding, especially of concepts like composition, re-use, etc. Kids can write animation sequences or even simple games that "work", but then have absolutely no idea how to generalize -- every visual program is a one-off. As soon as you start teaching them Python (or whatever) then they start to understand what's going on. This is why I don't like the "gamification" of learning in general.

Another way to look at it is the 7 +/- 2 rule of short-term memory attention -- when you look at something and try to "grok" it (a gestalt experience) you really need a limited amount of information in your visual field. To do this you need to move to higher and higher levels of abstraction, which is a linguistic operation - assigning meaning to symbols. Even in visual programming, you end up with a box that has a text field that holds the name of the exact operation - so you may as well cut out the middleman and stick with the linguistic abstractions.

Now, if a program is "embarrassingly visual" -- dataflow operations in signal processing, etc., the visual DSLs do seem appropriate.

Here is an early example of visual programming for scientific visualization from 1989:


The idea spawned many imitators (VTK, IBM DX, SGI Iris Explorer). The product was spun out of Stellar shortly afterwards, and the company is still in existence:


It's not efficient for large scale things. It's like communicating with memes, you can't exactly write a news article with just memes.

It's useful for allowing more "citizen devs" (regular folk with little exposure) to come up with prototype, high-level proof-of-concept apps, including UX design. It is a big deal in the corporate arenas I've had exposure to, but I think widespread adoption is still years away.

You will always need non-visual languages to do things in a featureful and scalable way.

> It's not efficient for large scale things.

> You will always need non-visual languages to do things in a featureful and scalable way.

What is an example of a system that you have developed where things broke down with a visual programming language?

I needed to make an API request to a cloud service provider, but it supported only one provider and even then not the API auth (OAuth2) I needed. I couldn't even begin to figure out how to implement the API myself or patch in OAuth2 support with just the visual language's facilities.

How is that a limitation of the visual programming paradigm and not a library problem? And that doesn't have much to do with scaling to a large program or system.

The library isn't in a visual language. You can't do things with it if an interface/lib to do that thing hasn't been implemented by a non-visual language.

That is again a limitation of the particular language you were using (which one?) and not a limitation of the paradigm. There's nothing there that's an inherent problem of visual programming languages, which is my point in asking.

For example, in LabVIEW, you have TCP/IP, HTTP, UDP, CAN, serial, MODBUS, and more protocols and can build things out of them. If there's a missing protocol, then you can write your own, call a C/C++ DLL, .NET assembly, Python script, or a command line application, just like any other language (actually more than most languages).

I have not seen any solution for tracking changes, differences, displaying versions, etc., i.e. git for pictures. Some visual languages can turn an area into a 'subroutine', but I have not seen any solution for building libraries of reusable 'subroutines'. I used to draw flowcharts on size D (22.0 x 34.0 in) and size E (34.0 x 44.0 in) sheets of paper. I wish I had a monitor of either of those sizes.

Visual programming works very well for data flow problems.

> I have not seen any solution for tracking changes, differences, displaying versions, etc., i.e. git for pictures.

LabVIEW has a diff capability, and while working for National Instruments, my team actually had a quite capable custom code-review system built out of this diff. This is an area that is brought up often, but text-based diff tools weren't magically found in the universe. They were built, and some of them are good and some of them not. It's not a paradigm's fault that tools like git were built around the idea that code must be text.

I do agree that better tools need to exist though, but there isn't a reason for why they can't. They just need to be built. There are hard and interesting problems in that space, both technically and design-wise.

> Some visual languages can turn an area into a 'subroutine', I have not seen any solution to build libraries of reusable 'subroutines'.

LabVIEW has a third-party developed package manager in the JKI VIPM. And LabVIEW has features where you can put your VIs and classes into various LabVIEW specific containers, most usually source libraries, that can then be referenced and re-used in projects. Just treat the libraries as modules like you would in any other language. They should contain classes and functions that have a shared or particular purpose.

> It's not a paradigm's fault that tools like git were built around the idea that code must be text.

FWIW you can have custom diff and merge drivers in git. One could probably hook up LabVIEW's diff tool as a git diff driver, I don't know about merge.
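For reference, wiring an external compare tool in as a git diff driver takes two small pieces of configuration; `labview` here is just an attribute name and `lv-compare-wrapper` a hypothetical script, not real LabVIEW tooling:

```shell
# 1. In .gitattributes, route the files through a custom driver:
#
#      *.vi binary diff=labview
#
# 2. Register the driver command. Git invokes it with seven arguments
#    (path, old-file, old-hex, old-mode, new-file, new-hex, new-mode),
#    so a small wrapper can pass the two temp files to any compare tool:
git config diff.labview.command 'lv-compare-wrapper'
```

The seven-argument calling convention is standard git behavior; merge drivers are configured analogously via `merge=` attributes, though as noted below, a semantic merge is the genuinely hard part.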

Yes, but it isn't easy. Some people have done so, and I should probably revisit it myself since I now use GitHub in my later jobs and not Perforce. I mentioned this in another comment, but we had some tools integrated into Perforce's workflow to do diffs.

Merge is harder in LabVIEW and isn't where it should be. However, that doesn't mean it can't exist.

It is pretty big in engineering. Labview or VEE. TLDR: the more complex the program becomes, the worse it is to use.

I used it to easily connect to a piece of lab equipment, reset it, set whatever settings I want, run a test, and then log the output to a file. I could set up a test, then walk away and return to data. Doing the tests manually would take many months.

Both have labels as remarks/comments, and you can easily put in a switch statement to test new code, or use highlighting to see exactly where the program is running, albeit slowly.

One of the fun things to do was circle a repetitive task then turn it into a function. A large program requires a large screen to see it all. Widescreens are terrible for it.

After basic settings and availability in libraries, it is better to move to a text language. Visual programming is a quick and dirty solution.

We have a Labview program for our circuit board tester and it is a bit of a nightmare. At least with visual coding it is obvious, literal "spaghetti code".

My only experience with LabVIEW was in another job, writing a DLL that it could import so the poor sucker working on the code could do a bunch of complex state machine stuff without having to drag "wires" all over the place in LabVIEW. That ended up with a design pattern of "route inputs to DLL function, route output to next stage" that turned out to be much easier to maintain. Partly because it enforced modularisation, and partly just because a series of if statements and function calls is easier to read than a diagram.


Also, Lego have a whole series of graphical languages for their electronics, and those are very cool but extremely limited. Once you get past "when this switch is pressed make this motor go" it is easier to hit the textual language.

Simulink has the ability to make Blocks out of Matlab (or any FFI supporting language) code and run them in the simulation loop. We used this for state machines at our lab.

I believe the Lego Mindstorms stuff was actually LabVIEW under the hood? I seem to remember the NI folks handing out large Mindstorms sets as pressies to engineers because they had a partnership with Lego.

It was semi-common to see engineers using the "big boy" tool, LabVIEW, to play with Lego Mindstorms actuators etc.

Visual programming is really hard to debug. Just as people tend to give the advice to limit a procedure/method to one page, because you start to lose control - you can only keep up with so much. Where does this line go? You just don't know. Collaboration is nearly impossible. Opening projects from others is much worse than looking at pure code.

Also, it is too verbose. The 'functions' take up a lot of screen real estate (this is most obvious for mathematical stuff). If you start on just a bit more than something you could have achieved in 50 LoC, it tends to get really messy.

VP lacks referencing as well. At least most of those envs I know. Declared a variable? Too bad, you have to connect this node to everywhere you need it. Sigh.

Reusing components is possible, and for some envs it is implemented, but mostly just per file, not in general: e.g. you can make a custom component inside one project, and if you edit it, all instances get updated, but you can't save it as a standalone component you can XREF in other files/projects, which makes it hard to build a custom library of functions.

But it depends on the industry. As others already mentioned, the more an industry is led by visuals in the first place, the more common it is to actually utilize visual programming (or rather scripting); it's also quite useful in real-time contexts - which is where its strengths lie. The CG industry is one of those, but also architecture and design in general (think of McNeel's Rhinoceros with Grasshopper, THE most used visual scripting environment as of today, especially in a professional setting).

Conclusion: VP has its merits and is used extensively, just maybe not the places you expected/hoped for.

I haven't used a visual programming language, but it's likely a lot harder to build a good visual programming language than a nonvisual one at a given level of expressiveness.

I suspect visual programming is more common than we realize though. I had an acquaintance at Workday who claimed a lot of work was done there in a visual programming language.

Also, arguably website/app builders are a visual programming "language" and they are extremely common.

Visual programming languages are often high-level and can only take the user so far. If you made an editor mode that visually shows how programs are structured and understood standard programming languages, it would be easier to understand the structure of those programs, but the actual edits would likely still be done in the native language. Higher-level edits might be very powerful, but how often could you do them safely? From a documentation point of view I agree with your assessment: the ability to understand how things work is easier visually at higher levels, but we also have other tools for this, like UML. There are also editors like http://staruml.io/ that let you convert UML into code, but I find this only works when projects are starting, when you are trying to find the right high-level abstractions. Once that is set, it is usually best to keep the high-level structures you have in place.

You might be interested in https://futureofcoding.org/

Around 1977-1978 I tutored a friend that was a fellow engineer. I could not get him to write structured programs. He insisted on creating large flowcharts with lines going in all directions. He had been introduced to this "visual" form of programming, and he kept going back to it each time he tried to construct a program.

His programs ended up with state distributed all over the place and an impossible to keep track of control flow.

Around the same time, I was intrigued by articles promising easy visual construction of programs; it seemed to be in vogue then. It took me several years to realize that the nice examples in journal articles were just that: nice examples. Visual programming is appealing, like flowcharts to my friend, but it suffers from a lack of good support for building abstractions and an inefficient method of building visual programs and keeping track of changes.

If you're a bit frank and you're dealing with a person you don't particularly care for, you might use the expression "Do you want me to draw you a picture? How dense are you?"

The thing is that it's extra effort for the author to draw pictures and think of image layout for what is ultimately the manipulation of symbols. If you're a proficient touch typist with a powerful editor, including macros, jump to definition, symbol search, multi cursor editing, you can spend a few minutes at a time without your hands ever moving far from the home row, let alone touching the mouse. We have very powerful tools for text manipulation and the highest bandwidth input device we have is a keyboard, so it's not at all surprising to me that text is still king for programming and probably will be until we have some kind of direct brain -> machine interface.

Having tried for a long time to find a viable option for practicing programming on my phone while commuting (not seated, seldom having both hands available), I'd say that visual programming is not that useful in that medium, at least if it's Scratch-like. Maybe a language with better abstractions would be more useful.

I tried a Scratch-like app for Android and did the first couple of days of Advent of Code a couple of years ago. It was tiring (too many instructions to drag), mildly infuriating (when something didn't fall where it should), and hard to refactor (when experimenting).

That's why that year I ended up transitioning to lisp, writing in a text editor and copying to a web-based lisp interpreter.

The local maximum I found was this last year with J's Android client. With their terseness, array languages can be used quite effectively within mobile constraints.

I think at best visual programming is good for tasks you might otherwise solve with a DSL. But for general purpose programming, you will end up with a visual grammar as complex as text, but harder for people to understand and compose (because we just happen to have a certain proficiency with written languages).

I think the main problem is that it doesn't simplify things. Programming is not easy, no matter which representation you use. Visual programming systems might be easier for some categories of people, though - for example, electrical engineers, because they are used to working with diagrams.

I recently had to fix some copy and pasted code across 11 files. So I write a regex something like

    /(.*?)somefunc\((.*?)\) {\n( *?)return a + b;/$1somefunc($2) {\n$3return a * b;/
And then searched and replaced across the 11 files. I have no idea how I'd do that with a visual programming environment. I actually needed to do that about 7 times with different regular expressions to do all the refactoring.

I also did this yesterday: I `ls` a folder. It gives me a list of 15 .jpg files. I copy and paste those 15 filenames into my editor, then put a cursor on each line and transform each line into code and HTML (did both). Again, not sure how I'd do that in a visual programming environment.
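The first refactor is also trivially scriptable, which is part of the point about the text substrate; a hedged sketch in Python (the somefunc pattern comes from the comment above, file iteration omitted):

```python
import re

# Rewrite `return a + b;` to `return a * b;` inside somefunc's body,
# preserving the original indentation via capture groups.
pattern = re.compile(r"(somefunc\(.*?\) \{\n)(\s*)return a \+ b;")

def refactor(text):
    return pattern.sub(r"\1\2return a * b;", text)

src = "function somefunc(a, b) {\n  return a + b;\n}\n"
print(refactor(src))  # the + has become *
```

Looping this over something like Path('.').glob('**/*.js') covers all 11 files in a few lines; it's exactly this kind of meta-manipulation that visual environments struggle to offer.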

You could do that in pretty much the same way, i.e. you create a visual regex. The one difference is that regexes simply match characters, no matter the structure of the code, whereas in a visual language you'd make a distinction between structure and content.

> Since text is difficult to manipulate without a physical keyboard, visual programming opens the doors to doing development on mobile devices.

All content creation is difficult on mobile devices, other than passive content creation like shooting videos and taking pictures.

Visual programming will be way easier on a desktop machine with a proper mouse, and a real keyboard (for the UI shortcuts you will end up using).

But, about the main question: visual programming has no value beyond being friendly to newbies.

This is like asking why don't Fisher-Price plastic tools for kids take carpentry by storm. They are so light and easy to hold, why don't we frame houses with them?

Most of the problems described in other posts concern trying to convert text programming paradigms into a visual representation.

Let me suggest two ways visual programming might be a big part of the future:

1. New paradigms, such as constraint-based programming, might well lend themselves better to a visual presentation; and

2. VR. Visual programming is indeed much less visually dense than text, but if you start over with the assumption you're doing it in VR, that suddenly is, if anything, a virtue.

Imagine something that was part Excel, part Access, with visual, animated 3D representations of other major programming abstractions also, and you start to see that VP might really be the future.

One point that I think people are missing is that language is the most powerful tool available for our brains. Language is what makes us human. Language causes our brains to grow and helps us to think about problems. Pictures don't do that. A picture is just a picture. If you try to make it more than just a picture then what you're making is a language.

While visualisation is (sometimes) useful for grokking difficult concepts, writing a program is a completely different kettle of fish. I could draw some pictures that might convince you that you understand a Fourier transform, but you'd be no closer to being able to efficiently implement a Fourier transform in a computer.

Visual programming languages aren’t plain text and therefore are harder to do version control for and harder to share. Whatever benefits visual programming languages have, and they are many, version control and easy sharing are more important.

I’m not sure it’s possible to keep the complexity managed (at least with current tools). If you’ve ever tried to build a complicated PureData patch or similar, you’ll notice that the complexity of visual programs just explodes.

It's pretty big on some narrow areas - see, for instance, Max/MSP for audio design (https://cycling74.com/products/max)

I think the problem is with more complex algorithms - where complex means dynamic in nature.

Creating new / destroying existing actors at runtime is the hard part in programming, because the complexity dimension explodes via that. +1 thingy in the system means many new possible runtime flows - in theory, even if it's not a de facto new flow; you have to prove whether it is and handle it accordingly.

You can show it in a visualisation, but to be able to do that it must be an animation; time matters.

I think a higher-level language like Idris will be able to generate these animations from the code, making it easier to absorb existing codebases.

Salesforce is a visual programming environment that has huge adoption and reach.

They are almost universal at large companies and have a large ecosystem of visually programmed/configured partner software companies and components.

Well, text is an abstraction for ideas. But text is a flexible abstraction, which is easy to store, change, diff, and read.

Visual programming with nodes/blocks is another abstraction for ideas. But blocks and nodes are much, much less flexible. So these abstractions have to be much more precise... Which leads to problems.

A good analogy is Lego vs clay.

With Lego, you can make anything with the right bricks. The problem is that each brick is precisely crafted, and you're limited to the bricks you have.

With clay, you have the freedom to mold anything to whatever precision you need... but it might take you longer.

> Emoji-only text seems to unlock a ton of value. Difficult concepts can be more easily grokked in visual form. Writing and Reading becomes more approachable to first timers.

I can't imagine being able to write maintainable, well tested, scalable software (cough, software engineering, cough) with some version of drag and drop. I'd love a visual element added for helping navigate code. I like system diagrams, flowcharts, etc. But I'd like these to be generated by my code, not generate my code. I feel like this would be trying to write a book with only emoji and/or gifs.

There are successful visual programming languages, like GRAFCET. But it doesn't scale beyond simple problems. As others have pointed out, it's about information density; what can be shown on a single screen, and also inputting data with a keyboard as opposed to a mouse.

In principle, though, nothing prevents you from writing very complex programs in visual languages. It doesn't really make things simpler; once you reach that point, as said above, it's just more efficient to combine words than to draw shapes with a mouse.

There are a lot of solutions for doing programming in a visual way but most of them are not well known.

For examples, for building backend infrastructures take a look at https://www.mockless.com/.

They provide an easy-to-use interface where you can set up the data model and even complex functional flows. In the background the tool creates the source code, and it can connect to your Git repository and commit whenever you make changes, exactly as a developer would.

If visual {x} unlocks a ton of value, you'd expect someone would have been doing visual math or visual novels by now and the market would eat it up. These may exist, but they didn't take over the market for literature or science. The written word is still king. I'm not sure offhand what the advantage is, but there does seem to be 'something' keeping writing on top of a visual-centric approach across the board, not just in programming. Even on imgur, where images are king, the comments are mostly text.

Visual novels are definitely a thing.

I never said they weren't. I said they didn't overtake the written word in the marketplace.

I have seen various promises of visual programming over the years (JavaBeans were going to be the next big thing when I was learning). The fact is, it gains you very little (in my opinion). The only thing it solves is syntax, and that's actually a relatively easy part of programming. MS Access had an OK visual query builder, but by the time you needed to do anything moderately advanced, it was just as easy to switch to text SQL (plus text SQL is a more standard way of doing things).
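For instance, a query combining a join, an aggregate, and a HAVING filter is a few readable lines of standard SQL but tends to be painful to express through a visual builder. A sketch using Python's sqlite3, with an invented schema and data for illustration:

```python
import sqlite3

# The kind of query that outgrows a visual builder: join + aggregate + HAVING.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 50.0), (1, 75.0), (2, 10.0);
""")

rows = conn.execute("""
    SELECT c.name, SUM(o.total) AS spent
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    HAVING spent > 20
""").fetchall()
# rows == [('Ada', 125.0)]
```

The text form is also what you can paste into a bug report, diff in version control, or run against another database with minor changes.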

I feel certain domains lend themselves better to visual programming. Reaktor is a great example in the audio space, but I'd hate to use a tool like that to write a csv parser.

> Visual programming seems to unlock a ton of value.

It gives some value, and sacrifices other values. So far, tooling has not reached the point where the sacrificed values are small enough to justify the added value.

> Programming becomes more approachable to first-timers.

How is that relevant? In no industry should first-timers get much responsibility. And textual code is easy enough to grok after a short while. The problems people have with textual code after that point would remain with visual code too. So no value added.

It's big enough to be available to the "mainstream", it just doesn't have the decades of tech debt C++ and other languages bring to the table.

I've been programming for about 37 years now and recently, not wanting to mess with Swift for that, built a "quick action" command (for Finder) that converts/shrinks HEIC images to .jpg suitable for e-mail. Took something like 2 minutes with nearly no experience using Automator. It's not a niche technology.

I think visual programming is just getting started. I have had good luck with teaching Scratch 3 to kids this past year.

What sets it apart from previous versions of Scratch is that it can run in the browser. That makes it much more accessible to a wider audience.

It's my opinion that, with this browser-based interface and the growth of instructional videos, we will see visual programming become more mainstream.

It is harder to get GUI right, UX in text/CLI is much simpler. In addition, developer UX is not quite profitable, the business side does not quite work out.

Visual tools seem to be harder to manage and add a lot of overhead.

Have you heard of Informatica PowerCenter? It creates a mapping instead of your writing down a SQL query. The problem is that you must deal with inconsistent interfaces, resize windows, and write in small textboxes.

Of course it has its benefits, but in most cases it just doesn't help much in removing complexity but it adds its own.

> Difficult concepts can more easily be grokked in a visual form.

Some difficult concepts can more easily be grokked in a visual form. I'm not sure that they all can. In fact, I suspect that it might be about even (as many easier in text as are easier in pictures). We just notice the ones that would be easier in pictures, because we're working in text.

I think it's because mainstream languages are all prose-first and don't expose the underlying graph structure in a way that's useful for a visual editor to manipulate. The closest I know of are:

* paredit/parinfer for Lisps are actually tree editors in disguise.

* DRAKON. Having put critical business logic in it, I found it a real boon for quick understanding when returning to the codebase later.
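The Lisp point can be made concrete: an s-expression is already a literal serialization of the syntax tree, so a structural editor like paredit just manipulates that tree directly. A toy parser (purely illustrative, not taken from any of these tools):

```python
def tokenize(src):
    """Split an s-expression string into tokens."""
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse_sexpr(tokens):
    """Consume tokens and build a nested-list syntax tree."""
    token = tokens.pop(0)
    if token == "(":
        node = []
        while tokens[0] != ")":
            node.append(parse_sexpr(tokens))
        tokens.pop(0)  # drop the closing ")"
        return node
    return token

# "(+ 1 (* 2 3))" is its own parse tree, which is what makes
# structural (tree) editing natural for Lisps.
tree = parse_sexpr(tokenize("(+ 1 (* 2 3))"))
# tree == ['+', '1', ['*', '2', '3']]
```

A prose-first language would need a full parser and pretty-printer before a visual editor could round-trip its programs like this.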

My guess is that in the majority of areas of programming the work is to express rules, and here text is simply more powerful. Are laws written using graphics? Visual programming can be useful for workflow (3D, movies or music DAWs). But otherwise for expressing rules (which is a good part of programming) visual is too limiting.

Personally, I hate visual editors for CAD, 3D, etc., and I think they are inefficient. People are scared of programming, so others go out of their way to build complicated UIs for them, which end up harder to use than programming. But at least there's no text to type! (Look at professional video editors, who try to make video editing more like programming by hooking up two keyboards to their computer and using one of them to execute AutoHotKey macros in their video editor.)

I often want to describe some sort of exact operation based upon exact numbers, and code is a good way of doing this. Obviously, providing input to the computer in program form does not preclude real-time display, and this is essential for things like 3D modelling. You wouldn't just "sketch" an object in your text editor with no visual reference. Fortunately, a lot of tools cater to that use case -- Blender lets you define scenes and animations as Python code (and when you mouse over something in the UI it tells you the Python function that implements that button!), and there are some CAD packages that let you describe your object as a computer program. (Unfortunately, I tried these and didn't like them. I do like Fusion 360's sketch-and-extrude model, but didn't like OpenSCAD's model. What I want to do is draw the rough shape of something to get a template, then turn that into code that I can edit to put in exact constraints and dimensions.)

I am also looking for a text-based schematic capture application if anyone has any suggestions. I would much prefer typing the names of the edges of my netlist graph to pointing and clicking them.

Yeah, I think there's a lot of potential in hybrid models like this. You actually do see these things out in the wild -- GUI builders come to mind -- but they're not a replacement for a proper programming language.

As it turns out, the written word has some genuine advantages over other media? But on that point, the wall of text you see in published prose is an artifact of the printing press -- prior to that it was unusual to have long texts that didn't include little sketches, diagrams, illustrations...

I'm currently looking at a hybrid approach, using Blockly to define operations visually that you want to perform on CAD data.
