Folk wisdom on visual programming (drossbucket.com)
266 points by telotortium on July 1, 2021 | 162 comments



One thing missing here, and in most visual programming discussions, is Ladder Logic[1], probably because it sits in the somewhat adjacent SCADA industry.

It was pretty illuminating how many parallels ladder logic has with the visual and scripting solutions I'd worked with in GameDev: hot reload of running processes, and visual debugging very reminiscent of node-based visual editors. I ended up automating our greenhouse with a P100[2] from Automation Direct and was pretty impressed with how straightforward it was to use.

Structured text is making in-roads in some places from what I can tell but ladder still seems to dominate because it's easily understood and maps well to a "relay based" mental model. It seems to be very sticky, at least more than most visual scripting languages I'm familiar with.

[1] https://en.wikipedia.org/wiki/Ladder_logic

[2] https://www.automationdirect.com/adc/shopping/catalog/progra...


Ladder logic is powerful stuff. I learned it at Westinghouse in the 1980s. It's a non-von Neumann programming methodology well suited to real-time environments. Every ladder gets executed once per cycle, and there are usually a few hundred cycles per second.

Special functions are implemented in boxes, like delays, counters, etc.

My task back then was to feed the entire I/O state into a ring buffer, and when a "crash" of the steel mill happened, a host PC could then download the entire buffer, giving access to information about what happened in the seconds BEFORE the crash.
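
For anyone who hasn't seen the trick, here is a minimal Python sketch of that idea (nothing like the original Westinghouse code; the I/O and crash-detection helpers are invented placeholders): a fixed-size ring buffer gets one I/O snapshot per scan cycle, and the whole buffer is handed to the host when a crash trips.

    import collections
    import time

    SCANS_PER_SECOND = 200   # a few hundred scan cycles per second
    BUFFER_SECONDS = 10      # keep roughly the last 10 seconds of I/O state

    # A deque with maxlen behaves as a ring buffer: old scans fall off the far end.
    ring = collections.deque(maxlen=SCANS_PER_SECOND * BUFFER_SECONDS)

    def read_io_snapshot():
        # Placeholder: on a real PLC this would be the full input/output image.
        return {"t": time.time(), "inputs": [0] * 64, "outputs": [0] * 32}

    def crash_detected(snapshot):
        # Placeholder condition; the real system watched the mill's fault signals.
        return False

    def dump_buffer_to_host(buffer):
        # Placeholder: the host PC would download this for post-mortem analysis.
        print(f"uploading {len(buffer)} scans from the seconds before the crash")

    # In a real PLC this loop runs forever; bounded here so the sketch terminates.
    for _ in range(SCANS_PER_SECOND * 2):
        snapshot = read_io_snapshot()
        ring.append(snapshot)
        if crash_detected(snapshot):
            dump_buffer_to_host(list(ring))
            break
        time.sleep(1 / SCANS_PER_SECOND)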

It would be fairly easy to port to the Arduino, if someone hasn't done it already. Yep - it's been done https://cq.cx/ladder.pl

Because Ladders are text based, you could use Teco/EDT/Vi/Emacs/WordStar macros on them.


Yeah, it's a pretty neat ecosystem. PLCs are kinda like the Legos of automation, incredibly easy to mix and match with a fair number of (insecure) standard protocols.

I briefly toyed with the idea of building out some open source hardware/software, but it didn't really seem like there was enough of a userbase to make it viable. Anyone using it professionally wants support and 10+ years of parts; most maker/hobby projects want direct access to code.

Still some pretty neat concepts for putting together highly customized processes with extensible and interchangeable hardware.


If you show them how insecure their current hardware is, it will sell.

If it supports the old protocol but is secure, it should be transparent to them.


You don't understand how industry works. If it works, you don't replace it, you get it fixed when it breaks. Nobody is going to willingly risk their factory, plant, etc. on some new supposedly better hardware. They'll keep what works going for 100+ years if they can.

It is totally logical for them to do so. The "security" threat can be band-aided over with a new firewall at the internet access. Replacing any component of the control system introduces an element of risk. Downtime is horrendously expensive, and can ruin a business. Only a fool would do anything to introduce the risk of downtime in the name of "efficiency".


I'm sorry about the tone of my other reply. In retrospect it was rude.


Indeed, it's still very much alive in the European industrial automation scene.

Ladder logic lends itself well to the majority of industrial automation problems (e.g. turn on the motor until the part triggers a sensor, then activate a pneumatic cylinder to push it sideways). The control flow can be easily followed for debugging.

Since it's also easily interfaced with structured text modules, one can write procedural code for problems that are more easily expressed procedurally (in my experience: state machines, network communication, loops), but still use ladder logic, which is more understandable to other people who may be maintaining the machine down the road.
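
As a rough illustration (not real structured text, and the I/O names are invented), the kind of state machine meant here could look like this Python sketch of the motor/sensor/cylinder sequence, called once per scan cycle:

    # Sketch of "run the motor until the sensor trips, then push the part aside"
    # as an explicit state machine; the input/output names are placeholders.
    state = "FEEDING"

    def scan(inputs):
        """Called once per scan cycle with the current input image."""
        global state
        outputs = {"motor": False, "cylinder": False}
        if state == "FEEDING":
            outputs["motor"] = True
            if inputs["part_sensor"]:
                state = "PUSHING"
        elif state == "PUSHING":
            outputs["cylinder"] = True
            if inputs["cylinder_extended"]:
                state = "RETRACTING"
        elif state == "RETRACTING":
            if inputs["cylinder_retracted"]:
                state = "FEEDING"
        return outputs

    print(scan({"part_sensor": False, "cylinder_extended": False, "cylinder_retracted": True}))
    # {'motor': True, 'cylinder': False}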


I use ladder for Boolean logic, but transfer functions, block diagrams, and numerical flow are better visualized with function block diagrams. It helps one gain an intuitive understanding of the effect of each term in a transfer function as you watch it operate, in a way that structured text doesn't offer.


The author alludes to problems related to complexity, modularity, and formatting, of visual code. I think a hard problem for visual languages is the sheer physical labor of creating code using drawing tools.

I used LabVIEW extensively for a few months, and ended up with crippling wrist fatigue and eyestrain headaches from all of the fine mouse work and menu selections needed to write even small programs. This is an ergonomic problem with all graphical software, but is particularly acute with programming because of the amount of code that gets written.

Another problem is innovation. You can quickly spin up an experimental language and try it out because the infrastructure -- editor and command shell -- already exists. This is one reason why we enjoy such a proliferation of new language ideas. New ideas in graphical software require intensive effort to try out.

On the other hand, there does seem to be something about data flow programming (LabVIEW, Excel) that is attractive to beginners, that can't be overlooked. Excel is probably the number-one programming tool in use right now.


> I used LabVIEW extensively for a few months

I’m not sure that a few months is enough to understand the true pros and cons of a language. I’m using Elixir in my day job now and have used F# for side projects, and it’s not clear to me that any one of them is faster to program in, especially for GUIs. LabVIEW actually shines, contrary to popular but inexperienced opinion, for large projects. I have used LabVIEW rather extensively, and it seems any project would have seen massive delays if I had used something else, which is why I didn’t.

But I can indeed see an argument for strain when using the mouse so much. I have experienced that before and improved my ergonomic setup. As an aside, it is curious to me that many programmers are heavy PC gamers as the one true way to game and don’t complain about the mouse use there but will throw out all sorts of complaints when using it to program or use tools.

> New ideas in graphical software require intensive effort to try out.

This “problem” isn’t inherent to graphical programming, though. It’s because legions of programmers are convinced that text = programming and have spent decades writing text-only tools. You see this friction when using LabVIEW with GitHub: GitHub's tooling is incompatible with the diff and merge tools that come with LabVIEW, which makes GitHub look outdated, not LabVIEW.

I’m actually working on ideas to build a toolkit for graphical programming languages. There are certainly challenges though, such as the desktop GUI ecosystem being a mess and reliance upon graph data structures and algorithms (like automatic layout).

> On the other hand, there does seem to be something about data flow programming (LabVIEW, Excel) that is attractive to beginners

Dataflow is attractive to advanced users as well, which is part of the draw to languages like F#, Elixir, Racket, Clojure, etc. The dataflow in LabVIEW is actually more advanced, though: it's basically a multithreaded, branched version of the single-threaded linear pipelines found in the languages I mentioned. Working in F# and Elixir with pipes can be pretty irritating sometimes given my knowledge of LabVIEW.
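
For what it's worth, here's a toy Python sketch of that contrast, with made-up stage functions: the first version is the single-threaded linear pipeline familiar from F#/Elixir pipes; the second fans the same values out to independent branches on a thread pool, which is closer in spirit to how parallel wires on a LabVIEW diagram behave.

    from concurrent.futures import ThreadPoolExecutor

    def clean(xs):      return [x.strip() for x in xs]
    def parse(xs):      return [int(x) for x in xs]
    def total(xs):      return sum(xs)
    def maximum(xs):    return max(xs)

    def linear_pipeline(raw):
        # Equivalent of raw |> clean |> parse |> total in a pipe-based language.
        return total(parse(clean(raw)))

    def branched_pipeline(raw):
        # Two independent branches consume the same parsed values concurrently,
        # loosely analogous to parallel wires on a LabVIEW diagram.
        values = parse(clean(raw))
        with ThreadPoolExecutor() as pool:
            sum_future = pool.submit(total, values)
            max_future = pool.submit(maximum, values)
            return sum_future.result(), max_future.result()

    print(linear_pipeline([" 1", "2 ", " 3 "]))    # 6
    print(branched_pipeline([" 1", "2 ", " 3 "]))  # (6, 3)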


> But I can indeed see an argument for strain when using the mouse so much. I have experienced that before and improved my ergonomic setup. As an aside, it is curious to me that many programmers are heavy PC gamers as the one true way to game and don’t complain about the mouse use there but will throw out all sorts of complaints when using it to program or use tools.

I wonder also how using visual languages like LabVIEW compares to other mouse-heavy computer users like photo editors/graphic designers, CAD users etc.

> Dataflow is attractive to advanced users as well

I quite enjoyed using Max/MSP a few years ago when I did some stuff with it for a few months. I've always wished I had it available as a general purpose language in my programming toolbox since. As for other advanced users who use (visual) dataflow, my go to example is Ansys' SCADE Suite[1][2][3], which is used for critical systems like trains, helicopters and power plants.

[1] https://www.ansys.com/products/embedded-software/ansys-scade...

[2] https://www.ansys.com/content/dam/product/embedded-software/...

[3] https://www.youtube.com/watch?v=5z0h-WeScqw&list=PL0lZXwHtV6... (the third video introduces the visual language)


> As an aside, it is curious to me that many programmers are heavy PC gamers as the one true way to game and don’t complain about the mouse use there but will throw out all sorts of complaints when using it to program

Are those definitely the same people? Mouse use is certainly one of the reasons I gave up gaming. (The other being that an X hour working day is quite enough time to spend in front of a computer already).


Maybe not. It wasn’t a well thought out aside and is maybe fraught with bias. Haha.

I don’t mind visual programming, but I don’t enjoy PC gaming at all. There have been times I’ve experienced fatigue in my mouse hand with consecutive heavy days of LabVIEW programming. I think there are definitely ways to improve it though.


You may know of it already, and it may not solve all (or any) of your problems since I don't have experience with LabVIEW outside of Eng 101 in college, but from the description you gave here, using something like Flow in Elixir may make your experience with pipes less frustrating. Flow is basically a GenStage-backed data processing library that allows you to build out parallel pipelines with various timings, triggers and stages.


Thanks for the mention of Flow. I have heard of it but haven’t used it. I am admittedly still getting my feet wet with Elixir, so I’m still learning the ecosystem, which has a pretty steep learning curve. The language itself is quite nice and the tooling is pleasant. You just “mix <this>” and “mix <that>” to working programs.

Part of my issue is the reverse of the shock that text-based programmers experience when moving to a visual language, since I’m coming to Elixir from LabVIEW. I miss seeing certain things but am becoming accustomed to it. I’ve done text-based programming a decent amount, but never day to day on large code bases (and I’m new to Elixir, having primarily used F# and Racket for projects and some Python for work against my will).


Unless you game for a living, you usually work a lot more than you game.


That’s true to a degree, but I think the discrepancy is still there. Experts in LabVIEW build up key commands to do things for them, and there’s even an ability to custom implement these using LabVIEW itself. Also, for some reason other fields seem to love these visual tools like TouchDesigner and Grasshopper. Programmers can be quite myopic when it comes to their programming preferences and perspectives.

For tools that require precise placing of wires like LabVIEW and vvvv, mouse strain, and also the time it takes, are indeed issues. I am personally interested in sane but beautiful automatic layouts for visual languages, but this is most certainly a difficult problem. For example, if I could make an algorithm that basically makes LabVIEW (or a similar language) reorganize diagrams the way that I do, that would be great.
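
Automatic layout is a deep topic (Sugiyama-style layered layout, crossing minimization and so on), but even something as simple as assigning each node to a column by its longest distance from a source gives a readable left-to-right draft. A toy Python sketch, with an invented dataflow graph:

    # Toy column assignment for a dataflow DAG: each node's column is the
    # longest path length from any source, so wires always run left to right.
    graph = {                       # node -> downstream nodes (illustrative only)
        "read": ["scale", "offset"],
        "scale": ["sum"],
        "offset": ["sum"],
        "sum": ["write"],
        "write": [],
    }

    def assign_columns(graph):
        columns = {}
        def depth(node):
            if node not in columns:
                preds = [n for n, succs in graph.items() if node in succs]
                columns[node] = 0 if not preds else 1 + max(depth(p) for p in preds)
            return columns[node]
        for node in graph:
            depth(node)
        return columns

    print(assign_columns(graph))
    # {'read': 0, 'scale': 1, 'offset': 1, 'sum': 2, 'write': 3}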

I think one inspiration is the editing bays that professional video editors and audio professionals use. They make heavy use of software intermixed with hardware interfaces to perform their jobs.


> Also, for some reason other fields seem to love these visual tools like TouchDesigner and Grasshopper.

It's very simple - if you work with artists and you make them a presentation of your software, and at any moment there is some amount of code displayed on the screen, some of them will get up and leave the presentation.

I worked on my software with someone who would literally refuse to use the keyboard - everything had to be done by mouse, and I know a fair amount of like-minded people in the same field.


> It's very simple - if you work with artists and you make them a presentation of your software, and at any moment there is some amount of code displayed on the screen, some of them will get up and leave the presentation.

Surprisingly, this is also the case for mechanical engineers. The proportion might be smaller, but they still exist. I knew people in freshman classes who retook the intro to programming class, which covered stuff like variables, conditionals and loops in Matlab.

Then there are people who did OK in their Fortran classes in the 1990s but never touched a line of code since.

These people have a good chance of being intimidated by code. This is where LabVIEW and Simulink come in.


That's probably true once one hits the workforce, but I'm pretty sure I gamed 40+ hours/week back in school. That StarCraft ladder doesn't climb itself...


>>> I’m not sure that a few months is enough to understand the true pros and cons of a language.

No doubt. My problems were mainly with the ergonomics of the editing environment, not the language. And I had problems with other graphical software as well, including games.


Yea, that’s fair. I only meant to point out that the LabVIEW paradigm is powerful once you really get into it.

I can totally understand the ergonomics issue. I think that automatic layout is the way to go to reduce pixel-perfect manual layout and thus strain from heavy mouse usage.


"I think a hard problem for visual languages is the sheer physical labor of creating code using drawing tools."

Has anyone tried yet to build a compiler that compiles code into visual programming structures - and back?

I know Scratch does this in a very limited way (code to Scratch), but are there other tools around?

I am not aware of any, except for mine (which is unpublished as of yet, but if I get the website right, that might change by this weekend).


We are working on one: https://wso2.com/choreo/?ref=manu


For a beginner Labview can be a great step up.

Back in 1991 I worked on Labview at a university lab. It seemed amazing for the time.

For a teenager migrating from Basic - Labview seemed a fantastic improvement.

It felt incredibly powerful to draw boxes and wires to make multiple scientific instruments gather data.

I do remember spending a lot of time on alignment, plus there was no revision control at the time...

However, the lab moved to a custom C-based workflow. It was great on a personal level - writing low-level drivers etc.

Part of the reasoning was that at some point the scientists needed a custom UI and it was actually easier to create one with C than with Labview.


> Is ‘folk wisdom from internet forums’ worth exploring as a genre of blog post?

Yes. I found this very well done and useful.

One other (not entirely unrelated) potential area that immediately comes to mind for me is the architecture of UI toolkits. This is an area where there is very little serious (ie academic) writing, and even less good writing. The vast majority of what's set to paper is marketing material for a particular toolkit. There are some potentially interesting blog posts, but these tend to be highly opinionated.

Adding to this, while most of the discussion today is around shiny new stuff like SwiftUI and React, there's actually a pretty significant "literature" of older systems. It's hard to know what's worth digging into.

And, as with visual programming, there's not much in the way of synthesized understanding about the best way to do things. Game programmers like imgui, but it hasn't caught on outside that domain. Why? React is massively popular, but there is criticism of its performance (and of web-based technology stacks). There are other debates about the value of being "native," which perhaps used to be a clear distinction but is fuzzier today when you look at it closely.

So I think something along these lines for that topic could be pretty valuable.


I wrote a spreadsheet-based rules engine at Honeywell around 2003 which enabled the business to maintain the business rules for all sales contracts that ran through the automation and control systems division. This gave them the ability to define forms, subsets of forms (legal or otherwise), notifications and approvals. It had its own natural-language-based expression language which the super users and business analysts could easily understand.

The spreadsheets were change-controlled by the business and imported into the system when they were approved. I considered this design a form of a visual language, and the real beauty was that it did not require any re-coding of business rules, which did change quite frequently. The developers did not have to re-write any business logic as the rules changed.

The back end was Java and TCL based built on a graph database. Not super fast but super easy to understand and train up new developers.

It was quite successful and ran all sales contracts for several years, with billions of dollars of contracts passing through per year. I lost contact with the teams so I'm not sure how long it survived, but I know that SAP charged millions of dollars trying to build a solution in their systems and could not get one working at a reasonable price.


I'd love to hear how you integrated a Java and TCL backend with spreadsheets. I'm assuming this isn't something you can do with excel?

(Logistics at the small business I work at is run on a mess of Excel spreadsheets with complicated formulas and occasionally VBA for some email functionality. I have a feeling we're doing things horribly wrong, but I don't know of any better way to do things with our zero IT budget.)


I did not leverage any VBA or VB. I learned to code in VBA and have written way too much code in that space, so I was well aware of the limitations.

I'm pretty sure the way it went was: on check-in of a new version, I had a Java program that read the spreadsheet, validated that all the right sheets were there, that the columns on each sheet conformed with the spec, and also validated the rows of data. The sheets were translated on each front end node for easy access.

Regarding the expression engine, the expressions themselves were then translated into TCL boolean expressions. The attributes for the related objects were already loaded into memory in dictionaries, and then the expressions were just evaluated to get a boolean result.
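
A rough Python analog of that last step, with made-up attribute and rule names (the original used TCL rather than Python for the expressions): the object's attributes sit in a dictionary and each translated rule is just evaluated against it.

    # Illustrative only: one contract's attributes, loaded into a dictionary.
    contract = {"region": "EMEA", "value": 250_000, "contract_type": "service"}

    # A rule as it might look after translation from the spreadsheet's
    # natural-language form ("value is over 100,000 and region is EMEA").
    rule = "value > 100000 and region == 'EMEA'"

    def evaluate(rule, attributes):
        # eval with a restricted namespace; the real system used TCL expressions.
        return bool(eval(rule, {"__builtins__": {}}, attributes))

    print(evaluate(rule, contract))  # True -> e.g. route for extra approval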


Neat! Seems like what's old is new again. I'm up to the same shenanigans (sans Java, TCL and a graph database). Spreadsheets are a great medium to provide non-technical people access to adjusting production with nuanced precision.


Agreed. I was amazed to see how much coding time is wasted by people trying to do some data cleanup job when it can be done so much more easily and safely, with no coding, by: 1) exporting the data to a spreadsheet, 2) massaging the data in the spreadsheet, 3) using a general purpose program to convert the spreadsheet into scripts that can be run against the DB.

I actually wrote a VBA program that does this where you have a query with ~1, ~2, ~3 macros and you just paste in a table, click a button and it spits out scripts for you. I wrote it 20 years ago and still use it to this day.
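
The same trick is easy to reproduce outside VBA. A hedged Python sketch with an invented UPDATE template: each ~n macro is replaced by the corresponding column of a pasted row, producing one script line per row.

    # Template with ~1, ~2, ~3 macros, in the spirit of the VBA tool above.
    template = "UPDATE customers SET email = '~2', phone = '~3' WHERE id = ~1;"

    # Rows pasted in from the spreadsheet (illustrative data).
    rows = [
        ["101", "alice@example.com", "555-0100"],
        ["102", "bob@example.com", "555-0101"],
    ]

    def expand(template, row):
        script = template
        for i, value in enumerate(row, start=1):
            script = script.replace(f"~{i}", value)
        return script

    for row in rows:
        print(expand(template, row))
    # UPDATE customers SET email = 'alice@example.com', phone = '555-0100' WHERE id = 101;
    # ...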


Mandatory fanboi mention, and IMO a gross omission from the article:

https://enso.org/ (née Luna)

- an open-source language & IDE which aims to insta-kill all the "vs. textual programming" arguments by keeping a strict 1:1 visual<->textual representation translation as a core feature of the language. A 2.0 alpha was recently released, go check it out!


Very nice. I am also currently building a new environment for programming (I want my 5 year old niece to get a head start on modern creativity :-)). I think the main idea is simple: logic + visualisation + data. Make something that takes all of these three things seriously. Programming languages usually care about the logic part only (and even that not very well).


I really like the node based workflow used by Blender for creating materials, compositing and now for generating geometry. This is where I think visual programming works best, realtime interactive tweaking of something actually visual.

It's also generally kept simple and is powerful enough to get the job done.

I've always hated LabVIEW, however; it grows unwieldy too quickly, and the limited nature of its visual display doesn't help. I remember once going to an NI training session around 2008 that was actually a sales session, and their guy tried to convince us that LabVIEW was easily parallelisable by drawing two empty while loops and showing them maxing out both CPUs on his laptop. The mind boggles.


Arguably he's not wrong?

I don't love LabVIEW but I make it work because their hardware is not bad compared with alternatives.

Dragging two loops is pretty easy compared to figuring out threads and thread-safe communications in C++.


The point is that two empty loops should not be allowed to run the CPU to 100%. But yes, the concept is nice at least.

I agree on NI hardware, it’s very solid. I’ve had good success using session based control with Matlab and also with Python.


I worked on a visual programming tool from the late 90s to the early 2000s. The same problems apply then as they do now.

There's an unwinnable war between keeping things simple and being complex-enough to do useful things. Visual stuff is great for simple things, but simple things aren't very useful. When you really need to do more complex things, you reach a limit very quickly. It becomes pretty unmaintainable very quickly and then people will "graduate" from it and go to something more convenient.

My kid is learning Scratch right now, and it's taught him a lot of great stuff. But after about a year, he's ready to move onto Python. Scratch has taught him some very valuable concepts and he was able to jump into some Python concepts with ease (others still escape him). But you can only go so far with Scratch vs other programming languages.

The same goes for other visual tools. In the end, people graduate from the visual style because it ironically becomes too complex because they want to keep things simple, and then you've lost a customer.


In my experience, there are three things that visual programming tools make difficult:

* abstraction

* version control

* test automation

Those are also (in my opinion) the three techniques which separate proper "software engineering" from coding/scripting, which is why visual programming tools are pretty much universally unsuited for complicated projects.

The one asterisk I would put on that is tools in which the visualization is based upon an actual, human readable, programming language. I did some work with XSLT back in the day using a visual programming tool, and it worked reasonably well.


An interesting thing about things like version control and test automation is that they are almost like "meta interpreters" that interpret your code, but do so for purposes other than executing it. Syntax checking and linting come to mind too. Those things are possible, not so much because the program is in text format, but because it's in an open format that anybody can parse.

If a visual programming language stored its programs in a human readable or at least open format, then it would be possible for people to create those tools for it. But then, people would start writing and editing the program directly in its storage format, and it would cease to be visual programming.
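
As a toy example of that point: if a visual language saved its diagrams in, say, a documented JSON format, anyone could write a linter for it without the vendor's help. A Python sketch against an invented node/wire schema:

    import json

    # Invented storage format: a diagram is just nodes plus wires between ports.
    diagram = json.loads("""
    {
      "nodes": [
        {"id": "read",  "inputs": [],     "outputs": ["out"]},
        {"id": "scale", "inputs": ["in"], "outputs": ["out"]},
        {"id": "write", "inputs": ["in"], "outputs": []}
      ],
      "wires": [
        {"from": ["read", "out"], "to": ["scale", "in"]}
      ]
    }
    """)

    def lint_unconnected_inputs(diagram):
        """Report input ports that no wire drives -- a simple third-party 'linter'."""
        wired = {tuple(w["to"]) for w in diagram["wires"]}
        return [(n["id"], port)
                for n in diagram["nodes"]
                for port in n["inputs"]
                if (n["id"], port) not in wired]

    print(lint_unconnected_inputs(diagram))  # [('write', 'in')]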


The really funny bit is that the classic visual programming tool LabView is designed mostly for test automation. But it ends up practically requiring the creation of unmaintainable spaghetti (this[1] being a classic example) and there's no test automation possible for it. It's a test automation platform so complex it needs (but doesn't have) test automation for its automation of your tests.

[1] https://thedailywtf.com/articles/Labview-Spaghetti


No one should ever write code like that and no experienced LabVIEW developer would find that anywhere close to acceptable. And there are no doubt analogs in the text-based world, except you may not even know it because the structure isn’t as visible.

Also, there are certainly test frameworks for testing LabVIEW code. NI has a few but then JKI, a third party company, has multiple testing frameworks including Caraya, which is similar in philosophy to something like FsUnit for F# or Elixir’s ExUnit.

https://github.com/JKISoftware/Caraya


abstraction

I do wonder why a concept like building chips doesn't make it into visual environments. I should be able to pass a chip with inputs and outputs to others.


It does in some. Visual programming environments are common for setting up the giant sound systems (think hundreds of channels of high-quality audio) that run everything from movie theaters to conference venues to parliamentary halls to amusement parks. Some of them support encapsulation of audio or logic elements into reusable packages.

Other techniques they use for abstraction before that point include one-to-n on-page connectors and multi-signal buses when the visual wires get to be too cumbersome.


Snap! supports user defined blocks, first class functions, lexical closures, eval/apply, and even call-with-current-continuation. See the summary and description I posted in reply to the GP.


Node Red "Nodes" perhaps?


I'm curious if you've tried Simulink? Because my experience with visual programming tools maps largely to yours. Especially the RAD tools of the late-90's and early-00's. Eventually you get to a place where the "simple" thing is a dizzying array of radio buttons and check boxes to manage that are all hidden away in their own little silos that you can only get to by drilling down through the visual hierarchy.

This becomes a huge problem when your "pivot" for changing things is plumbing up and down, up and down, up and down the hierarchy, instead of being able to flip the whole thing on its head and see all the settings/flags/options for whole classes & categories of similar things and change them from that perspective.

It's like there's a transition point where you know enough about how things are all put together that being forced to interact with multiple things at once only through the top of their individual silos starts to fall apart pretty dramatically.

The reason I ask about Simulink, is that it's very common in the toolchain of our customers (I'm CEO of https://www.auxon.io) and we're building an integration to it for our product. For the most part it seems to do a much better job of being an IDE for more complicated models/programs compared to the bevy of RAD tools I've used in the past. I have certainly encountered limitations reminiscent of those old RAD ways, but I can also see how much better a job it seems to do before you hit those snags.


I've used Simulink a lot in the past, and the things that it gives you (as well as its limitations) are a big driver for my attempts to build tools in this space (I think I'm trying to do a similar thing to you -- causal reasoning using data flow information and other graph-structured engineering data such as requirements traces).


Yeah, that's the first half of our stack. Ultimately it exists in service of the second half of our stack which is for automatically stressing and analyzing system behavior to localize root causes, uncover emergent properties, and auto-optimize system parameters.

On the way toward building everything we needed for the analysis we wanted to do we realized we had a distributed tracing system suitable for continuous system lifecycles & embedded systems (as opposed to transactional lifecycles & IT systems) and a spec/query mechanism over what amounts to a logic model derived from system executions. That sort of thing tends to be valuable to folks who aren't as far along in their development or use case maturity to need bleeding out all the corner cases & "unknown unknowns" from their systems, so we exposed many of our building blocks as features themselves.

Somewhat ironically, given the thread this is all related to, everything we build today is for CLI consumption on Linux and Windows. We will build out a kind of IDE/Workbench UI this year to fulfill some of our vision around the category of CAE tools we're angling toward, and to access additional kinds of customers, but predominantly folks have preferred CLI shaped tooling because it's easier for them to bake our continuous verification & validation capabilities into their existing processes when we don't force them into a siloed GUI.


I'd love to have a chat at some point. I'm trying to (slowly) bootstrap some tools for the embedded space, and we're trying to get a grasp on (a) what our MVP should look like, and (b) how to go about approaching customer #1.


Happy to chat sometime. Any preferred way to connect?


> But you can only go so far with Scratch vs other programming languages.

A friend has built some pretty amazing stuff in Scratch, including a BBC Micro emulator (runs 60+ games), a few arcade game ports, and lots more [1].

As a developer myself, I find it quite surprising how far one can go with Scratch. (FWIW I've never coded with it myself)

[1] https://scratch.mit.edu/users/RokCoder/


People have also made neural networks using Redstone in Minecraft. You can make anything you want in Scratch; the question is whether it is wise to do so.


He seems to code an interpreter in Scratch and then do the other stuff. Coding an interpreter is a lot easier than coding a game, and after that you really aren't coding in Scratch anymore.

Complex Minecraft works similarly: instead of working directly in redstone, they make the logic gates and then the bit adders in redstone, and then program as normal on top of those constructs.


In the case of his Beeb project, the 'interpreter' you speak of here is in fact a fairly full featured emulator. Although yes, the z-machine (Zork) is an interpreter of sorts.

But all of his other games - including the clones of arcade games - aren't emulators nor interpreters. They're just games, written from scratch (pun unintended), from the ground up, in pure Scratch.

After all, his day job is in fact games dev - and with 25+ years experience (I worked with him mid-90s), one could say he's a veteran in that field.

In reality, it depends on what the game actually is as to whether it is harder to implement than an emulator or an interpreter. e.g. I strongly suspect that Battleships was much easier to implement than the Beeb emulator was.


Were there advantages? Or was it an exercise in determination?


He's been in the games industry for some years now, and as a side-project he teaches kids how to code — I'm pretty certain that he builds such crazy projects mostly as a fun hobby, perhaps also partly as a showcase for his teaching.

IMO, for these kinds of projects, I suspect Scratch doesn't offer many advantages (if any at all). He likes a challenge :)

— More details, including an interview, can be found here:

https://www.coderkids.com/blog/who-is-rokcoder

He's pretty responsive to comments on his Scratch homepage — and can also be found under the same name on Twitter/FB/etc. He's always been happy to help others. Give him a shout if you have any specific questions.

(FWIW: I worked with him — Cliff — for a short while, back in the 90s)


Yup, I've been working on a visual browser automation tool (https://browserflow.app/) for the past year, and it's been quite the fun design problem to balance simplicity (so that it can be used by non-technical users) with flexibility (so that it can handle more than the simplest cases).

An approach that worked for me was to provide an escape hatch (in my case, giving the user a way to run arbitrary Javascript on the page being automated) so that the built-in commands could be designed for the most common scenarios and users would still have a way to handle gnarly edge cases.
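
A rough sketch of that escape-hatch pattern in Python (not Browserflow's actual implementation; the step names and page object are invented): most steps are declarative built-ins, and one step type hands a user's script straight to the underlying engine.

    # Illustrative escape-hatch pattern: built-in steps for the common cases,
    # plus a raw-script step for the gnarly ones. Not any real tool's API.
    class FakePage:
        """Stand-in for a Playwright/Puppeteer-style page object."""
        def click(self, selector):       print("click", selector)
        def type(self, selector, text):  print("type", selector, text)
        def evaluate(self, source):      print("run user JS:", source)

    def run_flow(steps, page):
        for step in steps:
            if step["type"] == "click":
                page.click(step["selector"])
            elif step["type"] == "type_text":
                page.type(step["selector"], step["text"])
            elif step["type"] == "run_script":        # the escape hatch
                page.evaluate(step["source"])
            else:
                raise ValueError(f"unknown step type: {step['type']}")

    run_flow([
        {"type": "click", "selector": "#login"},
        {"type": "type_text", "selector": "#user", "text": "demo"},
        {"type": "run_script", "source": "document.querySelector('#widget').scrollIntoView()"},
    ], FakePage())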


Yeah, I think the endgame of visual programming tools is to just eliminate syntax errors.

You could probably conceive of C as a visual programming language if you spent long enough on it. All of the keywords can be reduced to blocks, forcing people to insert the appropriate parameters. And once you allow people to define their own blocks, you're a good portion of the way there.

The problem is that after a while, it is simply faster to type. The breadth of options available becomes too much to manage from menus and drag and drop interfaces. Already you're probably typing in some values and names. So you're constantly switching between keyboard and mouse.

At some point, the benefit of perfect syntax doesn't outweigh the loss of productivity from not being able to immediately use any construct the language makes available to you.


I think visual programming tools are great when they are aimed at a specific domain, but not when they try to do too much. While Scratch is great for teaching kids to code, trying to come up with a visual replacement for C++ is almost certainly a bad idea.

Also, it is possible to blend the two approaches, for example by allowing users to script some of the boxes in a text-based language. This is the approach I have taken with my own visual-based data wrangling tool. The standard transforms cover 95% of the cases people need (re-order columns, pivot, filter etc) and you can script a Javascript transform for anything else.
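
That blend might look roughly like this Python sketch (not the actual tool's model; the transforms and data are invented): the standard transforms are plain functions over rows, and a script transform evaluates a user-supplied expression per row for everything else.

    # Illustrative blend of built-in table transforms with a scripted escape hatch.
    rows = [
        {"name": "widget", "qty": 4, "price": 2.5},
        {"name": "gadget", "qty": 0, "price": 9.0},
    ]

    def filter_rows(rows, predicate):
        return [r for r in rows if predicate(r)]

    def reorder_columns(rows, order):
        return [{k: r[k] for k in order} for r in rows]

    def script_transform(rows, expression):
        # User-supplied expression evaluated per row, e.g. to derive a new column.
        return [dict(r, **{"derived": eval(expression, {}, r)}) for r in rows]

    out = filter_rows(rows, lambda r: r["qty"] > 0)
    out = script_transform(out, "qty * price")
    out = reorder_columns(out, ["name", "derived"])
    print(out)  # [{'name': 'widget', 'derived': 10.0}]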


There is most definitely an unwinnable war between simplicity/ease-of-use and more power/customizability.

That being said, Alteryx is an example of a company that has made a ton of money via a data munging tool that is a visual programming language. The users are fanatical as well.


It is not really a war though is it? Some people prefer visual tools like Alteryx, Knime or Easy Data Transform. Other people prefer R or Python. And some people will alternate approaches, depending on the problem at hand. Also, many 'visual' tools include the option of text based programming, for flexibility.


I posted this about Snap! recently:

https://news.ycombinator.com/item?id=27397375

Snap! is not simply all the usability and functionality of Logo, but also all the functionality and power of Scheme! Without any of the dumbing down of Scratch or Logo. Visual block programming. Turtle Graphics. Sprites. Lexical scoping. Lambda. Closures. Call/cc. Plus JavaScript integration and web stuff. With extensions for networking, AI, machine learning, speech synthesis and recognition, graph theory, robotics, Lego, Arduino, 3d graphics, 3d design, 3d fabrication, and 3d printing, embroidery, etc. ;)

https://dl.acm.org/doi/pdf/10.1145/3386329

History of Logo. Proc. ACM Program. Lang., Vol. 4, No. HOPL, Article 79. Publication date: June 2020.

6.2 Brian Harvey’s Personal Narrative on Snap!: Scheme Disguised as Scratch (pp. 49-50)

In 2009, the University of California, Berkeley, was one of several universities developing a new kind of introductory computer science course, meant for non-CS majors, to include aspects of the social implications of computing along with the programming content. Scratch wasn’t quite expressive enough to support such a course (it lacked the ability to write recursive functions), so Prof. Daniel Garcia and I thought "What’s the smallest change we could make to Scratch to make it usable in our course?" After 20 years teaching Structure and Interpretation of Computer Programs [Abelson et al. 1984], the best computer science text ever written, I knew that the answer to "what’s the smallest change" is generally "add lambda." I joined forces with German programmer Jens Mönig, who had developed BYOB (Build Your Own Blocks), an extension to Scratch with custom (user-defined) blocks, including reporters and predicates. At that time we were hoping to convince the Scratch Team to adopt our ideas, so we took "smallest change" very seriously. BYOB 3.0 [Harvey and Mönig 2010], with first class procedures and first class lists, added only eight blocks to Scratch’s palette. (The code is almost all Jens’s. My contribution was part of the user interface design, plus teaching Jens about lambda.) Version 3.1 added first class sprites with Object Logo-style inheritance. The Berkeley course, The Beauty and Joy of Computing (BJC) [Garcia et al. 2012], is also used by hundreds of high schools, especially since the College Board endorsed it as a curriculum for their new AP CS Principles exam. Unfortunately, some teachers have no sense of humor, and so BYOB version 4.0, a complete rewrite in JavaScript, was renamed Snap! [Harvey 2019]. [18]

Since Scratch seemed to be positioned as the successor to Logo, it was a goal for Snap! to restore the features from Logo that are missing in Scratch. The most important missing feature, the ability to define functions (and therefore to use recursive functions), is at the core of the new language. (Scratch introduced user-definable command blocks in version 2.0, but still doesn’t support user defined reporters.) Scratch had also replaced the structured text (word and sentence) functions with a flat text string data type. We wanted to be backward compatible with Scratch, so we implemented words and sentences as a library, defining first, last, butfirst, and so on. (Since block languages allow multi-word procedure names, and you don’t have to type the long name in order to use the procedure, the library names are, e.g., all but first letter of.)

Lists are first class and can be arbitrarily deep in sublists. The usual higher order functions on lists are provided; the graphical representation of lambda is built into the blocks representing higher order functions, and so beginning users can use higher order functions in simple cases without thinking hard about function-as-data at all, but the full power of lambda is available to more advanced programmers. It took us three tries to get the lambda design right, but we’re very proud of its pedagogic benefits.

Another of our goals for Snap! is to be a complete version of Scheme; it was largely as a way of planting that flag that we added call with current continuation, not taught in BJC (nor even in SICP) but used to implement tools such as catch and throw as library procedures written in Snap! itself. As of this writing, macros are only half-implemented; users can define procedures whose inputs are unevaluated (more precisely, thunked, since procedures are first class), but cannot yet inject code into the caller’s environment.

Snap! is lexically scoped, not least to allow the use of closures as objects, but a planned extension is "hybrid scope": variable names follow lexical scope, but instead of giving an error message when no binding is found in the lexical environment, the evaluator will instead look in the dynamic environment. So name capture is impossible, since the global environment is examined before the dynamic environment. (Only if a mistyped name matches another name can the user get the wrong variable rather than an error message. But mistyping can’t really happen in a block language.) This, too, is an effort to be a Logo as well as a Scheme.

Since Snap! is free software (AGPL), it has served as the starting point for at least a dozen significant extensions, including BeetleBlocks [Koschitz and Rosenbaum 2012; Rosenbaum et al. 2011] for 3-D graphics and 3-D printing; TurtleStitch [Mayr-Stalder and Aschauer 2016] for controlling sewing machines to do embroidery; Edgy [Bird et al. 2013] for studying graph theory; NetsBlox [Ledeczi and Broll 2016] for access to online data APIs and collaborative editing of projects; and others. The ability to write new Snap! blocks in Javascript, from the Snap! editor, has allowed many other user-level extension libraries, including support for robots and other hardware. Snap! features such as first class procedures help authors develop these extensions, even if the users of an extension don’t see that.

[18] For non-Anglophones, "BYOB" is used in party invitations as an abbreviation for "bring your own booze."

Also:

https://news.ycombinator.com/item?id=27396842

Brian Harvey's books are excellent! Definitely check out Brian Harvey's and Jens Mönig's latest masterpiece: Snap!, a block based visual programming language with the full power of Scheme, but ease of use of Scratch and Logo, written in JavaScript and tightly integrated with web browser technologies and libraries (including Ken Kahn's eCraftToLearn AI Programming for Kids extension using Tensorflow).

Snap:

https://snap.berkeley.edu/

AI For Kids with Snap!:

https://ecraft2learn.github.io/ai/

Snap! 6 is here, and it's all about scale (HN discussion of Snap! 6 announcement):

https://news.ycombinator.com/item?id=24781716

Brian and Jens earned the NTLS Educational Leadership Award for their work on Snap!:

https://ntls.info/ntls-educational-leadership-award/brian-ha...

>The National Technology Leadership Summit (NTLS) Educational Technology Leadership Award recognizes individuals who made a significant impact on the field of educational technology over the course of a lifetime. The NTLS consortium is a coalition of twelve national teacher education associations that collaborate to advance effective use of technology in schools. The NTLS Educational Technology Leadership Award is the coalition’s highest honor.

>Brian Harvey and Jens Möenig, working together, have had an impact on the field of educational technology that is as significant as any other. The origins of their work dates to development of the first computing language explicitly designed for children. In 1966 Seymour Papert, Wallace Feurzeig, Daniel Bobrow, and Cynthia Solomon created the programming language Logo. Logo, whose name is drawn from the Greek word for word, is both a technology and an educational philosophy. Its inception also resulted in the development of an educational community that exists to this day.

>Brian Harvey had the opportunity to learn from Lisp inventor John McCarthy and Scheme inventors Gerald Sussman and Guy Steele, among others, as a student at the MIT and Stanford Artificial Intelligence Labs. Throughout the 1970s he was a frequent visitor at the MIT Logo Group, and starting in 1981 he was part of design teams for microcomputer versions of Logo for the Apple II, the Atari 800, and the Apple Macintosh. A high point of his career was establishing the Computer Department at the Lincoln-Sudbury Regional High School, in Massachusetts, offering ungraded courses that attracted a community of kids with keys to the lab and the responsibility for making the facility meet everyone’s needs.

>In the 1980s he wrote the three-volume Computer Science Logo Style, published by MIT Press. These books showed that Logo could be used beyond elementary school to introduce serious computer science ideas to a broad and diverse audience. He subsequently taught at the University of California, Berkeley, where he was recognized with the Distinguished Teaching Award, the university’s most prestigious award for teaching. He was lead developer of Berkeley Logo, which because of its status as free software has become a de facto standard for Logo implementations. Since 2013 he has been Teaching Professor Emeritus.

>On a parallel track, Jens Möenig collaborated with Alan Kay, inventor of Smalltalk, and worked with colleagues from the Xerox Palo Alto Research Center (PARC) who invented personal computing. He subsequently contributed to development of the block programming language Scratch, one of the languages influenced by Logo.

>Brian Harvey and Jens Möenig then embarked upon one of the most productive collaborations in the history of educational computing, jointly developing the block programming language Snap! The Snap! reference manual notes, “The brilliant design of Scratch, from the Lifelong Kindergarten Group at the MIT Media Lab, is crucial to Snap!.”

>Snap! makes advanced computational concepts accessible to nonprogrammers. Brian Harvey notes, “Languages in the Logo family, including Scratch and Snap!, take the position that we’re not in the business of training professional computer programmers. Our mission is to bring programming to the masses.” The Beauty and Joy of Computing, tightly integrated with Snap!, does just that. This curriculum, developed at the University of California at Berkeley, is notable for attracting equal numbers of male and female students.

>The course is approved for Advanced Placement credit by the College Board. With support from the National Science Foundation, professional development has been provided to more than one thousand high school computer science teachers. One computer science teacher who introduced the curriculum in his high school reported that, “Before using Snap! and the Beauty and Joy of Computing curriculum, I had one section of computer science with 17 students. Three of the students were girls. Now I have three full sections of the course with equal numbers of male and female students.”

>Snap!, provided as free, open source software, has inspired development of many extensions. Among others, these include environments such as Snap4Arduino, which supports work with microcontrollers and robotics; mathematics microworlds for elementary children developed by Paul Goldenberg and his colleagues at the Educational Development Corporation; and iSnap, an extension developed by Thomas Price which suggests hints to students based on the work of other students. Another extension, TuneScope, designed to facilitate exploration of coding through music, is being developed by a collaborative team at the Society for Information Technology and Teacher Education (SITE).

>Snap! is a remarkable technological achievement. However, like Logo, its greatest achievement is arguably the educational philosophy that it draws upon and supports, and the associated community drawn together by this philosophy. In a very real sense, the Snap! community embodies the spirit of the early Logo community, extending it for the modern world. The NTLS Educational Technology Leadership Award, awarded to Brian Harvey and Jens Möenig, is presented in recognition of that accomplishment.


I believe that diagrams are at their most useful when they are deployed as a communications tool. When discussing our software, we commonly draw a diagram to help illustrate how functionality is divided up among components, and to show which components are impacted by particular functional chains. They are a powerful tool for communicating and coordinating the work of multiple people or multiple teams. If our software does not provide us with diagrams itself, then we will commonly create them ourselves in our technical documentation or on whiteboards as an aide to collaboration.

However, as we descend in our system towards finer and more detailed levels of granularity, two things happen. First of all, fewer people are impacted by very ‘local’ design decisions, so the importance of communication diminishes. Secondly, the frequency of change increases, so the importance of mutability and ease-of-editing increases.

The requirements that we place on our engineering tools depend largely on what we are trying to do with them. Communicating and coordinating work across a large engineering organization places fundamentally different requirements on our engineering tools than does, say, working alone to solve a challenging problem that requires the creation of new concepts and abstractions.

There are many different ways to work with software, and although each share much of the same flavor (dealing with informational complexity), the nature of these challenges is often distinct enough that the different requirements may lead to vastly different solutions.

For my part, I have a lot of faith in the general utility of tools which allow for the easy generation of diagrammatic representations of high-level structure, but which naturally decay to a more conventional textual representation at finer levels of granularity.


Yeah, to make sense of complex systems they need to be well structured and modularized, and visuals have a lot to contribute here. If you can't fit your structure comfortably in a diagram, your structure is too complex for most people to manage.

Further down the line you stop needing to know where you are (global view, understanding), and you only need to deal with implementation: actually doing what you need to do. Implementation can often be very complex on its own too; that's why algorithms are so scary at the beginning... and text is a better tool here. There's not much more structure to distill, some parts are tricky, you have edge cases, and you often need to get your hands dirty. You need to keep all this complexity in mind at the same time anyway, and after you are done you can treat the implementation as a black box. So text is superior here.

That's how we manage complexity. Now we only need tools that can represent it better. But maybe it's not even visual programming languages, but rather visual system "structurers"? There are other parts of programming that could use better visual cues, and visual editors for visual stuff (colors, shapes, borders, etc.) will always be relevant, but those are all very different concerns.


It's always a good sign when a debate moves beyond 'x' is better than 'y' to a more nuanced discussions of the circumstances and conditions when 'x' provides more value, and the circumstances and conditions when the converse is true.


I think what the author did here hits hard on why hacker news is such a great resource. To quote:

> Most fields have a problem with ‘ghost knowledge’, hard-won practical understanding that is mostly passed on verbally between practitioners and not written down anywhere public. At least in programming some chunk of it makes it into forum posts.

I'd be incredibly interested in seeing a series of posts like this, where authors mine hacker news and other forums for insights that they consolidate (then repost here).

What other topics do y'all think HN provides unique insight on?


I agree that this type of compendium/synthesis is useful. I was amused to open this thread and see comments focused on visual programming rather than the meta-aspect.

As for what HN could be mined for, a big topic is software languages. There are ways to subdivide that: by language, by optimization aspect (e.g., comparative performance). Geographic topics like cost of living and quality of life. Remote work pros and cons.

Another way of identifying topics worth mining would be to identify posts where @dang provides a link to previous posts and those previous posts have a sufficient number of comments.


> What other topics do y'all think HN provides unique insight on?

Career management, I think (not sure if it's the right term for it). I see lots of people in their 40's or more that quit some FANG job for another job that earns less but is more aligned with how they want to live their life. I see lots of people advocating quitting jobs as soon as you see red flags. I think both of these are good wisdom.

On the other hand, a point that's still a mystery to me is "programming language choice". I've learned useful heuristics: you can do a web backend in pretty much anything; if you have no libraries for something that needs them, like an AWS SDK, you're going to suffer; what helps in the long run slows you down at first. But that's about it. I don't know if it's because I'm too young in this field, because I don't understand/missed something, or because there is no big truth to find.


> I don't know if it's because I'm too young in this field, because I don't understand/missed something, or because there is no big truth to find.

I'm pretty young in the field as well, but my impression so far is that there aren't really any big truths, just a lot of little ones


The only big truths are things that seem obviously one way, but in reality are the other way due to a myriad of non-obvious sub-things.

E.g. Productive person-hour scaling as a function of team headcount


Personally I think programming language choice is about a compromise between performance and convenience.

For instance, I program in Python if speed is irrelevant. Rust if it is very relevant. C# if I want a large project where performance is somewhat important.

I think there are a wide range of languages for each "category" where I put Python, C# and Rust above, but those 3 are my current favorites.

Certain languages are very well suited to specific tasks. You learn these with experience. Initially I loved Python because it was a joy to slice and dice strings in it. I understand that Ruby is popular for the same reason. Clojure is great if you want lots of agents processing stuff on a network stack.

If I were you, I'd just focus on making fun projects in your favorite language.


That’s true for conventional languages: you can trade off some performance for a bit of productivity. However, if you want significantly more productivity, we hit walls quickly in PL design where trading off performance is no longer effective. So ya, Kotlin is slightly better and slower than C, but both languages are fundamentally still in the same ballpark that we don’t know how to get out of yet.


> Personally I think programming language is about a compromise between performance and convenience.

That's one dimension, but what about popularity, maintainability, being able to hire, productivity, being able to tackle precise problems, etc.


I think concern about “ability to hire” is a bit overblown: I’ve worked at a couple places now that have taken the “hire people that know what they’re doing and they’ll figure out the language” approach, and it’s generally successful for most business applications: this is true even when the team is using unusual languages like Clojure.


The problem is really about tooling: you also need people who can fix bugs and port to new architectures in the compiler and related tools. For a very popular language you can always find the right people for this task, or even offload it, like they do with Oracle.


I concur, though most of the traits you mention are not problem-specific.

When putting a language into "your toolbox" you indeed SHOULD focus on "popularity, maintainability, being able to hire."

However, when picking which of the languages you have learnt to apply to a given problem you need to know their relative strengths.

Most of the strengths you mention help you pick the "best in class" language. But only the problem can tell you which subset of languages you are picking from. Sometimes the best possible tool for your problem is going to be unpopular.


Those are layman’s concerns. Programmers love to learn new things and are typically good at it. If a given tech has merit and some unique advantages.

Edit: I didn’t read the comment right. Productivity and being able to solve precise problems are essentially why you’re choosing a technology in the first place?


> Those are layman’s concerns. Programmers love to learn new things and are typically good at it. If a given tech has merit and some unique advantages.

I don't think that's true. I believe most programmers are either dark matter developers [1] or people doing Java (or equivalent) all their lives for a software service company. There is nothing wrong with that, but that would explain why less popular languages and ecosystems still have lots of stuff. For example, Rust has ~60k crates on crates.io, while Maven Central has ~400k packages. I believe Java is used by more than 10 times the number of people that use Rust. But the people writing the libraries in the first place are usually the people that are open to trying out new languages.

[1]: https://www.hanselman.com/blog/dark-matter-developers-the-un...


The technical term for “ghost knowledge” is https://en.wikipedia.org/wiki/Tacit_knowledge


Years ago I bounced around from one visual tool to another, never mastering any of them but getting deep enough to see the flaws. Hypercard, Authorware, Director, Flash. They were all very approachable, but they were all incredibly flawed in the end. You'd spend most of your time jumping from one limitation to the next, trying to hack your way out of the spaghetti jungle.

I think the better approach is to use conventional languages with UI tools layered on top that at worst generate readable code. (Like I assume is often done in game development.) Source control and diffing are indispensable.

The lure of these tools is that they are approachable, which sometimes is exactly what you need to get started. It's just very rarely what you need to finish. So let these tools be used to get people interested in programming, or to give people an idea of how to accomplish a project. But more importantly let's make sure to continue taking steps to demystify coding. Not because we want everyone to be programmers, but because when people do choose to program we want them to be able to collaborate with the rest of the world, and when they leave we don't want to inherit some visual pasta dish.


I loved MacroMind (later Macromedia) (later Adobe) Director. I delivered dozens of educational CDs, corporate CDs and many of my own 'experiments'.

One of those was a map for EverQuest to help find dead bodies (it was a big thing at the time), and help swimmers who fell off a ship to get back to land alive. Another was a project to scan every user-authored book in Ultima Online (using basic OCR)

Director was extensible in that you could not only import 3rd party libraries, but write your own modules in Pascal. Much more powerful and extensible than Flash IMHO.

Flash won that battle after the buy-out, and Adobe extinguished Director. I still miss it for creating proof-of-concept, simple, or amazing cross-platform apps.


We need both. And more. Visual projection and editing is valuable when looking at a zoomed-out "components and wires" sort of architectural view. It's also good for showing high-level overviews of a process, like various UML diagrams should have been. It can sometimes be used well for the nitty-gritty implementation details of a single unit, although this seems to be where a lot of it breaks down.

But it's terrible for doing all of these things at once. And also, if you can control your level of detail/zoom well enough to do it right, you'd get similar benefits projecting to simpler text formats as well.

What's most important is that all this stuff needs to be declarative. Not in the sense that you hide the order of operations within a block of code, so that the final performance characteristics become a surprise. Rather, the larger structures of your project, the ones that organise the blocks into modules, classes, functions, records, and so on, need to be declarative, rather than an imperative set of instructions that builds a structure at run-time (usually difficult or impossible to explore and instrument) which is later executed to achieve your actual goals.


What I find interesting is that human society and the economy at large are able to function without any genius programmer having designed it all. If you look at roads, there is a road everywhere and still people don't get lost. But in a big chunk of source code I do feel I get lost.

I believe that may be because there are too few constraints when programming textually, whereas when designing roads and bridges we have the constraints of physical space. It is possible to draw a 2-dimensional map of all the roads and thus easily understand it all. I think visual programming was inspired by this notion: physical and geometrical constraints make things simpler because we know those constraints intuitively, which helps us reason about the structure of the whole thing. It cannot violate physics or geometry. Textual programs can; they do anything their syntax allows. That makes their full structure hard to understand.


Have you seen an aerial view of Birmingham's "spaghetti junction"? ;0)


A city with streets is just a graph with 1-4 pointers per node. It is super simple in code, just a few lines; I'm not sure how you could get lost in that. If you make code anything more complex than that then it isn't really equivalent to navigating the streets on a map.
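
To illustrate, here is roughly what those "few lines" look like as a minimal sketch (the intersections and grid are made up): each node keeps a short list of neighbours, and a breadth-first search navigates between them.

    from collections import deque

    # adjacency list: each intersection points at its 1-4 neighbours
    streets = {
        "1st&A": ["1st&B", "2nd&A"],
        "1st&B": ["1st&A", "2nd&B"],
        "2nd&A": ["1st&A", "2nd&B"],
        "2nd&B": ["2nd&A", "1st&B", "3rd&B"],
        "3rd&B": ["2nd&B"],
    }

    def route(start, goal):
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in streets[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])

    print(route("1st&A", "3rd&B"))  # ['1st&A', '1st&B', '2nd&B', '3rd&B']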

> I believe that may be because there are too few constraints when programming textually

No, that is not the issue. You can write extremely complex and useful programs in just 100 lines of code, and you can translate those to visual programming easily with no improvement in comprehension. Code just takes a long time to read because it is so powerful.


> If you make code anything more complex than that then it isn't really equivalent to navigating the streets on a map.

My point is, you should strive to make coding not more complex than that, if possible. And if you have constraints, such as having to represent the program as a 2-D dataflow structure, then it must stay as simple as that. Now, is there a need to make more complicated programs? Probably yes. But human society is still able to function with the simple dataflows that it has, which makes it understandable and thus very resilient.


For completeness, I feel compelled to mention GNU Radio. It is a node based visual programming tool. It has its quirks, but you can do fairly complex digital signal processing with it, and it will do I/O from your sound card, as well as RF stuff.


And it's one of the more common ways to make those little Software Defined Radio boards do something.


I guess this is turning into another "visual programming" post.

On my side I'd like to ask your opinion about Blueprint. It really turned me off back in the day and I always wonder why they couldn't optimize UnrealScript further instead of getting something totally new.

Yeah, for sure, eventually everyone gets used to it and it becomes the new standard ("How can I NOT do Blueprint? It's so splendid" type of posts will show up) because UE is an extremely competent engine.

But why did we bring it forward in the first place? Is there enough reason to do so, aside from some teams getting to leave their own mark on the world and some managers grabbing more power?


I'm a programmer who does hobbyist gamedev and is a big fan of blueprints despite writing in text all day, for a few reasons.

* A lot of game engine functionality is extremely helpful but not discoverable. Visual scripting languages make the full spectrum of the engine's API more easily discoverable

* Games are very stateful, and I find it is much easier to conceptualize this with blueprints than with code, which is better for functional stuff

* Collapsing nodes in blueprints is much more powerful as a documentation tool than collapsible lines of code. When you collapse nodes, you can show the execution running through long, high-level text descriptions of what the hidden blocks are doing, without needing to create a function for that task specifically.

e.g. code might do something like

    def update_target():
        target = find_new_target()                # set target to nearest enemy unit
        attack_pattern = choose_attack_pattern()  # choose next attack pattern
        execute_attack()                          # perform attack montage and execute effects

____________________

whereas the bp would be

Set next target to nearest enemy -> Choose next attack pattern -> Perform attack montage and execute effects

It becomes pseudocode that is bright and on the forefront of your attention, vs. most code editors which will highlight code and lowlight comments.


> But why did we bring it forward in the first place?

Speculation: visual programming is a bit more beginner friendly, and more compatible with the brains of the people designing stuff in UE. I wish Blueprint was around when I was 13, messing around in UT's version of UE and having no idea what UnrealScript could do. A visual language can break out all possible components into UI menus, just like everything else in UE. Visually oriented people really click with tools like visual programming.

Personally I think it’s cynical / arbitrary to pin the decision to corporate power struggles. Sometimes people propose new things out of interest or as a side project and then it grows from there.


Yeah let's ignore the cynical part.

Whatever, it's already done and a text scripting language is not coming back to UE.



This presentation might enlighten why visual programming is in demand for gamedev. It's about a similar system developed in-house for The Division series: https://www.gdcvault.com/play/1023382/AI-Behavior-Editing-an...

In short: There are many game designers and artists who prefer visual scripting tools over traditional code.


But I also see some people who hate this style. This guy is an artist/programmer.

https://forums.unrealengine.com/t/blueprints-are-a-toy-which...

BTW I do agree visual programming makes sense in certain areas, but to use it for a general scripting language? Hell no...


The ratio of people I know in gamedev who love vs hate visual programming is on the order of 10:1


I assume you work in Unreal? Unity is more popular though and doesn't have visual scripting. Unreal Engine having just visual scripting and C++ with nothing in between is probably its biggest weakness; people prefer C# over those.


If you cannot access the vault, here is a YT link to the same talk:

https://www.youtube.com/watch?v=rYQQRIY_zcM


It’s nice for asynchronous programming. Promises or callbacks take some getting used to, but a block diagram is as transparent of an abstraction for asynchronous flow as it gets.
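
For contrast, here is a minimal sketch of what the text version of a three-block diagram tends to look like, with each "block" as a coroutine and each "wire" as a queue (the stages and names are invented):

    import asyncio

    async def source(out_q):
        for i in range(5):
            await out_q.put(i)
        await out_q.put(None)                      # end-of-stream marker

    async def double(in_q, out_q):
        while (item := await in_q.get()) is not None:
            await out_q.put(item * 2)
        await out_q.put(None)

    async def sink(in_q):
        while (item := await in_q.get()) is not None:
            print("got", item)

    async def main():
        a, b = asyncio.Queue(), asyncio.Queue()    # the two "wires"
        await asyncio.gather(source(a), double(a, b), sink(b))

    asyncio.run(main())

The wiring is all there, but it is encoded in which queue gets passed to which coroutine; the block diagram is the same information drawn out.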


I've been using LabVIEW for 15 years, first in an academic and scientific setting, then for machine vision in surveillance, then as the HMI for industrial robots. It is, for all intents and purposes, a general purpose programming language, and I think it would be more popular if it wasn't so stupid expensive.

I was never interested in or good at coding, but LabVIEW made sense to me. Now I'm using programming principles I learned in LabVIEW and re-implementing them in C++ and C#. It's been a journey.


“Better for data flow than control flow”

Here’s a fun use-case that barely made the post: commercial/install Audio DSP. I say barely because Max/MSP did make the list, but that’s not typically for install work (think stadium, convention center, airport, etc.)

The AV world is full of VPL examples. The audio DSP ones that come to mind are QSC’s Q-Sys Designer, Biamp’s Canvas, Symetrix’s Composer. There are many others, but they’re all built on the premise of an expensive hardware processor with free configuration software. Many of these processor/software pairings are configurators rather than VPLs. However, VPL is universal for more complex Audio DSP because audio schematics are very common, and essential processing algorithms are well defined with universally accepted names like “Compressor,” “Parametric EQ,” and “Mixer,” among many more.

Where it’s been getting interesting in the last 5-10 years is the growth of Audio DSP into more flexible control products. Q-Sys Designer is leading the push with nodes that allow Lua scripting, which now supports Blockly. There’s an awkward transition between the primary audio VPL and the Lua/Blockly VPL though. Lua/Blockly supports the event handling features within a predominantly data flow driven application. They also allow the Audio DSP environment to interface with APIs that aren’t supported by plug-ins or other canned modules in the software. Recently QSC has been selling Dell servers configured to run as central processors, highlighting the Linux backend as opposed to other proprietary real-time DSP OSes.

On some level, these VPLs have catered to keeping it simple for guys in vans with USB cables that need to service equipment on site. However, with more remote service possible, this has decoupled the van travel from the programming, allowing it to scale in complexity. It seems like the software could be licensed for use on just about any server if there was a financial incentive.

It all points to an interesting dynamic between visual and text programming as programming talent crystallizes as an off-site role. E.g., GUI buttons are instantiated through the node-based VPL then copied to a GUI canvas where they can be visually customized. This could easily be replaced by a web programming paradigm but will likely remain because the AV industry has a long legacy of guys in vans with USB cables.


My feedback, based on my long but limited LabVIEW experience:

> IDEs for text-based languages normally have features like code folding and call hierarchies for moving between levels, but these conventions are less developed in node-based tools. This may be just because these tools are more niche and have had less development time, or it may genuinely be a more difficult problem for a 2D layout — I don’t know enough about the details to tell.

NI demoed a system-to-function zoom in/out feature at one point but never fully developed it.

LabVIEW does have a call hierarchy. Like with Doxygen, my problem with these is they show everything rather than finding ways to highlight important relationships (e.g. you don't want to show every array append).

> Input

I suffered from RSI when using LabVIEW. Maybe I hadn't mastered it enough, but there is a lot more than just node dropping that needs to be improved, particularly the wiring.

> Formatting

I think LabVIEW suffers from giving people so much pixel-perfect control that it encourages people to waste time on it because it's "so close".

At one point, they demoed dynamic formatting (reflows the diagram as you add/move nodes without any manual management). This really needs to be made the default.

> Version control and code review

LabVIEW NXG has a text-based GVI format. It is not easily reviewable, mergeable, etc. because it has all of the graphical minutiae in it.

My proposal would be

- Remove block diagram formatting (see above)

- Separate top-level hand-designed UI from VI auto-placed front panel

- Reduce reliance on icons

- Encourage naming wires

At that point, a text-based VI format would be simple enough to be manageable within existing tools.


> At one point, they demoed dynamic formatting (reflows the diagram as you add/move nodes without any manual management). This really needs to be made the default.

Do you by any chance have a link to a video or article where this is shown or discussed? I’m a heavy user of LabVIEW, and I am also convinced in my own explorations of creating visual languages that dynamic automatic layout is the way to go, but it is a very hard problem.


I had to suffer through a semester of LabVIEW. It seems like it might be okay if you have high enough level blocks of functionality available that you're never hooking many together, but for what I was trying to do it was a massive pain (doable, but tedious and hard to organize and follow)


There are certainly many many instances where visual programming doesn't do the job. One instance where I found it incredibly useful, though, was with Node-RED. I used it to build a hobby-grade process automation solution, including a number of software PID control loops, that ran current limiters, valves and pumps, several load cells and half a dozen temperature sensors. All of this was orchestrated from Node-RED running on a Raspberry Pi communicating with the Tasmota esp8266 sensor/controller nodes over MQTT and I built it from scratch, learning along the way, over the course of 2-3 days.

Something about the visual pipelines and processing nodes really helped with handling concurrency, priority and maintaining tidy boundaries between systems. It was the first time I really had fun programming something in quite some time.
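
For anyone curious what one of those software PID loops boils down to, here is a minimal sketch (the gains and the toy plant model are invented stand-ins for the real MQTT-fed sensors and heaters):

    class PID:
        def __init__(self, kp, ki, kd, setpoint):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint
            self.integral = 0.0
            self.prev_error = None

        def update(self, measurement, dt):
            error = self.setpoint - measurement
            self.integral += error * dt
            derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # toy stand-in for the real sensor/heater loop: hold 20 C from a 15 C start
    pid = PID(kp=0.5, ki=0.05, kd=0.1, setpoint=20.0)
    temperature = 15.0
    for second in range(60):
        duty = max(0.0, min(1.0, pid.update(temperature, dt=1.0)))
        temperature += 0.5 * duty - 0.05 * (temperature - 15.0)   # crude plant model
        print(f"t={second:2d}s  temp={temperature:5.2f}C  heater={duty:.2f}")

Laid out as nodes and wires, the same loop reads left to right, which is a big part of why the concurrency and boundaries stayed tidy.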


Great job on summarizing and categorizing tons of opinions in one post!

Shameless plug of my take on rethinking visual programming couple of years ago: https://divan.dev/posts/visual_programming_go/


Oh, this is great. I especially like the fire torch analogy and the animation you came up with (https://divan.dev/images/torch.gif).

That's exactly how I feel when reading code. Recently, I explained the problem to a dear colleague when working on a code base (with about 600K lines of code). I compared the situation to working in a large office building with all lights turned off and you have to skim through thousands of documents with a flashlight.

It's comforting to know that other people feel the same way.


"A picture paints a thousand words, sure, but here's a thousand word essay. Try painting me a picture of it."

I don't remember where I heard that, or something close to it, but there's a lot of truth to it. There do seem to be some very useful visual logic and workflow tools, and I hope that continues, but there's something magical about the ability of language to explain, describe and precisely define things.

Maybe eventually we'll arrive at a state where really good programming environments incorporate the advantages of both textual and visual tools in a powerful and dynamic way. I love films and picture books, I also love novels, but even the best novels are enhanced by really good illustrations and sometimes even maps. Some authors find visual tools like relationship maps and even geographic maps help them in the actual authoring process too.


"Maybe eventually we'll arrive at a state where really good programming environments incorporate the advantages of both textual and visual tools in a powerful and dynamic way. "

Yes, I agree. But not surprisingly, this is not easy to solve. You need a visual language as a consistent concept, and then you need an actual visual framework rendering it all and enabling user input. You need a compiler compiling back and forth between text and the visual representation.

And then it needs to scale. And be performant enough to actually work with it and not just have a visual demo.

Sounds hard? Trust me, it is.


I listened to a podcast a while ago where someone was speculating why programs processing data can't just be connected together like devices in a water system. Pumps, sinks, boilers, showers, etc. Or like electrical devices, just channel the data between them.

It occurred to me that data isn't like water, or electricity. Those are generic resources that can largely be easily standardised and treated consistently by any device. Data isn't like that.

Data formats and encoding vary hugely and can have crazy different properties. It's more like piping chemicals between devices in a chemical factory. You can't just take the output from a device producing acid and plug the pipe into the input of a device that's designed to process water, or ammonia, or petrol. They're all 'just' fluids and you can pipe them about, but their properties vary wildly. You can only feed one into a device as input if that device is specifically designed to process that specific material.

Yes it is possible to define standardised data formats, up to a point. XML, JSON, CSV, etc but even then you can't just feed arbitrary JSON into every program designed to ingest JSON and expect it to work, just because the data it expects is JSON formatted.

Yes Unix has pipes, but each tool in the pipe chain has to be told exactly how to process the specific input it gets from the previous tool. You can't just look in history for two arbitrary examples of using pipes, and cut and paste the first half of one pipe chain, and paste the second half of another arbitrary pipe chain on the end, and expect to get something useful out of the combination. Maybe you'll get lucky, but usually you'll get garbage.


"It occurred to me that data isn't like water, or electricity. Those are generic resources that can largely be easily standardised and treated consistently by any device."

Well, electricity is not really easy either. There is a huge effort to transform the current into the needed shape (voltage, current, frequency, DC vs. AC, smoothing, ...). And water is also not just water, as it can be clean drinking water, or sewage water, or hot (but slightly dirty) water for the heating, or hot, high-pressure steam, ...

Not just coding is complicated ;)


I don't think he was trying to imply that coding was the only thing that's complicated.

I think it comes back to "No Silver Bullet" and what it has to say about accidental and essential difficulties.

There may be impurities in water, yes, and it can be several temperatures, but water is water is water. It's still chemically two hydrogen atoms connected to a single oxygen. We can define what pure water is. Having that definition allows us to define tolerances for how much "not-water" is in the water. What kind of pipe is necessary to deal with the not-water, etc. The essential difficulties of plumbing and water management are never really about the water, it's about how to deal with not-water.

Same with electricity, the way it moves may be differing, but it's still just electrons. There's no special electricity that will conduct through rubber. Again, we can define what electricity is. And again, having those definitions, we've moved our essential difficulties to the not-electricity part of the problem.

There is no standard "data". We cannot define what data is, because it kind of is everything. It's a nebulous, abstract concept. It doesn't mean anything on its own. What we really want to do is process subsets of that data. And filtering to that subset is the essential difficulty. And then you have the issue that two consumers of data could want similar data, but not quite the same. So data's essential difficulties come sooner, and once you've transformed your data into "water", you still have more essential difficulties to handle.


Coding includes things like simulating water and electricity, hence it is more complicated.

For example, let's say you want to write code to check that your electrical setup is right. Then you implement the concepts of voltage, current, frequency, DC, AC etc., and run it to see what the end result looks like. Coding this is of course more complicated than learning about the concepts in the first place. And as has been said many times before, coding this system is often the easiest part of the job; then you have to add all the helper systems, the UI etc., and that is the really hard part that often is the reason projects fail.

Of course taking electrical engineering in college is way harder than software engineering or computer science, but that is because the software engineering and computer science tracks in college are usually a joke. If you had to be able to code systems as described above then it wouldn't be easier at all.


That's exactly what I was thinking. Electrical engineering is a complex field and may seem easy from the outside simply because of its maturity, but I would argue is the hardest of the engineering disciplines to grasp as well as one of the most important.


> why programs processing data can't just be connected together like devices in a water system. Pumps, sinks, boilers, showers, etc. Or like electrical devices, just channel the data between them.

I'm not a functional programmer, but don't monads aim to do something like that?


> Yes Unix has pipes

And they only understand byte streams, which are the simplest possible abstraction.

> Or like electrical devices, just channel the data between them.

If you get to that level of precision, there are visual tools for logical circuit design. But I don't think that's what the author had in mind.


> You can't just take the output from a device producing acid and plug the pipe into the input of a device that's designed to process water, or ammonia, or petrol.

Ever played SpaceChem?


What if the painting could change? What if you could touch the painting and explore it interactively? What if the painting engulfed all your senses, creating the illusion that you're in the painting?

The journey of media from print to VR. While it'll certainly take more than 1000 words to program the painting in VR, if a million people see it, the value ratio is pretty good.


If you're starting from an essay, I doubt all those things together would be as helpful as adding a spoken-word audio track.

Or some text.


That was a good summary; it was useful and I really enjoyed reading it. The key takeaway for me, which really reified ideas I have about UI design tools, was this:

> At the opposite end of the spectrum is, say, an oil painting, which is also a visual medium but much more of an unconstrained, freeform one, where brushstrokes can swirl in any arbitrary pattern. This freedom is useful in artistic fields, where rich ambiguous associative meaning is the whole point, but becomes a nuisance in technical contexts...[elided]...Drag-n-drop editors arguably lose a lot of the features of ‘true’ languages by giving up structure, and more programmatic elements are likely to still use a constrained set of primitives.

That's the key problem afaics, and why UI design tools + codegen never works well outside of highly constrained situations. There are two conflicting practices: programming and visual design. And they overlap, obviously, but they don't properly mesh. The tool either goes in the programmatic direction, which makes codegen much easier but drastically curtails the ability to design properly. Or it goes in the design direction and makes codegen either very difficult or forces generation of absolute garbage code (for worst examples, see tools that generate enormous amounts of absolutely positioned HTML elements with inline styles on everything). And there's no real middle ground (not that this stops endless attempts to square the circle).


I think VP shines in specialized fields and not general programming.

Visual domains are an obvious example.

If your domain is constrained by something that maps well to spatial dimensions then you're good to go. A few extra dimensions can also be put into styling. But after that it becomes too restricting.

That's why interface builders work fabulously until you have to implement actual business logic and modularize your application. This always leads to adding things that aren't directly represented in the current visual model and get lost because of indirections.


The strength of visual programming is not in visualising code, but rather, visualising programs.

At a small scale, textual code can be very concise, expressive, compact, efficient and fast to write. But at a large scale, writing thousands of lines of code in text files in folders makes it very difficult to understand how the pieces of a program connect. To overcome this, we rely on file naming conventions, frameworks or writing documentation to understand how the various parts of text relate to each other.

Programming with nodes and wires is a way of visualising those relationships by creating a map of the program. Similar to maps of the real world, those maps may be complex and not easy to understand. But those maps show that there is complexity. What can we learn from maps of the real world? Well: zoom level matters. When we want to understand the world, we look at countries. When we want to get to the next city, we look at the road level.

In my opinion, visual programming environments are transparent and honest about the underlying complexity of a program. Contrast this with countless folders full of files filled with thousands of lines of code.

Such repositories of code can be very difficult to navigate and get around. Yes, frameworks and conventions can help, but mainly for organisational systems you have already learned (example from the real world: “I learned to navigate American cities designed on a grid of streets, but I get lost in Europe“).

If we compare textual coding to visual programming, we should not only focus on the expression of low level logic and primitives, but also on the organisational aspect of programming. I think there is still a lot of potential for both textual coding and visual programming tools to create better wayfinding systems.


Yeah, this is going to be yet another "visual programming" topic.

I sort of disagree with the intro to LabVIEW:

> There are a large number of visual programming tools that are roughly in the paradigm of ‘boxes with some arrows between them’, like the LabVIEW example above. I think the technical term for these is ‘node-based’, so that’s what I’ll call them.

There are NOT a "large number" of visual programming tools that are roughly in the paradigm [...] like LabVIEW. I am sure there are some but, really, how many active ones?

Also, "node-based" is not the main concept. The most essential concept of LabVIEW is known as "data-flow" (https://labviewwiki.org/wiki/Data_flow). Data generated by a node "flows" to the other nodes it is connected to, and when any node has data on all its inputs, it executes and produces data which flows to whatever nodes it's connected to. This happens asynchronously. You get concurrency "out of the box" with LabVIEW. That's extremely powerful and much easier to reason about, IMHO, than the concurrency constructs in virtually any mainstream computer language.
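
That fire-when-all-inputs-have-data rule is simple enough to sketch in a few lines. This is only an illustrative toy (evaluated sequentially rather than concurrently, and nothing LabVIEW-specific):

    class Node:
        def __init__(self, name, func, inputs):
            self.name, self.func, self.inputs = name, func, inputs

    def run(nodes, sources):
        values = dict(sources)                              # node name -> produced value
        remaining = list(nodes)
        while remaining:
            progressed = False
            for node in list(remaining):
                if all(i in values for i in node.inputs):   # data present on every input
                    values[node.name] = node.func(*(values[i] for i in node.inputs))
                    remaining.remove(node)
                    progressed = True
            if not progressed:
                raise RuntimeError("cycle or unconnected input")
        return values

    graph = [
        Node("sum", lambda a, b: a + b, ["x", "y"]),
        Node("scaled", lambda s: s * 10, ["sum"]),
    ]
    print(run(graph, {"x": 2, "y": 3}))   # {'x': 2, 'y': 3, 'sum': 5, 'scaled': 50}

The real thing fires nodes concurrently as their inputs arrive, which is where the "free" parallelism comes from.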

I think there's plenty of room for all kinds of languages (both text and visual) as DSL's. If they're well-designed and intuitive, they're like a hot-knife through butter for a domain expert. The nice thing about a really good DSL is that you use it more like a library, whereas with a general purpose language you would be reaching for a framework. It gives you a lot of creative latitude and it feels right. The problems start when people start using them as golden hammers, or use them for things which they're not good at.


>There are NOT a "large number" of visual programming tools that are roughly in the paradigm [...] like LabVIEW.

They list half a dozen or so. Most of the node-based systems are also flow-oriented, i.e. nodes are functional and stateless, which allows them to be mapped across any amount of input.

It's a neat pattern but it's really not as uncommon as you think. The mainstream shader languages work this way, for example.


In my opinion, LabVIEW is the most general purpose and powerful visual programming language. Others I know of are vvvv gamma, where the recent update basically makes it a .NET language (although LabVIEW also has .NET integration), and Pure Data. Other tools are Simulink, TouchDesigner, Max/MSP, and Grasshopper for Rhino. After that, it drops off pretty rapidly to a collection of barely used or even known about tools or highly specialized node-based tools.


Another example is nCode GlyphWorks, which was used by a very large company I used to work for, for test data post-processing.


> Data generated by a node "flows" to the other nodes it is connected to and when any node has data on all it's inputs, it executes and produces data which flows to whatever nodes it's connected to.

I think you just described flow programming. There are at least 10+ visual languages / environments that work like this.


If not a "large number", then more like a "huge number".

Even the single application Blender itself has several different visual node based programming systems (most but not all built on top of the same framework, and applied to different kinds of programming), for shader programming, CPU image processing, video compositing, lighting, animation and constraints, particle systems, physics simulations, procedural mesh generation, and the Blender Foundation and third parties are developing even more, like procedural city generation:

Procedural city generation:

https://www.youtube.com/watch?v=jb_jwsyfQc4&ab_channel=derbe...

Everything nodes:

https://code.blender.org/2020/12/everything-nodes-and-the-sc...

Nodes Workshop - 22 - 25 June 2021

https://devtalk.blender.org/t/nodes-workshop-22-25-june-2021...

>“It is like compositor but for physics”.

The complete beginners guide to Blender nodes, Eevee, Cycles and PBR:

https://artisticrender.com/the-complete-beginners-guide-to-b...

>Blender has a few nodes systems. The first and obvious one is Blenders shading system for Cycles and Eevee. This is the node system that we will focus on in this article.

>But we also have nodes for compositing, lighting and textures, even if the use case and future for texture nodes are uncertain at this point.

>We can also extend Blender with other node systems through add-ons. The most well-known is probably animation nodes that come bundled with Blender.

>There are also other node systems available for Blender. AMD ProRender for example, a third-party render engine that has its own shader node system. Another example is Luxrender. There is also mTree for generating trees with nodes and Sverchok that can manipulate all kinds of data with nodes.

Reposting this from a few years ago:

https://news.ycombinator.com/item?id=18496880

There's so much interesting prior work! I really enjoyed this paper “A Taxonomy of Simulation Software: A work in progress” from Learning Technology Review by Kurt Schmucker at Apple. It covered many of my favorite systems.

http://donhopkins.com/home/documents/taxonomy.pdf

It reminds me of the much more modern and comprehensive "Gadget Background Survey" that Chaim Gingold did at HARC, which includes Alan Kay's favorites, Rocky's Boots and Robot Odyssey, and Chaim's amazing SimCity Reverse Diagrams and lots of great stuff I’d never seen before:

http://chaim.io/download/Gingold%20(2017)%20Gadget%20(1)%20S...

I've also been greatly inspired by the systems described in the classic books “Visual Programming” by Nan C Shu, and “Watch What I Do: Programming by Demonstration” edited by Alan Cypher.

https://archive.org/details/visualprogrammin00shu_2pf

https://archive.org/details/watchwhatido00alle

Brad Myers wrote several articles in that book about his work on PERIDOT and GARNET, and he also developed C32:

C32: CMU's Clever and Compelling Contribution to Computer Science in CommonLisp which is Customizable and Characterized by a Complete Coverage of Code and Contains a Cornucopia of Creative Constructs, because it Can Create Complex, Correct Constraints that are Constructed Clearly and Concretely, and Communicated using Columns of Cells, that are Constantly Calculated so they Change Continuously, and Cancel Confusion

http://www.cs.cmu.edu/~bam/acronyms.html

Also, here's an interesting paper about Fabrik:

https://donhopkins.com/home/Fabrik%20PE%20paper.pdf

Danny Ingalls, one of the developers of Fabrik at Apple, explains:

"Probably the biggest difference between Fabrik and other wiring languages was that it obeyed modular time. There were no loops, only blocks in which time was instant, although a block might ’tick’ many times in its enclosing context. This meant that it was real data flow and could be compiled to normal languages like Smalltalk (and Pascal for Apple at the time). Although it also behaved bidirectionally (e.g. temp converter), a bidirectional diagram was really only a shorthand for two diagrams with different sources (this extended to multidirectionality as well)"


Also, spreadsheets are certainly one of the most popular, widely used, easily accessible, and important visual programming languages. It's no exaggeration to say that the economy would grind to a halt and civilization as we know it would collapse if spreadsheets suddenly disappeared tomorrow.

The article quoted one of my earlier posts on that subject, but I've written more about spreadsheets as visual programming, and cited Brad Myers' work and articles about visual programming languages, and there's more in the HN discussion of his classic 1989 paper "Visual Programming, Programming by Example, and Program Visualization; A Taxonomy," Proceedings SIGCHI '86: Human Factors in Computing Systems. Boston, MA. April 13-17, 1986. pp. 59-66:

https://news.ycombinator.com/item?id=26645253

DonHopkins 3 months ago | on: Spreadsheet is a software development paradigm

Spreadsheets certainly are visual programming languages: by any measure, by far one of the most common, most widely used types of visual programming languages in the world.

HN discussion:

https://news.ycombinator.com/item?id=26057530

Taxonomies of Visual Programming (1990) [pdf] (cmu.edu)

https://www.cs.cmu.edu/~bam/papers/VLtax2-jvlc-1990.pdf

https://news.ycombinator.com/item?id=26061576

Brad Myers' paper answers the age-old argument about whether or not spreadsheets are visual programming languages!

https://news.ycombinator.com/item?id=20425821

>DonHopkins on July 13, 2019 | on: I was wrong about spreadsheets (2017)

>Google sheets (and other google docs) can be programmed in "serverless" JavaScript that runs in the cloud somewhere. It's hellishly slow making sheets API calls, though. Feels like some kind of remote procedure call. (Slower than driving Excel via OLE Automation even, and that's saying something!) Then it times out on a wall clock (not cpu time) limit, and breaks if you take too long.

>A CS grad student friend of mine was in a programming language class, and the instructor was lecturing about visual programming languages, and claimed that there weren't any widely used visual programming languages. (This was in the late 80's, but some people are still under the same impression.)

>He raised his hand and pointed out that spreadsheets qualified as visual programming languages, and were pretty darn common.

>They're quite visual and popular because of their 2D spatial nature, relative and absolute 2D addressing modes, declarative functions and constraints, visual presentation of live directly manipulatable data, fonts, text attributes, background and foreground colors, lines, patterns, etc. Some even support procedural scripting languages whose statements are written in columns of cells.

>Maybe "real programmers" would have accepted spreadsheets more readily had Lotus named their product "Lotus 012"? (But then normal people would have hated it!)

I Was Wrong About Spreadsheets And I'm Sorry:

https://www.reifyworks.com/writing/2017-01-25-i-was-wrong-ab...

HN Discussion:

https://news.ycombinator.com/item?id=26668885

Excerpt from "Taxonomies of Visual Programming and Program Visualization", by Brad A Myers, 1990/3/1, Journal of Visual Languages & Computing, Volume 1, Issue 1, pages 97-123:

Spreadsheets, such as those in VisiCalc or Lotus 1-2-3, were designed to help nonprogrammers manage finances. Spreadsheets incorporate programming features and can be made to do general purpose calculations [71] and therefore qualify as a very-high level Visual Programming Language. Some of the reasons that spreadsheets are so popular are (from [43] and [1]):

1. the graphics on the screen use familiar, concrete, and visible representation which directly maps to the user's natural model of the data,

2. they are nonmodal and interpretive and therefore provide immediate feedback,

3. they supply aggregate and high-level operations,

4. they avoid the notion of variables (all data is visible),

5. the inner world of computation is suppressed,

6. each cell typically has a single value throughout the computation,

7. they are nondeclarative and typeless,

8. consistency is automatically maintained, and

9. the order of evaluation (flow of control) is entirely derived from the declared cell dependencies.

The first point differentiates spreadsheets from many other Visual Programming Languages including flowcharts, which are graphical representations derived from textual (linear) languages. With spreadsheets, the original representation is graphical and there is no natural textual language.

Action Graphics [41] uses ideas from spreadsheets to try to make it easier to program graphical animations. The 'Forms' system [43] uses a more conventional spreadsheet format, but adds sub-sheets (to provide procedural abstraction) which can have an unbounded size (to handle arbitrary parameters).

A different style of system is SIL-ICON [49], which allows the user to construct 'iconic sentences' consisting of graphics arranged in a meaningful two-dimensional fashion, as shown in Figure 5. The SIL-ICON interpreter then parses the picture to determine what it means. The interpreter itself is generated from a description of the legal pictures, in the same way that conventional compilers can be generated from BNF descriptions of the grammar.

10. Conclusions

Visual Programming and Program Visualization are interesting areas that show promise for improving the programming process, especially for non-programmers, but more work needs to be done. The success of spreadsheets demonstrates that if we find the appropriate paradigms, graphical techniques can revolutionize the way people interact with computers.

DonHopkins 3 months ago [–]

https://news.ycombinator.com/item?id=26112751

Here's an example of a thread where somebody was fruitlessly trying to argue that a spreadsheet isn't a visual programming language:

https://news.ycombinator.com/item?id=22984831

>lmm 9 months ago | on: Maybe visual programming is the answer, maybe not

>If there was a visual programming language with anywhere near the popularity of Ruby, I'd be willing to consider that maybe the idea has some merit.

>DonHopkins 9 months ago [–]

>Excel.

>I could turn your argument around: If Ruby were anywhere near as popular, widely used, and successful as Excel, I'd be willing to consider that maybe the idea that Ruby is a viable programming language has some merit.

>But I won't, because whether or not something is a visual programming language isn't up to a popularity contest.

>Can you come up with a plausible definition of visual programming languages that excludes Excel, without being so hopelessly contrived and gerrymandered that it also arbitrarily excludes other visual programming languages?

[...] (TL;DR: he couldn't, since he was under the mistaken impression that Excel was not programmable, and was less popular than Ruby...)

That thread was on an earlier discussion about a blog posting from 2020 about the same 1989 paper by Brad Myers that we're currently discussing.

https://news.ycombinator.com/item?id=22978454

https://blog.metaobject.com/2020/04/maybe-visual-programming...

>Maybe Visual Programming is The Answer. Maybe Not

>Whenever discussing problems with programming today and potential solutions, invariably someone will pop up and declare that the problem is obviously the fact that programs are linear text and if only programming were visual, all problems would immediately disappear in some unspecified way.

>I understand the attraction of visual programming, particularly for visual thinkers. However, it's not as if this hasn't been tried, with so far very limited success. Brad Myers, in his 1989 paper Taxonomies of Visual Programming gave, along with the titular taxonomy, a non-exhaustive summary of the problems, starting with visual languages in general:

[...]


The benefit of visual programming is usually that it captures the domain model better than using general programming languages without any guidance. A lot of the value is in well-maintained integrations with other systems. Often, if the components of the visual language were released as a library or a framework, it would provide more benefit than having to drag and drop visual boxes and arrows, which never make it into version control, or only in some very clunky way.

Releasing the composable parts as a text-based library will not happen, because of economics. There is little money in creating libraries for their creators. All generated value would be captured by infrastructure and cloud platform providers.

With visual languages, the value can be captured by selling a no-code idea to non-programming folks and especially to enterprise with deep pockets wishing to get rid of expensive engineers.

Visual programming languages are an economical problem, not a technical one.


What a fantastic post.

> Is ‘folk wisdom from internet forums’ worth exploring as a genre of blog post?

I’d add another yes here, if they’re all as thorough as this post.

I’ll add something that I didn’t see directly addressed in here: input methods strongly affect the medium.

Last year I struggled intensely with RSI. I got to the point where I couldn’t type for months. I tried some voice coding tools (caster and talon). Some aspects of this were actually better than keyboard coding. Unfortunately speech recognition is still at a point where it drove me insane with inaccuracies.

It also made me realize that with a keyboard, text code is incredibly natural. With keyboard+mouse, some new modes open up. With voice alone, I wanted to code a very different way.

I won’t go into detail, because this is already a long comment, but I believe that when we get speech recognition as reliable as typing, we’ll see an explosion of new programming paradigms.


Something I'd like: a modern maintained DRAKON version: https://en.wikipedia.org/wiki/DRAKON . Tried it once, generates reasonably good C code. It would be great to have a modern FLOSS version.


Not for C, but someone shares your motivation enough to build a version for JS: https://drakon.tech/ & https://github.com/stepan-mitkin/drakon.tech


> So presumably we could learn to distinguish closely packed shapes if we were familiar enough with the conventions.

The data visualization people have already demonstrated this with sparklines and small multiples. There are fewer preattentive attributes than most people would assume so I expect increasingly dense visual representations to approach text (e.g. logograms).

I'll also point out that data viz people have also come up with reasonable solutions to problems mentioned like wire bundling. I haven't seen an auto-layout algorithm I like but I've found a reasonable guess followed by manual manipulation with automatic wire routing to be okay for visualizations and the circuit drawing and VLSI I did 20 years ago in college.


There is a type of visual programming which is applied to virtually everything around you of some mechanical complexity: interactive geometric constraint solvers.

In CAD tools like SolveSpace, SolidWorks, Inventor and several others, the dimensions of objects are not described directly, but instead through a set of constraints such as “these are orthogonal”, “these have this distance”, “this is a tangent” etc. If you have never used it, try SolveSpace, which is free.

Albeit highly declarative, it is absolutely a programming language for a specific domain. It doesn’t try to be good at describing imperative programs or flows; instead it presents a paradigm which is very different, but fits perfectly in the problem domain of mechanical designs.
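
The flavour of it is easy to show in a few lines: each constraint becomes a residual that should be zero, and the whole set goes to a numerical solver. This is only a sketch of the idea (using scipy, with made-up geometry), not how SolveSpace itself is implemented:

    from scipy.optimize import fsolve

    def residuals(v):
        bx, by, cx, cy = v                          # point A is pinned at the origin
        return [
            bx**2 + by**2 - 3**2,                   # distance A-B is 3
            (cx - bx)**2 + (cy - by)**2 - 4**2,     # distance B-C is 4
            bx * (cx - bx) + by * (cy - by),        # AB perpendicular to BC
            cy,                                     # C lies on the x axis
        ]

    bx, by, cx, cy = fsolve(residuals, [2.0, 2.0, 4.0, 1.0])
    print(f"B = ({bx:.2f}, {by:.2f}), C = ({cx:.2f}, {cy:.2f})")   # a 3-4-5 right triangle

The interactive part is the CAD tool re-running the solve as you drag points around, which is what makes it feel like direct manipulation rather than programming.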


Wow! This is so awesome! Would you please write a page like this for every Hacker News post?


Probably worth triaging which topics are recurring to perform a meta-analysis. See my comment elsewhere in this thread.


I have to write scripts for Cisco UCCX (call center software) at work sometimes. Never heard of it before this job. I was horrified that it uses a visual ‘drag-n-drop’ IDE to create the call handling scripts, with no other supported method. It’s the only ‘visual’ coding I’ve ever used. And in the 2 years I’ve used it, I actually don’t hate it. As other posters have mentioned, a visual approach can be a decent DSL.

Btw, you can still run arbitrary Java (1.7) in UCCX scripts too, which is kinda cool. Used that to make email alerts in case of certain situations in the call queue.


The biggest challenge with visual programming is you have to reinvent every text-based tool.

And there are a lot of them.

Diff, interactive debugging, keyboard-accelerated inputs and shortcuts, search, version control... The list goes on.


Side note: interactive debugging is kinda inherent with the flow-based programming paradigm. The program is continuously executing with every new action and immediately reflecting the current state. To program with it is to debug. If there’s an error with a component, it turns red and everything downstream breaks as well.


Yes, I think that sort of instant feedback is very helpful and is probably one of the reasons Excel is so popular.


That falls on its face at scale. There is a reason not every bullet in a gun is a tracer bullet.


Flow paradigm doesn’t claim scale though. It’s about being a DSL for specific environments, often visually-oriented software like CAD or interactive art, which doesn’t have heavy 10K person team requirements.


Fair. And I think there is more here.

If you are programming a singular thing, this probably has good legs. As soon as you are programming many things, though?

This is why CAD systems can be seen as visual programming of parts, but are never considered in this debate.


I’d be curious to hear you elaborate on the singular/many things point. Trying to understand which deficiency you are pointing out.


For example, programming a security system for my house would be great visually. Programming all of the houses on my block? Not so much. Even though the textual rules would be roughly equivalent.


To further your point. Copy/paste. Email. Templating. Highlighting. Rendering that is not a part of the visual, such that accessibility tools work with it.

There is an impressive list of things you enter a race to parity with.


Our attempt at a Scratch-like language for our game creation app Modbox: https://docs.modboxgame.com/docs/mbscript

It's definitely allowed non-programmers to make complex AI/game state for their creations, without having to deal with syntax errors or other issues. We chose Scratch/Blockly since this would be used more for managing states/logic rather than flowing data (where node-based makes more sense).


Overall a really good essay.

I would have split the "Types of visual programming" into two main categories: visual and graphical.

E.g. Scratch is only visual because it is essentially over-the-top syntax highlighting, but it still has an AST like textual programming (both are based on hierarchies and sequences).

Spreadsheets are also sequential but in multiple dimensions.

Dataflow- / node- based systems on the other hand are actually graphical (graph-based) and can not be mapped to hierarchies in text directly.


Post was useful, more of this kind of thing please!


I like the line "There’s a comforting familiarity in reading the same internet argument over and over again."

The problem I think with visual programming tools is people get stuck. I watched my dad's career die stuck in Visual Basic. It's because people get stuck in thinking the way those tools force you to think. Code in text has a magnificent universality.


I'm curious what you mean by how Visual Basic makes you think.


Visual Programming is great for selling: look how simple it is to do this simple task!


The cynical side of me thinks that is the basic pitch for every framework and methodology.


Look at the kinds of examples people are using to demo Copilot.


Funnily enough, I’ve been quoted in this. ;)

To expand on modularity, it is indeed curious to me why many will often throw it out that visual programming somehow doesn’t scale for complex and large programs and fails at modularity.

Let’s look at LabVIEW as compared to F# and Elixir, two text-based languages that are wonderful to work in and generally well loved, where LabVIEW is well loathed. In LabVIEW, you have projects to organize code, libraries to serve as modules or namespaces, the ability to create functions (including polymorphic and malleable VIs, which are a way to auto propagate types to make functions auto polymorphic), clusters (which are basically structs or records), immutable data, OOP with interfaces (value-based and not reference-based, which is a major plus and not found in many languages), concurrency features, and actors. F# has projects, namespaces, modules, immutable data, records, discriminated union, OOP (reference-based), pattern matching, concurrency features, and some actor frameworks. Elixir is similar with all its features with some small differences. Now, these are all essentially the same fundamental tools available to create programs and to modularize them and scale them. So where’s the rub? How is it that you can somehow create modular and large complex programs with Elixir and F# but not LabVIEW? Where’s the actual discrepancy? Throw in other languages to this discussion and you get hoards of inheritance hierarchies, memory access problems, and mutability, all of which are well known to cause issues with scale and complexity. So I’ve never understood the argument. (I actually do have arguments against LabVIEW for large complex problems but it’s not the arguments usually made and more relate to the IDE than the language).

Anecdotally, I once interviewed with a place that was vehemently giving up on LabVIEW. They gave no real reason other than they hated it and said it didn’t work for large programs. So I asked to see one of their Python programs, which was their new language of choice. I was shown a single file that was something like 10,000-15,000 lines long with functions often having signatures taking 10-20 lines, meaning the functions had something like 10-30 function arguments. So at that point I knew that these people didn’t actually know how to properly architect software and that LabVIEW wasn’t actually their problem. They were their problem.

As mentioned below my part that was quoted, it can indeed be argued that LabVIEW’s visual nature is a feature when it comes to showcasing poorly organized code. It makes it bare, right there in front of you and in more dimensions than what you get in text. You don’t necessarily get that with text, so people who write poorly organized code in any language but then use LabVIEW are greeted with a visual experience they’re unhappy with. They, incorrectly, see the problem as lying with the tool and not themselves and their programs.

All that being said, there are absolutely problems with LabVIEW and other visual languages, just like there are with text-based languages. I’d just like to see more objective discussions of these things because I believe a text-only view holds things back. I’d love to see programming computers evolve beyond the languages, editors, and IDEs we have now.


Could we connect? I have some background in visual programming languages (grad school), and would love to discuss these ideas with you, esp. the "programming=text" assumption. Thanks!


One more into that list.

Flow based with input as impulses.


Really high quality post, would love to see more like this


really cool survey of the topic, nice work!


off topic but just triggered by the article.

I strongly suggest somebody write a bot on HN and maybe leverage GPT-3 to harvest collective wisdom on many topics. It's a gold mine.


HN is a great resource to gather different points of views from experts. Here is some folk wisdom I compiled on load and performance testing for a presentation:

Selenium/WebDriver handles the UI tests. Postman and a handful of others can perform the API tests. JMeter is the de facto standard for load testing at every place I've been to. And you can write any of these into your CI.

There are three separate reasons to do load testing:

1) Performance testing. Confirm the system does not degrade under a specified load and find out what performance can be expected under these circumstances. This is basically ensuring your system can handle X amount of traffic without issues and knowing your baseline performance. You should get the same kind of response times as you are getting from live server telemetry.

2) Stress testing. Finding out what happens when the system is stressed beyond its specs. How does it degrade & where.

3) Reliability testing. Find out how your system breaks and when. The goal here is to try to break the system and test things like failover and making sure you don't lose or corrupt data. Better to die gracefully than abruptly.

I've used Jmeter quite a bit over the last year, and what seems to be the largest issue is that it doesn't "break" apps because it works at the protocol level and isn't a full "browser". As you increase load and response times start to increase, since it runs sequentially through the test plan the time between the requests also increases. But, when actual humans with browsers use apps there are loads of AJAX requests being fired which don't necessarily go in order or wait for others to complete first.

Most managers only want to see how traffic will perform in what-if scenarios (can we handle Black Friday, what will happen if our traffic goes up 10x during a special event, etc.). For these, JMeter and whatever service you're using for analytics/metrics work perfectly fine (New Relic is the current favorite). Then we can compare average latency, error rates, etc. with what we get during JMeter runs.

The issue is that most managers or other PHBs want more than just a one-off for load testing. And this article greatly covers that.

A lot of time people want a Swiss Army Knife of tools for "automation", where load testing falls under that category for them.

This magical tool should be able to:

* Test your API calls from a functional, integration, and performance perspective (not load testing; just making sure we're under a specific latency)

* Test your websites from the same testing perspectives, as well as being modular (i.e. Selenium's POM)

* Integrate with CI so specific tests, test suites/flows can be tested for every new build, and we can run specific workflows by clicking a button

* Be used during load testing so we can measure latency and run tests while recreating customer experience from the Web side of things.

Last time I had to do load testing in a professional context, we required 10+ million long-lived TCP and websocket connections transacting multiple times a second. There weren't any off-the-shelf solutions at that time that could come within an order of magnitude of that at a reasonable cost - the most viable solution we tested required a thousand EC2 instances to sustain that traffic.

Do red line testing. Gradually increase load on your service with concurrent requests until it is saturated.

Measure how the latency and errors progress with more concurrent requests and understand at what point and how your service starts to break down under heavy load.
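
A minimal sketch of that ramp, purely as a stand-in for JMeter or similar tools (the URL, step sizes and per-step request count are placeholders):

    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/health"       # placeholder for the service under test
    REQUESTS_PER_STEP = 200

    def one_request(_):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(URL, timeout=5):
                pass
            return time.perf_counter() - start, False
        except OSError:                        # URLError and timeouts are OSError subclasses
            return time.perf_counter() - start, True

    for concurrency in (1, 5, 10, 25, 50, 100):
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            results = list(pool.map(one_request, range(REQUESTS_PER_STEP)))
        latencies = sorted(lat for lat, _ in results)
        errors = sum(1 for _, failed in results if failed)
        p95 = latencies[int(0.95 * (len(latencies) - 1))]
        print(f"concurrency={concurrency:3d}  median={statistics.median(latencies) * 1000:6.1f}ms"
              f"  p95={p95 * 1000:6.1f}ms  errors={errors}/{REQUESTS_PER_STEP}")

Plotting latency and error rate against concurrency shows the knee where the service starts to fall over.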

Based on this you may have to do many things –

1. Can you optimize your service or your service's downstream dependencies or the application calling your service.

2. Can you build in graceful degradation into your service – functionality reduction to get more useful throughput out of your service – with same resources.

3. Build circuit breakers and throttlers in front of your downstream dependencies, so that you don't overload them and cause them to fail, and so that you don't fail totally when they do fail (a rough sketch follows after this list).

4. If you do get overloaded for some reason (say your server pool suddenly became half its size), are you able to recover quickly.

5. Do you have monitoring and alerts for these load scenarios?
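
For point 3, a minimal circuit-breaker sketch (class and parameter names are invented; real implementations add per-dependency state, half-open probing policies, metrics, etc.):

    import time

    class CircuitOpenError(Exception):
        pass

    class CircuitBreaker:
        def __init__(self, max_failures=5, reset_after=30.0):
            self.max_failures = max_failures
            self.reset_after = reset_after
            self.failures = 0
            self.opened_at = None

        def call(self, fn, *args, **kwargs):
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_after:
                    raise CircuitOpenError("breaker open, failing fast")
                self.opened_at = None              # cool-down elapsed, allow a trial call
            try:
                result = fn(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()
                raise
            self.failures = 0
            return result

    # usage (fetch_downstream and order_id are placeholders):
    #   breaker = CircuitBreaker()
    #   breaker.call(fetch_downstream, order_id)

Failing fast this way sheds load from an already struggling dependency instead of piling retries on top of it.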

I tend to avoid having to do load testing as it sucks up time without telling me much of interest. I instead opt for having decent telemetry on the live system. It will tell me how it performs and where the bottlenecks are. I can set alerts and take action when things degrade (e.g. because of a bad change). Besides, there is no substitute for having real users doing real things with real data. And in any case, having telemetry is crucial to do any meaningful stress or reliability testing. Otherwise you just know it degrades without understanding why.



