Hacker News
Visual Programming – Why It’s a Bad Idea (mikehadlow.blogspot.com)
95 points by fagnerbrack on Nov 20, 2018 | 119 comments

Visual programming languages will only take you so far, but often that is far enough. What the author misses is that we as professional programmers miss the success stories, because we only get called in when the boundaries have been reached.

In my career I’ve often been asked to rewrite Access DBs, InfoPath forms, or SharePoint sites developed by amateurs into something more usable. My early reactions were along the lines of “what insane person tried to do this in Access?” At some point I realized that was the wrong way to think about it. These systems are working software delivering value. I now say “wow, congrats on doing so much on your own. Let me help you take this further.”

Back when I was doing tech-work in the financial districts of Chicago and NYC (late 90s, early 00s), some of my very favorite people were salesmen and analysts who had built these intricate and complex applications on top of Excel or Access or FoxPro or whatever.

I was just starting out as a programmer (professionally) myself, and once I was finished doing whatever I was sent to their office to accomplish, they would pull me to the side and show me their applications and pick my brain about ideas and ways to improve them or if it made sense to try a "more serious" programming language.

I was always completely floored by how much they had accomplished with every-day tools that people generally slog through out of necessity. Some of these guys were selling their applications for a solid slice of monthly income and support as well. "Normal" people building real, and truly useful applications with real domain knowledge are an absolute inspiration for me.

It reminds me a lot of web development, actually. Because everyone in their potential client-base had Excel, just as we all have browsers now. So building in Excel made perfect sense. I see the same thing happen with Wordpress or Shopify these days. Building these amazing applications that extend far beyond the purpose of the underlying application.

My early interactions with those people helped me immensely when I went out on my own full time, where I would walk into a small business and see this complicated app built upon - something - by an overworked "tech person of the office", who was clearly a bit sad that their creation was about to be replaced by some stranger. That was the person I was going to get to know and work with daily to get the application right, and I made sure they were part of every conversation.

Most of my projects are greenfield these days, but I still hold those programmers - in the truest sense - in the highest regard. I still love to discuss their projects with them in the rare cases that I get the opportunity.

Indeed. I once thought as the author did, and as I'm sure many programmers do, that things like VB and Flash and Excel macros were a blight upon the world. But a lot of useful work got done with VB, careers were launched from humble beginnings on Newgrounds, and Excel is one of the most popular tools in all of business, if not the most popular.

I think John Ohno does a good job of articulating what's so great about these tools in his article "Big and small computing" [0].

[0] https://hackernoon.com/big-and-small-computing-73dc49901b9a

VB and Excel have the advantage of having a full-blown programming language behind them, so you can implement very complex things. I compare this to LabVIEW, where everything is visual and things get out of control pretty quickly.

LabVIEW has m-script (which is MATLAB-like).

How does this relate to the visual programming stuff? Can you go back and forth or is it a separate thing?

It is used to write additional components to be used in the visual editor. A code block, like a custom macro in spreadsheets.

As a non-professional programmer, I much prefer the way you think about this to the way the author does.

Rarely do I find myself programming for the sake of programming; it's always to serve some goal, to make some thing, fix some problem. If I write the worst code, in the worst way, and it still makes the light flash or the robot arm move, etc., then I succeeded.

I make heavy use of the MIT Android app maker thing in my Arduino and ESP32 projects, and I'll be the first to admit that I use it so heavily because I have some minimal grasp of it and I've wrapped my head around most of its paradigms.

I'll admit that because I've only got a hammer, every problem is a nail, but taking away my hammer because I'm using it wrong just means I can't build anything.

If it doesn't itch, don't scratch. If it itches and scratching with your finger makes it go away, you may not need a "professional scratching device".

Also, if people can build computers in Minecraft that don't achieve anything and get praise for it, people who solve real problems with creative (ab)use of something really should get kudos, too. You're not even using anything "wrong".

I think the broader, opposing point you bring out is that visual programming is perfectly suitable for specific scopes of computational modeling, and that's just because they're DSLs, and we already know DSLs are the right choice for a number of problems. I'd like to know who the author thinks they're engaging with on the points they're making.

I disagree that DSLs are the right choice, unless the DSL is an extension of a suitably flexible implementation language itself.

Actually, the perceived need for "dumb" DSLs springs from the same misconceptions mentioned in the article, and the end result is inevitably that simple problems are made somewhat simpler and complex problems turn into impossible problems.

A proper DSL extension in an unobtrusive host language is another thing but also hard to find. If a Lisp syntax is acceptable then that's the canonical example.

I don't want to get depressed over all the misconceived excuses for test automation languages, process automation, configuration templating systems, etc., so I'll just stop here.

Obviously these are sweeping statements, but the problem with this generalization is that successful DSLs become invisible because we take them for granted. Regular expressions are a DSL. SQL is a DSL. LaTeX is a DSL. The Wikipedia page on domain-specific languages even lists HTML as an example: https://en.wikipedia.org/wiki/Domain-specific_language#Examp...

With regard to LaTeX or HTML, you could even argue that their goals could be better accomplished with some sort of graphical interface, which sounds a lot like… visual programming!

Geometric constraint solvers are essentially graphical, declarative domain-specific languages for mechanical/physical design. As an example: http://solvespace.com/index.pl

I would argue that, perhaps with the exception of regex, the examples you mentioned are excellent ideas that are awesome despite their suboptimal DSLs, and they would have been even better if their DSLs had been implemented as extensions in a Lisp-like language.

I agree that it's good for a DSL to give not just unfiltered but well-integrated access to the underlying implementation language, instead of trying to replace it with a dumbed-down domain-specific language.

It should be possible for implementation language programmers to create new primitives, that visual language programmers can easily use without learning the implementation language.

And that extension interface should be part of the visual programming language from day one, good enough for the visual language to use itself for most of its built-in primitives, not an afterthought nailed onto the side.
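As a sketch of what such an extension interface might look like (all names here are hypothetical, and Python stands in for the implementation language), the host language could expose a registry of primitives that a visual editor enumerates and presents as blocks:

```python
# Hypothetical sketch: implementation-language programmers register
# primitives; a visual editor would enumerate PRIMITIVES and present
# each entry as a draggable block, without users touching Python.
PRIMITIVES = {}

def primitive(name):
    """Decorator that publishes a function as a visual-language block."""
    def register(fn):
        PRIMITIVES[name] = fn
        return fn
    return register

@primitive("Add")
def add(a, b):
    return a + b

@primitive("Clamp")
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

# The visual runtime dispatches by block name, never by Python symbol:
result = PRIMITIVES["Clamp"](PRIMITIVES["Add"](40, 7), 0, 42)
print(result)  # 42
```

Because the built-ins go through the same registry, nothing distinguishes a "native" block from a user-contributed one, which is exactly the day-one property argued for above.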

I think the future is general-purpose languages that make DSLs. Right now we have Racket, and it is such a powerful thing for creating DSLs simply.

I don’t want to beat the author up too much; the over-application of CASE tools in the 90s was a real thing, and I have less than fond memories of being forced to use a VPL when it really wasn’t the appropriate solution.

A good professional programmer will feel (and in fact be) constrained and slowed by a VPL. That’s frustrating and likely a misuse of resources. I just think the author missed the point that VPLs generally aren’t for “us”; they are for people who do something else for a living.

To add: can you imagine if IT had to develop every single application the business thought up? They would never cope. I think Access, InfoPath and CMS systems like Drupal allow the business to create prototypes. Once the application has been validated and becomes business critical, then IT can take over and develop a "proper" application.

It makes sense to use a visual programming language to create basic CRUD applications. A central task list with fields that are specific to the business process can go a long way in improving operational efficiency in a business unit. Beats an Excel spreadsheet.

A prototype is not enough. A prototype is backed by analysis that is usually hidden or omitted, and “decoding” a prototype app to create the “real” app may take a lot of time.

In my experience, people have more ideas than we have developers to implement every idea. It is very hard to work out if an idea is a good idea or not before you implement it.

You cannot implement a system for every idea because you will not have enough developers and you will waste time implementing ideas that sound good but are useless. Visual Programming applications like Access allow less technical users to implement their ideas. Some of these ideas will go nowhere. This is fine because you will not have wasted developer resources. Some of these Access applications will solve genuine business problems. It may be difficult to decipher them but it is no more difficult than trying to figure out what users want. At least in this instance, you have an application you can test against.

That is an awesome attitude to have.

Topkai22, I love your attitude. This is exactly how we should look at visual programming languages. They allow people to create something that was previously inaccessible to them. In surprisingly many cases, it's completely enough!

I also understand why people dislike this idea. In many cases, it's really hard to extend something created by non-professional programmers. I'm one of the founders of Luna ( http://luna-lang.org ), a visual programming language blurring the boundary between visual data-flow design and conventional textual programming.

As a counterpoint to the article in this topic, I'd encourage people to also read our blog post about why visual programming could actually be very useful: https://medium.com/@luna_language/luna-the-visual-way-to-cre...

There is more to visual programming than simple wire diagrams and "code-blocks-as-visual-blocks":





And yes, this: http://www.fantasticcontraption.com/ This is not 100% real programming, because it does not include sensors, but you do create constructs that interact with the environment and accomplish goals. It's close enough to show what's possible with certain interfaces.

Long-term: https://dynamicland.org/

There have been far fewer resources allocated to the development of these concepts, mostly for historic reasons.

The sad reality is that both critics and proponents of visual programming are often uninformed about prior work, user studies and related research rooted in cognitive and developmental psychology.

I thought this presented an interesting visual programming concept:


More in these videos: https://vimeo.com/179904952 https://vimeo.com/274771188

There's so much interesting prior work!

I really enjoyed this paper “A Taxonomy of Simulation Software: A work in progress” from Learning Technology Review by Kurt Schmucker at Apple. It covered many of my favorite systems.


It reminds me of the much more modern and comprehensive "Gadget Background Survey" that Chaim Gingold did at HARC, which includes Alan Kay's favorites, Rocky’s Boots and Robot Odyssey, and Chaim's amazing SimCity Reverse Diagrams and lots of great stuff I’d never seen before:


I've also been greatly inspired by the systems described in the classic books “Visual Programming” by Nan C. Shu, and “Watch What I Do: Programming by Demonstration” edited by Allen Cypher.



Brad Myers wrote several articles in that book about his work on PERIDOT and GARNET, and he also developed C32:

C32: CMU's Clever and Compelling Contribution to Computer Science in CommonLisp which is Customizable and Characterized by a Complete Coverage of Code and Contains a Cornucopia of Creative Constructs, because it Can Create Complex, Correct Constraints that are Constructed Clearly and Concretely, and Communicated using Columns of Cells, that are Constantly Calculated so they Change Continuously, and Cancel Confusion


Also, here's an interesting paper about Fabrik:


Danny Ingalls, one of the developers of Fabrik at Apple, explains:

"Probably the biggest difference between Fabrik and other wiring languages was that it obeyed modular time. There were no loops, only blocks in which time was instant, although a block might ’tick’ many times in its enclosing context. This meant that it was real data flow and could be compiled to normal languages like Smalltalk (and Pascal for Apple at the time). Although it also behaved bidirectionally (e.g. temp converter), a bidirectional diagram was really only a shorthand for two diagrams with different sources (this extended to multidirectionality as well)"
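Ingalls' "modular time" can be approximated in a few lines: inside one tick, the diagram is pure dataflow, with each block firing once in dependency order, while the enclosing context may tick the diagram as many times as it likes. This is a hypothetical Python reconstruction, not Fabrik's actual implementation:

```python
# Minimal dataflow sketch (hypothetical, not Fabrik itself): each node
# names a function and the wires it reads; within one tick, nodes fire
# in dependency order, so time is "instant" inside the diagram.
def run_tick(nodes, inputs):
    """nodes: {name: (fn, [names of input wires])}; inputs: source values."""
    values = dict(inputs)
    pending = dict(nodes)
    while pending:
        ready = [n for n, (_, deps) in pending.items()
                 if all(d in values for d in deps)]
        if not ready:
            raise ValueError("cycle or missing input")  # no loops, per Fabrik
        for name in ready:
            fn, deps = pending.pop(name)
            values[name] = fn(*(values[d] for d in deps))
    return values

# The temperature converter as a one-directional diagram:
nodes = {
    "minus32": (lambda f: f - 32, ["F"]),
    "scale":   (lambda x: x * 5 / 9, ["minus32"]),
}
print(run_tick(nodes, {"F": 212})["scale"])  # 100.0
```

The bidirectional converter Ingalls mentions would then just be this diagram plus a second one with "C" as the source, matching his point that a bidirectional diagram is shorthand for two one-way diagrams.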

We could go back farther with Sutherland's SketchPad (early 60s) and Smith's Pygmalion (mid 70s).

There are also a lot of disparate visual programming paradigms that are all classed under "visual", I guess in the same way that both Haskell and Java are "textual". It makes for a weird debate when one party in a conversation is thinking about patch/wire dataflow languages as the primary VPLs (e.g. QuartzComposer) and the other one is thinking about procedural block languages (e.g. Scratch) as the primary VPLs.

Absolutely, their important work foreshadowed and inspired so much great stuff. Also Douglas Engelbart's NLS pioneered many of the ideas of visual programming.

I think spreadsheets also qualify as visual programming languages, because they're two-dimensional and grid based in a way that one-dimensional textual programming languages aren't.

The grid enables them to use relative and absolute 2D addressing, so you can copy and paste formulae between cells, so they're reusable and relocatable. And you can enter addresses and operands by pointing and clicking and dragging, instead of (or as well as) typing text.
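The relocation rule described above is simple enough to sketch: a relative reference shifts by the copy offset, while a `$`-anchored absolute reference stays put. This is a toy model (single-letter columns, no mixed references); real spreadsheet parsing is more involved:

```python
import re

# Toy model of spreadsheet copy/paste: relative refs (A1) shift by the
# copy offset; fully absolute refs ($B$1) stay fixed. Mixed refs
# ($B1, B$1) are omitted to keep the sketch short.
def shift_ref(ref, dcol, drow):
    if ref.startswith("$"):
        return ref  # fully absolute: unchanged by copying
    col, row = re.match(r"([A-Z]+)(\d+)", ref).groups()
    new_col = chr(ord(col) + dcol)  # single-letter columns only
    return f"{new_col}{int(row) + drow}"

def copy_formula(formula, dcol, drow):
    """Relocate every cell reference in a formula by (dcol, drow)."""
    return re.sub(r"\$?[A-Z]+\$?\d+",
                  lambda m: shift_ref(m.group(0), dcol, drow), formula)

# Copy "=A1+$B$1" one column right and one row down:
print(copy_formula("=A1+$B$1", 1, 1))  # =B2+$B$1
```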

Some people mistakenly assume that visual programming languages can't use text at all, or that they must use icons and wires, so they don't consider spreadsheets to be visual programming languages.

Spreadsheets are a wildly successful (and extremely popular) example of a visual programming language that doesn't forsake all the advantages of text based languages, but builds on top of them instead.

And their widespread use and success disprove the theory that visual programming languages are esoteric or obscure, or not as powerful as text-based languages.

Other more esoteric, graphical, grid-based visual programming languages include cellular automata (which von Neumann explored), and more recently "robust first computing" architectures like the Moveable Feast Machine.
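Even a one-dimensional cellular automaton shows how much computation a purely spatial, grid-based rule can carry; Rule 110, for instance, has been proven Turing complete. A minimal sketch:

```python
# Rule 110, a 1D cellular automaton proven Turing complete: each cell's
# next state depends only on itself and its two neighbors, read as a
# 3-bit index into the rule number.
RULE = 110

def step(cells):
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2)
                  | (cells[i] << 1)
                  | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31 + [1]  # single live cell at the right edge (wrapping grid)
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```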



Robust-first Computing: Distributed City Generation (Moveable Feast)


>A rough video demo of Trent R. Small's procedural city generation dynamics in the Movable Feast Machine simulator. See http://nm8.us/q for more information. Apologies for the poor audio!

Programming the Movable Feast Machine with λ-Codons


>λ-Codons provide a mechanism for describing arbitrary computations in the Movable Feast Machine (MFM). A collection of λ-Codon molecules describe the computation by a series of primitive functions. Evaluator particles carry along a stack of memory (which initially contains the input to the program) and visit the λ-Codons, which they interpret as functions and apply to their stacks. When the program completes, they transmute into output particles and carry the answer to the output terminals (left).

Various links related to this have come up in the past few days, and I think they mostly betray a lack of understanding of the weaknesses of text-based programming, and a lack of imagination as to how visual programming could actually work.

One person who's done a lot of thinking about this is Bret Victor, and I encourage anyone interested to watch his videos: http://worrydream.com/

Here's a project that's trying to split the difference between visual programming and text-based programming: https://luna-lang.org/

Thank you jakelazaroff for mentioning Luna <3 ! We, at Luna, are trying to blur the line between text and visual programming, allowing people to choose the best tool for their use case – visual representation for high-level system modeling and textual representation for low-level components.

Personally, I've been creating visual programming languages for years for domains such as visual effects, allowing artists (people with very limited or no programming skills) to create outstanding physics simulation systems or automated geometry processing on their own. Seeing people able to create something that was previously out of reach for them is amazing.

Readers of this topic might also be interested in reading our blog post about why visual languages are actually very useful here: https://medium.com/@luna_language/luna-the-visual-way-to-cre...

Tools for visual programming have seen massively less effort spent on them than text-based tools. This is why text-based tools have many more conveniences, and better integration, than visual-oriented tools.

Another important aspect is that humans use words to communicate, so some amount of text is inevitable (and welcome) in visual programming tools.

Also, text-based tools use the most common and open format of all: a stream of ASCII (UTF-8) characters. This makes it possible to mix and match text-based tools (editors, compilers, formatters, linters, etc.). Visual languages were (and are) dominated by proprietary formats. This keeps visual programming tools in silos, profitable for the vendor but preventing the wild expansion that text-based programming languages periodically enjoy.

I'm not a visual programming proponent, but I don't think this article does a very good job of supporting its thesis. It exclusively makes claims about current VP implementations and tries to extrapolate them to VP in the abstract, but I think these are all non-sequiturs. I specifically think the article fails to distinguish between the dual problems of _modeling a program_ (hard) and representing that model visually vs textually.

For example, the article claims that the idea behind scratch is that programming is fundamentally easy but text makes it hard. I don't think that's the idea at all, but rather that programming (i.e., modeling a program) is fundamentally hard, but the difficulty is compounded because text is not a natural way for the current crop of humans to think about a program.

Another example is his criticism of the failure of enterprise UML tools--presumably the problem with these is that UML is just not a very good way of _modeling a computer program_, regardless of whether the interface to UML is visual or textual.

Another example is the criticism that the current crop of VP tools are procedural. Why can't a visual programming language be functional?

Another is the claim that VP is bad because the current crop of tools combines visual and textual programming (I'm not sure this is even true; I don't recall LabVIEW requiring text, for example). Is there any fundamental reason a programming model can't be purely visual? Why can't you represent a whole programming model (all the way down to primitives) visually?

I think most of these would be solved with more trial and error and more investment; none of these support the thesis that VP is fundamentally a bad idea.

To be clear, I think this article has a lot to offer people who are trying to solve visual programming issues, but it doesn't make the case it sets out to make--that visual programming is fundamentally a waste of time.

"Why can't a visual programming language be functional?" - there is one: https://luna-lang.org/ and it also has a dual textual/visual representation.

I agree wholeheartedly with your comment -- no-one is actually saying that programming is fundamentally easy. I'd go even further and say that textual vs visual is a bit of a false dichotomy. Text is a form of visual notation. We format our source code carefully to achieve specific visual effects. The same AST can be rendered in many different ways of various visual richness. Formatted text is just one point in a continuum of possible visualisations of the same AST.
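The "one AST, many renderings" point can be made concrete with Python's own ast module: the same tree can be rendered back as formatted text or as an indented outline, and nothing stops a tool from drawing that same tree as nested boxes and wires instead.

```python
import ast

# One AST, two visualisations: formatted source text and a tree outline.
# A visual editor could render the very same tree as nested boxes.
tree = ast.parse("total = price * qty + tax")

def outline(node, depth=0):
    """Render the AST as an indented outline of node type names."""
    lines = ["  " * depth + type(node).__name__]
    for child in ast.iter_child_nodes(node):
        lines.extend(outline(child, depth + 1))
    return lines

print(ast.unparse(tree))          # rendering 1: formatted text
print("\n".join(outline(tree)))   # rendering 2: tree outline
```

(`ast.unparse` needs Python 3.9 or later.)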

Years ago I developed a visual interface to an existing non-visual language, the multi threaded object oriented dialect of PostScript in the NeWS window system. Since PostScript is homoiconic, the data viewers and editors could also be applied to code!

It didn't try to replace text based PostScript programming, just augment it: there was an interactive shell window you could type expressions to, which had a stack "spike" sticking out of it representing the PostScript stack, which would update in response to the commands you typed, or you could drag and drop objects to manipulate the stack and the state of the system.

You could perform "direct stack manipulation" by dragging objects up and down or on and off the stack, open up nested objects and functions in an outliner, adjust the point size and formatting style, open up special editors on data and code, drag and drop objects and code around, etc. It was like a live Smalltalk system, in that you could explore (and vandalize) the entire state of the window system, and use it as a debugger to inspect and modify and debug processes of other NeWS clients (including itself).


In order to make a serious comparison of textual and visual programming, you'd be better off using a serious system such as Labview for the comparison, not a... toy for children.

Agreed. Simulink is an extremely serious visual programming language that powers much of our modern high technology: spacecraft and the like.

The author's point about assuming reduced complexity makes me think that he doesn't understand how complex mathematical function models are, even when the block diagram looks very simple to the untrained observer.

Simulink isn't really a visual programming language? It's a block-diagram environment for simulation and analysis. You link together simulation blocks, and Simulink runs an ODE solver to "solve" for the time-varying output.

Simulink can generate C code for productionization, but to write custom algorithms, you typically need to write S-functions, which are typically in C, C++ or Fortran.

Visual environments let you set up topologies quickly and reliably. It's not really programming as such though -- kinda more like scaffolding. Any kind of complex logic is still more efficiently achieved in code because many abstract ideas cannot be efficiently expressed graphically. Visual abstractions are more useful for ideas that can be concretized (typically data-flows type ideas).

Source: used and taught Simulink for many years for control systems engineering.

I think it really depends on how you define a programming language. Simulink can generate a variety of code from models, the most common being C as you mentioned, but Simulink models can also be compiled into an executable using the Real-Time Workshop toolbox. This packages the simulation engine with the model. You're right about needing code scaffolding like S-functions to compile into C, but those physical model simulations are often used directly in the control system program.

While not efficient, thanks to Turing completeness and its analog relatives, anything you can implement in code you could implement in Simulink flow. It took me about an hour to gin up Pong, for example. I think that if it is fair to consider Scratch a visual programming language, then Simulink fits the bill.

I'm not entirely sure about that. Simulink solves an ODE (more or less) system internally. How did you write Pong in it, with interactive controls?

I'm also not sure if Simulink (without S-functions, .m or any external programming language) is Turing complete, and even if it is, Turing-completeness is a very low bar for a programming system.

Not trying to be contrarian, but from my experience, despite its many logic subsystems, Simulink isn't really meant to be a general purpose programming system so much as a simulation system for time-varying outputs. It's difficult to impossible to write most normal programs in it.

I wrote Pong by encoding the ball position as a continuous complex variable and used a star chart (I-Q) with minimal persistence as the output. The paddle inputs were just variable sliders bound to a limit threshold equation. Ball motion was implemented as a normal physical system of momentum and boundaries.
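The complex-variable trick translates to a few lines: position and velocity each live in one complex number, and a wall hit flips the sign of the matching velocity component, so speed is conserved. This is a hypothetical reconstruction in Python, not the actual Simulink model:

```python
# Hypothetical sketch of "ball position as a complex variable":
# real part = x, imaginary part = y; a wall hit flips the sign of the
# matching velocity component, preserving speed.
def advance(pos, vel, dt=0.1, width=4.0, height=3.0):
    pos += vel * dt
    if not 0 <= pos.real <= width:
        vel = complex(-vel.real, vel.imag)   # bounce off a side wall
    if not 0 <= pos.imag <= height:
        vel = complex(vel.real, -vel.imag)   # bounce off top/bottom
    return pos, vel

pos, vel = 2 + 1j, 3 + 2j   # start mid-court, moving up and right
for _ in range(20):
    pos, vel = advance(pos, vel)
```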

Simulink models are Turing complete. You can set up a discrete clock simulation (very common with the DSP toolbox) and implement flip-flops and Boolean logic. They are also, for lack of a better term, Shannon complete; that is, they have all the analog components to satisfy general function computability. They also have flow control outside of those conditions and full memory storage. If you were infinitely bored and long-lived, you could write Windows in Simulink flow logic and run the modeler to make a VM.
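The flavor of building sequential logic from blocks under a discrete clock is easy to sketch (hypothetical Python, not actual Simulink): two cross-coupled NOR "blocks" form an SR latch, recomputed once per tick from the previous tick's state.

```python
# Sketch of an SR latch built from two cross-coupled NOR "blocks",
# stepped by a discrete clock: memory emerges from pure combinational
# gates plus the tick-to-tick feedback loop.
def nor(a, b):
    return int(not (a or b))

def tick(state, s, r):
    """One clock tick: recompute both NOR gates from the previous state."""
    q, nq = state
    return nor(r, nq), nor(s, q)

state = (0, 1)                 # latch holding Q=0
state = tick(state, s=1, r=0)  # set pulse, tick 1
state = tick(state, s=1, r=0)  # set pulse, tick 2 (latch settles)
state = tick(state, s=0, r=0)  # inputs released: latch holds
print(state[0])  # 1: the latch remembers the set pulse
```

Holding S high for two ticks lets the cross-coupled feedback settle, mirroring how a real latch needs the pulse to outlast its propagation delay.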

I agree with you here. Simulink absolutely isn't meant to be a general-purpose programming language. It's an extremely domain-specific programming language, but one that in its specific domain offers considerable advantages over textual programming. I don't want to advocate writing software in Simulink because it's a bad idea, and I only do it myself in very limited cases because I have a hobby of analog computing and Simulink is very good for that at a certain level of abstraction.

Mostly, I just take umbrage at the article and the author's reasoning. Simulink is a counterpoint to some of his arguments, but those counterpoints were made well elsewhere in the thread, so I was just providing an example.

Sorry if I am starting to drone on or come across as hostile.

There are people who swear by Labview, especially those more familiar with electronics than code. There are very good researchers I know of in soft condensed matter Physics who know they can't code but feel confident using Labview.

One of the interesting things I found was that the 2-dimensional layout helped a lot in remembering where stuff was: this was especially useful in larger programs.

I believe Labview barely scratches the surface of what is possible.

Having spent a lot of time using LabVIEW, I would agree that it can be an amazingly useful tool (highly performant code that easily interfaces with data acquisition devices). That being said, LabVIEW shows the limitations of visual coding:

* Sharp learning curve
* Difficult to find help or code snippets online
* Lack of decent version control
* Finding the function you want is hard (requires navigating several submenus)
* All of the icon symbols look the same

Also, while LabVIEW offers a great platform for an object-oriented coding style, I feel like the 2D layout always results in a messy sprawl rather than layers of abstraction. This could be because LabVIEW is used by people with less traditional software background, or it may be that making new functions is a bit of a pain in LV.

Some of your complaints come from the fact that LabVIEW is pretty niche. It would need at least an order of magnitude more developers to see numbers like C/C++. That's why you're not finding much online help.

There is also the fact that LabVIEW is old. Like REALLY old. LabVIEW came out over 30 years ago and as such has a lot of legacy cruft, along with lacking some of the more modern conveniences, such as source control that scales well. LabVIEW was also designed by a hardware-first company and has always been geared toward building out that hardware ecosystem.

Finally, yeah, poor programming practices exist in all languages. In visual languages, they make programs hard to read. Sometimes this forces you to be clever and reduce the flow complexity by increasing the logic or mathematical complexity, which is its own can of programming worms we could debate for hours.

Fair points. A lot of the issues with visual programming are related to the supporting tools, something the author mentions. That being said, text input easily supports search, pattern matching and code sharing, so what are the benefits of visual coding that justify building all these tools?

In my experience it is harder to implement good programming practices in a visual programming environment. For example, abstracting, consolidating and refactoring visual code is much more challenging because of the sprawl of wires...

Code sharing is actually really good in visual languages if done right because you are forced to have well defined inputs and outputs for your nested structure. Search is getting better and is not bad in Simulink. Pattern matching would be an interesting research project. Those big wire rats nests typically come about because the people using visual languages (especially Simulink and LabVIEW) are only doing that as their secondary or tertiary job. It's more important to get something working out the door than to do it well and proper and maintainable. Besides, it's hardly limited to visual languages; I'm sure nearly everyone here has gone back to some code they wrote in the past and found it nearly indecipherable.

The main benefits of visual languages is that they allow domain specific people to work in methods that are natural to the problem and closer to what they are used to. Some problems can be represented very easily visually but are a big pain in code such as physical systems. Data flow and timing diagrams lend themselves very well to visual descriptions and can help prevent race conditions. WYSIWYG editors open up software domains to huge swaths of people who go on to produce some incredible work. Yeah, ultimately, people run up against the limitations of these programming environments and then you either transition to a less friendly but more scalable code, turn into spaghetti mess, or give up.

I think the biggest thing text-based code development has going for it is that text is inherently "open" and simple to share, so tools get built for it, while most visual languages tend to be proprietary, niche, and locked down. Maybe one day we will see a solid FOSS visual programming language/environment that hits above its weight and can drop into other languages specifically for handling certain types of problems. I think that would be ideal, but articles like this one that are close to straw-man arguments don't help get anyone excited to work on it.

I was very impressed by Mark Elendt's talk at CppCon about the Houdini system. It contains a very sophisticated visual programming "language".

My view on this is that text is one of the best media we have for representing precise, information-dense data.

What I got from the Houdini presentation was that, when someone is trained to produce precise, information-rich data using other media (e.g. artists), then other representations can be just as effective.

Or an enterprise platform with actual enterprise customers: https://www.outsystems.com

Or their main competitor (with an online IDE): https://www.mendix.com/ which was sold to Siemens earlier this year for $730M

It's also a long-term maintenance challenge, especially when you have complex models.

Great for prototyping stuff fast, but hard to maintain in production. It was designed for people who aren't programmers to be rapidly productive especially when interfacing with hardware (in a lab environment). However, debugging complex topologies can be a real challenge.

We have Labview models that are being rewritten in conventional programming languages due to aforementioned challenges.

It really ultimately has to be, if it's to be maintained. LabVIEW is fine for basic prototyping; the problem is that EEs tend to think they've created the Mona Lisa of program design when they've built a barely functional prototype, and then distribute it as a finished product.

I find LabView absolutely horrible once you reach a certain level of complexity. We are converting most LabView code to C# and Measurement Studio and it's so much easier to deal with.

LabVIEW is a... toy for children.

People use what works for them. For prototyping it works fine. Not my cup of tea. I'm not a systems programmer, I only write tools in Python, Bash, and still awk. Since I work in a primarily *nix-based environment, I use mainly Kate as a poor-man's IDE and the shell, usually Bash.

I've got friends telling me I should switch over to VS now that it runs on Linux, but I'm too stuck in my ways to change, and I like my workflow as minimalist as possible. Most of the time, I rapidly prototype something on a development server and then, once I know the idea is feasible, I'll move over to Kate and write it more cleanly, since Kate has Konsole built in. I can make changes and run them immediately.

Don't be rude.

LabVIEW is used in production in a lot of serious environments.

I've been in those environments. I've had to try to repair horrific "code" written in it. I've participated in "code reviews" where the straightness of the lines was carefully assessed in the massive mess of spaghetti code they had created and thought was good code.

LabVIEW is a toy for EEs to write prototypes in. That would have been okay if it had stayed there -- but the problem is that people are actually distributing "applications" using it, and trying to maintain them is nearly impossible.

I still don't get why you call it a toy, or just for prototypes. Maybe think somewhat broader and not just about the bad experience(s) you had with it. For the things it's good at, it just works, and it's pretty hard to find alternatives (well, Measurement Studio is OK, but for simple things it's usually still a bit more work than LabVIEW). The hard part is figuring out what LabVIEW is good at and not making the mistake of trying to use it for everything. Sounds like that's where most of your bad experiences come from.

Here's an example of a place where LabVIEW just shines: I needed something to plot 'rolling' analog and digital signals, i.e. basically provide a visualization of the inputs of a combined analog/digital input card, with the ability to pause the thing, each line in a different color, data cursors, ... The data is acquired by C++ code, but since LabVIEW has this stuff built in, getting the whole thing up and running was just a matter of setting up a communication protocol between the C++ part and LabVIEW. I just went for TCP/IP and got the thing up and running in a couple of hours. It has been used like that for years now. Not exactly a prototype, nor a toy. It does just this one thing, it's all we needed, and it does it extremely well.
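The commenter's protocol isn't specified beyond "TCP/IP", but a length-prefixed binary framing like the sketch below is a common way to stream acquired samples from C++ code to a plotting frontend. The layout here is purely an assumption for illustration:

```python
import struct

# Hypothetical frame layout: a 4-byte little-endian sample count,
# followed by that many 64-bit little-endian floats. Send each frame
# over a TCP socket; the plotting side reads the count, then the data.

def pack_frame(samples):
    """Serialize a list of float samples into one wire frame."""
    return struct.pack("<I", len(samples)) + struct.pack(
        "<%dd" % len(samples), *samples)

def unpack_frame(frame):
    """Parse one wire frame back into a list of samples."""
    (count,) = struct.unpack_from("<I", frame, 0)
    return list(struct.unpack_from("<%dd" % count, frame, 4))
```

With something this simple on the wire, either end (C++, LabVIEW, or anything else) can be swapped out independently.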

I call it a toy because that's what it is. It's used almost exclusively to bypass doing things the right way. As you highlight quite well with your example.

Well, enlighten us then. What is the right way according to you? In case of my example: what is a better way than getting things done in a few hours, with no bugs whatsoever, with all functionality needed, and just working? I'm actually truly curious as to what would be better and more right.

To begin with, the LabVIEW runtime is riddled with bugs. Every application I've ever seen written under LabVIEW is expected by its creators and its users to crash randomly and continually for no explainable reason.

Yes, drawing out some things in a visual IDE and having it work is nice. The problem is that eventually LabVIEW is going to update their runtime and it won't work anymore. Now what?

Take the time and do it right up front, and then when it needs to be updated people aren't cursing "whoever decided to do this in LabVIEW years ago" as they often do in these situations.

> do it right up front,

Ok, but again what do you suggest is 'right' then, any example? Because as laid out, for me, it is right, since we have zero problems.

> Every application I've ever seen written under LabVIEW is expected by its creators and its users to crash randomly and continually for no explainable reason.

Hmm, strange. Sounds to me like those creators must be doing something wrong. I mean, we've been running a couple of LabVIEW applications for 10+ years and I honestly think none of them ever crashed (where 'crash' means suddenly stops working without apparent reason because of a bug in LabVIEW itself, not coming to a halt due to programmer error). We also went through, I don't know, 3 or 4 updates, without many problems. Again I think it just comes down to what I said first: the hard part might be figuring out how to do LabVIEW right, I guess.

Or the LabVIEW runtime is abysmal and broken and you simply haven't tripped over any of its thousands of broken cases in the minimal use of it you've done.

I say again; Don't be rude.

Especially don't be mean about other people's environments for the sake of being mean about their environments. Which is what you just did.

It's not "rude", and it's not "mean" to describe something accurately. Using those adjectives is simply projecting your emotions onto a technical discussion.

LabVIEW is a cash cow that NI pushes on schools and junior EEs to make them think they're software developers, it allows them to write horrible cruft that is generally impossible to maintain. In almost every case I've seen, it's actually easier to rewrite the whole mess from scratch rather than try to decipher what someone a decade ago did as a hack that got turned into something people depend on.

You're clearly one of NI's rabid fans; you believe what you believe, but if you take criticism of a really poor tool personally, you know the rest..

I can create things in LabVIEW if I have to, I've done it and I've debugged and rewritten other people's abominations in it as well. The concept of visual programming languages is badly flawed for a variety of reasons and LabVIEW portrays each of them thoroughly.

What's "the right way"? Spending an extra few days on making your own visualization code, throwing away flexibility in the process?

Time spent doing it right up front pays off when someone has to maintain it.

Hi! I'm Wojciech Danilo, one of the founders behind Luna language ( http://luna-lang.org ). I'd love to share with you our blog post about why actually visual programming languages could be very useful: https://medium.com/@luna_language/luna-the-visual-way-to-cre...

We've been building visual languages in the past for people in different industries, including visual effects. Watching people with limited programming skills create outstanding things on their own is truly amazing.


The author formulates the limitations of VPLs (visual programming languages) well.


The rant seems to jumble together VPLs for general-purpose programming with VPLs for domain-specific use cases. So although the article may argue that VPLs are "a bad idea" for general-purpose computing, and does a good job of explaining their downsides, it is a little reckless in saying VPLs are a bad idea overall.

The author also offers two straw man arguments to make their point, though I don't believe that these things are common misconceptions at all:

1) Abstraction and decoupling play a small and peripheral part in programming.

2) The tools that have been developed to support programming are unimportant.

Lastly, the author uses Scratch as its representative VPL example, even though it is probably the most far removed from "Visual Programming" and more in line with something I'd call "Block-Based Programming". These are far from equivalent and a poor choice to use for talking about general purpose VPLs (though the author does admit that it may have been a poor choice later).

Arrays and Objects and trees and everything else in an AST (including the AST itself) are shapes.

Shapes are better built with our hands. They're often filled with text, which is more easily said than written.

Someone can already use a visual programming language to simulate the ocean in UE4 in 2018; in 20 years we'll have a mainstream general-purpose visual programming language. More people might be using a visual programming language to build their AST than a keyboard, which is as comparatively cumbersome next to gestures/talking as punch cards are next to keyboards.

UE4 blueprints are an example of a domain-specific VPL that I've mentioned above (a good example of where VPLs can work great). They however do not solve the problem of working with abstractions well- UE4 blueprints are akin to, how the author put it, "property dialogue programming".

So far, any attempts to model abstractions for general purpose VPLs have ended up more difficult to parse and read than textual code (that argument can also be made for UE4 Blueprints[0]).

Talking about "shapes are better built with our hands" isn't a solid argument for why general purpose VPLs are bound to happen. Note, too, that I would love for them to.

[0] https://blueprintsfromhell.tumblr.com/)

Sorry, why isn't it a solid argument? I'm imagining building an AST representation that looks like our current one, since there's less abstraction, but using our hands and voice. Optimising the appearance can come later.

Alan Kay wrote some interesting stuff about some of the inspirations for "Tile-" and "Block-Based Programming", like "Thinkin' Things", in a discussion about the Snap! visual programming language!


From: Alan Kay Date: Thu, 3 May 2018 07:49:16 +0000 (UTC) Subject: Re: Blocky + Micropolis = Blockropolis! ;)

Yes, all of these "blocks" editors sprouted from the original one I designed for Etoys* more than 20 years ago now -- most of the followup was by way of Jens Moenig -- who did SNAP. You can see Etoys demoed on the OLPC in my 2007 TED talk.

I'd advise coming up with a special kid's oriented language for your SimCity/Metropolis system and then render it in "blocks".



------------- * Two precursors for DnD programming were in my grad student's -- Mike Travers -- MIT thesis (not quite the same idea), and in the "Thinking Things" parade programming system (again, just individual symbol blocks rather than expressions).


From: Don Hopkins Date: Fri, 4 May 2018 00:43:56 +0200 Subject: Re: Blocky + Micropolis = Blockropolis! ;)

I fondly remember and love Thinkin’ Things 1, but I never saw the subsequent versions!

But there’s a great demo on youtube! https://youtu.be/gCFNUc10Vu8?t=24m58s

That would be a great way to program SimCity builder “agents” like the bulldozer and road layer, as well as agents like PacMan who know how to follow roads and eat traffic!

I am trying to get my head around Snap by playing around with it and watching Jens’s YouTube videos, and it’s dawning on me that it’s full-blown undiluted Scheme with continuations and visual macros, plus the best ideas of Squeak! The concept of putting a “ring” around blocks to make them a first-class function, and being able to define your own custom blocks that take bodies of block code as parameters like real Lisp macros, is brilliant! That is what I’ve been dreaming about and wondering how to do for so long! Looks like he nailed it! ;)

Here’s something I found that you wrote about tile programming six years ago.




Etoys, Alice and tile programming ajbn at cin.ufpe.br () 6 years ago


I have been trying the new version of Alice <www.alice.org>. It also uses tile programming, like Etoys. Just out of curiosity, does anyone know the history of tile programming? TIA,

Antonio Barros PhD Student Informatics Center Federal University of Pernambuco Brazil

Alan Kay 6 years ago

This particular strand started with one of the projects I saw in the CDROM "Thinking Things" (I think it was the 3rd in the set). This project was basically about being able to march around a football field, and the multiple marchers were controlled by a very simple tile-based programming system. Also, a grad student from a number of years ago, Mike Travers, did a really excellent thesis at MIT about enduser programming of autonomous agents -- the system was called AGAR -- and many of these ideas were used in the Vivarium project at Apple 15 years ago. The thesis version of AGAR used DnD tiles to make programs in Mike's very powerful system.

The etoys originated as a design I did to make a nice constructive environment for the internet -- the Disney Family.com site -- in which small projects could be made by parents and kids working together. SqC made the etoys ideas work, and Kim Rose and teacher BJ Conn decided to see how they would work in a classroom. I thought the etoys lacked too many features to be really good in a classroom, but I was wrong. The small number of features and the ease of use turned out to be real virtues.

We've been friends with Randy Pausch for a long time and have had a number of outstanding interns from his group at CMU over the years. For example, Jeff Pierce (now a prof at GaTech) did SqueakAlice working with Andreas Raab to tie it to Andreas' Balloon3D. Randy's group got interested in the etoys tile scripting and did a very nice variant (it's rather different from etoys, and maybe better).



> The lack of good source control is another major disadvantage of most visual programming tools.

I think this is underselling the point.

Lack of good source control is a non-starter for any programming paradigm. If you want to do visual programming or any other kind of programming that doesn't dovetail with directory tree of text files, you are not going to get anybody serious to work with you until you fix that problem.

The first problem you should solve is not how to compile the code, nor how to debug the code, but how to store it.

Jupyter notebooks are a counterexample. They're annoyingly unfriendly to source control, but still quite popular (and growing).
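The friction is that notebooks store outputs and execution counters alongside the code, so every run dirties the diff. A minimal sketch of what tools like nbstripout do (the helper name here is invented):

```python
import json

# Drop cell outputs and execution counts from a notebook's JSON so
# version-control diffs show only the code and markdown that changed.

def strip_outputs(nb_json):
    nb = json.loads(nb_json)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    # Stable key order keeps diffs deterministic across saves.
    return json.dumps(nb, indent=1, sort_keys=True)
```

Run as a git filter before commit, this makes notebooks behave almost like ordinary text files under source control.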

There are a lot of very complicated documents and visual displays that can be built by a collaboration of 2 or 3 people where word of mouth suffices to deal with problems of simultaneous edit. Effectively it's ad-hoc optimistic locking with a coordination phase.

None of these models scale to teams of 10, 20, 100 people. Even at 4 they begin to become tedious.

The biggest problem with software developers is confusing "successful" with "applicable to everybody", and this is a prime example. It doesn't matter if it's successful. It's not applicable to my work. Excepting, perhaps, for Wiki-like workflows, which are less than 10% of my duties.

Maxis/EA used a visual language for Spore. http://puredata.info/exhibition

Pure Data (Pd) is a visual programming language developed by Miller Puckette in the 1990s for creating interactive computer music and multimedia works.


This is the one shining example where a visual language actually accomplishes something that a general language cannot: make programming accessible to creative people.

This is the only visual language that I felt was worth learning.

Also, Cycling '74's Max is a commercial visual programming language that shares a lot of concepts and designs with PD (which you could think of as a free implementation of Max, but ideas have flowed both ways).


On the topic of procedural sound, you might enjoy this talk about Generative Systems with Will Wright and Brian Eno:


At Maxis, we also developed a bespoke visual programming language for The Sims (first used in SimCopter), called "SimAntics".

We never officially released "Edith", the version of The Sims with the object editing and visual programming tools built in, but users have reverse-engineered the byte code and developed text-based tools for programming custom objects with SimAntics.



Here's a transcript and video demonstrating it:

The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo

This is a demonstration of the pie menus, architectural editing tools, and Edith visual programming tools that Don Hopkins developed for The Sims with Will Wright at Maxis and Electronic Arts.

Link to "Edith Sims Programming Tool" section:


I personally think node-based programming is suitable for DSLs, high-level plumbing, and data visualization (imagine Jupyter cells but with branches). The best visual programming language is UE4 Blueprint: prototype quickly, great debugging experience (great tooling in general), communicates with C++ easily, intellisense... It's really productive for gluing stuff together but not that helpful for "drawing" all the core logic. VPLs are good, but not for every use case.

Exactly this. The best example here is NodeRed: JavaScript underneath and a dataflow UI on top.

You can get connectors for pretty much anything, e.g. Raspberry Pi robotics/home automation, Twitter, HTTP, email, MQTT, SQL, OpenCV, parallel processing.

The dashboard allows you to create UIs visually, and I've ported it to run on Android + iOS:


EDIT: Ran out of time to finish the comment:

Allowing this to run on mobile means that you can develop/monitor systems on mobile, cluster the phone with other NodeRed instances, and develop mobile apps (the package is basically Node.js, NodeRed and a Cordova webview which can be hooked up to display the NodeRed dashboard).

The important things to note here are that the underlying language to the visual DSL is flexible enough to do everything, and standard ways of building visual extensions (aka nodes) exist.

The main thing missing on the Dashboard is a table node, which would allow a much more flexible layout of data (sorting, trees etc).

> The final misconception is that visual programmers can do without all the tools that have been developed over the decades to support programming. Consider the long evolution of code editors and IDEs. Visual Studio, for example, supports efficient intellisense allowing the look-up of thousands of APIs available in the base class library alone.

I think this point the author makes is an interesting one because I see it from a different perspective. I think the biggest issue is that we mix presentation with representation.

> The lack of good source control is another major disadvantage of most visual programming tools. Even if they persist their layout to a textual format, the diffs make little or no sense.

At the same time, tabs vs. spaces is still a thing. In languages which aren't whitespace-sensitive, the diffs similarly don't make any real sense, in my opinion.

I believe (and have no idea how to implement) that the representation of a programming language should be modified more directly and the rendered presentation (textual, visual, interpretive dance) should be irrelevant to the tooling.
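For what it's worth, Python's ast module already hints at how this could work: two presentations that differ only in whitespace parse to the same representation, and rendering that representation back out gives one canonical text. A sketch (ast.unparse needs Python 3.9+):

```python
import ast

# Two presentations of the same program: one tab-indented, one
# space-indented.
source_tabs = "def add(a, b):\n\treturn a + b\n"
source_spaces = "def add(a, b):\n    return a + b\n"

# Both parse to the same representation (ast.dump omits source
# positions by default), so a tabs-vs-spaces change is invisible here.
same = ast.dump(ast.parse(source_tabs)) == ast.dump(ast.parse(source_spaces))

# Rendering the representation yields one canonical presentation,
# so whitespace-only diffs disappear.
canonical = ast.unparse(ast.parse(source_tabs))
```

Tooling that stored and diffed the representation, rendering text (or boxes, or dance) only at the edges, would make the tabs-vs-spaces question moot.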

It seems, from the picture of Scratch at the bottom of the post (which is a fantastic tool!), that the author really just has an opinion about this and hasn't done really solid research or measured anything by actual outcomes.

The block-based visual programming embraced by Scratch and things like the BBC micro:bit is really good; my 7-year-old son can do really cool things with these systems (also check out the new beta https://beta.scratch.mit.edu/ which is done with React, for anyone interested).

You can even add this kind of visual block based coding into your own systems using something like https://developers.google.com/blockly/

I think it's a little shortsighted to think that just because there have been tools that didn't work out too well, the idea itself is bad.

I think what makes visual programming like Blockly and Scratch great is that you can actually see the restrictions of various constructs like loops. The visuals are just a lot less intimidating than a bunch of seemingly free-form text where the rules aren't as obvious.

The main immediate downside that I can see is that normal programming refactoring tools are probably not going to work... yet.

I feel that this is a big opportunity for AirTable, MS & Google (for their Office software), and Salesforce to look into.

Maybe once AR and VR really take off, visual programming will have a major professional revival.

One of the coolest ways to learn programming I've ever seen is the Snap! visual programming language, which is written in JavaScript and runs in the browser.

Snap! is a visual "blocks" programming language like Scratch, but with the full power of Scheme: First class functions.

It's the culmination of years of work by Brian Harvey and Jens Mönig and other Smalltalk and education experts. It benefits from their experience and expert understanding about constructionist education, Smalltalk, Scratch, E-Toys, Lisp, Logo, Star Logo, and many other excellent systems.

Snap! takes the best ideas, then freshly and coherently synthesizes them into a visual programming language that kids can use, but is also satisfying to professional programmers, with all the power of Scheme (lexical closures, special forms, macros, continuations, user defined functions and control structures), but deeply integrating and leveraging the web browser and the internet (JavaScript primitives, everything is a first class object, dynamically loaded extensions, etc).

Visual lexical closures.

User defined blocks including control structures.

Macros and special forms.

Call with current continuation!

Written in JavaScript and easy to integrate with JavaScript libraries.


Adding Machine Learning Blocks to Snap!





ProgKids is a Russian site that integrates Snap! (and Python) with Minecraft, so kids can visually program 3d turtles that move around in the world and build things!


ProgKids. Строим дом, а потом ещё пару (Building a house, then another couple)


ProgKids. Куда же без зверей? (Where would we be without animals?)


ProgKids. Как работает Snap? (How does Snap work?)


More links and stuff I've written about it on HN:


Cool... what strikes me about a number of these block-based programming tools is that they are quite approachable, and with complex functionality it's not that hard to look at and know what's going on.

https://www.mendix.com/ seems to be cleaning up on the concept. I've seen two start-ups now using it, and the turnaround times for features were definitely impressive. The problems with such solutions are always long-term maintenance, scalability and lock-in.

Going from text to visual programming only takes you from (sort of) one dimension up to two dimensions. That isn't much of an advantage for complex code where the "dimensionality" (for lack of a better term) is much higher than 2D.

Perhaps the growth of VR and other 3D interfaces will breathe new life into visual programming?

David Ackley, who developed the two-dimensional CA-like "Moveable Feast Machine" architecture for "Robust First Computing", touched on moving from 2D to 3D in his retirement talk:


"Well 3D is the number one question. And my answer is, depending on what mood I'm in, we need to crawl before we fly."

"Or I say, I need to actually preserve one dimension to build the thing and fix it. Imagine if you had a three-dimensional computer, how you can actually fix something in the middle of it? It's going to be a bit of a challenge."

"So fundamentally, I'm just keeping the third dimension in my back pocket, to do other engineering. I think it would be relatively easy to imagine taking a 2D model like this, and having a finite number of layers of it, sort of a 2.1D model, where there would be a little local communication up and down, and then it was indefinitely scalable in two dimensions."

"And I think that might in fact be quite powerful. Beyond that you think about things like what about wrap-around torus connectivity rooowaaah, non-euclidian dwooraaah, aaah uuh, they say you can do that if you want, but you have to respect indefinite scalability. Our world is 3D, and you can make little tricks to make toruses embedded in a thing, but it has other consequences."

Here's more stuff about the Moveable Feast Machine:



The most amazing mind blowing demo is Robust-first Computing: Distributed City Generation:


And a paper about how that works:


Plus there's a lot more here:


Now he's working on a hardware implementation of indefinitely scalable robust first computing:


Thanks for sharing these links. I first came across David Ackley watching the Artificial Life II Video proceedings from 1992, where he was demoing an agent-based simulator. So it's interesting (and quite amazing) to see how far his research has expanded in the past 26 years.

What does visual mean? A text file is a 2D set of characters, so if you consider it that way everything is visual. What are brackets, if not visual markers of blocks that start and end? What is code coloring if not visual aid to help with typing?

Programming is visual, full stop. The main reason "visual programming didn't catch on" is that there can't be a semantic equivalent to placing programming elements on a 2D grid. A program executes things in sequence, so it's convenient to program top to bottom. That's why text is better suited to represent it.

Dataflow programming does not generally execute in sequence, and that's where visual programming shines.

Spreadsheets are visual programming languages, and they seem pretty popular.

The economy would collapse if you took them away, so I think that's a pretty good measure of how important and widely used they are.
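The claim that spreadsheets are dataflow programs is easy to make concrete: cells recompute in dependency order, not top to bottom. A toy recalculation engine (cell names and formulas invented for illustration) might look like:

```python
# cells maps a name to (dependencies, formula). recalc walks the
# dependency graph with memoization, so evaluation order follows the
# data flow, not the order the cells were written in.

def recalc(cells):
    values, visiting = {}, set()

    def evaluate(name):
        if name in values:
            return values[name]
        if name in visiting:                      # e.g. A1 -> B1 -> A1
            raise ValueError("circular reference at " + name)
        visiting.add(name)
        deps, formula = cells[name]
        values[name] = formula(*[evaluate(d) for d in deps])
        visiting.discard(name)
        return values[name]

    for name in cells:
        evaluate(name)
    return values

sheet = {
    "A1": ((), lambda: 2),
    "A2": ((), lambda: 3),
    "B1": (("A1", "A2"), lambda a, b: a + b),   # =A1+A2
    "C1": (("B1",), lambda b: b * 10),          # =B1*10
}
```

Real spreadsheets add incremental recalculation on top, but the model is the same: a visual grid over a dependency graph.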

A preface before you read this- it's just a long joke :)

I wonder if new programming languages or VPLs aren't the problem, but rather a lack of metadata structures (going beyond a class or interface) that tie together data with its expected structure and implicit validations (including related data), the code that performs business rules (booking airplane tickets), and the code that draws a UI (the ticketing screen).

I know that 10-15 years back this was called "4GL" (fourth-generation languages), but instead of that taking off, a sludge of new procedural programming languages took hold (Rust, Go, TS, etc.). I think VPLs try to fill that same niche, where the goal is to think meta-like, not procedurally about each input box. If the system understands and records data in similar ways, we can eventually get to a place where data is ubiquitously stored in the structure the system defines, not in a programmer-defined way.

So while right now we may not have the correct primitives for 4GL or VPL, maybe one day we will, and that's why these ideas won't die -- because this is likely going to be the way software is designed in the far (or near?) future, especially in the wake of AI, where we might have a chance to let the system write the UI/business logic.

In other words, instead of thinking of AI as writing code in C/C++/Rust, etc.. AI will be in charge of moving data - at first in human defined chunks (likely spec'd out in VPL or 4GL structures), but eventually for AI to define its own objects of related structures. Eventually getting to the point where us mere humans ask AI to build things that fit human process.

The last step is that humans would no longer define the process, and AI would be able to incrementally add processes with the data it has, defining the most optimal path for humanity, whether it be tax rates, human production, or smoothing out the effects of electricity futures based on incoming solar mass ejection predictions. Eventually, in the final stages of humanity, it would define the work output each human needs to produce in a day. By this point enough people will have opted out, and the giant war between the owners of AI and the people will start, killing off most people, though many of the oligarchs will have been assassinated by members of the public who were willing to rip out the tracking chips in their left eyes.

Enough of the planetary population will be gone and totally untrusting of anyone, but we'll have finally realized our AI future. Won't it be grand!

Oh sorry off topic again yeah we probably should stay away from Visual Programming.

The author of this poor article is extremely blinkered and lacking in imagination. Just five minutes watching a demo of the incredible power of a modern games engine such as Unreal blows the whole premise of this article to smithereens. Has he never wandered into an accounting department? Never seen an Excel workbook running a business? Or seen designers and their visual tools? It's a shame, though, that Microsoft can't be bothered to update the ancient, appalling VB editor that ships with Excel.

This sounds analogous to the argument that mathematicians used to have: whether abstraction trumps intuition, especially in geometry.

For instance, you can't really plot a 25-dimensional object, but the same object is trivial to manipulate with equations.

It's clear to most people that both approaches have their place. In geometry, the visual component is very important to develop important intuitions about how objects behave in low dimensional space. On the other hand, if one were to insist on the visual, one would be stuck with purely intuitionist ideas and never be able to move into more complicated realms where you cannot visualize objects graphically (e.g. high-dimensional space).

The Bourbaki group was a group of mathematicians who aimed to put mathematics on a rigorous foundation by developing the abstractions that shaped modern math, moving mathematics past its intuitionist foundations. Some argue that this caused modern "geometers" to lose their feel for geometry because everything became about symbols, but without this development a lot of modern discoveries might not have been possible.

Our company develops software for alarm receiving centers. In our software, customers can use a visual code editor (http://pfau.de/images/beispiel002.gif) to program their own processes for alarm reactions. That is actually our main selling point. They can control the user interaction (what to do, who to call, etc.) in reaction to an alarm, access all the data to change process outcomes in response to data or user interaction, and automate a lot of processes (filling forms, mailing reports and even automated invoices). They can even call external programs (on other computers) and fill and read databases to interact with other systems. That generally works well as long as people use simple flows and standardise a lot. But as soon as people start to program complex flows, you can really see who is a "programmer in disguise" and who is "just a user thrown in way over their head".

It feels like it's easier to understand a complete process flow like this visually; sure a developer could probably implement individual steps for a complicated workflow faster as code, but only if they already understand how the process needs to flow in the first place. The visual paradigm almost acts to shortcut the need for a developer to "get in the zone" and load up a mental model of decision tree, but at the cost of a more rigid implementation model (you have to work within the framework of the UX that you're provided, and pretty much any GUI will fail to keep up with the speed the most advanced users "could" work at). On the other hand, I can't imagine implementing something like your example graphic in code without at least whiteboarding it out ahead of time, which is effectively just an analog way of doing visual programming.

Source control is the biggest problem with visual programming. I remember doing something with Eclipse modelling tools, which actually supported merging of models, but you just lose so much going from languages that are supported by the existing text-based ecosystem to languages that are not.

Is this an inherent difference, though, or "just" due to current source control systems being built with text in mind?

It's also an issue with the final output of these tools being text, instead of reified computer models.

I remember the first programming languages I used (when I was very young) were visual. I think Flowol at primary (elementary) school was the first. They gave me a pretty much instant understanding of how imperative programming worked. Admittedly, I never created anything practically useful in it, but what's important is that I could imagine, from early in my life, what programming could do / was for, and put it on my cognitive radar.

As a child I was more inspired by seeing the inner workings of something than being told X was possible if you learn to code. Visual languages can better expose the guts of the working automaton.

That said, I agree that [current] visual programming languages have shortcomings that limit them to simple / educational scenarios.

To this day, I'm amazed at the things that were done with Access, and horrified by the things I've seen in Lotus Notes.

It really just depends. What visual systems provide is accessibility for novices to create value. That sometimes means planning to replace them, or enhance them with skilled developers. In my mind, it creates jobs. Not always great jobs, but at least security in that Software Development will always be a viable career, with lots of opportunities.

There's no one-size-fits-all for programming. FWIW, I can recall a project where one piece was built visually (SSIS) and I enjoyed working with it.

I've not been a fan of visual programming in the past (and I'm not a huge fan of SSIS) but I did a project recently that included both Microsoft Flow and Azure Logic Apps and was pleasantly surprised at how quickly you can build something rather powerful.

The key as far as I can see is not to allow things to get too complex and call out to "real" code components over a certain level of complexity.

For me personally, I can see a theoretical benefit to visual programming languages, because I already think of code mostly in terms of shapes created by the flow of data. I've just never seen one that really matches my mental model, and trying to see myself using one of them seems like it would end more as a fight with the language than any actual benefit.

Well, I always liked Authorware [1]. Maybe it was very media (CD/DVD) oriented, but I really liked how you could see the flow of the program.

[1] https://en.m.wikipedia.org/wiki/Adobe_Authorware

This article is poorly researched (the only "visual" tool it mentions is scratch???) and pretty lazy in considering the merits of extratextual programming. It's worrying that something of this quality resonates with people.

> It's worrying that something of this quality resonates with people.

I wouldn't be worried about that. Just because people are reading it doesn't mean they are agreeing with the conclusion. It's all just stuff to think about.

Where it becomes worrying is when people vote for something of that quality.

In the early days of KloudTrader, we attempted to use Visual Programming too.


The main difficulty was scale and the tradeoff between power and ease of use. We spent so much time creating "subroutine blocks" for the most commonly used functions and optimizing the UI for ease of use that we weren't doing much else. Furthermore, pretty much every integration, library, etc. had to be converted into its visual equivalent for the language to be of any use, and all the linear algebra/candlestick libraries were taking up so much time that the rest of the product was suffering. Stuff that makes sense to a professional programmer often had to be "simplified" to keep the UX approachable. E.g. data structures.

If you look at VPLs in production, there's a reason most of them tend to be "imperative commands only" i.e. a glue language that mainly strings together other subroutines. The concept of objects, or even structs is completely eliminated in the VPL.


There are exceptions to this however, the Unreal Engine's Blueprints is a lot more flexible than most.

If you want to get started building your own version of Google's Blockly, here's a good guide:


Throw in an immediate-mode UI library like React and you might even scale up indefinitely with plain HTML (most browser VPLs stick with SVG/Canvas).

For prettier node-based types of languages, try these libraries:




Bonus tip for anyone looking to implement one: use code generators. AST -> visual-programming-component conversion can save you a lot of time (assuming the language has mature enough tooling and labour is expensive).
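To make that concrete, here's a minimal sketch of the AST -> component idea using Python's ast module. The flat "node" dict schema is invented for the example (a real VPL would have its own format), but the walk itself is how such a generator typically starts:

```python
import ast

def to_visual_nodes(source: str):
    """Walk a Python AST and emit flat 'node' dicts that a visual
    editor could render. The dict schema here is made up for
    illustration; only operators and variables are extracted."""
    tree = ast.parse(source)
    nodes = []
    for i, node in enumerate(ast.walk(tree)):
        if isinstance(node, ast.BinOp):
            # e.g. an "Add" or "Mult" operator block
            nodes.append({"id": i, "kind": "operator",
                          "op": type(node.op).__name__})
        elif isinstance(node, ast.Name):
            # a variable block, whether read or written
            nodes.append({"id": i, "kind": "variable", "name": node.id})
    return nodes

nodes = to_visual_nodes("total = price + tax")
```

The point being: the textual language's existing parser does the heavy lifting, and you only write the mapping from AST shapes to on-screen components.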

Was "CloudTrader" taken?

The domain yes, check us out:


once again, someone has a strong opinion on something they have never used or don't know much about.

in my personal opinion, scratch and blockly have very little to do with visual programming. they are nice little experiments but just repeat what's bad about text-based programming into visual blocks, which just compounds the issue. i honestly am surprised mit and the media lab have dumped so much time into scratch.

i have said this before, but the most productive visual programming language and environment is labview. it certainly has its problems, and i am a big critic of it, but its visual programming style and dataflow paradigm are more powerful than people realize. it has some seriously good ideas that aren't found in many other languages, and it had many innovative features well before other languages. for example, doing concurrent programming in labview is a breeze.

on the other side, you see that modern IDEs are starting to add a lot of visual content to help the programmer out. the languages remain text-based, but the environments are adding visual features. i think that there is a lot of untouched ground regarding hybrid environments and languages that merge the ideas of text-based programming with visual-based programming. in addition, the dataflow paradigm is also underused.

there are also computation targets other than CPUs, such as FPGAs. it's silly how in verilog and vhdl you often have to manually label wires, where in labview the tool and language do that for you. you simply draw your circuit out as you would in a diagram.

common complaints against visual programming have almost no weight. for example, "oh, you can't diff visual programs like text programs". yes you can. it's not like computers were gifted by god text-based diff programs. these things were developed and iterated on. labview has a diff mechanism. it isn't great, but my point is that people view text as some sort of innate trait of programming and all the tools are "natural" and already exist as if they just poofed into being once programs started being written. but if we put some thought and work into the tools for visual programming, similarly to how it has been done with text programming, it's possible to create really powerful environments and languages.

where visual programming languages often have a ceiling is abstraction. for example, labview has object-oriented features, but it doesn't go far. i program on the side in f# and racket, which have an amazingly high ceiling for abstraction, but interacting with these languages is often frustrating coming from labview. there are many things that are simply easier in labview. some of that is tooling. some of that is the visual language and dataflow paradigm.

There are a lot of things to respond to in this.

The purpose of Scratch is not to have all programming done in an environment like Scratch. It's to ease students into precisely what the author wants (textual programming environments).

Scratch eliminates several categories of errors for students.

Students in younger grades often lack an understanding of grammar and its utility or applicability. Consequently, the novel (to the student) syntax of programming languages can be vexing: Why is "while if x > 3" not allowed? The rules of the languages proscribe it. So now they're learning language rules and the concepts of programmatic/computational thinking.

They can no longer make typos when writing out keywords and variables. Keywords are typically available as draggable icons that they can then "fill-in-the-blanks" for. Variables become draggable icons or can be filled in with a simple drop down or text field that filters a drop down. Try explaining to an 8-year-old who still Writes Like this and SomeTimes tHis that there's an actual meaningful difference between "name" and "Name".

Scope is made explicit. Variables and logic exist within certain scopes, so scoping errors (trying to use a variable outside the appropriate scope) become impossible as well.

Are all these constraints great for pedagogy? I don't know. But after teaching a number of high schoolers (now years ago), I'll say it would have been awesome to have had Scratch. They got along well enough with Dr. Scheme (now Racket), but a lot of time was spent fighting with the language that could have been abbreviated or eliminated with a different environment.

In the case of block-based editors, you can actually get version control. Usually, they're just a constraint-based, visual editor over a textual representation. So the underlying source files can be version controlled and compared quite easily (if the implementors considered this in their implementation).
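As a sketch of that last point (the serialization format here is hypothetical, not any particular editor's), a block program persisted as plain JSON gets line-based diffs and version control essentially for free:

```python
import json

# Hypothetical serialized form of a block program: "repeat 10 times,
# move forward". Any block editor that persists to stable text like
# this can lean on ordinary text-based diff/merge tooling.
program = {
    "blocks": [
        {"type": "repeat", "times": 10, "body": [
            {"type": "move", "direction": "forward"},
        ]},
    ],
}

# Pretty-printing with sorted keys keeps the text stable across saves,
# so diffs show only the blocks that actually changed.
serialized = json.dumps(program, indent=2, sort_keys=True)
```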


Other thoughts:

This reminds me of an ex-girlfriend who legitimately thought it would be better if all math were written as:

  To calculate the average of the population sum up
  each of the members and divide by the count.
That is a precise definition of average (arithmetic mean), but it is impractical to use language like this in place of pages of algebraic expressions.

                    Ʃ pop
  average(pop) = ----------
                 count(pop)
is a better method of expression once you have more than one equation in play.

Graphical representations of processes can, similarly, be more useful than their textual counterparts (though not in all cases, I wouldn't argue that).


But here's a fun one:

Problem: Identify binary strings (read left-to-right, last digit read is the least significant) that are multiples of 3. So 11 = 3, 1010 = 10, etc.

Do this with just regular expressions. No tables, no diagrams, nothing.

Ok, now do the same thing but with a standard DFA diagram or state transition table.

Much easier, right? Now, once you have those last two you can write out the textual regular expression and embed it in your program with relative ease. And maybe that's how you want to store it in your code. Or if you don't want to use a regex engine you could encode the table form into your program easily enough, and still retain its visual characteristics.
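The table-in-your-program option translates almost directly into code. A small illustrative sketch: the DFA's states are the remainder mod 3 of the bits read so far, and reading bit b from state r moves to (2*r + b) mod 3, so the transition table below keeps its tabular, "visual" shape:

```python
# DFA for binary strings divisible by 3, encoded as its transition
# table. State = remainder mod 3 of the bits consumed so far.
TRANSITIONS = {
    # state: (next state on '0', next state on '1')
    0: (0, 1),
    1: (2, 0),
    2: (1, 2),
}

def divisible_by_3(bits: str) -> bool:
    """Accept iff the binary string (MSB first) is a multiple of 3."""
    state = 0
    for b in bits:
        state = TRANSITIONS[state][int(b)]
    return state == 0
```

So "11" (= 3) and "110" (= 6) are accepted, "1010" (= 10) is rejected, and the code still reads like the diagram it came from.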

Are visual programming environments going to replace most of our programming languages and work? Probably not. But it's foolish to discard them based on the complaints in this blog post without considering the reasons why they have value.

text-based representation is visual.

Last I heard text is visual... so then what is text but a form of visual programming?

In essence, text is visual programming limited to a set of symbols and tokens parsed in left-to-right order.

Who said that this arbitrary set of rules is the best way to represent programming logic? The space of visual abstractions encompasses far more: graphs, procedures, arrows and even 3D space can be used to represent logic. Obviously, with a space this big, text is unlikely to be the best option.

Text-based programming is simply a local minimum that merely seems optimal. The problem is we're so deep in this minimum that it's hard to climb out.

> Last I heard text is visual... so then what is text but a form of visual programming?

But the author of TFA defines what he means: "A visual programming language is one that allows the programmer to create programs by manipulating graphical elements rather than typing textual commands."

The mistake in that definition is saying "rather than" rather than "as well as".

I think that's splitting hairs.

We all understand the difference between "textual" and "manipulating colored boxes, drawing arrows and generally using a non-textual interface". I'd rather we addressed that than the accuracy of the author's wording...

I partially agree with the author in that programming is inherently complex, and that "visual" UIs are inadequate once the program reaches a sufficient complexity level. Also, textual manipulation tools tend to be ubiquitous and, more importantly, non-proprietary; I'm naturally wary of other tools.

I disagree with the author in that I think there's still room for exploring graphical UIs.

Aren't you "manipulating graphical elements" when you type in textual commands?

No, not in the sense the author is talking about. See my response to the sibling comment.
