In my career I’ve often been asked to rewrite Access DBs, InfoPath forms, or SharePoint sites developed by amateurs into something more usable. My early reactions were along the lines of “what insane person tried to do this in Access?” At some point I realized that was the wrong way to think about it. These systems are working software delivering value. I now say “wow, congrats on doing so much on your own. Let me help you take this further.”
I was just starting out as a programmer (professionally) myself, and once I was finished doing whatever I was sent to their office to accomplish, they would pull me to the side and show me their applications and pick my brain about ideas and ways to improve them or if it made sense to try a "more serious" programming language.
I was always completely floored by how much they had accomplished with everyday tools that people generally slog through out of necessity. Some of these guys were also selling their applications and support for a solid slice of monthly income. "Normal" people building real, truly useful applications with real domain knowledge are an absolute inspiration to me.
It reminds me a lot of web development, actually. Everyone in their potential client base had Excel, just as we all have browsers now, so building in Excel made perfect sense. I see the same thing happening with WordPress or Shopify these days: people building amazing applications that extend far beyond the purpose of the underlying platform.
My early interactions with those people helped me immensely when I went out on my own full time, when I would walk into a small business and see a complicated app built upon - something - by an overworked "tech person of the office", who was clearly a bit sad that their creation was about to be replaced by some stranger. That was the person I was going to get to know and work with daily to get the application right, and I made sure they were part of every conversation.
Most of my projects are greenfield these days, but I still hold those programmers - in the truest sense - in the highest regard. I still love to discuss their projects with them in the rare cases that I get the opportunity.
I think John Ohno does a good job of articulating what's so great about these tools in his article "Big and small computing".
Rarely do I find myself programming for the sake of programming; it's always to serve some goal, to make some thing, to fix some problem. If I write the worst code, in the worst way, and it still makes the light flash or the robot arm move, etc., then I succeeded.
I make heavy use of MIT App Inventor (the Android app maker) in my Arduino and ESP32 projects, and I'll be the first to admit that I use it so heavily because I have some minimal grasp of it and I've wrapped my head around most of its paradigms.
I'll admit that because I've only got a hammer, every problem is a nail, but taking away my hammer because I'm using it wrong just means I can't build anything.
Also, if people can build computers in Minecraft that don't achieve anything and get praise for it, people who solve real problems with creative (ab)use of something really should get kudos, too. You're not even using anything "wrong".
Actually, the perceived need for "dumb" DSLs springs from the same misconceptions mentioned in the article, and the end result is inevitably that simple problems are made somewhat simpler and complex problems turn into impossible problems.
A proper DSL extension in an unobtrusive host language is another thing but also hard to find. If a Lisp syntax is acceptable then that's the canonical example.
I don't want to get depressed over all the misconceived excuses for test automation languages, process automation, configuration templating systems, etc., so I'll just stop here.
With regard to LaTeX or HTML, you could even argue that their goals could be better accomplished with some sort of graphical interface, which sounds a lot like… visual programming!
It should be possible for implementation language programmers to create new primitives, that visual language programmers can easily use without learning the implementation language.
And that extension interface should be part of the visual programming language from day one, good enough for the visual language to use itself for most of its built-in primitives, not an afterthought nailed onto the side.
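To make that concrete, here's a rough sketch (hypothetical names, not any real VPL's API) of what such an extension interface might look like: implementation-language programmers register primitives with named ports, and the visual layer only ever sees names, ports, and a callable.

```python
# Hypothetical sketch: implementation-language programmers register
# primitives; the visual editor sees only names, ports, and a callable.

PRIMITIVES = {}

def primitive(name, inputs, outputs):
    """Decorator that registers a function as a visual-language block."""
    def wrap(fn):
        PRIMITIVES[name] = {"inputs": inputs, "outputs": outputs, "fn": fn}
        return fn
    return wrap

@primitive("add", inputs=["a", "b"], outputs=["sum"])
def add(a, b):
    return {"sum": a + b}

@primitive("clamp", inputs=["x", "lo", "hi"], outputs=["y"])
def clamp(x, lo, hi):
    return {"y": max(lo, min(hi, x))}

def run_block(name, **kwargs):
    """What the visual runtime would call when a block fires."""
    return PRIMITIVES[name]["fn"](**kwargs)
```

The point is that the built-in blocks go through the exact same registry as user extensions, so the interface can't rot into an afterthought.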
A good professional programmer will feel (and in fact be) constrained and slowed by a VPL. That’s frustrating and likely a misuse of resources. I just think the author missed the point that VPLs generally aren’t for “us”; they are for people who do something else for a living.
It makes sense to use a visual programming language to create basic CRUD applications. A central task list with fields that are specific to the business process can go a long way in improving operational efficiency in a business unit. Beats an Excel spreadsheet.
You cannot implement a system for every idea because you will not have enough developers and you will waste time implementing ideas that sound good but are useless. Visual Programming applications like Access allow less technical users to implement their ideas. Some of these ideas will go nowhere. This is fine because you will not have wasted developer resources. Some of these Access applications will solve genuine business problems. It may be difficult to decipher them but it is no more difficult than trying to figure out what users want. At least in this instance, you have an application you can test against.
I also understand why people dislike this idea. In many cases, it's really hard to extend something created by non-professional-programmers. I'm one of the founders of Luna ( http://luna-lang.org ), which is a visual programming language blending the boundaries between visual data-flow designing and conventional textual programming.
As a contrast to the mentioned article in this topic, I'd encourage people to also read our blog post about why visual programming could actually be very useful: https://medium.com/@luna_language/luna-the-visual-way-to-cre...
And yes, this:
This is not 100% real programming, because it does not include sensors, but you do create constructs that interact with the environment and accomplish goals. It's close enough to show what's possible with certain interfaces.
Far fewer resources have been allocated to the development of these concepts, mostly for historical reasons.
The sad reality is that both critics and proponents of visual programming are often uninformed about prior work, user studies and related research rooted in cognitive and developmental psychology.
More in these videos: https://vimeo.com/179904952 https://vimeo.com/274771188
I really enjoyed this paper “A Taxonomy of Simulation Software: A work in progress” from Learning Technology Review by Kurt Schmucker at Apple. It covered many of my favorite systems.
It reminds me of the much more modern and comprehensive "Gadget Background Survey" that Chaim Gingold did at HARC, which includes Alan Kay's favorites, Rocky’s Boots and Robot Odyssey, Chaim's amazing SimCity Reverse Diagrams, and lots of great stuff I’d never seen before:
I've also been greatly inspired by the systems described in the classic books “Visual Programming” by Nan C Shu, and “Watch What I Do: Programming by Demonstration” edited by Alan Cypher.
Brad Myers wrote several articles in that book about his work on PERIDOT and GARNET, and he also developed C32:
C32: CMU's Clever and Compelling Contribution to Computer Science in CommonLisp which is Customizable and Characterized by a Complete Coverage of Code and Contains a Cornucopia of Creative Constructs, because it Can Create Complex, Correct Constraints that are Constructed Clearly and Concretely, and Communicated using Columns of Cells, that are Constantly Calculated so they Change Continuously, and Cancel Confusion
Also, here's an interesting paper about Fabrik:
Dan Ingalls, one of the developers of Fabrik at Apple, explains:
"Probably the biggest difference between Fabrik and other wiring languages was that it obeyed modular time. There were no loops, only blocks in which time was instant, although a block might ’tick’ many times in its enclosing context. This meant that it was real data flow and could be compiled to normal languages like Smalltalk (and Pascal for Apple at the time). Although it also behaved bidirectionally (e.g. temp converter), a bidirectional diagram was really only a shorthand for two diagrams with different sources (this extended to multidirectionality as well)"
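The "modular time" idea above can be sketched in a few lines (a toy, not Fabrik itself): within one tick, values flow through an acyclic graph instantaneously, and any looping happens only in the enclosing context, which ticks the whole block repeatedly.

```python
# Toy acyclic dataflow in the spirit described: no loops inside a block,
# so one tick is just evaluation in dependency order, and the whole thing
# could be compiled to straight-line code.

def tick(graph, inputs):
    """Evaluate nodes in (already topologically sorted) order."""
    values = dict(inputs)
    for name, (fn, args) in graph:
        values[name] = fn(*(values[a] for a in args))
    return values

# The temperature-converter example: Fahrenheit -> Celsius as two blocks.
graph = [
    ("minus32", (lambda f: f - 32, ["f"])),
    ("celsius", (lambda x: x * 5 / 9, ["minus32"])),
]

# The enclosing context "ticks" the block once per sample:
readings = [32.0, 212.0]
out = [tick(graph, {"f": f})["celsius"] for f in readings]
```

Bidirectionality, as the quote says, would just be a second graph with the arrows reversed and a different source node.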
There are also a lot of disparate visual programming paradigms that are all classed under "visual", I guess in the same way that both Haskell and Java are "textual". It makes for a weird debate when one party in a conversation is thinking about patch/wire dataflow languages as the primary VPLs (e.g. QuartzComposer) and the other one is thinking about procedural block languages (e.g. Scratch) as the primary VPLs.
I think spreadsheets also qualify as visual programming languages, because they're two-dimensional and grid based in a way that one-dimensional textual programming languages aren't.
The grid enables them to use relative and absolute 2D addressing, so you can copy and paste formulae between cells, so they're reusable and relocatable. And you can enter addresses and operands by pointing and clicking and dragging, instead of (or as well as) typing text.
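The relative/absolute addressing trick is easy to show in miniature. This is a simplified sketch (single-letter columns only, not a real spreadsheet engine) of how copying a formula rewrites relative references while `$`-anchored ones stay put:

```python
import re

# Sketch of spreadsheet addressing: copying "=A1+$B$1" down one row
# rewrites A1 -> A2 while leaving $B$1 fixed.

def shift_ref(ref, drow, dcol):
    m = re.fullmatch(r"(\$?)([A-Z])(\$?)(\d+)", ref)
    abs_col, col, abs_row, row = m.groups()
    if not abs_col:
        col = chr(ord(col) + dcol)   # single-letter columns, for brevity
    if not abs_row:
        row = str(int(row) + drow)
    return abs_col + col + abs_row + row

def copy_formula(formula, drow, dcol):
    """Rewrite every cell reference in a formula for a copy/paste offset."""
    return re.sub(r"\$?[A-Z]\$?\d+",
                  lambda m: shift_ref(m.group(0), drow, dcol), formula)
```

That one rewrite rule is what makes spreadsheet formulae reusable and relocatable across the grid.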
Some people mistakenly assume visual programming languages necessarily don't use text, or that they must use icons and wires, so they don't consider spreadsheets to be visual programming languages.
Spreadsheets are a wildly successful (and extremely popular) example of a visual programming language that doesn't forsake all the advantages of text based languages, but builds on top of them instead.
And their widespread use and success disproves the theory that visual programming languages are esoteric or obscure or not as powerful as text based languages.
Other more esoteric, graphical, grid-based visual programming languages include cellular automata (which von Neumann explored) and, more recently, "robust-first computing" architectures like the Movable Feast Machine.
Robust-first Computing: Distributed City Generation (Moveable Feast)
>A rough video demo of Trent R. Small's procedural city generation dynamics in the Movable Feast Machine simulator. See http://nm8.us/q for more information. Apologies for the poor audio!
Programming the Movable Feast Machine with λ-Codons
>λ-Codons provide a mechanism for describing arbitrary computations in the Movable Feast Machine (MFM). A collection of λ-Codon molecules describe the computation by a series of primitive functions. Evaluator particles carry along a stack of memory (which initially contains the input to the program) and visit the λ-Codons, which they interpret as functions and apply to their stacks. When the program completes, they transmute into output particles and carry the answer to the output terminals (left).
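For anyone who hasn't played with this family of systems: the common ingredient is computation by purely local rules on a grid. Here's a minimal sketch of that flavor (an elementary cellular automaton, rule 110, on a ring of cells; not the Movable Feast Machine itself):

```python
# Minimal elementary cellular automaton: each cell's next state depends
# only on itself and its two neighbors, looked up in the rule's bit table.

def step(cells, rule=110):
    out = []
    for i in range(len(cells)):
        left, me, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        pattern = (left << 2) | (me << 1) | right
        out.append((rule >> pattern) & 1)   # rule number encodes the table
    return out

cells = [0] * 10 + [1]          # single live cell on a ring of 11
history = [cells]
for _ in range(5):
    cells = step(cells)
    history.append(cells)
```

Rule 110 is famously Turing complete, which is part of why these grid languages keep coming up in "is visual programming powerful?" debates.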
One person who's done a lot of thinking about this is Bret Victor, and I encourage anyone interested to watch his videos: http://worrydream.com/
Here's a project that's trying to split the difference between visual programming and text-based programming: https://luna-lang.org/
Personally, I've been creating visual programming languages for years for domains such as visual effects, allowing artists (people with very limited or no programming skills) to create outstanding physics simulation systems or automated geometry processing on their own. Seeing people become able to create something that was previously not accessible to them is amazing.
Readers of this topic might also be interested in reading our blog post about why visual languages are actually very useful here: https://medium.com/@luna_language/luna-the-visual-way-to-cre...
Another important aspect is that humans use words to communicate, so some amount of text is inevitable (and welcome) in visual programming tools.
Also, text-based tools use the most common and open format of all: a stream of ASCII (UTF-8) characters. This makes it possible to mix and match text-based tools (editors, compilers, formatters, linters, etc.). Visual languages were (and are) dominated by proprietary formats. This keeps visual programming tools in silos, profitable for the vendor but preventing the wild expansion that text-based programming languages periodically enjoy.
For example, the article claims that the idea behind Scratch is that programming is fundamentally easy but text makes it hard. I don't think that's the idea at all; rather, programming (i.e., modeling a program) is fundamentally hard, but the difficulty is compounded because text is not a natural way for the current crop of humans to think about a program.
Another example is his criticism of the failure of enterprise UML tools--presumably the problem with these is that UML is just not a very good way of _modeling a computer program_, regardless of whether the interface to UML is visual or textual.
Another example is the criticism that the current crop of VP tools are procedural. Why can't a visual programming language be functional?
Another is the claim that VP is bad because the current crop of tools combines visual and textual programming (I'm not sure this is even true; I don't recall LabVIEW requiring text, for example). Is there any fundamental reason a programming model can't be purely visual? Why can't you represent a whole programming model (all the way down to primitives) visually?
I think most of these would be solved with more trial and error and more investment; none of these support the thesis that VP is fundamentally a bad idea.
To be clear, I think this article has a lot to offer people who are trying to solve visual programming issues, but it doesn't make the case that it sets out to do--that visual programming is fundamentally a waste of time.
It didn't try to replace text based PostScript programming, just augment it: there was an interactive shell window you could type expressions to, which had a stack "spike" sticking out of it representing the PostScript stack, which would update in response to the commands you typed, or you could drag and drop objects to manipulate the stack and the state of the system.
You could perform "direct stack manipulation" by dragging objects up and down or on and off the stack, open up nested objects and functions in an outliner, adjust the point size and formatting style, open up special editors on data and code, drag and drop objects and code around, etc. It was like a live Smalltalk system, in that you could explore (and vandalize) the entire state of the window system, and use it as a debugger to inspect and modify and debug processes of other NeWS clients (including itself).
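Very loosely, the "direct stack manipulation" described above can be pictured like this (a crude textual stand-in, obviously nothing like the real NeWS interface, where these were drag gestures rather than function calls):

```python
# The deck let you drag objects on/off the PostScript operand stack;
# here the same effects are just operations on a Python list.

stack = []

def push(x):
    stack.append(x)

def exch():
    """Like PostScript's `exch`: swap the top two stack items."""
    stack[-1], stack[-2] = stack[-2], stack[-1]

def drag_off(i):
    """'Drag' an item out of the middle of the stack."""
    return stack.pop(i)

push(1); push(2); push(3)
exch()              # stack is now [1, 3, 2]
drag_off(0)         # pull the bottom item off: stack is now [3, 2]
```

The interesting part of the original was that every one of these operations was a live, visible, reversible gesture on the actual interpreter state.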
The author's point about assuming reduced complexity makes me think that he doesn't understand how complex mathematical function models are, even when the block diagram looks very simple to the untrained observer.
Simulink can generate C code for productionization, but to write custom algorithms, you typically need to write S-functions, which are typically in C, C++ or Fortran.
Visual environments let you set up topologies quickly and reliably. It's not really programming as such though -- kinda more like scaffolding. Any kind of complex logic is still more efficiently achieved in code because many abstract ideas cannot be efficiently expressed graphically. Visual abstractions are more useful for ideas that can be concretized (typically data-flows type ideas).
Source: used and taught Simulink for many years for control systems engineering.
While not efficient, thanks to Turing completeness and its analog relatives, anything you can implement in code you could implement in Simulink flow. It took me about an hour to gin up Pong, for example. I think that if it is fair to consider Scratch a visual programming language, then Simulink fits the bill.
I'm also not sure if Simulink (without S-functions, .m or any external programming language) is Turing complete, and even if it is, Turing-completeness is a very low bar for a programming system.
Not trying to be contrarian, but from my experience, despite its many logic subsystems, Simulink isn't really meant to be a general purpose programming system so much as a simulation system for time-varying outputs. It's difficult to impossible to write most normal programs in it.
Simulink models are Turing complete. You can set a discrete clock simulation (very common with the DSP toolbox) and implement Flip-Flops/Boolean logic. They are also for lack of a better term, Shannon complete, that is they have all the analog components to satisfy general function computability. They also have flow control outside of those conditions and full memory storage. If you were infinitely bored and long lived, you could write Windows in Simulink flow logic and run the modeler to make a VM.
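The flip-flop claim is easy to sanity-check outside Simulink. Here's a hedged sketch in plain Python (not Simulink blocks, just the equivalent discrete-time structure): a unit delay holding state, fed back through an XOR, gives you a toggle flip-flop that divides an enable pulse train by two.

```python
# One unit-delay "memory" block plus one XOR "logic" block, stepped on a
# discrete clock: enough to build a toggle flip-flop, i.e. stored state.

def simulate(toggles, state=0):
    """Each loop iteration is one clock tick of the discrete simulation."""
    outputs = []
    for t in toggles:
        state = state ^ t      # XOR block; the loop variable is the delay
        outputs.append(state)
    return outputs
```

Feed it a constant-1 enable and the output alternates 1, 0, 1, 0: a divide-by-two counter, which is the usual first step in the "you can build a whole CPU out of this" argument.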
I agree with you here. Simulink absolutely isn't meant to be a general purpose programming language. It's an extremely domain-specific programming language, but one that in its specific domain offers considerable advantages over textual programming. I don't want to advocate writing software in Simulink because it's a bad idea, and I only do it myself in very limited cases because I have a hobby of analog computing and Simulink is very good for that at a certain level of abstraction.
Mostly, I just take umbrage with the Article and the author's reasoning. Simulink is a counterpoint to some of his arguments but those counterpoints were made well elsewhere in the thread so I was just providing an example.
Sorry if I am starting to drone on or come across hostile.
One of the interesting things I found was that the 2-dimensional layout helped a lot in remembering where stuff was: this was especially useful in larger programs.
I believe Labview barely scratches the surface of what is possible.
Sharp Learning Curve
Difficult to find help or code snippets online
Lack of decent Version Control
Finding the function you want is hard (requires navigating several submenus)
All of the icon symbols look the same
Also, while LabView offers a great platform for an object oriented coding style, I feel like the 2d layout always results in a messy sprawl rather than layers of abstraction. This could be because LabView is used by people with less traditional software background, or it may be that making new functions in LV is a bit of a pain.
There is also the fact that LabVIEW is old. Like REALLY old. LabVIEW came out over 30 years ago and as such has a lot of legacy cruft, along with lacking some more modern goals such as source control that scales well. LabVIEW was also designed by a hardware-first company and has always been geared towards building that hardware ecosystem.
Finally, yea, poor programming practices exist in all languages. In visual languages, they make programs hard to read. Sometimes this forces you to be clever and reduce the flow complexity by increasing the logic or mathematical complexity, which is its own can of programming worms we could debate for hours.
In my experience it is harder to implement good programming practices in a visual programming environment. For example, abstracting, consolidating and refactoring visual code is much more challenging because of the sprawl of wires...
The main benefits of visual languages is that they allow domain specific people to work in methods that are natural to the problem and closer to what they are used to. Some problems can be represented very easily visually but are a big pain in code such as physical systems. Data flow and timing diagrams lend themselves very well to visual descriptions and can help prevent race conditions. WYSIWYG editors open up software domains to huge swaths of people who go on to produce some incredible work. Yeah, ultimately, people run up against the limitations of these programming environments and then you either transition to a less friendly but more scalable code, turn into spaghetti mess, or give up.
I think the biggest thing text based code development has going for it is that text is inherently "open" and simple to share, so tools get built for it, while most visual languages tend to be proprietary, niche, and locked down. Maybe one day we will see a solid FOSS visual programming language/environment that punches above its weight and can drop into other languages specifically for handling certain types of problems. I think that would be ideal, but articles like this one that are close to straw man arguments don't help get anyone excited to work on it.
My view on this is that text is one of the best media we have for representing precise, information dense data.
What I got from the Houdini presentation was that, when someone is trained to produce precise, information rich data using other media e.g. artists then other representations can be as effective.
Great for prototyping stuff fast, but hard to maintain in production. It was designed for people who aren't programmers to be rapidly productive especially when interfacing with hardware (in a lab environment). However, debugging complex topologies can be a real challenge.
We have Labview models that are being rewritten in conventional programming languages due to aforementioned challenges.
I've got friends telling me I should switch over to VS now that it runs on Linux, but I'm too stuck in my ways to change, and I like my workflow as minimalist as possible. Most of the time, I rapidly prototype something on a development server and then once I know the idea is feasible, I'll move over to Kate and write it more cleanly, since Kate has Konsole built in. I can make changes and run them immediately.
LabVIEW is used in production in a lot of serious environments.
LabVIEW is a toy for EEs to write prototypes in. That would have been okay if it had stayed there -- but the problem is that people are actually distributing "applications" using it, and trying to maintain them is nearly impossible.
Here's an example of a place where Labview just shines: I needed something to plot 'rolling' analog and digital signals, i.e. basically provide a visualization of the inputs of a combined analog/digital input card. With ability to pause the thing, each line in a different color, data cursors, ...
The data is acquired by C++ code, but since Labview has this stuff built in, getting the whole thing up and running is just a matter of setting up a communication protocol between the C++ part and Labview. I just went for TCP/IP and got the thing up and running in a couple of hours. Has been used like that for years now. Not exactly a prototype, nor a toy. It just does this one thing, it's all we needed, and it does it extremely well.
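For the curious, the protocol side of a setup like this can be tiny. Here's a hypothetical framing sketch (my own invented layout, not the commenter's actual protocol): each frame carries a timestamp, a bitmask of digital lines, and a few analog samples; the acquisition side packs frames like this and the plotting side unpacks them off the TCP stream.

```python
import struct

# Hypothetical frame layout: little-endian double timestamp,
# u32 of digital line bits, then four float32 analog samples.
FRAME = "<dI4f"
FRAME_SIZE = struct.calcsize(FRAME)   # fixed size makes stream parsing easy

def pack_frame(t, digital_bits, analog):
    """What the C++ acquisition side would send per sample tick."""
    return struct.pack(FRAME, t, digital_bits, *analog)

def unpack_frame(buf):
    """What the plotting side would do with each FRAME_SIZE chunk."""
    t, bits, *analog = struct.unpack(FRAME, buf)
    return t, bits, analog
```

With fixed-size frames, the receiver just reads `FRAME_SIZE` bytes at a time from the socket, so there's no delimiter parsing to get wrong.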
Yes, drawing out some things in a visual IDE and having it work is nice. The problem is that eventually LabVIEW is going to update their runtime and it won't work anymore. Now what?
Take the time and do it right up front, and then when it needs to be updated people aren't cursing "whoever decided to do this in LabVIEW years ago" as they often do in these situations.
Ok, but again what do you suggest is 'right' then, any example? Because as laid out, for me, it is right, since we have zero problems.
Every application I've ever seen written under labview is expected by its creators and its users to crash randomly and continually for no explainable reason.
Hmm, strange. Sounds to me like those creators must be doing something wrong. I mean, we've been running a couple of Labview applications for +10 years and I honestly think none of them ever crashed (where 'crash' means suddenly stops working without apparent reason because of a bug in Labview itself, not come to a halt due to programmer error).
Also went through, I don't know, 3 or 4 updates without much problem. Again, I think it just comes down to what I said first: the hard part might be figuring out how to do Labview right, I guess.
Especially don't be mean about other people's environments for the sake of being mean about their environments. Which is what you just did.
LabVIEW is a cash cow that NI pushes on schools and junior EEs to make them think they're software developers, it allows them to write horrible cruft that is generally impossible to maintain. In almost every case I've seen, it's actually easier to rewrite the whole mess from scratch rather than try to decipher what someone a decade ago did as a hack that got turned into something people depend on.
You're clearly one of NI's rabid fans; you believe what you believe, but if you take criticism of a really poor tool personally, you know the rest..
I can create things in LabVIEW if I have to, I've done it and I've debugged and rewritten other people's abominations in it as well. The concept of visual programming languages is badly flawed for a variety of reasons and LabVIEW portrays each of them thoroughly.
We've been building visual languages for people in different industries, including visual effects. Watching people with limited programming skills create outstanding things on their own is truly amazing.
The author formulates the limitations of VPLs (visual programming languages) well.
The rant seems to jumble together VPLs for general-purpose programming with VPLs for domain-specific use cases. So although the article does a good job of explaining why VPLs may be "a bad idea" for general-purpose computing, it is a little reckless in saying VPLs are a bad idea overall.
The author also offers two straw man arguments to make their point, though I don't believe that these things are common misconceptions at all:
1) Abstraction and decoupling play a small and peripheral part in programming.
2) The tools that have been developed to support programming are unimportant.
Lastly, the author uses Scratch as its representative VPL example, even though it is probably the most far removed from "Visual Programming" and more in line with something I'd call "Block-Based Programming". These are far from equivalent and a poor choice to use for talking about general purpose VPLs (though the author does admit that it may have been a poor choice later).
Shapes are better built with our hands. They're often filled with text, which is more easily said than written.
Someone can already use a visual programming language to simulate the ocean in UE4 in 2018, in 20 years we'll have a mainstream general purpose visual programming language. More people might be using a visual programming language to make their AST than a keyboard, which is as comparatively cumbersome to gestures/talking as punchcards are to keyboards.
So far, any attempts to model abstractions for general purpose VPLs have ended up more difficult to parse and read than textual code (that argument can also be made for UE4 Blueprints).
Talking about "shapes are better built with our hands" isn't a solid argument for why general purpose VPLs are bound to happen. Note, too, that I would love for them to.
From: Alan Kay
Date: Thu, 3 May 2018 07:49:16 +0000 (UTC)
Subject: Re: Blocky + Micropolis = Blockropolis! ;)
Yes, all of these "blocks" editors sprouted from the original one I designed for Etoys* more than 20 years ago now -- most of the followup was by way of Jens Moenig -- who did SNAP. You can see Etoys demoed on the OLPC in my 2007 TED talk.
I'd advise coming up with a special kid's oriented language for your SimCity/Metropolis system and then render it in "blocks".
------------- * Two precursors for DnD programming were in my grad student's -- Mike Travers -- MIT thesis (not quite the same idea), and in the "Thinking Things" parade programming system (again, just individual symbol blocks rather than expressions).
From: Don Hopkins
Date: Fri, 4 May 2018 00:43:56 +0200
Subject: Re: Blocky + Micropolis = Blockropolis! ;)
I fondly remember and love Thinkin’ Things 1, but I never saw the subsequent versions!
But there’s a great demo on youtube! https://youtu.be/gCFNUc10Vu8?t=24m58s
That would be a great way to program SimCity builder “agents” like the bulldozer and road layer, as well as agents like PacMan who know how to follow roads and eat traffic!
I am trying to get my head around Snap by playing around with it and watching Jens’s youtube videos, and it’s dawning on me that it’s full blown undiluted Scheme with continuations and visual macros plus the best ideas of Squeak! The concept of putting a “ring” around blocks to make them a first class function, and being able to define your own custom blocks that take bodies of block code as parameters like real Lisp macros, is brilliant! That is what I’ve been dreaming about and wondering how to do for so long! Looks like he nailed it! ;)
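The rings idea maps cleanly onto ordinary programming concepts. A rough analogy in Python (my analogy, not Snap's implementation): a ring is just a lambda, and a custom block with a slot that accepts a ring is an ordinary higher-order function.

```python
# A Snap "ring" around blocks ~ wrapping code in a lambda; a custom
# "repeat" block whose body slot takes a ring ~ a higher-order function.

def repeat_block(n, ring):
    """Run the ring once per iteration, passing the loop index in."""
    results = []
    for i in range(n):
        results.append(ring(i))
    return results

doubled = repeat_block(3, lambda i: i * 2)
```

The macro-like part of Snap is that the body slot receives the blocks *unevaluated*, which is what lets you build your own control structures rather than just callbacks.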
Here’s something I found that you wrote about tile programming six years ago.
Etoys, Alice and tile programming ajbn at cin.ufpe.br () 6 years ago
I have been trying the new version of Alice <www.alice.org>. It also uses tile programming, like Etoys. Just out of curiosity, does anyone know the history of Tile Programming? TIA,
Antonio Barros PhD Student Informatics Center Federal University of Pernambuco Brazil
Alan Kay 6 years ago
This particular strand started with one of the projects I saw in the CDROM "Thinking Things" (I think it was the 3rd in the set). This project was basically about being able to march around a football field and the multiple marchers were controlled by a very simple tile based programming system. Also, a grad student from a number of years ago, Mike Travers, did a really excellent thesis at MIT about enduser programming of autonomous agents -- the system was called AGAR -- and many of these ideas were used in the Vivarium project at Apple 15 years ago. The thesis version of AGAR used DnD tiles to make programs in Mike's very powerful system.
The etoys originated as a design I did to make a nice constructive environment for the internet -- the Disney Family.com site -- in which small projects could be made by parents and kids working together. SqC made the etoys ideas work, and Kim Rose and teacher BJ Conn decided to see how they would work in a classroom. I thought the etoys lacked too many features to be really good in a classroom, but I was wrong. The small number of features and the ease of use turned out to be real virtues.
We've been friends with Randy Pausch for a long time and have had a number of outstanding interns from his group at CMU over the years. For example, Jeff Pierce (now a prof at GaTech) did SqueakAlice working with Andreas Raab to tie it to Andreas' Balloon3D. Randy's group got interested in the etoys tile scripting and did a very nice variant (it's rather different from etoys, and maybe better).
I think this is underselling the point.
Lack of good source control is a non-starter for any programming paradigm. If you want to do visual programming or any other kind of programming that doesn't dovetail with directory tree of text files, you are not going to get anybody serious to work with you until you fix that problem.
The first problem you should solve is not how to compile the code, nor how to debug the code, but how to store it.
None of these models scale to teams of 10, 20, 100 people. Even at 4 they begin to become tedious.
The biggest problem with software developers is confusing "successful" with "applicable to everybody", and this is a prime example. It doesn't matter if it's successful. It's not applicable to my work. Excepting, perhaps, for Wiki-like workflows, which are less than 10% of my duties.
Pure Data (Pd) is a visual programming language developed by Miller Puckette in the 1990s for creating interactive computer music and multimedia works.
This is the one shining example where a visual language actually accomplishes something that a general language can't: make programming accessible to creative people.
This is the only visual language that I felt was worth learning.
On the topic of procedural sound, you might enjoy this talk about Generative Systems with Will Wright and Brian Eno:
At Maxis, we also developed a bespoke visual programming language for The Sims (first used in SimCopter), called "SimAntics".
We never officially released "Edith", the version of The Sims with the object editing and visual programming tools built in, but users have reverse-engineered the bytecode and developed text-based tools for programming custom objects with SimAntics.
Here's a transcript and video demonstrating it:
The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo
This is a demonstration of the pie menus, architectural editing tools, and Edith visual programming tools that Don Hopkins developed for The Sims with Will Wright at Maxis and Electronic Arts.
Link to "Edith Sims Programming Tool" section:
You can get connectors for pretty much anything, e.g. Raspberry Pi robots/home automation, Twitter, HTTP, email, MQTT, SQL, OpenCV, parallel processing.
The dashboard allows you to create UIs visually, and I've ported it to run on Android + iOS:
Allowing this to run on mobile means that you can develop/monitor systems on mobile, cluster the phone with other Node-RED instances, and develop mobile apps (the package is basically Node.js, Node-RED and a Cordova webview which can be hooked up to display the Node-RED dashboard).
The important things to note here are that the underlying language to the visual DSL is flexible enough to do everything, and standard ways of building visual extensions (aka nodes) exist.
The main thing missing on the Dashboard is a table node, which would allow a much more flexible layout of data (sorting, trees etc).
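For context on that "underlying language" point: a Node-RED flow is persisted as plain JSON, with each node listing the ids of the nodes it is wired to. A simplified sketch (real exports carry extra fields such as layout coordinates, names, and tab ids) might look like:

```json
[
  { "id": "n1", "type": "inject",   "wires": [["n2"]] },
  { "id": "n2", "type": "function", "func": "msg.payload = Date.now(); return msg;", "wires": [["n3"]] },
  { "id": "n3", "type": "debug",    "wires": [] }
]
```

Because the canonical representation is text, flows can at least be committed and shared, even if the diffs aren't always pleasant to read.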
I think this point the author makes is an interesting one because I see it from a different perspective. I think the biggest issue is that we mix presentation with representation.
> The lack of good source control is another major disadvantage of most visual programming tools. Even if they persist their layout to a textual format, the diffs make little or no sense.
At the same time tabs vs spaces is still a thing. In languages which aren't whitespace sensitive, the diffs similarly don't make any real sense in my opinion.
I believe (and have no idea how to implement) that the representation of a programming language should be modified more directly and the rendered presentation (textual, visual, interpretive dance) should be irrelevant to the tooling.
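One way to see the representation/presentation split with tools that exist today: in Python, two sources that differ only in formatting already collapse to the same underlying representation (the AST), so the tabs-vs-spaces difference lives purely in the presentation layer:

```python
import ast

# Two sources that differ only in presentation (spaces vs tabs):
src_spaces = "def f(x):\n    return x + 1\n"
src_tabs   = "def f(x):\n\treturn x + 1\n"

# The representation (the AST) is identical; only the rendering differed.
same = ast.dump(ast.parse(src_spaces)) == ast.dump(ast.parse(src_tabs))
print(same)  # True
```

Tooling that diffed and stored the representation rather than the rendering would make the visual-vs-textual question moot.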
The block-based visual programming embraced by Scratch and things like the BBC micro:bit is really good; my 7-year-old son can do really cool things with these systems (also check out the new beta https://beta.scratch.mit.edu/ which is done with React, for anyone interested).
You can even add this kind of visual block based coding into your own systems using something like https://developers.google.com/blockly/
I think it's a little shortsighted to conclude that just because there have been tools that didn't work out too well, the idea itself is bad.
The main immediate downside that I can see is that normal programming refactoring tools are probably not going to work... yet.
I feel that this is a big opportunity for AirTable, MS & Google (for their Office software), and Salesforce to look into.
Maybe once AR and VR really take off, visual programming will have a major professional revival.
Snap! is a visual "blocks" programming language like Scratch, but with the full power of Scheme:
First class functions.
Visual lexical closures.
User defined blocks including control structures.
Macros and special forms.
Call with current continuation!
It's the culmination of years of work by Brian Harvey and Jens Mönig and other Smalltalk and education experts. It benefits from their experience and expert understanding of constructionist education, Smalltalk, Scratch, E-Toys, Lisp, Logo, Star Logo, and many other excellent systems.
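As a rough Python analogue of the first few items (Snap! expresses these visually as "rings" and custom blocks; the code below is just an illustration, not Snap! itself), first-class functions, lexical closures, and a user-defined control structure look like:

```python
# First-class functions and closures: make_adder reports a new function,
# the way a Snap! block can report a ringed block that closes over n.
def make_adder(n):
    def adder(x):
        return x + n      # lexical closure over n
    return adder

add3 = make_adder(3)

# A user-defined control structure, like a custom C-shaped block in Snap!:
def repeat_until(done, body):
    while not done():
        body()

counter = [0]
repeat_until(lambda: counter[0] >= 5,
             lambda: counter.__setitem__(0, counter[0] + 1))
print(add3(4), counter[0])  # 7 5
```

The remarkable part of Snap! is that these abstractions are dragged together from blocks, not typed.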
Adding Machine Learning Blocks to Snap!
ProgKids is a Russian site that integrates Snap! (and Python) with Minecraft, so kids can visually program 3d turtles that move around in the world and build things!
ProgKids. Строим дом, а потом ещё пару (We build a house, and then a couple more)
ProgKids. Куда же без зверей? (Where would we be without animals?)
ProgKids. Как работает Snap? (How does Snap work?)
More links and stuff I've written about it on HN:
Perhaps the growth of VR and other 3D interfaces will breathe new life into visual programming?
"Well 3D is the number one question. And my answer is, depending on what mood I'm in, we need to crawl before we fly."
"Or I say, I need to actually preserve one dimension to build the thing and fix it. Imagine if you had a three-dimensional computer, how you can actually fix something in the middle of it? It's going to be a bit of a challenge."
"So fundamentally, I'm just keeping the third dimension in my back pocket, to do other engineering. I think it would be relatively easy to imaging taking a 2D model like this, and having a finite number of layers of it, sort of a 2.1D model, where there would be a little local communication up and down, and then it was indefinitely scalable in two dimensions."
"And I think that might in fact be quite powerful. Beyond that you think about things like what about wrap-around torus connectivity rooowaaah, non-euclidian dwooraaah, aaah uuh, they say you can do that if you want, but you have to respect indefinite scalability. Our world is 3D, and you can make little tricks to make toruses embedded in a thing, but it has other consequences."
Here's more stuff about the Moveable Feast Machine:
The most amazing mind blowing demo is Robust-first Computing: Distributed City Generation:
And a paper about how that works:
Plus there's a lot more here:
Now he's working on a hardware implementation of indefinitely scalable robust first computing:
Programming is visual, full-stop. The main reason "visual programming didn't catch on" is that placing programming elements on a 2D grid has no clear semantic equivalent. A program executes things in sequence, so it's convenient to write it top to bottom. That's why text is better suited to represent it.
Dataflow programming is not generally executing in sequence. And that’s where visual programming shines.
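A minimal sketch of that dataflow idea (hypothetical code, not any particular VPL's runtime): a node fires whenever all of its input ports have received a value, so execution order is driven by data availability and wiring rather than by textual sequence.

```python
# A node fires when every input port has a value; wiring, not statement
# order, determines what runs when.
class Node:
    def __init__(self, fn, n_inputs):
        self.fn = fn
        self.n_inputs = n_inputs
        self.pending = {}
        self.targets = []                  # outgoing wires: (node, port)

    def wire(self, node, port=0):
        self.targets.append((node, port))
        return node

    def send(self, port, value):
        self.pending[port] = value
        if len(self.pending) == self.n_inputs:
            args = [self.pending[p] for p in range(self.n_inputs)]
            self.pending = {}
            result = self.fn(*args)
            for node, p in self.targets:
                node.send(p, result)

# (a + b) * 10, as a wired graph rather than sequential statements:
sink = []
add = Node(lambda a, b: a + b, 2)
mul = Node(lambda x: x * 10, 1)
out = Node(lambda v: sink.append(v), 1)
add.wire(mul)
mul.wire(out)

add.send(1, 4)   # inputs may arrive in any order
add.send(0, 3)
print(sink)  # [70]
```

This is exactly the shape that boxes-and-wires diagrams draw directly, which is why dataflow languages like Pd and LabVIEW feel natural on a canvas.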
The economy would collapse if you took them away, so I think that's a pretty good measure of how important and widely used they are.
I wonder if new programming languages or VPLs aren't the problem, but rather the lack of metadata structures (going beyond a class or interface) that tie together data and its expected structure, with their implicit validations (including related data), to the code that performs business rules (booking airplane tickets) and the code that draws a UI (the ticketing screen).
I know that 10-15 years back this was called "4GL" (fourth-generation languages), but instead of that taking off, a sludge of new procedural programming languages took hold (Rust, Go, TS, etc.). I think VPLs try to fill that same niche, where the goal is to think meta-like, not procedurally about each input box. If the system understands and records data in similar ways, we can eventually get to a place where data is ubiquitously stored in the structure the system defines, not in a programmer-defined way.
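A minimal sketch of that metadata-driven idea in Python (every name here is hypothetical, chosen only for illustration): a single field description drives both validation and UI generation, instead of each being hand-coded.

```python
from dataclasses import dataclass

# One metadata description of the data drives both validation and UI.
@dataclass
class Field:
    name: str
    kind: str            # "text", "date", "money", ...
    required: bool = True

TICKET = [Field("passenger", "text"),
          Field("depart", "date"),
          Field("fare", "money")]

def missing_fields(record, schema=TICKET):
    # validation derived from the metadata, not written per-screen
    return [f.name for f in schema if f.required and not record.get(f.name)]

def render_form(schema=TICKET):
    # a trivial UI generator driven by the same metadata
    return [f'<input name="{f.name}" data-kind="{f.kind}">' for f in schema]

print(missing_fields({"passenger": "Ada"}))  # ['depart', 'fare']
```

This is the 4GL bet in miniature: describe the data once, and let the system derive the procedural plumbing around it.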
So while right now we may not have the correct primitives for 4GL or VPL, maybe one day and that's why these ideas won't die - because it is likely going to be the way that software is designed in the far (or near?) future. Especially in the wake of AI where we might have a chance to let the system write the UI/business logic.
In other words, instead of thinking of AI as writing code in C/C++/Rust, etc.. AI will be in charge of moving data - at first in human defined chunks (likely spec'd out in VPL or 4GL structures), but eventually for AI to define its own objects of related structures. Eventually getting to the point where us mere humans ask AI to build things that fit human process.
The last step is that humans would no longer define the process; AI would be able to incrementally add processes with the data it has, by defining the most optimal path for humanity, whether it be tax rates or human production or smoothing out the effects of electricity futures based on incoming solar mass ejection predictions. Eventually, in the final stages of humanity, it would define the work output each human needs to produce during a day. By this point enough people will have opted out, and the giant war between the owners of AI and the people will start, killing off most people; but at the same time many of the oligarchs will have been assassinated by members of the public who were willing to rip out the tracking chips in their left eyes.
Enough of the planetary population will be gone and totally untrusting of anyone, but we'll have finally envisioned our AI future. Won't it be grand!
Oh sorry off topic again yeah we probably should stay away from Visual Programming.
For instance, you can't really plot a 25-dimensional object, but the same object is trivial to manipulate with equations.
It's clear to most people that both approaches have their place. In geometry, the visual component is very important to develop important intuitions about how objects behave in low dimensional space. On the other hand, if one were to insist on the visual, one would be stuck with purely intuitionist ideas and never be able to move into more complicated realms where you cannot visualize objects graphically (e.g. high-dimensional space).
The Bourbaki group was a group of mathematicians who aimed to put mathematics on a rigorous foundation by developing the abstractions that shaped modern math, and by moving mathematics past its intuitionist foundations. Some argue that this caused modern "geometers" to lose their feel for geometry because everything became about symbols. But without this development, a lot of modern discoveries might not have been possible.
As a child I was more inspired by seeing the inner workings of something than being told X was possible if you learn to code. Visual languages can better expose the guts of the working automaton.
That said, I agree that [current] visual programming languages have shortcomings that limit them to simple / educational scenarios.
It really just depends. What visual systems provide is accessibility for novices to create value. That sometimes means planning to replace them, or enhance them with skilled developers. In my mind, it creates jobs. Not always great jobs, but at least security in that Software Development will always be a viable career, with lots of opportunities.
The key as far as I can see is not to allow things to get too complex and call out to "real" code components over a certain level of complexity.
I wouldn't be worried about that. Just because people are reading it doesn't mean they are agreeing with the conclusion. It's all just stuff to think about.
Where it becomes worrying is when people vote for something of that quality.
The main difficulty was scale, and the tradeoff between power and ease of use. We spent so much time creating "subroutine blocks" for the most commonly used functions and optimizing the UI for ease of use that we weren't doing much else. Furthermore, pretty much every integration, library, etc. had to be converted into its visual equivalent for the language to be of any use, and all the linear algebra/candlestick libraries were taking up so much time that the rest of the product was suffering. Stuff that often makes sense to a professional programmer had to be "simplified" so the UX is more optimal. E.g. data structures.
If you look at VPLs in production, there's a reason most of them tend to be "imperative commands only" i.e. a glue language that mainly strings together other subroutines. The concept of objects, or even structs is completely eliminated in the VPL.
There are exceptions to this however, the Unreal Engine's Blueprints is a lot more flexible than most.
If you want to get started building your own version of Google's Blockly, here's a good guide:
Throw in an immediate-mode UI library like React and you might even scale up indefinitely with plain HTML (most browser VPLs stick with SVG/Canvas).
For prettier node-based types of languages, try these libraries:
Bonus tip for anyone looking to implement one: use code generators. AST -> visual programming component conversion may save you lots of time (assuming the language has mature enough tooling and labour is extremely expensive).
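A hedged sketch of that AST-to-component idea, using Python's ast module (the nested block format here is invented for illustration; a real VPL would define its own):

```python
import ast

# Walk a Python expression AST and emit nested "block" descriptions that a
# hypothetical visual renderer could lay out, instead of building them by hand.
def to_block(node):
    if isinstance(node, ast.BinOp):
        return {"type": "operator",
                "op": type(node.op).__name__,     # e.g. "Add", "Mult"
                "children": [to_block(node.left), to_block(node.right)]}
    if isinstance(node, ast.Constant):
        return {"type": "value", "value": node.value}
    if isinstance(node, ast.Name):
        return {"type": "variable", "name": node.id}
    raise NotImplementedError(type(node).__name__)

tree = ast.parse("x * 2 + 1", mode="eval").body
block = to_block(tree)
print(block["op"], block["children"][0]["op"])  # Add Mult
```

Covering the full grammar is a grind, but it's mechanical, which is exactly what makes generation cheaper than hand-building each visual equivalent.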
in my personal opinion, scratch and blockly have very little to do with visual programming. they are nice little experiments but just repeat what's bad about text-based programming into visual blocks, which just compounds the issue. i honestly am surprised mit and the media lab have dumped so much time into scratch.
i have said this before, but the most productive visual programming language and environment is labview. it certainly has its problems, and i am a big critic of it, but its visual programming style and dataflow paradigm are more powerful than people realize. it has some seriously good ideas that aren't found in many other languages, and it had many innovative features well before other languages. for example, doing concurrent programming in labview is a breeze.
on the other side, you see that modern IDEs are starting to add a lot of visual content to help the programmer out. the languages remain text-based, but the environments are adding visual features. i think that there is a lot of untouched ground regarding hybrid environments and languages that merge the ideas of text-based programming with visual-based programming. in addition, the dataflow paradigm is also underused.
there's also other computation targets other than CPUs such as FPGAs. it's silly in verilog and vhdl how you often have to manually label up wires where in labview, the tool and language do that for you. you simply draw your circuit out as you would in a diagram.
common complaints against visual programming have almost no weight. for example, "oh, you can't diff visual programs like text programs". yes you can. it's not like computers were gifted by god text-based diff programs. these things were developed and iterated on. labview has a diff mechanism. it isn't great, but my point is that people view text as some sort of innate trait of programming and all the tools are "natural" and already exist as if they just poofed into being once programs started being written. but if we put some thought and work into the tools for visual programming, similarly to how it has been done with text programming, it's possible to create really powerful environments and languages.
where visual programming languages often have a ceiling is abstraction. for example, labview has object-oriented features, but it doesn't go far. i program on the side in f# and racket, which have an amazingly high ceiling for abstraction, but interacting with these languages is often frustrating coming from labview. there are many things that are simply easier in labview. some of that is tooling. some of that is the visual language and dataflow paradigm.
The purpose of Scratch is not to have all programming done in an environment like Scratch. It's to ease students into precisely what the author wants (textual programming environments).
Scratch eliminates several categories of errors for students.
Students in younger grades often lack an understanding of grammar and its utility or applicability. Consequently, the novel (to the student) syntax of programming languages can be vexing: Why is "while if x > 3" not allowed? The rules of the languages proscribe it. So now they're learning language rules and the concepts of programmatic/computational thinking.
They can no longer make typos when writing out keywords and variables. Keywords are typically available as draggable icons that they can then "fill-in-the-blanks" for. Variables become draggable icons or can be filled in with a simple drop down or text field that filters a drop down. Try explaining to an 8-year-old who still Writes Like this and SomeTimes tHis that there's an actual meaningful difference between "name" and "Name".
Scope is made explicit. Variables and logic exist within certain scopes so scoping errors (trying to use a variable outside the appropriate scope) becomes impossible as well.
Are all these constraints great for pedagogy? I don't know. But after teaching a number of high schoolers (now years ago), I'll say it would have been awesome to have had Scratch. They got along well enough with Dr. Scheme (now Racket), but a lot of time was spent fighting with the language that could have been abbreviated or eliminated with a different environment.
In the case of block-based editors, you can actually get version control. Usually, they're just a constraint-based, visual editor over a textual representation. So the underlying source files can be version controlled and compared quite easily (if the implementors considered this in their implementation).
This reminds me of an ex-girlfriend who legitimately thought it would be better if all math were written as:
  To calculate the average of the population, sum up
  each of the members and divide by the count.

rather than the usual:

                 sum(members)
  average(pop) = ------------
                    count
Graphical representations of processes can, similarly, be more useful than their textual counterparts (though not in all cases, I wouldn't argue that).
But here's a fun one:
Problem: Identify binary strings (read left-to-right, last digit read is the least significant) that are multiples of 3. So 11 = 3, 1010 = 10, etc.
Do this with just regular expressions. No tables, no diagrams, nothing.
Ok, now do the same thing but with a standard DFA diagram or state transition table.
Much easier, right? Now, once you have those last two you can write out the textual regular expression and embed it in your program with relative ease. And maybe that's how you want to store it in your code. Or if you don't want to use a regex engine you could encode the table form into your program easily enough, and still retain its visual characteristics.
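For the curious, here is a sketch of both halves of that exercise in Python: the DFA encoded as a transition table, plus a regular expression derived from the same diagram (this particular expression is a well-known answer to the puzzle; notice how much less obvious it is than the table).

```python
import re

# The remainder-mod-3 DFA as a transition table, reading bits MSB-first:
# each state is the value-so-far mod 3.
DELTA = {
    (0, "0"): 0, (0, "1"): 1,
    (1, "0"): 2, (1, "1"): 0,
    (2, "0"): 1, (2, "1"): 2,
}

def divisible_by_3(bits):
    state = 0
    for b in bits:
        state = DELTA[(state, b)]
    return state == 0          # accept iff the remainder is 0

# The equivalent regex, read off the state diagram:
MULT3 = re.compile(r"(0|1(01*0)*1)*")

print(divisible_by_3("11"), divisible_by_3("1010"))                # True False
print(bool(MULT3.fullmatch("11")), bool(MULT3.fullmatch("1010")))  # True False
```

The table version stays legible; the regex is something you derive from the diagram, not something you write directly.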
Are visual programming environments going to replace most of our programming languages and work? Probably not. But it's foolish to discard them based on the complaints in this blog post without considering the reasons why they have value.
In essence, text is visual programming limited to a set of symbols and tokens parsed in left-to-right order.
Who said that this arbitrary set of rules is the best way to represent programming logic? The space of concepts for abstraction encompasses many things in the visual realm: graphs, procedures, arrows, and even 3D space can be used to represent logic. Obviously, with a space this big, text is unlikely to be the best option.
Text-based programming is simply a local minimum that merely seems optimal. The problem is we're so deep in this minimum that it's hard to climb out.
But the author of TFA defines what he means: "A visual programming language is one that allows the programmer to create programs by manipulating graphical elements rather than typing textual commands."
We all understand the difference between "textual" and "manipulating colored boxes, drawing arrows and generally using a non-textual interface". I'd rather we addressed this than the accuracy of the author's wording...
I partially agree with the author in that programming is inherently complex, and "visual" UIs are inadequate once the program reaches a sufficient level of complexity. Also, textual manipulation tools tend to be ubiquitous and, more importantly, non-proprietary; I'm naturally wary of other tools.
I disagree with the author in that I think there's still room for exploring graphical UIs.