- Visual computing is pretty heavily used in Unreal Engine, Blender, Houdini, etc. All of them have a very similar node-based visual programming system. It seems to work pretty well (better than text) for most of what they use it for, I think because of the ease of jumping into the middle and making small, understandable changes.
- Many programming languages today have a format that is like <tree of files, each containing> <set of items, each containing> <list of expressions>. It would be nice if that <set of items> step was treated as an unordered set instead of an ordered one, with editors having a better understanding of how to bring up relevant elements of the set onto your screen at the same time. Split pane editors, "peeking" in editors at code definitions, etc. hint at how this should work, but I don't feel like they do it as well as possible.
- A small amount of "visual augmentation" might benefit most programming languages. I'm not sure I can explain this better than linking to a few images of what the Emacs AUCTeX package does to LaTeX: http://lh6.ggpht.com/_egN-3IJO0Xg/SpIj6AtHOTI/AAAAAAAABj8/O1... https://upload.wikimedia.org/wikipedia/commons/4/42/Emacs%2B...
The biggest problem with these tools is scalability and maintainability. You will hear many stories in the games and VFX industries of nightmare literal spaghetti code created with visual programming tools that is impossible to debug, refactor or optimize.
Visual programming seems easy for very small examples but it doesn't scale. It has no effective means of versioning, diffing or merging and usually lacks good mechanisms for abstraction, reuse and hierarchical structure. It doesn't have tooling for refactoring and typically lacks tooling for performance profiling.
Some of these problems seem to be more fundamental and others like they could potentially be addressed with better tooling but that tooling never seems to emerge.
I've got a lot of experience with shader programming and have never found node based shader editors to be better than text over the long term, although there are some nice visual debugging features which are rarely implemented in text based IDEs (though I have seen it done). I've also found visual scripting to all too frequently get out of hand and have to be replaced with code due to being unmaintainable, undebuggable or unoptimizable.
I think there is possibly fertile terrain to explore in trying to get some of the benefits of visual programming approaches while avoiding all these downsides but many of us have been burned enough to be very skeptical of the majority of visual programming systems that don't even try to fix the worst problems.
I would argue that the shader editor in UE3 had none of these problems. It showed you cycle count and each step of the graph visually for debugging.
Also, I don't mean to be blunt but you aren't the target for those tools. Where they shine is when you have a level artist that needs to make a small tweak to how a shader looks. With those systems you don't have to loop in a dev to make it happen. You still need a solid tech artist to make sure things don't get out of hand and they're not a tool for every problem but in the domains where it aligns you see 10x gains on a regular basis.
It's extremely powerful, but it comes at a cost because it's nigh impossible to create diffs between different versions of Blueprints AFAIK.
I did see this (https://forums.unrealengine.com/community/community-content-...) the other day which looks pretty cool, essentially being able to embed some C++ at the Blueprint level, but I don't think that will necessarily solve the problem.
Why not? What makes their representation undiffable?
See the sibling poster, they did actually add a visual diff/merge tool in a much earlier version and I missed that.
There's been ways to diff binary files forever. Doesn't mean it is a great idea to store source code in that manner though. With UE4 you're not really supposed to be able to edit the Blueprint "code" outside the UE4 Editor.
Colors solve this one problem, just as they solve the text equivalent better than annotations do. This is not fundamental.
> lacks good mechanisms for abstraction, reuse and hierarchical structure
VLSI circuit designers have some abstraction mechanisms that are not that bad. This one is fundamental, but it's not as bad as most people say.
> It doesn't have tooling for refactoring and typically lacks tooling for performance profiling.
I can't imagine how those could be fundamental problems either.
In principle the largest problems of visual programming are our limited capacity to understand complex images, compared to text, and the added information that most visual languages pile on, making the complexity worse (this doesn't apply to things like GUI design). I don't see any other showstopper, but those two are really bad.
I'm not sure I agree with this. Not as a fundamental law, at least. I find complex text pretty difficult to understand. Sure, I can read the words or even expressions, but to really understand how everything is connected and how the data and control flows through what's written, I find that incredibly difficult. A large part of programming is keeping that contextual information in my head so that I don't have to re-evaluate it all again.
Which is the reason why I love pen & paper and boxes & lines when I try to map out some code, a data structure, algorithm or idea. Obviously everyone is different, but for me, when things get too complex, I reach for images and diagrams to help me understand and form a clear mental model of what's going on or what I want.
Having said that, I've yet to find a visual programming language that does this. When I used Max/MSP a number of years back, I found it helped me think in "code" as I could map out ideas visually, reducing the need for pen and paper, but it had a ton of shortcomings. It gave me a glimpse of what it could be though, and I think the problems could be solved to a point where I can skip pen and paper altogether. We're not there yet, but if you think about how much time and effort went into making textual languages, it's no wonder visual languages are so far behind. Also, it's clear that not every task is well suited to visual languages, so the best environment would need to be a mix of both.
certain cases are still much better done in text and statebox itself is written in text, so nobody is claiming programming itself should be 100% pictures, but then again, I think it can be done and I think there is merit to it.
after all, text is also some sort of picture; on the screen but also in your head
when done right, we claim, you can target many different things ("semantics")
what we claim is something like, the compositional aspect of many such node based systems can be described as a certain type of mathematical object (monoidal category) ~ we can build an editor for that and then map that "dsl" to particular targets (image processing, state machines, etc)
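To make the "monoidal category" claim concrete, here is a minimal Python sketch (my own illustration, not Statebox's actual implementation) of the two compositions such a structure gives a node diagram: sequential composition, which must respect types, and parallel (tensor) composition, which lays wires side by side:

```python
# Toy model of boxes-and-wires as a monoidal category (illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    name: str
    inputs: tuple   # tuple of input wire types
    outputs: tuple  # tuple of output wire types

    def then(self, other):
        # Sequential composition: my outputs must match the other's inputs.
        if self.outputs != other.inputs:
            raise TypeError(f"cannot compose {self.name} ; {other.name}")
        return Box(f"({self.name};{other.name})", self.inputs, other.outputs)

    def beside(self, other):
        # Parallel (tensor) composition: wires run side by side.
        return Box(f"({self.name}*{other.name})",
                   self.inputs + other.inputs,
                   self.outputs + other.outputs)

f = Box("f", ("int",), ("str",))
g = Box("g", ("str",), ("bool",))
h = f.then(g)                      # well-typed: int -> bool
print(h.inputs, h.outputs)         # ('int',) ('bool',)
```

The point of the formalism is that an editor enforcing only these two operations cannot produce an ill-typed diagram, no matter how large it grows.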
This has been my biggest complaint with both Unreal and NI Reactor... you can build some brilliant things, but not without creating a mess of connections that becomes very difficult to reason about, let alone work on. A lot of production "visual programming" diagrams are spaghetti code, without comments or a changelog you can study.
While written code tends the same way (just try diagramming the class tree in most software), at least we have tools for dealing with and reasoning about it in text form.
I know in Antimony CAD, you can even instantiate a function, edit the function's flow or individual python subroutines, and the delta is saved for that instance.
The hardest time I've had with graphical programming was with the Lego Mindstorms EV3. There were multiple "simplifications" that removed functions, along with a nigh-unusable GUI (the if block was this huge encapsulating thing, and the other branching blocks were similarly onerous on screen).
this is why we work in a typed, purely functional setting.
spaghetti is difficult to deal with, for starters, you need "compositionality", so that you get no undefined or emergent behaviour.
then second you need some form of "graphical macros", or a "meta-language" for diagrams, code that generates diagrams or "higher order functions" for diagrams.
Arguably, the whole point of having a diagrammatic representation with good formal properties is to provide new mechanisms for accomplishing these things. Of course these mechanisms, features etc. won't quite resemble the ones that are in use with text-only languages.
this is the point exactly
The right tool for the right job. For 50% of a game's code visual programming is absolutely the right tool. For some other parts it probably isn't.
Unity is a very popular game engine that doesn't have an official visual programming solution (they're previewing one in the very latest version). Unity has a powerful level editor that is used to lay out the levels in a GUI tool but no visual scripting / programming tools. The majority of Unity games that currently exist therefore do all level scripting in C# code. Many other games engines have no visual scripting solution and all level scripting is done in either a scripting language like Lua or in some cases in C++ code. Unity has sprite editors, visual GUI builder tools etc. but those are not what is generally meant by "visual programming". The closest Unity has had until recently was its graphical animation state machine editor.
anyway, I would argue both are valid examples of graphical programming, but they happen at different levels.
the "node based" tools usually define some sort of function or system, i.e. a "type"; for this you need category theory to describe how the diagrams look, and this is not what any editor I know does, but it makes a whole lot of difference.
And the map editors are for defining "terms of a type", given a definition of a "map datatype" there is a graphical way to edit it.
when we talk about graphical programming we are initially focussing on the first: well defined graphical protocol definitions. you can think of it as type checked event sourcing, where the "behaviour" or "type" is described by a (sort of) graph representing a (sort of) state machine.
but we have relatively clear ideas to extend this to the second case as well.
The difference with other (older) approaches is that in the last 20 years a lot of mathematics appeared dealing with formal (categorical) diagrams, proof nets, etc. that we leverage. I claim we (the world) now finally really understand how to build visual languages that do not suck.
hence statebox :-)
Visual programming tools attempt to map logic to a (usually) 2D domain where there is no natural or intuitive general mapping. The representation has both too many degrees of freedom (arbitrary positions of nodes in 2D space that are not meaningful in the problem domain) and too few (connections between nodes end up crossing in 2D adding visual confusion due to constraints of the representation that don't exist in the problem domain).
I've been exploring colored Petri nets for our product and they do seem to have promise for certain use cases though so I do think it's an interesting area to explore.
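As a rough illustration of why Petri nets are attractive here, a minimal token-game interpreter fits in a few lines (uncolored for brevity; a colored net would additionally attach data to each token — all names below are made up):

```python
# Minimal Petri net interpreter sketch: places hold token counts, and a
# transition fires only when every input place has enough tokens.
from collections import Counter

class PetriNet:
    def __init__(self, marking):
        self.marking = Counter(marking)

    def enabled(self, transition):
        inputs, _ = transition
        return all(self.marking[p] >= n for p, n in inputs.items())

    def fire(self, transition):
        inputs, outputs = transition
        if not self.enabled(transition):
            raise ValueError("transition not enabled")
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] += n

# Toy workflow: a request token moves from 'queued' to 'done' via a worker.
net = PetriNet({"queued": 2, "worker": 1})
process = ({"queued": 1, "worker": 1}, {"done": 1, "worker": 1})
net.fire(process)
print(dict(net.marking))  # {'queued': 1, 'worker': 1, 'done': 1}
```

The appeal for visual tools is that the firing rule is the entire semantics: the picture of places and transitions is the program, not an approximation of it.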
In general this is true, but the diagrams we use at Statebox are different in the sense that there is a completeness theorem between the diagrammatic language and an underlying mathematical structure (a category). In this case the mapping is sound by definition.
Also, it is worth stressing that our diagrammatic calculus is topologically invariant, meaning that the position of diagrams in space is meaningless; all that matters is connectivity. This is also the approach originally used by Coecke and Abramsky in the field of Categorical Quantum Mechanics, which is having huge success in defining quantum protocols :)
Your writeup didn't convince me that "category theory" adds any significant value, and neither does it help inform as to what category theory actually is. How does statebox improve upon existing node-based programming implementations?
CS formality and big words misses the point of visual programming entirely, in that it is to simplify the process of software creation to make it more approachable to non-programmers. Unless your UX is absolutely top-notch you are going to lose these novice users as they struggle to deal with the constraints without a good reason or UX to do so.
Also- The memetastic design of statebox's main page is a pretty big turnoff :(
> Why does category theory magically transform node diagrams into something usable from something not? Unreal blueprints/Reactor schematics/whatever are quite fine in their current form, even if their usage falls apart in advanced constructions. Is statebox going to magically make huge node-and-graph-designed programs reasonable?
> Your writeup didn't convince me that "category theory" adds any significant value, and neither does it help inform as to what category theory actually is. How does statebox improve upon existing node-based programming implementations?
nothing magical, just good engineering and UX design and solid theoretical underpinnings.
cat. th. does add value: there are many ways to build diagrams and syntaxes for diagrams, but they are not all equally powerful or general. it turns out that there are diagrams that _are_ suitable, and this is what we use.
It will improve upon existing diagram tools in that it gives a formal theory of how they work, so you can really build huuge diagrams and still be sure everything works.
I didn't write the blog post, but I could try to write one about the value of category theory, because it is often misunderstood. It is however very abstract and it takes the mind a while to see the value of it, which is not so easy to convey.
> CS formality and big words misses the point of visual programming entirely, in that it is to simplify the process of software creation to make it more approachable to non-programmers. Unless your UX is absolutely top-notch you are going to lose these novice users as they struggle to deal with the constraints without a good reason or UX to do so.
oh, yeah this is something often misunderstood, we are not trying to target novice developers (yet). we need to develop a lot of stuff and CS formality is right now still the simplest way to understand the system. I mean, we are not trying to be arrogant or puffy or something, but for instance the way we realise our compilation is with "functorial semantics". we have a functor between categories that does the trick. We could call it something else, but it doesn't help (at this stage).
anyway, if we do our job well then all the category theory would be under the hood and you just get a nice UX for coding with diagrams.
> Also- The memetastic design of statebox's main page is a pretty big turnoff :(
opinions differ :) I thought it was quite funny 2 years ago and many people thought so as well and then it got turned into this homepage.
at the moment we don't really have time to spend time on the site, but it will def. be changed in the future
Lots of people got their starts programming a GUI in their IDEs, how can you say "nobody" would do this?
Smalltalk would be an exception. The graphical elements are provided for you to make your own inside the image on the fly, or you can modify the IDE as needed.
Design, or encode? Because visual designers in the latter sense typically have many shortcomings, and are usually used mostly for preview purposes.
Maybe the better informed dislike, but the bulk of the dislike in general is usually of the form "this kind of thing never works, nobody uses it", when in fact it is being used, in multiple disciplines.
> scalability and maintainability
> easy for very small examples but it doesn't scale.
this is very true, that is why we do it differently; we clearly define the semantics of our diagrams and take guidance from category theory in this. this is different from other graphical languages; we try to assume the minimum but then guarantee you that some stuff is always preserved.
think of it like deterministic, pure functional programming, but with diagrams.
> usually lacks good mechanisms for abstraction, reuse and hierarchical structure.
> no effective means of versioning, diffing or merging
very important points, we try to address this by having everything based on immutable, persistent data structures with built in content addressing; similar to git for instance.
diffing and merging is very complicated and still an open research topic, but there are many hints that this can be done
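A sketch of the content-addressing idea (my own illustration of the git-like scheme, not Statebox's actual storage format): the address of a node is the hash of its content plus the addresses of its children, so any change anywhere in a diagram produces a new root hash, which makes "has anything changed?" a constant-time comparison.

```python
# Content addressing for a tree-shaped diagram, hashed like a git tree.
import hashlib
import json

def address(node):
    # node: {"label": ..., "children": [child addresses]}
    blob = json.dumps(node, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

leaf = {"label": "validate", "children": []}
a_leaf = address(leaf)
root = {"label": "pipeline", "children": [a_leaf]}
a_root = address(root)

# Editing the leaf changes its address, and therefore the root's too.
leaf2 = {"label": "validate_v2", "children": []}
root2 = {"label": "pipeline", "children": [address(leaf2)]}
assert address(root2) != a_root
```

Unchanged subtrees keep their old addresses, so storage and comparison are shared between versions for free, exactly as in git.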
Rather, in order to understand a certain aspect of the system, I imagine picking just one of the nodes and then asking the IDE to show me, say, the immediate inputs. Some of these may be semi-hidden (code folding!), others may show more detail, etc.
BTW, I've been thinking it would be great if these systems had a textual representation in the vein of Graphviz's dot language. So one could have the best of both worlds. For diffing, a simple textual diff could do, but one could come up with fancier semantic diffs, in the same vein as the semantic diffs that exist for code or XML.
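A sketch of what that could look like: serialize the node graph to a dot-like text form for ordinary tooling, and compute the "semantic" diff as set differences over edges rather than a line-by-line text diff (illustrative code, not an existing tool; the node names are made up):

```python
# Textual serialization plus a simple semantic diff for a node graph.
def to_dot(graph):
    lines = ["digraph g {"]
    lines += sorted(f"  {a} -> {b};" for a, b in graph["edges"])
    lines.append("}")
    return "\n".join(lines)

def semantic_diff(old, new):
    return {
        "added_edges": sorted(set(new["edges"]) - set(old["edges"])),
        "removed_edges": sorted(set(old["edges"]) - set(new["edges"])),
    }

v1 = {"edges": {("load", "filter"), ("filter", "render")}}
v2 = {"edges": {("load", "filter"), ("filter", "blur"), ("blur", "render")}}
print(to_dot(v2))
print(semantic_diff(v1, v2))
# added: [('blur', 'render'), ('filter', 'blur')]; removed: [('filter', 'render')]
```

Because the serialization is sorted and deterministic, even a plain `diff` over the text form would be stable; the set-based diff just removes the remaining layout noise.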
I found Max/MSP code could look extremely tidy, as I would group my functionality together in a similar way as I would in a textual language (short single-purpose functions and such). You can write horrible spaghetti code messes in textual languages too; it's just that we've used textual languages long enough to have learned how to abstract our code and how to organise it for maintainability. It's not an inherent feature.
we are really trying to avoid coming to this style of visual programming
write once, read never
For simpler usecases the nodes still make a ton of sense (e.g. compositing, shaders, ...) And I say that as somebody who loves CLI and programming.
But overall I agree with other people that I wouldn't want to maintain anything particularly large in that format. The Max "patches" were on the order of big scripts at most.
I would say the spaghetti aspect of max is the main complaint with visual programming.
To modularise it (small diagrams), you need to contain the behaviour of the boxes (ie. typed purely functional code).
And to generalise it (audio, video, microservices, ...) you need to separate the syntax from the semantics.
(took me about 15yrs to figure out a way to do this properly :-)
Node based systems are a great pattern for certain things, but large scenes get unwieldy quickly. Even if the project files are text, diffing with version control is useless. Profiling is often difficult -- a pet peeve of mine is when apps load the whole scene file, and often the geometry, when you just want to change a parameter.
You can see node based systems "grow up" by adding variables or attribute references, which means your data isn't just flowing down the graph, but you have to track references in this new dimension. You then often see encapsulation (hda/otl, Gizmos), which can really help with re-use but create more limitations.
Programming visually is usually called FBD (Function Block Diagram) or LAD (Ladder Diagram). We often use FBD for simple logic. It's easy to read, even for inexperienced maintenance guys. LAD is a no-go for me, but it seems a lot of guys still like it.
As long as there is a simple and clear code structure, it is a good thing.
Today, especially since Siemens made SCL (a kind of Pascal for PLCs) usable in their new IDE (the old one was Step7, the new one is TIA), we use it a lot as well.
Today it's even possible to mix FBD/LAD and SCL. So you can write all the simple logic in FBD or LAD, and then calculate things in between in an SCL network.
(the only (not so great) ex. I found)
As a user of both worlds in my job, I can say both worlds have a right to exist. It's just like asking whether C is better than C++, or whether Python is the best ... is a car better than a bike ... is a house better than a tent ...
I've seen sprawling, massive ladder logic jumbles that made no sense and were completely undocumented before. Once visual-style plc things reach beyond a certain level of complexity, if they are undocumented they can be a nightmare to use.
I don't know if this says more about the medium itself or that a lot of PLC guys just don't know or were never taught proper standards to follow in writing their code. Either way I've seen a lot of really bad PLC code.
That's also true for text based programming languages, not a property specific to visual languages.
They have often not been taught basic things, like not giving variables or identifiers names with no significance, like 'b123', and they work in a place where, as long as the lines are running properly, nobody cares. There are leagues of difference between what I would consider a pretty messy codebase at, say, some B2B enterprise software company, and a large codebase maintained by people who actively don't know how to program, for lack of a better description.
As you can imagine, I've seen also a lot more or less funny ladder logic.
And yes, it's true, most PLC programmers don't have a "just software" background, and yes, a lot of PLC software is not super pretty, but with the old IDEs it was also not so easy.
To translate an IF statement you need to add jumps, so it's not really the same code.
Also, for example, the very simple Siemens LOGO controller has simple software to program it. But if you use FBD, you can only use a tag once. So if you need a tag multiple times, you have to draw lines from the single tag. Even for super simple stuff it gets messy super quickly.
Back in the day, in VisualWorks, with the RefactoringBrowser, you could bring up a browser for a search, say, everyone who implemented a given message.
What's more, you could write scripts to pop up such query browsers automatically. They would also be saved in the "image" and just pop up to the same state when you restarted the environment. On top of that, you could write syntactically accurate code transformations against all of the above, even writing ad-hoc code against the meta level or even runtime state from the middle of a runtime debug session.
> A small amount of "visual augmentation" might benefit most programming languages
Agreed. Where various visual programming have fallen down over the past 3 decades:
1) Scaling complexity -- Diagrams get too busy, and there's no good way of managing complexity. This especially applies to multiple programmers changing the same diagram.
2) Scaling size/optimization -- Many visual programming systems in past decades could bog down and become marginally responsive or unusable when managing large systems.
If you can handle those two, you will have a huge leg up towards a viable visual programming augmentation.
They work well for stream processing.
They can often be more elegant in how they define and consume inputs. Many more parameters can be supported in a visually pleasing way, which cleans up parameter overloading and makes it easier to compose function blocks together without making tuple types.
This could very well exist but I think a visual Lisp would be interesting.
also stream processing is possible, we can (at least in theory) use the same diagram to compose state machines or stream processing functions or DB queries or ...
OTOH I always wonder if it would not be beneficial if our brains and the code did not rely on spatial distances in code files, but on _call_ distances.
Shouldn't our spatial map of the code be the call graph instead of the structure in files on disk?
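As a toy illustration of where such a view could come from, Python's stdlib `ast` module is enough to extract a crude call graph from source text (the functions in the sample are made up), which an editor could then lay out by call distance rather than file position:

```python
# Extract a simple name-based call graph from Python source with ast.
import ast

src = """
def parse(s): return s.split()

def run(s):
    toks = parse(s)
    return len(toks)
"""

tree = ast.parse(src)
calls = {}
for fn in [n for n in tree.body if isinstance(n, ast.FunctionDef)]:
    calls[fn.name] = sorted({
        node.func.id
        for node in ast.walk(fn)
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
    })
print(calls)  # {'parse': [], 'run': ['len', 'parse']}
```

This only catches direct calls to plain names (method calls and higher-order uses are skipped), but it is enough to rank "what is one call away from here" for a navigation view.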
Smalltalk of course has its own problems. It suffers from one of the major problems of many less popular languages, namely the lack of libraries. But I agree that I wish more (or all) languages were as good at debugging, peeking at definitions, jumping to different things, searching code, etc. Of course editors can make up some of the problems, but it's not quite the same.
Jetbrains makes an editor called MPS that is used to make DSLs and you can include things like tables and diagrams into the code. In places where you have very specific requirements and structure, it can help experts in the domain produce the logic for it. That's the same with level editing, level editors need good creativity and views of spaces not experts in C++, so a DSL and graphical editing is great, because it lets them focus on something else.
That said, when it comes to the code behind things, text gives you an immense amount of expressibility that can't be replicated very well with graphical things. It's the same reason why a lot of developers prefer the command line to graphical configuration; you get far more expression for your expertise in a text environment. You get every combination of letters/symbols on the keyboard entered through a large physical interface; using the mouse to click on things feels slow by comparison.
You see that in this approach nets serve the purpose of giving a high-level understanding of how the code behaves. You still have the freedom that you get by using text when filling the net with meaning, but you also gain this high-level overview that saves a ton of work!
IDEs sorta move in that direction, though not too fast. You can list methods in a class, jump to the definition of a method, etc.
Not many seem to realize that Java's strict OOP structure is (or was) a mover of IDE functionality: you can statically describe the structure of the entire program at a high level, and code is only contained in methods, so you have methods as organizational units, to which you navigate and otherwise reason. So now we have IDE functionality that can move methods and variables around like they're toy blocks.
for non linear, you'd end up with full on circuit theory which requires an abstraction jump (but in a way, that's still visual ~programming)
That said, I think that visual programming which is not aimed to data-flow can be very interesting. For example Lisp is pretty visual (or topological) in a sense.
Most programmers don't like FP, so they probably don't gravitate toward such tooling.
That said, I think many of them seem to find it harder, especially in the beginning. But I think we have to conquer familiarity first.
I think the best answer is that text, being more dense, is actually the simpler way to represent a complex program. Big applications written in diagrams tend to wind up being harder to read than the equivalent in text. Visual diagrams are also difficult to search, scan, or replace programmatically.
1. Limited in what you can do.
2. If you need to do anything out of the norm it's either impossible or very difficult to find which menu entry you have to set in which way.
3. Doesn't have a proper diff.
4. Needs a slow compile process to actual code to work, making TDD strategies impossible.
5. Attracts the wrong kind of developers (those who don't look into the generated code and make all kinds of mistakes, those that don't understand anything about how computers work, etc.).
6. Hard to debug, because there's little to no debugging support.
7. Impossible to use with the wealth of great tools available for text manipulation.
8. Impossible to search properly, because it's not just simple text.
9. Very prone to vendor lock-in.
10. Doesn't interact well with versioning systems.
I don't remember when I saw this presentation. But it hit the nail on the head. If you go into a Korean McDonald's you have a visual menu. It enables even a foreigner who doesn't speak a bit of Korean to order a bacon cheeseburger. However if you want anything special (like no tomatoes), all of a sudden you need the language interface.
Language and text has evolved because it's necessary to describe the kind of complexity we have in the real world. That's why text is amenable as a representation of programs. They eventually represent a similar level of complexity as the real world. Of course it's easier to teach somebody to point at the McDonald's menu, but they won't get anything complex done, and they need somebody to work the abstraction for them (i.e. a real programmer).
>> 3. Doesn't have a proper diff.
What makes it not have a proper diff?
I mean, GraphViz can have a diff. You could even go as far as taking the graph of the program, generating a GraphViz graph out of it and then taking the diff and simply colouring the nodes that made it into the diff.
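A minimal sketch of that colouring idea (a hypothetical helper of my own, not GraphViz's API; the output is just dot text that GraphViz could render): mark every node touched by the edge diff in red and emit the new graph.

```python
# Render the new graph as dot text, colouring the nodes that changed.
def to_colored_dot(new_edges, changed_nodes):
    lines = ["digraph g {"]
    for n in sorted(changed_nodes):
        lines.append(f"  {n} [color=red];")
    lines += sorted(f"  {a} -> {b};" for a, b in new_edges)
    lines.append("}")
    return "\n".join(lines)

old = {("load", "filter"), ("filter", "render")}
new = {("load", "filter"), ("filter", "blur"), ("blur", "render")}

# Any node on an added or removed edge counts as changed.
changed = {n for edge in (old ^ new) for n in edge}
print(to_colored_dot(new, changed))
```

The diff itself stays plain set arithmetic over the textual representation; the picture is just one possible rendering of it.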
I'm very much for using something like GraphViz for visualizations (for example, integrated in AsciiDoc) to have very good, diffable, plain text documentation that's nice to look at.
These are two ways of representing the same concept: you could save the visual representation as a text document.
So what makes Visual Programming not have a textual representation? What makes them inherently incompatible with each other?
Because as of now, what I am seeing are the comments from people who used primarily proprietary tools, and those proprietary tools don't allow you to edit textual representations of the graphs directly because it makes the vendor lock-in so much easier.
Nothing really, but you would need a Visual programming environment that maps one to one to your language. So that you could switch between coding in code and looking at or modifying the visual representation when it's helpful.
This would probably limit what you can do in the GUI, just as GraphViz limits control of the graph in favor of automatic layout.
The vendor lock-in is one of the main issues I mentioned above.
If you have a good idea how to visually represent and modify C# or some other language I wouldn't mind that as an additional tool. I would think it's even very helpful for getting an overview. However the code should still be the master. And open-source is probably the only way to have such a tool that's actually good.
And if you can map one language to the GUI you could probably port this to many languages.
I was mostly concerned with what we have right now (LabView, Simulink and others).
Building controls are a popular field for visual programming because the "wires" seem like they'll be intuitive to non-programming tradespeople.
If we have a little visual program that controls the temperature in a room with a thermostat block and a baseboard heater, that's fun. We can play all kinds of games hooking up limit blocks and schedule blocks and whatever. Intuitive.
Imagine we have seven thousand rooms and some of them have only a baseboard heater but some have an air conditioner, some have a lighting interlock, some have a heat recovery unit. 16 possible configurations of a room and you have rooms in each set.
Now... apply a visual program across all of those sets.
In traditional programming this is simple: a data structure representing room configurations and a few conditional statements in your function or loop or whatever. In visual programming you're mostly looking at either making 16 versions of the program and applying each to its associated rooms manually, or else passing a bunch of conditional variables around.
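A sketch of that traditional-programming version (the equipment flags, room data and thresholds are all made up for illustration): one data structure per room, one control function with a couple of conditionals, and no need for 16 diagram variants.

```python
# One room table plus one control loop covers every equipment combination.
rooms = [
    {"id": 101, "heater": True,  "ac": False, "hrv": False},
    {"id": 102, "heater": True,  "ac": True,  "hrv": False},
    {"id": 103, "heater": False, "ac": True,  "hrv": True},
]

def control(room, temp, setpoint):
    actions = []
    if room["heater"] and temp < setpoint - 1:
        actions.append("heat_on")
    if room["ac"] and temp > setpoint + 1:
        actions.append("cool_on")
    if room["hrv"]:
        actions.append("ventilate")
    return actions

for room in rooms:
    print(room["id"], control(room, temp=18, setpoint=21))
```

Adding a 17th configuration is one more flag and one more `if`, not a new copy of the whole program.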
Those variables aren't visual anymore. So now you have a whole layer of abstraction that's no longer visual. And really, the parts that fit into the visual model are usually the easy bits. The hard parts still require abstract reasoning about stuff that isn't linked together. Pretty soon you're just using the visual language to draw the inside of your "functions" but your high-level datalinks or scripts are entirely non-intuitive and might as well be in text.
That's 10,000 links. You can't actually draw lines for each of those links to the room information. It's literally impossible to put them on a screen, let alone process them visually. So you have to refer to the variables by name and you still end up with 100 links to draw between variable names and function blocks.
Once you refer to variables by name the relationship between the source and destination is no longer visual. The visual part is just inside any given encapsulated function. So now you're programming with parameter passing but instead of just typing text you're also pulling away to move the mouse and try to line blocks up on a grid all the time.
This should be pretty obvious while coding in text. Imagine every time you call a function there has to be a line somewhere that represents that function call. It's too much visual information. You can abstract away some of it by saying "oh, we have a block here that goes out and grabs all the room numbers", but now a lot of the logic starts to depend on the contents of those blocks and it's no longer clearly visual.
Visual diagramming is great for roughing out an idea of how something works but enforcing a rigorous correspondence between the diagram and the function of the program turns out to be a lot of trouble for not very much gain.
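To make the "traditional programming" side of the seven-thousand-rooms argument concrete, here is a minimal sketch: the 16 configurations become data, and one routine covers all of them. The room numbers, feature flags, and setpoints are invented for illustration.

```python
# Hypothetical sketch: one control routine covering every room
# configuration via data, rather than 16 copies of a visual program.

ROOMS = {
    "101": {"heater": True, "ac": False, "hrv": False, "light_interlock": False},
    "102": {"heater": True, "ac": True,  "hrv": False, "light_interlock": True},
    # ... thousands more rooms, across 16 possible feature combinations
}

def control(room, temp, setpoint):
    """Return the list of actions for one room this control cycle."""
    actions = []
    cfg = ROOMS[room]
    if cfg["heater"] and temp < setpoint - 0.5:
        actions.append("heat_on")
    if cfg["ac"] and temp > setpoint + 0.5:
        actions.append("cool_on")
    if cfg["hrv"]:
        actions.append("hrv_run")
    if cfg["light_interlock"]:
        actions.append("check_lights")
    return actions

for room in ROOMS:
    print(room, control(room, temp=18.0, setpoint=21.0))
```

The "few conditional statements" scale to any number of rooms because the configuration lives in data, not in the shape of the program.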
This. Whenever I investigate a new authoring tool, the first thing I want to know is whether there is a plain text (preferably JSON) serialization that I could import and export. A particular tool I otherwise love where this is sorely missing is Zapier.
exactly, every picture represents an "equivalence class" of expressions (or code).
I say equivalence class, because the picture represents many formulas: by topologically distorting the picture you get different code. however, they all behave in the same way. 2+3 = 1+4 = 5
These are semantic properties that most binary operators do not in fact have.
E.g. C syntax:
A << B
A = B
A - B
A / B
A % B
A < B
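For instance, using Python stand-ins for the C operators above: none of them commute, so a diagram that treats "A op B" and "B op A" as the same picture changes the program's meaning. (`A = B` is the clearest case of all: swapping the operands changes which variable gets written.)

```python
# None of the listed operators commute, so topologically
# "distorting the picture" is not meaning-preserving for them.
a, b = 7, 2
assert a << b != b << a    # 28 vs 256
assert a - b != b - a      # 5 vs -5
assert a / b != b / a      # 3.5 vs ~0.286
assert a % b != b % a      # 1 vs 2
assert (a < b) != (b < a)  # False vs True
```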
For example, GC was conceived in the 1950's, and through at least the late 1990's it hadn't taken off "in a more mainstream way", and people were saying "this idea comes back around every few years" as evidence that it never would. It turns out that GC is actually a pretty good idea, but the average computer prior to 2000 or so wasn't so good at it, either in hardware or software. That didn't make it a bad idea. It was a good idea that we weren't yet great at implementing.
What I'm hearing about visual programming today sounds very similar. Everyone has a list of complaints, but they're very specific, and very fixable. No good editors -- agreed! The solution, then, is to write a good editor, not to throw out all of visual programming. (Remember what the machine language programmers said about compilers, back before we had decent text editors?) No good diff tools -- agreed! But once upon a time, text didn't have good diff tools, either, so we wrote them. And so on.
If you start with the premise that visual programming is bad, then you will see a list of 10 problems as evidence that it's insurmountable. If you start with the premise that visual programming is good, then a list of 10 problems is your TODO list for the next year.
How do you talk about them? How do you write about them?
It may well be solvable - I'd like it to be, I spent a couple of years of my spare time toying with visual languages, and I still think as an idea there is a lot of potential. But you need a notation that can be read out in a way that makes semantic sense, the way mathematical notation (despite my very many reservations about mathematical notations) can. And mathematical notation is a good example of all the problems it brings in terms of tool support to even ensure you can reliably typeset it without having to include images of it.
I think it's more likely that we'll see improved interfaces to decorate or explore a textual code-base better, though, possibly with some languages starting to be designed with such tools and representations in mind if/when we start to see something closer to a standard emerge.
in general it is harder to build tools for graphical languages; I think this has been prohibitive. parsers are hard, but diagrams require constraint solvers and whatnot, on top of the parsing.
but we have come a long way since 1990, both in our actual ability to build the stuff (JS and the browser can do powerful visual things) and in our understanding of functional programming and its relation to mathematics.
anyway, thanks for your comments! Nice thread and you are 100% right about the TODO list; we actually built a tool (here is a basic pre-version https://github.com/wires/roadmap-viewer) to manage the intricate roadmap needed for this big project. Stay tuned! :-)
It's easier to reason when you have a visual representation of the solution, even if text is a more concise way to represent complex relationships.
On the contrary, we often diagram to abstract away from details.
That is the challenge a visual programming tool needs to overcome: Most likely we'd need to diagram only some parts. Most likely the level of details and which details we want will depend greatly on who we're talking to and what we're trying to address.
The whole point of visual programming seems to be to abstract more, but inevitably they always hit the exact same problem. The abstraction leaks, and now you have to implement an ugly hack.
The only case I see for visual programming is as a standing for a DSL. If you have a very specific domain where you need a nontechnical person to be able to rewire some things often and autonomously, you might need visual programming (although you might be better off with a 50 line text file)
They are also more concrete and less abstract, which, again, is easier memory-wise.
And maybe because of all of those, they also require less focused attention.
On the other hand, it's possible that it's incidental, that textual languages designers don't care deeply about that niche, or that textual languages(and their libraries) seem to evolve towards power/complexity, or that such requires deep work on an IDE, which is a big barrier-to-entry.
Fundamentally though, icons and pictures are easier to understand. We can train ourselves to recognize other constructs that don't need to be represented with a fixed set of runes and left to right reading order.
Create a master of two languages, one visual and the other textual, and you will have a real, unbiased data point on which is better.
but add (formalised) graphical scaffolding to glue parts together.
Same with visual programming. Why hasn't it taken off? It must be because it's bad.
Maybe whether something takes off or not has more to do with external factors than to do with how good something is.
1. The sheer physical labor involved in creating and maintaining programs. I was going home each evening with severe eyestrain and wrist fatigue, due to the fine mouse work and clicking through menus.
2. Programs are more readable until they get bigger than one screen, then all hell breaks loose. You can arrange things in sub-programs, and use the equivalent of subroutines / classes. These are good techniques in any language, but it compounds the physical trauma problem exponentially.
On a separate note, I wonder if text based languages persist because it's just easier to create them. As a result, people are more likely to experiment with new languages, libraries, and so forth, if the format of a program, and its inputs and outputs, are text. If you want to invent a new graphical language, you have to create a full blown graphical manipulation package, and make it work on multiple platforms, just to get started. That's a huge amount of work, and it doesn't necessarily attract the same people who are interested in language development. The result is a more vibrant pace of development in languages if you're willing to give up graphical representation.
Also 3, you really miss source control tools that are smart enough to merge changes by different people in different branches. And useable diffs.
#2 sounds like you might want to take a look at https://en.wikipedia.org/wiki/DRAKON. Albeit Petri nets, as described in the OP article, also address many of these grievances.
As long as the programs are relatively simple or are amenable to subroutines, it is one of the easiest languages to learn, teach, and make simple changes. My only complaint for our use case was lack of good source control.
The Fibonacci diagram isn't any clearer than the code. The Petri net animation seems as likely to obscure as it is to enlighten.
I think there's space for making better use of graphical environments, and modern IDEs are already stepping up this kind of capability - code folding, mouseover hints, small automatic parameter annotations. I still haven't seen any case for visual programming.
The trick seems to be recognising when this class of tool is applicable and when it isn't. In particular, I've seen some horrific things built in "visual" integration tools that apparently "didn't need developers" but were far more complex than some normal code would have been.
I think much of what makes VP fail at some point is in the details. Programming is often mostly about the details which is why abstractions seem to leak all the time.
When you have to think about the details, a denser representation is extremely helpful. Visual languages generally aren't good at showing enough of the details at once.
Aero/defense, automotive, etc. all use some form of visual programming. Controls engineers rarely write code; the systems are way too complex. E.g. I highly recommend watching this video from JPL to get an understanding of where such tools excel. It's about simulating, iterating and then having scientists and engineers autogenerate the code they couldn't possibly write or test.
E.g. cruise control
Unreal 4 blueprints I guess :)
For low-level code such as "add two and two together" they are pain. But for expressing "get complex entities, retrieve, mix and match required components, and output a set of complex entities as a result", it's surprisingly good. If used correctly :)
Disclaimer: only judging from Youtube videos and tutorials
The general approach LV takes is to model computation as a data flow graph. Constructions like iteration, selection, etc. are (were?) all modeled as rectangular regions within the graph where portions of a graph can be swapped out for others or run multiple times. Graphs can also be nested to provide a means of abstraction. Execution has gone through changes over the years, but it's efficient: compiled to machine code with LLVM, and there are also versions that compile LV code to run directly on FPGA's (on some of the hardware products sold by the same company). It also takes advantage of the implicit parallelism that sometimes crops up in data flow graphs.
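The data-flow-graph model described above can be sketched in a few lines: a node fires as soon as all of its inputs carry values, and all ready nodes are independent, which is where the implicit parallelism comes from. This is a toy scheduler for illustration, not how LV is actually implemented.

```python
# Toy dataflow evaluator: each node lists its input wires, its output
# wire, and a function. A node fires once all its inputs have values.

def run_dataflow(nodes, inputs):
    values = dict(inputs)          # wire name -> value
    pending = list(nodes)
    while pending:
        ready = [n for n in pending
                 if all(w in values for w in n["in"])]
        if not ready:
            raise RuntimeError("deadlock: cyclic or missing wires")
        for node in ready:         # independent: could fire in parallel
            args = [values[w] for w in node["in"]]
            values[node["out"]] = node["fn"](*args)
            pending.remove(node)
    return values

graph = [
    {"in": ["x", "y"],      "out": "sum",    "fn": lambda a, b: a + b},
    {"in": ["x", "y"],      "out": "prod",   "fn": lambda a, b: a * b},
    {"in": ["sum", "prod"], "out": "result", "fn": lambda s, p: s - p},
]

out = run_dataflow(graph, {"x": 3, "y": 4})
print(out["result"])  # (3+4) - (3*4) = -5
```

Note that the `sum` and `prod` nodes become ready in the same round: the scheduler discovers the parallelism from the wiring alone, with no annotations from the programmer.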
All in all, LV is theoretically quite impressive. (And since it's been sold for I think over thirty years, commercially quite impressive too.)
As a software engineer, though, I never fully acclimated to the way it worked. If laying out textual code is a challenge, laying out a 2D graph is much worse. The same thing goes for defining sub-graphs - 'naming a function' becomes the much worse problem of 'drawing an icon', or maybe even drawing a family of icons with a common theme. (Although I think LV's been extended with a nicer icon editor to help with this.) Input is similarly a challenge... textual tasks that can be split across two hands and ten fingers become focused on a single hand/finger. (I had to rethink my input devices both during and after the time I was using LV to avoid RSI issues.) And there are also issues with source code control. LV has some stuff baked in, but there are many years of industry-wide experience managing textual representations of code and some good tools for doing it. Switching to a different representation for code means, necessarily, deviating from that base of wisdom and practice.
So, while I think it's a powerful tool (and something more engineers should be familiar with) it's nothing I'd want to do my daily work in.
It made sense to the EEs because they were used to staring at wiring diagrams. I think we'd have a lot more programmers if we could develop similar programming methods that appeal to different ways that people think.
I think this is very well put. I occasionally present on programming topics to my sons' classes. Even as early as 8 or 9, programming is well within their intellectual capacity (and often their desires). So programming tools that don't erect huge barriers to entry... maybe lots of latent value.
It works marvelously if you want to put together a UI to process measurements from a fairly wide variety of instruments. Someone who is very skilled in LabVIEW can run circles around most expert Python or VB/C# .NET programmers trying the same thing. Seriously, it's awesome. You get parallelism "out-of-the-box" with no problem at all. BUT... these benefits only hold true within the problem domains that LabVIEW is good at.
Once you try to use LabVIEW for truly general-purpose tasks it becomes either unwieldy or no better than open-source tools. Eventually, I had to drop LabVIEW when more and more work involved databases, dealing with network protocols, and heavy integration with API's outside of the NI ecosystem. The $3-5K per developer "seat" is also a barrier to entry for some orgs.
I think there's a place for visual programming for certain types of DSL's. Version control, diff'ing, modularization and some of the other things folks are complaining about here are just technical obstacles that can be overcome or worked around with a bit of creativity.
Agreed. In a world where so many good developer tools are essentially free of charge, this aspect shouldn't be overlooked. Neither should the fact that it's fundamentally a proprietary language offered by a single vendor.
I anecdotally see people moving towards Matlab and Python for automation these days, though it's harder without the incredible amount of hardware support provided by NI
For a while, NI provided tools in that space too. There was a product called LabWindows, which was centered around C, and a product called Hi-Q that I remember as being similar to Matlab. I assume that the non-LabView story these days is mostly a public API combined with other people's development tools. (At least that's what I'd hope it would be, given the expense of developing programming language tooling.)
> though it's harder without the incredible amount of hardware support provided by NI
Agreed... the hardware offerings are rather amazing and growing every day (even into some fairly specialized and high-end domains).
Text is a graphical, visual representation. While there are sometimes alternate ways of expressing things, this idea that text is not visual is weird. "Non-textual representations" is a better term, because we already have a rich, complex capability in good ole symbols.
It's ultimately 1D for the computer (a string); but so is an image (which according to you would encode 2D media) and any other media expressible in a countable number of bytes.
1) flatten it down to 1D (e.g. a table becomes a JSON array of objects)
2) move it into a higher dimensional structure like a database. Now you have two problems.
If your programming paradigm supports higher dimensions to tackle your problems, it just gives you a higher level platform to start tackling your problems. Before you could maybe deal with 4 dimensions at the same time at most, now you can deal with up to 5 or even 6 - we don't yet know what new solutions to problems smart people could come up with when being given such tools.
Just as an example, how often do you see binary logic problems in the form of complex if-then-else procedural structures - what if you could represent two decision factors in a tabular form and let the IDE work out the missing cases for you? That's one of the ideas behind subtext.
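The tabular idea can be shown in miniature: make the two decision factors the keys of a table, and the "work out the missing cases" step becomes a mechanical check that an IDE could run for you. The policy names here are invented.

```python
# Sketch of the "decision table" idea: two boolean factors, and the
# policy is data. Tooling (or this check) can report unhandled cases.
from itertools import product

policy = {
    (True,  True):  "ship_express",
    (True,  False): "ship_standard",
    (False, True):  "hold_for_payment",
    # (False, False) intentionally left out
}

missing = [case for case in product([True, False], repeat=2)
           if case not in policy]
print(missing)  # -> [(False, False)]
```

The same logic written as nested if-then-else would silently fall through on the missing case; as a table, the gap is visible and checkable.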
Point is I think we agree - if you think I'm fundamentally wrong I'd like to know more exactly where.
Consider, a large part of the literature of machine learning is talking about taking a hypercubes and consequences of most distance/volume measures. (Quickly searching, https://datascience.stackexchange.com/questions/27388/what-d... talks about this.)
that sounds like a cool way of seeing things, could you explain what dimensions you're referring to?
That make sense?
Everything we can see is visual in that regard. That's what not "visual" in visual programming (or visual diagram etc) means.
Visual programming hasn't taken off because it isn't as good as text for the majority of use cases. This is borne out by more than 50 years of people writing computer programs.
> Visual programming hasn't taken off because it isn't as good as text for the majority of use cases. This is borne out by more than 50 years of people writing computer programs.
...writing computer programs with text. this bias is massive because nearly no one writes visual programs. a lot of that is due to availability of visual environments, some of which are quite expensive (e.g., labview). most of it is due to bias.
if every single computer science and computer engineering major starts off their education being taught "this (i.e., text) is how you program and interact with computers", that sets up a massive bias that is nigh impossible to overcome. any study that does not address this bias is flawed. hence, we have people who have never even programmed in a visual language proclaiming visual programming doesn't work. i constantly hear text-based programmers say "oh, visual programming is good for niche things or trivial examples, but it doesn't scale", and yet i have developed large, complex applications with visual programming environments. you know why? because i treat it as real software. i modularize. i take data abstraction seriously. i accept the dataflow paradigm as the key paradigm and build things off of that.
and that leads me to another point. the dataflow paradigm does not map well to text-based languages at all, but it is a very powerful paradigm. visual programming languages are very good at the dataflow paradigm, and so from that alone, they seem to be required if we are to efficiently program dataflow-based systems.
Visual programming is inefficient in its use of screen or paper real estate. How many screens full of Scratch would it take to represent a complex program? Too many to realistically read or scan.
Here again, visual programming is not new. If there were some great advantage in it, it would be more widely adopted. This is a solution in search of a problem.
so here, you are taking what amounts to a toy and something directly marketed towards children as your shining example of a visual programming language.
in my opinion, labview is the most complex, general-purpose visual programming environment, and people have clearly written rather large, complex applications with it. why isn't it adopted more? a lot of reasons, one of the biggest being cost, another its association with a particular domain, though i consider it a general-purpose language. even if one gets past that, there is still a huge bias that text is THE way to program and interact with computers.
well put, couldn't agree more, I often use scratch as an anti example of diagrammatic programming.
visual programming has this topological aspect to it (move a box around and don't change the program), and it demands compatibility with this from the underlying thing, whatever it is, state machines or stream processing functions or whatever.
labview is a great example of a practical use of it; with some relatively minor but critical modifications you could use it for so many more applications... maybe one day statebox can fill that gap, but we have a lot to do still
Having a language compile to C (as many languages do) does not at all mean that writing in that language is the same as writing in C and that the compiler is essentially just an IDE or for C. Languages, visual or textual, encourage us to think in a certain way. Even though a visual language compiles down to the same thing as a textual language does not necessarily make them equivalent and certainly doesn't say anything about the kinds of problems they help you solve or the type of thinking that they encourage.
I actually really, really like the idea of non-textual interpretations of data and code, and use them when text is cumbersome. But they are difficult to do well. Jupyter is an excellent step in the right direction, and a reason it has taken off, I think. Albeit within the statistics subset of software development.
I think it is hopelessly dismissive to denigrate the efforts of software researchers over the past 50+ years and say that they didn't try to figure out better ways of representing ideas. There have been great minds trying to figure out a better way to describe computation. Math and textual programming languages are what they keep coming back to! However, I think we should definitely keep trying, but the basis of new efforts has to be an understanding what text (and math formula) gives us.
people always say this, but it almost always comes from people who haven't actually tried to do so or seriously programmed in a visual language before.
> comes from a lack of understanding what text is and how well it works.
i don't think that's the case at all. i am a proponent of both text-based and visual programming languages, hoping to understand hybrid approaches better. if anything, i know where text really doesn't work. for example, text-based languages are terrible at representing the dataflow paradigm and are often more complex than they need to be. visual languages are rather good at this. so we have the situation that text-based languages are terrible at dataflow and no one cares about visual languages, so the dataflow paradigm remains relatively unused, only showing up implicitly in actor-based systems and in minimal ways (simply as pipes) in functional programming languages.
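The "pipes are minimal dataflow" point is easy to illustrate: a linear flow reads naturally as nested or chained calls, but as soon as the graph has a fan-out and a merge (a diamond), text forces you to name every intermediate wire. A small example, with invented stand-in computations:

```python
# A linear dataflow is pleasant as a chain of calls:
clean = sorted(set(s.strip() for s in [" b ", "a", "b"]))  # -> ['a', 'b']

# But a diamond-shaped flow (one value feeding two branches that
# later merge) forces named intermediates, losing the "pipe" feel:
def diamond(x):
    doubled = x * 2          # branch 1
    squared = x * x          # branch 2
    return doubled + squared # merge point

print(diamond(3))  # -> 15
```

In a wire-based language the diamond is just two wires leaving one node; in text, `doubled` and `squared` are exactly the non-visual variable names the earlier comments complain about.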
> I think it is hopelessly dismissive to denigrate the efforts of software researchers over the past 50+ years and say that they didn't try to figure out better ways of representing ideas.
i don't think so at all because i really don't think they've tried to specifically understand visual paradigms. i haven't seen efforts to do so, but i would love to see examples if they exist. i have spoken to a rather well-known computer scientist at a conference, and when i mentioned my interest in visual programming, i was immediately cut off mid sentence with the exact phrase "i don't believe in it".
It's worth noting, though, that text is a serial or linear way of encoding information through symbols. In that respect it inherits its structure from spoken language. The significant visual aspect of text is the forms of the glyphs. Typographers definitely do interesting visual things with text, but this hardly ever happens in programming language syntax.
On a similar note, I get annoyed when Eugenia Cheng claims that the diagrams of category theory are "visual". They are graphs of nodes and edges, which means that they convey topological information, i.e. they are not images of anything in particular. Category theorists deliberately use only a tiny, restricted set of the possibilities of drawing diagrams. If you try to get a visual artist or designer interested in the diagrams in a category theory book, they are almost certain to tell you that nothing "visual" worth mentioning is happening in those figures.
Visual culture is distinguished by its richness on expressive dimensions that text and category theory diagrams just don't have.
 "Visual representation has always been a strong component of my work in category theory, which is a very visually driven subject: we turn abstract ideas into diagrams and pictures, and then take those pictures seriously and reason with them."
I think I understand what you mean, but I'd say this unbounded "richness" is precisely what you must avoid in a programming language. In a programming language you want constraints, not freedom. Your visual language must convey specific, unambiguous information and not be open to interpretation or confusion. A program is inherently closer to a formula than to art.
As a complete and irrelevant aside: I wouldn't assume visual artists will consider category theory diagrams artistically uninteresting. Artists are an unpredictable bunch, capable of finding beauty in the most unlikely things ;)
Name-dropping category theory via including a diagram in artistic work doesn't mean that the diagram itself is visually relevant. It would essentially be functioning as a sign rather than an image. (Feynman diagrams are a totally different story.)
As far as richness goes: I don't think programming languages have to be austere. "Richness" might be a divisive term to use for it. I just mean a high degree of expressiveness. Also, I guess, a high degree of elaboration. A rich type system is not necessarily a self-indulgent, inconsistent, dangerous one; it might instead be the end result of a rigorous process of elaboration according to strict criteria.
This is called "mathematical notation".
I remember thinking, "this can probably be visualized as a little graph in a frame in the corner."
Now that I'm far more comfortable, I doubt I'd use one. But I think visual tools are a brilliant idea for spanning the gap between beginner and expert. At some point you just stop using tools and you disable them in your editor.
Bonus anecdote: I vividly remember making a leap at around 9 or 10 years old from the LEGO Mindstorms visual programmer to writing nonsense AppleScripts and having one of those emotional floods of epiphany about my power over my computer.
> diagrammatic reasoning in particular, is a formidable tool-set if used the right way. That is, it only seems to work well if based on a solid foundation rooted in mathematics and computer science
Code is formal: the symbols have a nearly-unambiguous interpretation as a running program. If there are to be diagrams they must be formal and systematic in their use of notation.
In particular, the author is right that some control systems and interactions are best thought of as state machines, and statecharts are the traditional tool for reasoning about these.
However, the bitcoin ATM example gif in the middle also shows what the weakness is: that's not a complete diagram! It lacks all the error and early return states, timeouts etc that would be needed in a real system. Admittedly this plagues traditional programming as well - qv. exceptions vs. Go style mandatory error checking.
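A statechart only earns its keep once those unglamorous transitions are in the table. A sketch of what a more honest version of the ATM machine might look like, with the error, timeout, and cancel paths spelled out as data (all state and event names here are invented):

```python
# Hypothetical statechart for the ATM example, including the
# error/timeout/cancel transitions a real system needs.
TRANSITIONS = {
    ("idle",         "insert_cash"): "counting",
    ("counting",     "count_ok"):    "confirm",
    ("counting",     "count_error"): "refund",    # error path
    ("confirm",      "accept"):      "dispense_btc",
    ("confirm",      "timeout"):     "refund",    # timeout path
    ("confirm",      "cancel"):      "refund",    # early return
    ("refund",       "done"):        "idle",
    ("dispense_btc", "done"):        "idle",
}

def step(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition for {event!r} in state {state!r}")

s = "idle"
for e in ["insert_cash", "count_ok", "timeout", "done"]:
    s = step(s, e)
print(s)  # back to "idle", via the refund path
```

Because the machine is a table, "which events are unhandled in which states" is a query, not an archaeology project; that is exactly what the incomplete gif can't show.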
Trying to do a whole program this way also seems like a bad idea; that's the hell of UML that was briefly a fad (qv Rational Rose). Maybe we can just sprinkle a bit in the right places?
Having the diagram represent "control flow" seems to be an anti-pattern, since it gets bogged down in detail. Either dataflow, event flow or state transition seems to work better.
I sometimes wonder if the right way forward would be something a bit like the old "pic" language: https://www.oreilly.com/library/view/unix-text-processing/97... ; SVG is too complex and too XML to cleanly interleave in a program, but pic might work well.
This is similar to free monads, where the choice of a specific interpretation of the DSL is delayed, and even multiple interpreters can coexist.
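Not a free monad proper, but the core idea (a program as plain data, given meaning only later, possibly by several different interpreters) can be sketched in a few lines of Python. The instruction names are made up.

```python
# The "delayed interpretation" idea in miniature: a program is just
# data, and different interpreters give it different meanings.
program = [("get", "balance"), ("add", 50), ("put", "balance")]

def run_real(program, store):
    """Effectful interpreter: actually reads and writes the store."""
    acc = None
    for op, arg in program:
        if op == "get":
            acc = store[arg]
        elif op == "add":
            acc += arg
        elif op == "put":
            store[arg] = acc
    return store

def run_trace(program):
    """Pure interpreter: just logs what would happen."""
    return [f"{op}({arg})" for op, arg in program]

print(run_real(program, {"balance": 100}))  # {'balance': 150}
print(run_trace(program))  # ['get(balance)', 'add(50)', 'put(balance)']
```

A diagram plays the same role as `program` here: an uninterpreted structure that a code generator, a simulator, or a proof tool can each consume in its own way.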
This is not a new idea at all - I'm strongly reminded of the whole Model Driven Architecture brouhaha back in the 1990s and early 2000s. Needless to say, it didn't work. You can't do "code generation" from a high-level architectural diagram and expect to end up with a functioning system! At best, if you really do it right, the high-level design might enable you to specify type-like properties that constrain the low-level implementation in a broadly helpful way. But you still have to write all the low-level code!
 And this is in actuality incredibly optimistic - MDA and UML didn't even manage that! Instead, all the pretty diagrams (1) were inherently fuzzy, so they did not embody any actual constraints on the implementation, and (2) even in the best of cases, got immediately out of sync with the ground-level truth of the actual implementation.
> if you really do it right, the high-level design might enable you to specify type-like properties that constrain the low-level implementation in a broadly helpful way
so this is exactly what we do. we have a general way to specify boxes and wires and if you give me some sort of type system and a functor and voila, we can produce some well behaved code. nothing is hand-wavy about it, or "complex", like specialized flags or properties of boxes, just some simple mappings
Error handling is not really a problem; in fact, a lot of the time is spent thinking hard about each and every error that can occur and modelling it out ~ making sure the machine behaves well.
Problems are rather about machine synchronisation and how processes communicate (or rather, can we autogenerate proofs about their behaviour under communication).
thanks for the comments
Maybe there's a place for for text and visual languages together, I'm not sure. Sometimes I have the feeling while staring at an editor with 3 text files on screen that a huge chunk of my brain is sitting on the wayside collecting dust. Yes, there is some amount of visual/spatial thought involved in navigating text, and maybe even mapping that text/project onto an abstract visual space, but I doubt this really takes full advantage of our extremely powerful visual cortex.
I once got dumped a BI project that was written in a visual tool. Simple things like tracing how a field was derived was impossible, because there was no search function. I could search the XML file it produced but that was impenetrable.
Am I allowed to condemn all textual programming based on the last textual program a company asked me to maintain? It had a single function over 1500 lines long, which took over a dozen parameters, had well over a dozen exit points, re-used variable names and values (sometimes unintentionally) between otherwise independent sections, and had zero documentation. And I had to fix a bug in it. "Impenetrable" does not begin to describe it. Text-based programming is a disaster.
Yes you can make horrible code in text too, but it is a lot easier to not do that.
Maybe you're right that it's unfair for me to dismiss the paradigm as a whole based on one bad experience. But it just seems to me that the mindset lends itself to a write-only environment.
but this is mainly because those BI tools themselves and the graphical languages they use have issues.
when done properly, diagrams can be searched similarly to Haskell's hoogle, Purescript's pursuit, etc... in addition you could specify a template diagram it should match
Otoh, it does often lead to job security.
While most people don't write networks visually, they are often the most effective tools for communicating one's results (whether it is a neural network architecture or just a single block).
Moreover, in particle physics people use Feynman diagrams a lot. And they are nothing more or less than a graphical representation of summations and integrations over many variables.
When it comes to languages, while there are some interesting approaches (e.g. https://www.luna-lang.org/) the only one I actually used was LabView (in an optics laboratory, where it is (or at least: was) a mainstream approach). For some reason, even https://noflojs.org/ didn't catch enough traction.
Coding uses the keyboard. Once you get good at using it a keyboard is a much faster input device than a mouse or a touch screen, especially for complex highly structured input like text or code.
Visual programming relies on the mouse or the touch screen so you spend an inordinate amount of time clicking, tapping, dragging, positioning, etc., all of which is irrelevant to what you're trying to achieve.
Maybe a visual programming system that used readily learnable keyboard input or even some novel form of touch panel or mouse input that eliminates the need to futz around with dragging and dropping and positioning would be the way to go?
Or... https://neuralink.com ??
Let me show you an example (gif)
I press "Ctrl + space" to open QuickDrop, type "irf" (a short cut I defined myself) and Enter, and this automatically drops a code snippet that creates a data structure for an image, and reads an image file.
If you are efficient at this type of input, the "dragging/dropping/rearranging" is similar to refactoring that you would do in an IDE.
The only difference is that there is something called secondary notation in many visual languages (most people are not aware of it; I'm only familiar with it because I've done research on the topic). It is how the code is visually arranged.
How code is arranged is a kind of quality parameter for visual code (google examples of "spaghetti code"). There are typical patterns that are instantly recognizable to an experienced user, ways of using distance and direction to group connected parts.
I actually played with alternative forms of input for LabVIEW, mainly gesture control and "drawing" on tablets. It sounds like a fantastic idea, but only for five minutes. After that, your hands start to hurt. The reality is that keyboard and mouse are heavily optimized tools for input (minimal movement of fingers, and we have lots of muscle memory), and don't restrict you. It's like saying "I can type X words per minute" and thinking that typing faster would help you code faster.
no wires, composition of boxes, no vi bindings yet, but def. something wanted and what we are going for
I want vi/emacs/spacemacs for diagrams
but once that's working, I'm all for neuralink + VR for "diagram dreaming"
The first one is concerned with solving problems (domain experts). They don't really know much about programming, but they know how to solve problems. Once in a while someone clever creates a visual tool for them and they become magically super productive relative to their peers. Be it business process automation (BPMN, workflow automation), signal processing, simulation programs, webscraping, even Excel. However, as these people get proficient in their top-down learning of programming, they start to hit the limits of the tool. Then you see the typical spaghetti code, because the visual tool lacks basic programming constructs like loops, functions and conditionals, which would nicely compose the mess away. Additionally, it can't scale beyond RAM and is hard to put into version control, because they are not in control of the text representation of the objects they work with, even though the software uses one under the hood.
The second group of people are programmers. They start learning bottom-up, ie. from conditionals, loops, functions, threads, etc. to actual problem solving. They know all the stuff about proper branching, version control, how to structure the code, programming paradigms, etc. They don't get stuck in spaghetti code, because they have super composable functional languages, where any pattern or duplication can be abstracted away as a function.
There is a huge gap between the problem-solvers and program-creators.
Anything which can be represented as a visual language can be also represented as a text. Unfortunately we don't have textual programming languages powerful or intuitive enough to cater to top-down folks.
I would go as far as to suggest that our current formalisms are insufficient for this task. Lambda calculus is a very bad abstraction for working with time and asynchronous processes, for example. Workflow automation, where 99% of the CPU time is spent waiting for real-world tasks, doesn't map to lambda calculus well. Other formalisms like pi calculus or Petri nets are much better suited, and unsurprisingly the visual programming tools often resemble a Petri net.
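To make the Petri-net point concrete, here is a minimal sketch of a Petri net interpreter in Python. The places, transitions, and the "approval workflow" are invented for illustration; it isn't the model of any particular tool. Note how "waiting for the real world" is just a transition that isn't enabled yet, with no callback or async machinery:

```python
# Minimal Petri net sketch: places hold token counts; a transition fires
# when every one of its input places holds at least one token.

def enabled(marking, transition):
    """True when every input place of the transition has a token."""
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    """Consume one token from each input place, produce one in each output."""
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

# A tiny approval workflow: both the document and a reviewer must be
# available before 'review' can fire. Waiting is represented directly.
review = (["doc_ready", "reviewer_free"], ["approved"])
m = {"doc_ready": 1, "reviewer_free": 0}
assert not enabled(m, review)   # blocked: waiting on the reviewer
m["reviewer_free"] = 1
m = fire(m, review)             # now enabled; tokens move
```

The natural drawing of this (circles for places, boxes for transitions, arrows for flow) is exactly what the workflow tools end up looking like.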
This is quite the assumption!
Bottom-up text-based programming leads to much greater complexity because most programmers don't properly model their software (e.g., with state machines, statecharts, Petri nets, activity diagrams, etc.).
But it's not entirely their fault -- code is inherently linear. Mental models are not - they're graph-based (i.e., directed graphs, potentially hierarchical). Text-based code is merely trying to shoehorn graph-based mental models of what the code should do into a linear format, which makes it less intuitive to understand than a visual approach.
Correct, it was an exaggeration. Bottom-up programmers are supposed to have the tooling not to end up with convoluted code, but they somehow manage to do it anyway.
> Bottom-up text-based programming leads to much greater complexity because most programmers don't properly model their software (e.g., with state machines, statecharts, Petri nets, activity diagrams, etc.).
I'd argue that these text-based programming languages and computation models don't correspond to human intuition when they solve a problem and that is the main problem.
> But it's not entirely their fault -- code is inherently linear. Mental models are not - they're graph-based (i.e., directed graphs, potentially hierarchical). Text-based code is merely trying to shoehorn graph-based mental models of what the code should do into a linear format, which makes it less intuitive to understand than a visual approach.
This is one of the limitations of bottom-up code. It is easy to represent linear program flow. It is not sufficient though, as you point out, problems in real world are graph based in general.
On the other hand not all code is linear. Eg. looping is a typical cycle in a computational graph, or petri net when you represent a data flow graph.
init --> loop body --> end
The same applies to parallelism - does it represent divergent choice, or a fork/join structure where independent computations can be active at the same time? You can't make both choices simultaneously within the same portion of a diagram! Of course you could have well-defined "sub-diagrams" where a different interpretation is chosen, but since the only shared semantics between the 'data flow' and 'control flow' cases is simple pipelining that's so limited that it isn't even meaningfully described as "visual", it's hard to see the case for that.
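For the "looping is a cycle in the graph" point, here is a toy sketch in Python: the loop is encoded as an explicit back edge from a check node to the loop body, and a tiny interpreter walks the graph. The node names and graph encoding are made up purely for illustration:

```python
# A loop drawn as a cyclic dataflow graph: check --> loop_body is the back
# edge that closes the cycle (init --> loop_body --> check --> end).
graph = {
    "init": "loop_body",
    "loop_body": "check",
    "check": ("loop_body", "end"),   # (back edge, exit edge)
}

def run(n):
    """Walk the graph, iterating loop_body until the counter reaches n."""
    node, count = "init", 0
    while node != "end":
        if node == "init":
            count, node = 0, graph["init"]
        elif node == "loop_body":
            count, node = count + 1, graph["loop_body"]
        elif node == "check":
            back, out = graph["check"]
            node = back if count < n else out
    return count

assert run(5) == 5
```

Whether `check`'s two outgoing edges mean "choose one" or "activate both" is exactly the control-flow vs. data-flow ambiguity described above; here we hard-code the "choose one" reading.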
The top-down folks in math disciplines have Matlab, Julia and R. Visually they often use Simulink or LabVIEW. These last are more library than language.
Because of their math background, the Matlab code usually gets written in a functional style, which Matlab supports really well. No spaghetti.
in order to make diagrams _as_ composable as functional code, you need proper theory of diagrams and a "compiler" that checks your diagrams for "type errors"
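As a toy model of what "type-checking diagrams" could mean, suppose a box is just a name plus an input and output wire type, and composition is only legal when the wire types meet. This is an invented sketch, not statebox's actual theory:

```python
# Toy "diagram type checker": a box is (name, input type, output type);
# composing two boxes is only legal when f's output wire matches g's input.

def compose(f, g):
    """Plug box f's output wire into box g's input wire."""
    f_name, f_in, f_out = f
    g_name, g_in, g_out = g
    if f_out != g_in:
        raise TypeError(f"cannot wire {f_name}:{f_out} into {g_name}:{g_in}")
    return (f"{f_name};{g_name}", f_in, g_out)

parse  = ("parse",  "Text",  "AST")
check  = ("check",  "AST",   "AST")
render = ("render", "AST",   "Image")

pipeline = compose(compose(parse, check), render)
assert pipeline == ("parse;check;render", "Text", "Image")
# compose(render, parse) raises TypeError: the wires don't match.
```

A real compiler for diagrams does much more (parallel wires, feedback, etc.), but even this much rules out the miswired-spaghetti failure mode.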
The big advantage I can see is isolating the visual noise. I'm now surprised that we don't have syntax coloring for regex built into our editors...
But I agree a full regex builder sort of makes that entirely moot.
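Python's `re.VERBOSE` flag is an existing, text-only version of "isolating the visual noise" in regex: whitespace and comments are ignored inside the pattern, so the structure can be laid out and annotated. A quick comparison of the same date pattern both ways:

```python
import re

# The same pattern flat vs. with re.VERBOSE, which ignores whitespace and
# '#' comments inside the pattern -- poor man's visual augmentation.
flat = re.compile(r"(\d{4})-(\d{2})-(\d{2})")

readable = re.compile(r"""
    (\d{4})   # year
    -
    (\d{2})   # month
    -
    (\d{2})   # day
""", re.VERBOSE)

assert flat.match("2024-05-01").groups() == readable.match("2024-05-01").groups()
```

Editor-side syntax coloring for regex would layer the same kind of structure on top without changing the pattern at all.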
Anyway, I don't think the problem is that the idea of visual programming hasn't been around. It's all over the place, if you look. And has been for ages. The problem is likely that it doesn't solve a problem that developers who use text-based programming languages have.
My main concern would simply be: What do I lose by going visual?
With text-based code I can read every single character and make sure it's all precisely how I want it. Would that be easier or harder in a visual environment?
If the gains can't definitively offset those sorts of costs, then I'm not going to be very interested.
Visual sound environments are also tools that are taught to music students in conservatories, who don't have any programming ability. These are tools to make sound / art, and people who make sound / art aren't necessarily programmers.
> With text-based code I can read every single character and make sure it's all precisely how I want it.
and when you make art you aren't necessarily looking for the precise. Some people just throw random objects on their patches and wire them almost randomly until it sounds good; writing the patch is entirely part of the artistic process. That's a completely different mindset than "client wants feature X in Y time, what's the easiest way for me to achieve it".
You're right in that art is guided by a different set of goals than client work -- it's inherently more exploratory.
But it's inaccurate (and unfair) to call it imprecise. (Also unfair to assume artists work "randomly.") If you're an artist who cares about your work, you put a tremendous amount of effort into achieving your vision just as you see it. If a tool fails to work as you need it to, you'll abandon the tool, whether it's a paintbrush, a chisel, or Max/MSP.
The fact that two major commercial sound programming environments are visual doesn't necessarily mean people who use them don't understand computers and are just randomly throwing crap at the wall: It means they work best for the professionals who use them to get their creative work done. They are, after all, relatively expensive pieces of software.
(It's also inaccurate to assume artists and musicians don't ever have programming ability. I'll point to myself as an example.)
Precision comes in after the randomness, in the craftsmanship; that's what differentiates a skilled person from an unskilled person; both can do the first part.
When you're programming, or participating in any craft that doesn't prioritize uniqueness or expression, the precision starts from the beginning, though. The only randomness sometimes is where you start, not what you start.
anyway, interesting discussion
For example, each module in an audio program you're wiring together is basically a function with a ton of internal state. But you also need to be able to create your own modules composed of other modules, and maybe some lower level functions, e.g. EQ, signal sources, delay/reverb, etc.
The only visual system I've used is GNU Radio, which requires XML/Python/C++ to create your own blocks AFAICT.
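The "function with a ton of internal state" shape is easy to sketch in plain code. Below is an illustrative one-pole lowpass filter as a module you could wire into a patch; the class and parameter names are invented, not GNU Radio's API:

```python
# Sketch of an audio "block": a process() function plus persistent state.
class OnePoleLowpass:
    def __init__(self, a=0.5):
        self.a = a      # smoothing coefficient in [0, 1]
        self.y = 0.0    # internal state: last output sample

    def process(self, x):
        """One sample in, one sample out; state persists between calls."""
        self.y = self.a * x + (1.0 - self.a) * self.y
        return self.y

# "Wiring" two blocks in series is just chaining their process() calls:
lp1, lp2 = OnePoleLowpass(0.5), OnePoleLowpass(0.5)
out = [lp2.process(lp1.process(x)) for x in [1.0, 0.0, 0.0, 0.0]]
assert out[0] == 0.25 and out[2] > out[3]   # impulse response tail decays
```

Composite modules are then just objects that hold sub-blocks and chain them internally, which is the "create your own modules composed of other modules" requirement.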
but these are different problems:
- macros for diagrams, n-bit-adder for any n
- "boxing up" diagrams, or some sort of nesting of diagrams
- management of state
these are separate concepts in our tool. Macros are a 'meta language': in principle one can write functions in the host language that generate diagrams and then prove that those diagrams are well behaved in some way. There is theory on how to do this for diagrams, but we have not implemented any of it yet
boxing up is a natural operation of the system; we get that for free sort of
state is handled by folding functions over the history, so you get a nice and clean way of dealing with state
but you are right, this ability is very important; it is a lot harder to do tho, but it's possible now I think
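"Folding functions over the history" has a very small illustration in Python: state is never mutated in place, it is recomputed as a left fold of an update function over the event log. The event names here are made up:

```python
from functools import reduce

# State as a left fold over an event history (event-sourcing style).
def update(balance, event):
    kind, amount = event
    return balance + amount if kind == "deposit" else balance - amount

history = [("deposit", 100), ("withdraw", 30), ("deposit", 5)]
state = reduce(update, history, 0)
assert state == 75

# Replaying a prefix of the log gives the state at any past point:
assert reduce(update, history[:2], 0) == 70
```

The clean part is that "what is the state?" and "how did it get there?" are the same question, which also makes time travel and auditing free.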
In academic research, visual languages have largely been used to study non-programmers performing programming-like tasks (e.g., ), but I think that is just the tip of the iceberg!
When I started out there was talk of Java Beans that could be used to program visually. Since then I have encountered a number of visual programming tools, and none of them have been much good and generally don't make things any easier.
The best visual programming I have used was MS Access, in that you could actually produce something useful without touching a lot of code. But you did need to understand database design to do anything useful with it.
The Access query builder is a perfect example: it took an understanding of relational databases to use it. Instead of SQL you would drag and drop the tables together, and set up the conditions in dropdown menus. So basically swapping typing for dragging and dropping. Overall it was about as much cognitive overhead as SQL (maybe a little less for not needing to remember the exact syntax for a left join).
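For reference, the hand-written SQL the builder spares you is roughly the following; table and column names are invented for the example, run here against an in-memory SQLite database:

```python
import sqlite3

# Roughly the SQL an Access-style query builder emits when you drag two
# tables together and link them.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 9.99);
""")

# The LEFT JOIN keeps customers with no orders -- the bit of syntax the
# builder saves you from remembering.
rows = con.execute("""
    SELECT c.name, o.total
    FROM customers AS c
    LEFT JOIN orders AS o ON o.customer_id = c.id
    ORDER BY c.id
""").fetchall()
assert rows == [('Ada', 9.99), ('Grace', None)]
```

Whether you express that join by typing or by dragging a line between two table boxes, the relational concept you must hold in your head is identical.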
I'm saying that there has been little work that investigates how to design effective visual programming tools. The visual component has to be a high priority in the system's design, not just an afterthought.
I don't disagree, but it absolutely is the most overwhelming part to newbies, and is responsible for keeping more people out of the field than pretty much any other aspect of software development.
I hope more people bring those languages into the rest of the programming communities.
Are you working in that particular field ?
I think text scales slightly better, but it could just be that we don't have the tools to scale graphical programs yet. In the end, the hard bit is figuring out what you want the computer to do and then explaining it correctly to the computer. The exact symbols you use to explain it are much less important.
Edit: Added more points
Of course anyone can make spaghetti in any language, text-based or visual, so don’t think of that site as “proof” or anything. Just a little bit of entertainment that’s all.
It is an easier way to organize things than just a table of pin numbers, as opposed to the "make programming easy so we can get a job, but without the math or other gross technical stuff" pitch from clueless advocates.
It's also more for field-programmable gate arrays than code, although you can work with gates directly along with some analog inputs and outputs.
Honestly, people can read and process text much more easily than they can follow active visual nodes and lines. And I'm not talking about an abstract diagram; we are talking about visually designing something detailed enough to be executable. I think most people imagine visual programming based on pretty high-level abstractions, but that's not the reality of programming.
Anything even moderately complex is too big to see all at once but could easily fit on a few screens of text. Making changes is even worse. I can easily move to the top of this paragraph and add another paragraph (which I just did). Have you ever tried moving around 20 visual nodes? Almost impossible to do easily. I just re-wrote a few sentences; easily 6 clicks and typing in a visual environment.
I am the first person to admit that visual programming goes wrong about 95% of the time, but it doesn't have to... also nobody is expecting everyone to swap out their tools for statebox haha
I definitely want a good demo video on the page.
but listen, I get it, I tried literally 50 different such tools, they all suck. Nobody in their sane mind decides to write one if you don't have a good reason. Certainly not in Idris :-)
but we have good reasons:
1) compositional theories of diagrams exists
2) such diagrams can express many "topologically equivalent" expression in the same diagram, this is a huge win! and very different from (AFAIK) all existing node-based editors
3) use cases in decentralised computing require fault-free tools that work like this
but really, I think generally applicable usable 2d-syntax exists...
But fair enough, I hear this a lot.
OTOH, a side-effect of working categorically is that we really assume very little, so we can apply the system to many things (at least the core doesn't restrict us; of course you still need to write code etc).
this is unlike all other systems which are domain specific.
you should think of it much more like an alternative syntax for bits of constrained Haskell code... anyway, thnx for the feedback, I am mainly surprised to see this on HN =)
but I come pretty much from the same standpoint: that visual programming sucks; but it doesn't have to be, that is what we are trying to demonstrate.
However, why does it have to be a binary Text OR visualization? Code gets loaded from a file into an AST, from there you can transform between textual AST representation and other visual representations.
I expect the right answer in the end will be both AST based code views and visual tools to show flow, time/space requirements, interfaces/coupling and such.
Imagine writing code, then using the scroll wheel to reference the overarching project graph. Perhaps with:
- edge size/colour representing function call frequency,
- vertex size/colour representing time/space complexity
- some other cue to represent datatypes.
- another view to represent and inspect datatypes in your application/query them. Ala Quantrix or Lotus improv "spreadsheets".
- Another view showing state changes and functional code.
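Harvesting the data for an "edge size = call frequency" view doesn't need exotic tooling; here is one possible sketch in Python using a tracing decorator. The function names and the idea of rendering counts as edge thickness are illustrative assumptions:

```python
import functools
import inspect
from collections import Counter

# Count caller -> callee edges; a graph renderer could map these counts
# to edge thickness in the project-overview view described above.
calls = Counter()

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        caller = inspect.stack()[1].function   # name of the calling frame
        calls[(caller, fn.__name__)] += 1
        return fn(*args, **kwargs)
    return wrapper

@traced
def parse(s):
    return s.split()

@traced
def main(s):
    return [w.upper() for w in parse(s)]

main("a b"); main("c")
assert calls[("main", "parse")] == 2   # edge weight for main -> parse
```

In practice you would hang this off `sys.setprofile` or a sampling profiler rather than decorating by hand, but the harvested structure is the same: a weighted directed graph.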
Dual syntax representation
"Luna is the world’s first programming language featuring two equivalent syntax representations, visual and textual. Changing one instantly affects the other. The visual representation reveals incredible amounts of invaluable information and allows capturing the big picture with ease, while the code is irreplaceable when developing low-level algorithms."
Sure, visual programming might “not suck”, but it sure as heck isn’t a viable tool for serious software development outside of a few specialized cases.
Text is glorious and portable. I can run emacs in a terminal. I need a software stack just to look at my code like I need a hole in my head.
The thing I really hate about visual programming is that programs are fundamentally tree structured and 2d space introduces extra degrees of freedom which, for me, impose additional cognitive load.
The only parts of a program that reduce to a tree are incidental to their functionality - the nesting-block structure of procedural, C-derived languages, and the tree structure of a filesystem.
For you, apparently yea, but for others maybe not. We don't all value the same things. Visual languages are especially friendly toward new and young learners of programming.
> programs are fundamentally tree structured and 2d space introduces extra degrees of freedom which, for me, impose additional cognitive load.
Interesting, you say this, because I draw most trees in 2d, whereas text is a 1d list of statements. To me, a 2d space better maps to the programs I tend to build.
Most visual tools handle version control poorly. Refactoring and reuse are a joke. They often require a lot more work to do simple things that could otherwise be expressed with a simple expression.
What I do like about this article is how the author shows many different projections of code and I think that's really the key. I personally think before we try to tackle the difficult problem of building a visual editor, a better problem to solve would be projecting code that's written in a simple text editor into different graphical formats.
There are a few challenges with that because turning existing languages into visual representations is a challenge itself. One can imagine the challenges of projecting a function in your code base into a graphical representation.
Let's say you wanted to project a class's method to a sequence diagram. In order to keep your diagram clean, you'll want to display and emphasize method calls to external services and elide local method calls, which aren't so relevant in your diagram. How do you solve that? Do you require the programmer to annotate the various aspects of their code? Do you assume some sort of convention? That might work with green field projects, what about existing code that already violates conventions? What if you want to have different projections of the same code? Do you litter your code with annotations?
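One convention-based answer to "which calls are external?" can be sketched with Python's `ast` module: treat attribute calls on a known set of service-client names as external and elide everything else. The convention and all the names below are invented purely to illustrate the trade-off; real code would need annotations or smarter analysis:

```python
import ast

# Invented convention: calls on these names count as external services.
SERVICE_NAMES = {"payment_api", "inventory_svc"}

def external_calls(source):
    """Return 'obj.method' strings for calls on known service clients."""
    found = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id in SERVICE_NAMES):
            found.append(f"{node.func.value.id}.{node.func.attr}")
    return found

src = """
def checkout(cart):
    total = compute_total(cart)          # local call: elide in the diagram
    payment_api.charge(total)            # external: show in the diagram
    inventory_svc.reserve(cart.items)
"""
assert external_calls(src) == ["payment_api.charge", "inventory_svc.reserve"]
```

The moment existing code violates the convention (a service client stored in a local variable, say), the projection silently degrades, which is exactly the brown-field problem raised above.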
The key point is different diagrams communicate and emphasize some things often at the expense of other things, and automatically figuring out what needs to be communicated is hard for humans, let alone tools.
Personally, I'd be interested in a set of DSLs that would not only project the different aspects of your program, but would work together to verify the correctness of your program. You might have a simple predicate DSL you'd use to describe what invariant conditions must always hold. Maybe you'd have a DSLs to describe how services communicate with each other.
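The predicate-DSL idea can be made concrete with a very small sketch: invariants as named predicates checked against program state. All the names and conditions here are invented for illustration:

```python
# Toy invariant DSL: named predicates over a state dict; a checker reports
# which invariants a given state violates.
invariants = {
    "balance_non_negative": lambda s: s["balance"] >= 0,
    "reserved_leq_stock":   lambda s: s["reserved"] <= s["stock"],
}

def check(state):
    """Return the names of violated invariants (empty list = all hold)."""
    return [name for name, pred in invariants.items() if not pred(state)]

assert check({"balance": 10, "reserved": 2, "stock": 5}) == []
assert check({"balance": -1, "reserved": 6, "stock": 5}) == [
    "balance_non_negative", "reserved_leq_stock",
]
```

A projection tool could then render each invariant as an overlay on whatever diagram it concerns, rather than as a separate document that drifts out of date.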
I would really like to see people who are researching visual programming take these issues seriously and not discount the value of text. There's not one reason why text prevails; there are many. If that's not convincing, consider the Lindy Effect, which suggests that the future life expectancy of a technology tends to be proportional to its current age. In other words, if something has dominated for 50 years, knowing nothing else, it's likely to continue to dominate. Forks, chopsticks and wheels are elegant solutions that have been around for thousands of years. I'm not saying text is the same, but I feel like a lot of people (not all) working on visual programming tend to underestimate text.