As someone who's used Blueprints heavily, I can tell you they really work. You don't need to write a single line of code to program the behaviour of an entire game, including physics.
You do need C++ though if you're going to write stuff like A* pathfinding or complex scene setup like hex-based 3D platforms. That said, it's all doable in Blueprint, and I'd often do it first in BP and then rewrite it in C++ for perf's sake.
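For a sense of the kind of logic that's painful to wire up node by node, here's a minimal A* sketch over a square grid. This is plain TypeScript rather than Unreal's C++ API, with 4-way movement, Manhattan distance as the heuristic, and a linear scan in place of a proper priority queue:

```typescript
type Point = { x: number; y: number };

// A* over a boolean grid (true = walkable), 4-way movement,
// Manhattan distance as the admissible heuristic.
function aStar(grid: boolean[][], start: Point, goal: Point): Point[] | null {
  const key = (p: Point) => `${p.x},${p.y}`;
  const h = (p: Point) => Math.abs(p.x - goal.x) + Math.abs(p.y - goal.y);

  const open: Point[] = [start];
  const cameFrom = new Map<string, Point>();
  const g = new Map<string, number>([[key(start), 0]]);

  while (open.length > 0) {
    // Take the node with the lowest f = g + h (a real implementation
    // would keep a binary heap instead of sorting every iteration).
    open.sort((a, b) => (g.get(key(a))! + h(a)) - (g.get(key(b))! + h(b)));
    const current = open.shift()!;

    if (current.x === goal.x && current.y === goal.y) {
      // Walk the cameFrom links backwards to reconstruct the path.
      const path: Point[] = [current];
      while (cameFrom.has(key(path[0]))) path.unshift(cameFrom.get(key(path[0]))!);
      return path;
    }

    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const n = { x: current.x + dx, y: current.y + dy };
      if (!grid[n.y]?.[n.x]) continue; // off-grid or blocked
      const tentative = g.get(key(current))! + 1;
      if (tentative < (g.get(key(n)) ?? Infinity)) {
        cameFrom.set(key(n), current); // found a cheaper way into n
        g.set(key(n), tentative);
        if (!open.some((p) => p.x === n.x && p.y === n.y)) open.push(n);
      }
    }
  }
  return null; // goal unreachable
}
```

In Blueprint this becomes a forest of loop and map nodes; as plain code it fits on one screen, which is exactly why I'd prototype in BP and move hot paths like this to C++.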
So visual programming isn't broken; it works, and not necessarily only at the DSL level. The UX of the editor has to be fantastic though, for noobs as well as experts. That means good support for quickly producing nodes, finding existing code, refactoring, and Git. You may think it's tedious, but with habits and patterns, with a few keystrokes I could program an entire component's behaviour in no time.
One thing that was magical: with a heavy codebase of visual nodes, the brain can find entire sections very quickly, a bit like that cave analogy in the article. You could group nodes with coloured rectangles etc., and from the way they were laid out you'd have instant recollection of their internal workings. From a vague layout of nodes I knew whether it was about the joystick controller, the player controller, a shader, etc. It's a massive advantage and puts less strain on the brain when "reading" the code again. It makes code re-discovery so much easier. If I had some gnarly behaviour, it would show instantly in the pattern of the nodes and how they were laid out.
I'm convinced that if we had an equally good editor for web technologies, we could produce microservices and front-end code in exactly the same fashion without writing a single line of code. It's just such a massive investment to _do it well_ that I don't see it happening any time soon.
I've put together a POC which allows Node-RED to run on iPhones/Android phones and enables visual programming on the device: https://github.com/alexisread/noreml
What's missing there is a table component to really round out the ability to create a more general-purpose UI.
Exactly, that's what I tried to explain – brains are incredibly good at navigating spatial environments. There are tricks to use this for memorization – "memory palaces" [1], for example – and there is increasing evidence that some layers of the cortex contain so-called "grid cells" [2] that are solely responsible for encoding spatial information.
I've never used Blueprints myself; my idea of the experience comes from YouTube tutorials. It definitely "works" for specific tasks, but still, Blueprints from Hell [3] exists for a reason.
Absolutely agree on the UX part of the editor. To make people switch from A to B, B should be half the price and 10x as good.
Mandatory shout-out to https://luna-lang.org, which tries to take a Blueprints-like approach to the next level, making it possible to write general-purpose code both visually and textually, and to instantly translate back and forth between the two representations. All while being fully open-source.
I've been working on something similar to Blueprints for TypeScript. You can point the editor at TypeScript files and it will parse them and generate nodes. You can then hook the nodes up in the editor and build programs. You can also build nodes out of other nodes.
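A minimal sketch of the parsing step, assuming the public TypeScript compiler API (the `typescript` npm package); the `GraphNode` shape here is hypothetical, just to show how top-level function declarations could become nodes with typed ports:

```typescript
import * as ts from "typescript";

// Hypothetical node shape for the editor: one input port per parameter,
// one output port for the return type.
interface GraphNode {
  name: string;
  inputs: { name: string; type: string }[];
  output: string;
}

function extractNodes(fileName: string, sourceText: string): GraphNode[] {
  const source = ts.createSourceFile(
    fileName, sourceText, ts.ScriptTarget.Latest, /* setParentNodes */ true
  );
  const nodes: GraphNode[] = [];
  ts.forEachChild(source, (child) => {
    if (ts.isFunctionDeclaration(child) && child.name) {
      nodes.push({
        name: child.name.text,
        inputs: child.parameters.map((p) => ({
          name: p.name.getText(source),
          type: p.type ? p.type.getText(source) : "any",
        })),
        output: child.type ? child.type.getText(source) : "any",
      });
    }
  });
  return nodes;
}
```

The real editor has to handle much more (classes, generics, closures), but plain function declarations map onto nodes surprisingly directly.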
I'm still pre-launch, hoping to turn this into an MVP by the end of the year.
> One thing that was magical: with a heavy codebase of visual nodes, the brain can find entire sections very quickly, a bit like that cave analogy in the article. You could group nodes with coloured rectangles etc., and from the way they were laid out you'd have instant recollection of their internal workings. From a vague layout of nodes I knew whether it was about the joystick controller, the player controller, a shader, etc. It's a massive advantage and puts less strain on the brain when "reading" the code again. It makes code re-discovery so much easier. If I had some gnarly behaviour, it would show instantly in the pattern of the nodes and how they were laid out.
This works if you did the grouping and layout. How well does it work if someone else did that, and the way they think doesn't match how you think?
On the other hand, that may be no different than if someone else did the directory structure, the grouping of data in objects, the naming of variables, and so on.
So, taking a devil's advocate approach on this... what would you think about text if you instead organized your code into logically partitioned submodules and followed a very easy-to-understand organization?
In my opinion, when working with a codebase that is well organized, the folder and file structures alone read like a blueprint. So the devil's-advocate question: do we really need visual methods for programming, or do we just need to get better at organizing code?
Edit: assume that good organization lends itself to easy copying/generation of submodules for use elsewhere.
The "enterprise pockets" requirement usually ends up with (practically) nobody learning the tool on their own, nobody knowing about it, nobody knowing how to use it and nobody considering using it.
Any programming tool your average joe programmer cannot try for themselves on their own computer might as well not exist nowadays. At that point everything rides on how good the development company's marketing drones are at convincing non-programmers at other companies to use the tool, which in turn means the tool doesn't really need to be good; it just needs to feel good to whoever makes the purchasing decision.
If developers cannot use these tools themselves, they won't know them; in turn they won't promote them to others and won't propose them to companies, so the companies won't need them and won't seek developers who know them, and as a result knowledge of those tools won't be "tasty" to many companies.
The only way to break this dependency loop is to convince non-developers (executives, etc.) to use those tools, but at that point the incentive shifts from making the tools good for developers to making them good at convincing non-developers to buy them.
The only "big buys club" I see at work here is one of people who couldn't learn anything else.
You should already have seen this with Turbo Pascal and Delphi: how those products moved from cheap $50-$100 packages even students could afford, focused on providing a simple yet comprehensive experience, to grossly priced, overbloated, hype-driven, executive-presentation-friendly monsters with tons of bugs and an over-reliance on bundled third-party components. And how that move turned products that were widely known and loved by developers into relics that only stay alive because their owners prey on those who chose them in the past (when they were good) and cannot move off.
I used OutSystems for almost a year, and I think it is about the best in this area. But I still dislike it in many ways. Its normal paradigm is simple imperative programming, usually over-abstracted; some simple logic flows are just easier to read as text.
My conclusion is that OutSystems is not a complete solution for general-purpose development; it is just a platform that provides simple templates, components and programming. For data-entry systems (what they call "digital transformation") it is OK, albeit with a crazy license. However, if you want to implement real business logic, you are better off implementing it in a general-purpose programming language.
The component system really shows what a leaky abstraction is. For some plugins I had to understand what was going on underneath, and native Cordova plugins are crazy to debug.
In the short run it saves time; in the long run it has no advantage, for two reasons. First, its development speed will not be able to keep up with technology trends, so sometimes what was an advantage is no longer one: for example, it has components, but frontend frameworks also have many components.
(Its component framework is just copied from open source, and it is hard to find documentation for the embedded version.)
Second, the ecosystem just sucks: very few plugins, because plugins are not easy to make, and many of them are not supported by OutSystems, so most likely it's the plugin author who will answer your question on the forum.
True, I have seen it happen with Turbo Pascal/Delphi, but the biggest issue there was Borland almost going broke and the continuous change of hands, which destroyed trust in the product.
There aren't "toy" versions of Cisco, Oracle, DB2, SQL Server OLAP, Sitecore, AEM, LifeRay, WebSphere, SAP, Hybris, PTC, Aicas, Ricoh JVM, Gemalto M2M, ....
And yet there are plenty of projects available to work on with those stacks, including green field projects in companies adopting them.
Borland was going broke because they abandoned their core target market to chase after the enterprise, and that was at a time before open source even got its name and before developers expecting tools for free was a thing.
And have you seen the reputation the systems you mention have among developers (assuming they even know about them; after all, my original point was that the "enterprise pockets" requirement leads to obscurity, with the obvious exception of those that have been around for a very long time), and how clunky they are? Turbo Pascal and early Delphi never had the reputation Oracle has nowadays.
And sure, you can find work with them, but these systems are a minority, and when they are used they are practically never chosen by the developers themselves.
> At least I’ve never seen properly conducted studies involving psychologists or neuroscientists upon introducing yet another new fancy feature into a programming language.
I studied Mechanical Engineering, but did my Master's thesis on "Evaluation of MicroPython as Application Layer Programming Language on CubeSats"* - way out of my depth.
I tried to find prior work on how to choose programming languages for specific domains, and how to judge whether a language is the right tool for a task - and there wasn't much.
I even tried to measure how "easy" a programming language is, in a hilariously misinformed user study. There were 8 people, and I measured the time it took them to implement small snippets in Python vs. C. I had this one dataset with weird outliers, where a person who performed really well just took forever on some of the tasks... later one of the participants told me he'd been texting during the study.
And although I felt really bad about my really bad study, I also felt like the avant-garde of the field ^^
Anyway, if we can find someone to fund it, I'd like to see brain activity scans of programmers using different languages. We already know smartphone usage rewires our brains. I wanna see the brain of a JavaScript developer compared to a Lisper, a Rustacean, ...
> I wanna see the brain of a JavaScript developer compared to a Lisper, a Rustacean, ...
Makes me wonder if a person using multiple different programming languages would have a different looking scan depending on which of their preferred languages they were using.
Actually that wouldn’t entirely surprise me if it did. I think I think in different ways when I program in Rust vs JS or Python at least. (All three of which I use and enjoy for different kinds of purposes.)
I like the distinction between spatial and temporal aspects of programming languages.
I'm wondering if the author knows Lisp though, which helps translate between those two views of code in important ways (homoiconicity, for example).
Also, when trying to understand code, we typically run a simulation of that code in our heads. Since we have limited capacity for doing so, a simple flow with simple data is helpful.
If we hit the limits of our internal code-processing capabilities, we switch to a different mode: experimentation, supported by tools.
In non-interactive languages that typically is a 'unit' test or a sample app or a debugger.
In interactive languages it is the REPL.
A REPL allows us to experiment with the mental models we acquired from reading the code.
Confirming hypotheses by letting the machine run experiments, validating the outcome, deepening our understanding of the code.
This is where I see visualization being most useful. Show the data before our hypothesis, show the data after we've run the experiment (i.e. v'=f(v)). Not just the type of data.
That's where I'd like to see better tools. (One example I'm currently using is re-frame-10x, which shows data organized by epoch and has neat diffs of what changed when an event was processed, i.e. when time moved forward.)
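A minimal sketch of that before/after view, as a plain TypeScript scratch file (the `State` shape and `onEnemyHit` handler are made up for illustration):

```typescript
// A pure event handler: new state from old state, v' = f(v).
type State = { score: number; lives: number };

const onEnemyHit = (v: State): State => ({ ...v, score: v.score + 100 });

// Before: the data our hypothesis is about.
const v: State = { score: 200, lives: 3 };

// Run the experiment.
const vPrime = onEnemyHit(v);

// After: show what actually changed, not just the types.
for (const k of Object.keys(v) as (keyof State)[]) {
  if (v[k] !== vPrime[k]) console.log(`${k}: ${v[k]} -> ${vPrime[k]}`);
}
// => score: 200 -> 300
```

That diff ("score: 200 -> 300", lives untouched) is the epoch-oriented view: what moved when time moved forward.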
I find it baffling that Lisp is still written as text. It's pure AST, so it's easy to visualize and manipulate as a tree. Small trees (definitions of functions and data structures) are composed into the big tree that forms the application. Why not do all development work (reading/writing) on this tree directly? In my view, Lisp should be the first candidate for visual programming.
Agreed, re-frame and its tooling are awesome. BTW, is there any way to drop into a REPL from the context of an event handler?
I don't understand your surprise. Text has many compatibility, readability, and uniformity advantages. How are you going to diff a tree? How are you going to make the display consistent across platforms & tools? How are you going to represent the details of the program?
In the end, you're still just projecting abstract objects onto a 2d plane, and text already does that in a maximally flexible way. Anything else is less.
A program's abstract syntax tree in memory is still a form of text; it's just chopped up (not a linear tape of symbols), and some of the representations are altered. For instance the number 5 becomes 101 or perhaps 10101 (01 type tag) or whatever. Bits are still text: reams of text made of two symbols.
The so-called visual programming languages do not get away from text. They show various boxes or blocks on the screen, but the contents of those blocks are pieces of text.
Every single screenshot of a visual programming language shown in this article is nothing but snippets of text inside boxes.
If we take a Lisp expression like (a a a) and show it as three boxes, each labeled with a, that is not an accurate tree representation. The actual tree is a DAG, in which there are three pointers leading to a single symbol whose name is a. If we show three boxes, then we do not have anything close to a purely visual AST language. We have a language of textual tokens which require passing through a reader so that multiple occurrences of the text a get turned into the same object.
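A minimal sketch of that reader/interning step (TypeScript; the names are illustrative, not any particular Lisp's API):

```typescript
// A tiny "symbol" object plus the interning table a Lisp reader maintains.
class Sym {
  constructor(public readonly name: string) {}
}

const symbolTable = new Map<string, Sym>();

// intern: return the same Sym object for the same name, creating it once.
function intern(name: string): Sym {
  let s = symbolTable.get(name);
  if (!s) {
    s = new Sym(name);
    symbolTable.set(name, s);
  }
  return s;
}

// Reading "(a a a)" yields a list holding three pointers to ONE symbol
// object, so the true structure is a DAG, not three independent boxes.
const list = [intern("a"), intern("a"), intern("a")];
console.log(list[0] === list[1] && list[1] === list[2]); // true
```

Three boxes on screen would be three distinct drawings of what is, in memory, a single object.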
Isn't the most popular Lisp editor, Emacs, AST-aware? I'm a filthy vim-using pleb, but I'm under the impression that Emacs is aware of s-expression-encoded ASTs and has all sorts of 'chords' for manipulating the AST.
To get to the next level and scale to bigger programs, you need to leverage the tons of work information-visualization researchers have done.
I always point people to Prof. Tamara Munzner's work[1][2] that allows you to skip over all the dead ends that people have tried over the decades and gives you a "thinking framework" on how to design visualizations.
[2] is really interesting. A shame the preview only covers the table of contents. There's a search feature, but you are not allowed to see the search results :-/
Visual programming has been quite successful in some constrained domains. The fundamental limitation is that visuals do not represent arbitrary levels of abstraction very well. The visual way to represent abstractions is symbols (e.g. symbols on a map), but when you go a level deeper to create symbols representing other symbols, you end up with text.
I'll probably disagree here. Symbolic meaning is attached to an abstraction only at the second stage, when you have to express it. You can totally understand things and their relationships without attaching words to them – actually, people who learn foreign languages should be familiar with this feeling, when you understand a thing but can't attach words to it.
Here is a quote from Albert Einstein (an actual quote of his, this time, from "Ideas and Opinions" :)):
> (A) The words or the language, as they are written or spoken, do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be “voluntarily” reproduced and combined.
> There is, of course, a certain connection between those elements and relevant logical concepts. It is also clear that the desire to arrive finally at logically connected concepts is the emotional basis of this rather vague play with the above-mentioned elements. But taken from a psychological viewpoint, this combinatory play seems to be the essential feature in productive thought — before there is any connection with logical construction in words or other kinds of signs which can be communicated to others.
> (B) The above-mentioned elements are, in my case, of visual and some of muscular type. Conventional words or other signs have to be sought for laboriously only in a secondary stage, when the mentioned associative play is sufficiently established and can be reproduced at will.
A programming language (visual or textual or whatever) is an expression. How thoughts and ideas are represented inside the mind is a different question.
Quite right: for the sake of a visual mental aid like a 3D image you're limited to three dimensions, whereas the mind doesn't have these limitations and is n-dimensional.
Can you provide any support for the claim that the mind is "n-dimensional"? My understanding is that findings in grid-cell research suggest that the brain's mechanism for dealing with spatial information is exactly "3-dimensional", i.e. optimized to handle our real-world space. Which kinda makes sense from an evolutionary perspective.
That's why even a simple attempt to "understand" a 4-dimensional object wreaks havoc in your mind, and we have to reduce it to some sort of 3D projection or explain it with math formulas.
It's because concepts are just dimensions of information; they don't always have a physical representation. You can associate and create different levels of abstraction and take different views, sort of like an OLAP cube. You're doing this all the time, consciously or not, as you decide on abstractions that cross-cut problem domains. The author touches on this a bit when he talks about zooming in and out to the right level: those are dimensions of the data beyond 3D, and wanting good code to visually look a certain way is another dimension. So those are possible in his model. But imagine needing, say, a function to be in two places at the same time because it's related to two pieces of code, while not wanting to duplicate it or draw lines cutting across the diagram to show this. The mind has no problem with a function being in two places at once but not duplicated, but obviously this is impossible in the physical model.
It's because you're caught up in thinking dimensions are physical; if you realize it's all really just data, it's pretty simple, since you're doing it already.
Yeah that requires practice in getting out of thinking physically.
There's a well-known and very old exercise, dating back to the 9th century, where you visualize a man sitting in a conch shell, and the man has not shrunk and the shell has not changed size. If it's a struggle, what you're seeing is just habit, and has nothing to do with the mind, amusingly.
So that direct sense of understanding something, without any helping intermediates like symbols or visual aids, is just the mind directly. It is its own thing. Another way to look at it: if everything you thought needed a symbolic or physical/visual aid, you wouldn't be able to represent everything; you wouldn't be able to think.
Language fills that gap easily, which is just another form of symbolism. I can think of love because I know the word; feeling the emotion by itself doesn't bind to the concept.
I guess we’ve spiraled down into the mentalese debate, sorry about that.
Yeah, I tried a handful of visual programming tools in the 90s, before I got fluent in a normal programming language. The first steps seemed very easy, and it was motivating how all the possibilities seemed to be one click away. But in reality the fun stopped really quickly, and getting further would have required deep understanding of the tool, i.e. reading a lot of (incomplete) documentation.
There was another tool, whose name I forget, that's used in certain parts of science; in particular it was, at least at one point, in use at the LHC. It seemed quite complete, but complex things could easily become hard to see, at least in the short time I spent with it.
IMHO Borland Delphi is the furthest visual programming has gone so far. I never really understood why IDE developers didn't pursue that path any further. (I know there's still a Delphi, but almost nobody uses it.)
Yeah, probably. Still, this is something I think about, wondering whether programming might eventually become commoditized by new tools. Other commenters have already asked the question; maybe there's an argument one can make for why this won't work.
Yes, sure. Sorry, this is quite a stretch from the topic, because Delphi hardly allows logic to be written visually, in contrast to the other tools mentioned. But you can 'click' together your whole UI. It is very intuitive where to find the most important handlers (onClick for buttons, onSelect for lists...): you double-click, a piece of code gets generated, and your cursor lands within the generated stub for that handler.
Also, you have a good object browser. One time I was playing with some Direct3D plugin; it was really easy to get started with and to move something around (in the 90s!).
Not knowing anything about maintainability or how to approach problems, I found this really fun to work with. I think Microsoft Visual Basic might resemble these things to some degree, but I never tried it. Also, Object Pascal is a more powerful and concise language in comparison.
Oh, and I didn't mention the fabulous documentation. They really did their homework: once you built the help search index, you could search for every available library function. Really cool stuff, actually. I've tried other IDEs, like Borland C++ Builder, KDevelop and Qt Designer, NetBeans, Eclipse, but they didn't even come close. After a friend strongly recommended working with plain editors, I completely lost interest in IDEs. (Nowadays I'd use some React boilerplate to quickly assemble a UI, but that took quite some work and practice ;))
Yes, and for illustrating a process of mostly transformations of data, visual representations are great - especially when you need to help a client understand what is happening.
This article seems to be mostly ignoring what's going on in the sidebars of a decent IDE: file explorers, outlines, type hierarchies, and so on. These are pretty decent ways of getting an overview and navigating a codebase.
There may be a way to do better, but it seems unlikely to be a bunch of abstract nodes and straight lines between them, whether in 2D or 3D.
Interesting topic. I haven't gone as deep into this rabbit hole, but I've also written a "similar" tool that I've used as a visual aid [1] when reading programs written by other people. It's been useful for understanding bottlenecks and weird code patterns/dependencies. I've even printed the graph sometimes and written notes on the paper, drawing circles around logical groups, etc.
I ended up going with a separate color for each type/package.
Nice. I've been doing similar drawings a lot – on whiteboards, on paper, on a tablet, and a few times I've even drawn them to use as an argument in pull-request debates, because words weren't enough to make the point.
> The brain handles them very differently on the lowest level, but what’s interesting is that we can use spatial representation to represent temporal information and vice versa, and, actually, often do precisely that.
The author of this article would probably really enjoy diving into the research on sign languages, which are by their very nature both spatial and temporal languages – in other words, kinematic languages.
I got the pun, but that's actually an interesting point. Natural text also has both spatial and temporal components: when you read fiction, some paragraphs describe the scene, the characters and their relationships, and others describe how they evolve, change and interact.
For this reason, I sometimes enjoy watching the movie before reading the book it's based on: the movie (visualization) is so much better at showing how things look, while text is still superior at following the story, living through the development of the characters, and so on.
So in this article I used graphics a lot – and even more so in the original slides. I actually had to write WebGL programs to show that visual aspect. That's one of the things that frustrates me a lot: it's so hard to just visualize what you mean without being a professional animator or programming 3D scenes. I'd love it to be as simple as writing a post on a social network.
Textual literacy is still orders of magnitude more accessible, cheaper and widespread.
Outstanding post. The author might find it interesting to partner up with game designers on visually representing massive amounts of data for quick grokking by the masses.
I don't think visual representation of structure is actually the hard part; logic and computation seem to me the only difficult part. We're still writing cooking recipes the same way, using lines of text, even though the ingredients are extremely easy to represent with images or icons.
Absolutely. It seems that inevitably you'll have to abstract a block of computation under a single concept representing what the computation is globally about.
Hence the general focus on isolation, interfaces and composability.
Once you've abstracted a set of computations under a good concept, you should be able to understand it from just its name.