
I discovered TouchDesigner [1] a few months back - it's an incredibly powerful visual programming tool used in the entertainment industry. It's been around for well over a decade and is stable enough to be used for live shows. Deadmau5 uses it to control parts of his Cube rig [2]. I've seen a few art installations based around it as well [3].

There are some really amazing tutorials and examples here: https://youtu.be/wubew8E4rZg

[1] https://derivative.ca

[2] https://derivative.ca/community-post/made-love-touchdesigner...

[3] https://derivative.ca/community-post/making-go-robots-intera...




TouchDesigner is indeed super cool. And I correctly guessed what that tutorial was going to be before I clicked on it. :) Matthew Ragan is an excellent tutorial maker. He's also relaxing to listen to.

TouchDesigner really showcases the enabling nature of visual programming languages. You can see your program working, and you can inspect and modify it while it is running. These are very powerful ideas, and visual programming languages are much better platforms for them.

People, i.e. traditional programmers, are really hard on and dismissive of visual programming languages. Meanwhile, people who use LabVIEW, TouchDesigner, vvvv, Pure Data, Max, and Grasshopper for Rhino are all extremely effective and move quickly. Experts in these environments cannot be kept up with by people using text-based environments when building the same application.

Text-based programming is limited in dimensionality. This can become very constraining.


> Text-based programming is limited in dimensionality. This can become very constraining

In contrast, I've often thought that visual programming tools are much more limited dimensionally, and that's why they can become difficult to manage beyond a low level of complexity: you only have two dimensions to work with.

With visual programming tools, the connections between components need very careful management to prevent them from becoming a tangled, overlapping web. In a 2D tool (e.g. LabVIEW), you could make a lot of layouts simpler by introducing a third dimension and lifting some components higher or lower to detangle connections - but then you'd face similar hard restrictions in 3D.

Text-based programs suffer from no such restrictions; the conceptual space your abstractions can make use of is effectively unlimited, and you can manage connections and information flow between pieces of code to maximize readability and simplicity, rather than artificially minimizing the number of dimensions.


> the conceptual space your abstractions can make use of is effectively unlimited, and you can manage connections and information flow between pieces of code to maximize readability and simplicity, rather than artificially minimizing the number of dimensions.

How does this not apply to a visual language like LabVIEW? Drawing the code on a 2D surface doesn't prevent abstraction or arbitrary programs. The way I program LabVIEW and the way it executes is actually very similar to Elixir/Erlang and OTP. Asynchronous execution and dataflow are core to visual languages. You are not "bound" by wires.
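
A rough sketch of what that dataflow style looks like when forced into text (Python with asyncio here, purely as an analogy - this is not how LabVIEW's engine actually works, and the node names are made up): independent nodes start as soon as they can, and a downstream node only fires once everything wired into it has arrived.

    import asyncio

    async def read_sensor():          # hypothetical node with no inputs: starts right away
        await asyncio.sleep(0.1)      # stand-in for slow I/O
        return 42.0

    async def read_config():          # another independent node, runs concurrently
        await asyncio.sleep(0.1)
        return {"scale": 2.0}

    def scale(value, config):         # downstream node: fires once both "wires" arrive
        return value * config["scale"]

    async def main():
        # gather() runs both upstream nodes concurrently, like two parallel wires
        value, config = await asyncio.gather(read_sensor(), read_config())
        print(scale(value, config))   # -> 84.0

    asyncio.run(main())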

When you write text-based code, you are also restricted to 2 dimensions, but it's really more like 1.5 because there is a heavy directionality bias that's like a waterfall, down and across. I cannot copy pictures or diagrams into a text document. I cannot draw arrows from comments to the relevant code; I have to embed the comment within the code because of this dimensionality/directionality constraint. I cannot "touch" a variable (wire) while the program is running to inspect its value. In LabVIEW, not only do I have a 2D surface for drawing my program, I also get another 2D surface to create a user interface for any function if I need one. In text languages, you only have colors and syntax to distinguish datatypes. In LabVIEW, you also have shape. These are all additional dimensions of information.


> Text-based programming is limited in dimensionality. This can become very constraining.

> When you write text-based code, you are also restricted to 2 dimensions, but it's really more like 1.5 because there is a heavy directionality bias that's like a waterfall, down and across.

These are two really good points. Text-based code is just a constrained version of a visual programming environment. So far, most attempts at VP have represented code in terms of nodes and lines (trees), but that does not necessarily need to be the case.

The interesting thing about VP is that it presents a way to better map the concepts and structure of programming, as an abstract activity, onto how we physically interface with our coding environments.

It's far from likely that character- and line-based editing is the mode of the future. Line editing maps to the reality of programming in, to my eyes, such a limited way that it seems the potential for new interfaces and modes of representation is wide open.

It's not that, as some people stubbornly say, there's no better alternative to text-based programming; I think we just haven't conceived of a better way yet. We're biased to think a certain way because most of us have programmed almost solely in text and most of our tools are built to work with text. But that doesn't necessarily mean the way we've done things will remain the best way indefinitely.

There's so much unexplored territory. VR opens up new frontiers. What if the concepts of files, lines, and workspaces were to map to something more elemental to programming as an abstract concept? What if we didn't think so much in terms of spatial and physical delineations, and instead in terms of something else? Blind programmers have a different idea of what an editor is. Spreadsheet programs "think" in terms of cells in relation to one another. What about different forms of input? Dark Souls can be beaten on a pair of bongos. Smash Bros. players mod their controllers because the default mode of input isn't good enough at a high level. Aural and haptic interfaces are unexplored. Guitars and pianos are different "interfaces" to music. Sheet music is not a pure representation of music.

I think there's a mistaken belief that text == code, that text is the most essential form of code. Lines and characters are not the essential form of code. As soon as we've assigned something a variable name, we've already altered our code into a form that assists our cognition. Same with newlines, comments, filenames, and the names of statements and keywords. When we program in terms of text, we're already transforming our interpretation of code and programming; we've already chosen and constrained ourselves to a particular palette.

The most essential form of code is (it depends on the language, but generally) data structures and data flow. So far, our best interpretation of this is in the form of text - lines and characters, input by keyboard onto a flat screen - but this is still just one category of interpretation.

All this is to say that text is not necessarily the one and only way, and it's too soon to say that it's the best way.


These are all excellent points, and I agree wholeheartedly. I'm glad someone else gets it. :) I'm going to favorite this comment to keep it in mind.

The way I see it is that we've had an evolution of how to program computers. It's been:

circuits -> holes in cards -> text -> <plateau with mild "visual" improvements to IDEs> -> __the future__

I think many programmers are just unable to see the forest for the trees and weeds, but visual languages have a lot of power in specific domains like art, architecture, circuit and FPGA design, modeling tools, and control systems. I think this says something, and it also overlaps with what the Racket folks call Language-Oriented Programming, which holds that programming languages should adapt to the domain they are meant to solve problems in. These visual languages are all separate things, but they are part of the same argument: domain-specific problems require domain-specific solutions.

So what I believe we'll have in the future are hybrid approaches, of which LabVIEW is one flavor. Take what we all see at the end of whiteboard sessions: diagrams composed of text and icons that represent a broad swath of conceptual meaning. There is no reason why we can't work in the same way with programming languages and computers. We need more layers of abstraction in our environments, but it will take work and people looking to break out of the text=code mindset. Many see text as the final form, but like I said above, I see it as part of the evolution of abstraction and tools. Text editors and IDEs are not gifted by the universe and are not inherent to programming; they were built.

This has already happened before with things like machine code and assembly. Those languages do not offer the programmer enough tools to think more like a human, so the human must cater to the language and deal with lower-level thought. I view most text-based programming languages similarly. There are just too many details I have to worry about that are not applicable to my problem and don't allow me to solve the problem in the way that I want. Languages that do provide this (like Elixir, F#, and Racket) are a joy to use, but they begin to push you toward a visual paradigm. Look at Elixir: most of the time, the first thing you see in an Elixir design is the process and supervision tree. And people rave about the pipe operator in these languages. Meanwhile, in LabVIEW, I have pipes, plural, all going on at the same time. It was kind of funny, as I moved into text-based languages (I started in LabVIEW), to see the pipe operator introduced as a new, cool thing.
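
For anyone who hasn't seen it, here is roughly what that pipe idea looks like in text. Python has no |> operator, so the pipe() helper and the little functions below are hypothetical stand-ins, just to show the single linear wire a text pipeline gives you:

    from functools import reduce

    def pipe(value, *funcs):
        """Thread value through each function in turn: pipe(x, f, g) == g(f(x))."""
        return reduce(lambda acc, f: f(acc), funcs, value)

    def clean(text):
        return text.strip().lower()

    def words(text):
        return text.split()

    def count(items):
        return len(items)

    # Nested calls read inside-out...
    print(count(words(clean("  Hello Visual World  "))))        # -> 3

    # ...while the pipeline reads left to right, like following a single wire.
    print(pipe("  Hello Visual World  ", clean, words, count))  # -> 3

The difference on a LabVIEW diagram is that many of these chains sit side by side, all executing as their inputs become ready, whereas the text version hands you one at a time.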

In general, I have many, many complaints about LabVIEW, but they do not overlap with the complaints of people unfamiliar with visual programming languages, because I've actually built large systems with it. Many times, when I go to other languages, especially something like Python, I feel I've gone back in time to something more primitive.


When VP is mentioned, I think people automatically picture hairy nests of nodes, lines, and trees, and slow, inefficient input schemes.

Text-based representations have tremendous upsides (granularity, ease of input, compatibility with existing tools, ease of parsing), but they also have downsides I think people tend to overlook. For example, reading and understanding code, especially foreign code, is quite difficult and involved: it takes a lot of concentration, back and forth with API documentation, and searching for and following external library calls ad nauseam. Comments help, but only so much. Code is just difficult to read and is expensive in terms of time and attention.

> Text editors and IDEs are not gifted by the universe and are not inherent to programming; they were built.

Bret Victor has some good presentations that address this idea. One thing he says is that in the early stages of personal computing, multitudes of ideas flowered that may seem strange to us today. A lot of that was because we were still exploring what computing actually was.

I don't dislike programming in vim/emacs/IDEs. Is it good enough? Yes, but... is this the final form? I think it'll take a creative mind to come up with a general-purpose representation that supersedes text-based representations. I'm excited. I don't really know of anyone working on this, but I also can't see it not happening.


I'd propose that LabVIEW is just as dimensional as any text language while still presenting a 2nd dimension of visual reference.

LabVIEW has subVIs, which are effectively no different from a subroutine or method in another language. LabVIEW has dynamic dispatch, so it can run code with heterogeneous ancestry. You can launch code asynchronously in the background, which isn't even necessary to accomplish multi-threaded execution in LabVIEW (though there are plenty of other gotchas for those used to manual control of threading, along with a couple of sticky single-threaded processes that might get in your way when trying to write high-level reusable code). You can even implement by-reference architectures, adding yet another way to break out of the 2D-ness of its diagrams.

Perhaps a new development for most here will be that LabVIEW is now completely free for personal use (non-commercial & non-academic). Still, like some have pointed out, LabVIEW really shines with its hardware integration. It's the Apple of the physical I/O world. The only reason I avoid it for smaller projects is that it needs its not-tiny run-time engine, which isn't any different from .NET distributables, just more... niche?
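
For those coming from text languages, that "dynamic dispatch with heterogeneous ancestry" bit is the same idea as ordinary subclass polymorphism. A rough Python sketch of the analogy (hypothetical class names and values, nothing to do with LabVIEW's actual class API):

    class Instrument:
        def read(self) -> float:
            raise NotImplementedError

    class Thermocouple(Instrument):
        def read(self) -> float:
            return 23.5   # pretend hardware read

    class PressureGauge(Instrument):
        def read(self) -> float:
            return 101.3  # pretend hardware read

    # One loop handles objects of heterogeneous ancestry: each call dispatches
    # to whichever subclass the object happens to be, decided at run time.
    for device in [Thermocouple(), PressureGauge()]:
        print(type(device).__name__, device.read())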

With text you get top-to-bottom lines of text (a single dimension), and any additional dimensionality has to be conceptualized completely in your mind... or in design tools like UML... which display relations in a 2D manner. SQL design tools these days provide 2D visualizations in a graph-like manner to relate all the linkages between tables. User stories, process flow, and state diagrams are (or at least should be) mapped out in 2D in a design document before putting down code. How does the execution order of functions and the usage of variables provide any more freedom?

All I want to establish is that LabVIEW is another tool in the toolbox. People used to text are used to SEEING a single dimension and thinking about all the others in their head or outside the language. LabVIEW places two dimensions in front of you, which changes how you can (and have to) think about the other dimensions of the software. With skilled architecture, a LabVIEW application will already resemble a UML, flow, or state chart. I do agree that some stuff that feels much simpler in text languages, such as calculations, is much more of a bear in LabVIEW; tasks that are inherently one-dimensional in their text expression suddenly fan out into something more resembling the CMOS gate-logic traffic light circuit I made at uni.

I do embedded uC development with C/C++, I do industrial control systems and automated test in LabVIEW, and I even subject myself to the iron maiden of kludging together hundreds of libraries, known as configuration file editing with a smattering of glue logic, AKA modern web development (only partially sarcastic; if I never have to look at a webpack config file again I'll die happy). I (obviously by now) have the inverse view of most in this thread. For simple stuff I use C#. For microcontroller-based projects I use C/C++. For larger projects I'll use LabVIEW.

Then, when something has to run in a browser, I stick my head in the toilet and smash the seat down against my head repeatedly. Then I'll begin to search Google for the 30 tabs I'll need to open to relearn how to get an environment set up, figure out which packages are available for what I'm trying to do, learn how to interact with the security of the backend framework I'm using, learn the de facto templating engine for said framework, decide which of the 4 available database relation packages I want to use for said backend, and spend a week starting over because I realize one of the packages I based the architecture around was super easy to start with but is out of date, has stale documentation, and conflicts with this other newer library I was planning on using for some other feature... Now I need a cold shower and a drink.

Cheers mates!

P.S. I do find a lot of modern web development fun, but the mental load on top of all my other projects and professional work can be a bit much. I'm sure someone who started out in webdev has the same exact vomitous reaction to something like LabVIEW.




