Unit: A visual programming system [video] (youtube.com)
104 points by spiralganglion on Aug 17, 2023 | 38 comments



Things like this have been done many times before. Houdini lets you create shaders with node graphs and plenty of other programs have done the same thing. It has a lot of advantages in that more constrained space.

The problems are always the same but I think they aren't clear until someone has already gone down this road.

1. Building expressions out of nodes is much less information dense than just typing them out. What would be a single line of text can end up being a lot of nodes taking up a lot of screen space.

2. Expression ordering isn't as clear, since it is a graph of dependencies; to figure out what runs when, you have to keep tracing dependencies backwards.

3. Branching is tricky for the same reason. A dependency graph makes branches much less clear, due to the ordering problem and the resulting scrambled graph of nodes.

4. Loops are more difficult because they also depend on the order of execution and graphs can make that difficult to decipher and difficult to illustrate.

5. State becomes difficult because you need ordering and it depends on external dependencies.

A lot of these graph problems can be seen in Haskell too. You have something that works very well when what you are doing is a series of transformations, but there are large chunks of programming that don't fit into this model.


1) Agreed. I wish more of these systems had code nodes that you can just write expressions in. Sadly the big contenders are tuning for a subset of functionality, so expression nodes are often an afterthought or a mismatch because the graph system doesn't mix with the compiled language underneath.

2) I think this is unsolvable in general but can be mitigated with more tooling. The advantage of graph systems is exactly that they are non-linear, but code is easier to read and understand linearly. I hope one of these graph systems adds an "unrolled" debug view: it would be nice to read the execution of the past n nodes linearly in a transient visual log. The common "highlighted path" view is good but incomplete.

3) I disagree. We're all just used to doing the scoping passively when reading functions. I think the explicit scoping of the dependency graph is often much clearer. Buuut... the explicit graph edges make branches tedious, as you have duplicated dependencies drawn all through the branch. Perhaps a branch paradigm where the many deps enter the branch node and are then forwarded to the branches, or even some kind of implicit scoping, would make this better.

4) Yes, loops are a challenge because of the common UX used. The hot-path highlighting falls apart when you loop over the same set of nodes. Again, an unrolled view might be useful. There's also the option of dropping some loop types and focusing on for-each and batch/collection processing. I think passing lists, or iterating over them, solves some of the loop issues. ...buuuttt, many graph systems don't seamlessly handle both lists and single items. Maybe some Lisp influence would help: graph systems could use something as simple and easy as mapcar (see the sketch after this list).

5) I don't know... most graph systems I've played with had an easier grasp of state because dependencies were so clear. Still, many graphs I see resort to graph scope and global scope. It would be interesting to see state scoped to some group of nodes.
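On the mapcar point in 4): here's a minimal TypeScript sketch of what that could look like, where a node built for single items is lifted to also accept lists. The `Node` shape and `liftOverList` helper are made up for illustration; they aren't from Unit or any existing tool.

    // A node is modeled here as a plain function from one input to one output.
    type Node<A, B> = (input: A) => B;

    // Hypothetical helper: given a single-item node, return a node that accepts
    // either one item or a list, applying the original node element-wise
    // (the mapcar idea) when a list flows in.
    function liftOverList<A, B>(node: Node<A, B>): Node<A | A[], B | B[]> {
      return (input) =>
        Array.isArray(input) ? (input as A[]).map(node) : node(input as A);
    }

    // Usage: a "double" node keeps working when a list arrives on its input pin.
    const double: Node<number, number> = (x) => x * 2;
    const doubleLifted = liftOverList(double);
    console.log(doubleLifted(21));        // 42
    console.log(doubleLifted([1, 2, 3])); // [ 2, 4, 6 ]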


I'm impressed with the declarative programming model (aka reactive programming), and the ability to compose new blocks/visual components as a result.

It feels like we're finally starting to overcome all that we lost in the decades since VB6/Delphi for Windows.



Delphi, yes, but moving to enterprise pricing models lost it the market.

.NET, no; they lost because .NET is not as easy as VB6. VB6 is more like a GUI shell language linking controls together: easy to quickly do something, but if you need more detail it is not a good language, since you have to write the controls yourself and that requires another language. .NET threw that ease away. If you are at the level of wanting to alter controls then yes, it is better, but I think having the two languages was better for enterprises. However, nowadays the replacement for VB6 is web programming.

There are others, like Xojo.


Delphi still has a big enough market to host a yearly conference in Germany, and otherwise there are occasional tracks on .NET/Windows conferences like BASTA.

.NET Forms is definitely as easy as VB6, even more so, because C++ was out of the picture.

VB required delving into VBX (C++), and even VB6 OCX doesn't support everything in COM, meaning some kinds of controls also required diving into C++ for VB6.


I will argue that .NET's Windows Forms is an easier and much better successor to VB in all aspects. It has even survived through the .NET Core and .NET era.


The problems spoke for themselves over the course of the presentation. Something as simple as a counter was very complicated to look at, filling the screen with jiggling nodes. The presenter then introduced a bug while trying to decrement, and was unable to debug it.


I spent quite a bit of time (about 8 years) looking into and working on VPLs. What I found was that our perceptions of and interactions with various abstractions, such as programming, mathematics, or even music (sheet music), are shaped by our individual mental models.

I code using programming languages because that is what I was taught in college and it's what I've used for decades. For me, something like SQL looks amazing and is super easy to read. Others may see SQL as something complicated to look at, full of jiggling relations.

In the same way, a VPL may initially appear complicated, but this is often a reflection of unfamiliarity rather than inherent complexity.

As with many technologies, given time and development, there's potential for something groundbreaking to emerge.

What may seem complex today could become a new standard tomorrow.


Those aren't arguments against visual programming languages in general.


Interesting, and clearly a lot of work has gone into this (60,000 lines of TypeScript), particularly the UI, which is impressive (if, sometimes, over the top). I've been developing a similar system (http://www.fmjlang.co.uk/fmj/tutorials/TOC.html) and it's interesting to note the similarities and differences.

Similarities: code as directed graphs (less obvious in FMJ); can only connect outputs to units of compatible type; if and wait (looping is handled differently); sticky values; sliders. These design decisions are practically forced on you, but are often absent in earlier visual dataflow languages (e.g. Prograph, LabVIEW).

Differences: (1) inputs are named in Unit, ordered in FMJ (though they're named in formulas and edges can be labelled). (2) I experimented with automatic code layout but found this was too slow and not always what I wanted. Well done for getting this to work. (3) FMJ is now fully homoiconic - this maybe isn't a priority for Unit.

The Unit design philosophy is explained in https://github.com/samuelmtimbo/unit/blob/main/src/docs/conc... . This doesn't mention earlier approaches (e.g. the Manchester Dataflow Computer, Prograph) and it seems to be based on vaguely similar ideas developed more recently (Morrison's Flow-Based Programming; possibly React and similar systems for web development - I'm unfamiliar with these).

I have a number of questions:

(1) How does the type system work? Is it dependently typed, Hindley-Milner, or something more basic? (FMJ is Hindley-Milner, with dependent typing partially implemented.) How are new types defined?

(2) How is the visual representation stored? One criticism I faced was that people wanted a readable textual representation which would work well with existing version control systems, a problem I have now largely solved.

(3) How are runtime errors handled?

(4) Is recursion supported? (I assume yes, but I didn't see any examples.) What about macros?

(5) What does Unit compile to? (FMJ has an experimental compiler where programs are compiled by running their source without evaluating their inputs, output is Lisp.)


> 2) I’ve thought about this problem a lot. Like to a very unhealthy degree… To the extent that I’ve been working on a visual dvcs full time for over a year, that interops visual programming tools with textual codebases that use textual version control. When you say you solved this, is it because you expect users of your tool to be able to understand and reason about however you’re storing your state to disk? Xcode does this with the infamous project.pbxproj and it still causes immense merge conflict pain to this very day (and it’s one file), on the other end of the spectrum, large game studios have basically forgone dvcs to use shader editors at scale. If your language tool is simple enough to reason about textually, doesn’t that limit the utility of your language? I don’t mean in the sense that you make it deliberately hard to reason about your ast equivalent but presumably it wouldn’t take much complexity before a bad merge conflict could break relations and make malformed states. How do you handle that without writing a hand rolled merge driver for each vcs you support?

In my humble opinion this is kind of the breaking point of almost all higher order programming tools. I think it’s also the reason most “no-code” or “design-to-code” tools feel like a sham to engineers. I’d love to hear how you overcame merge conflicts for your tool, I’m skeptical, but I’m all ears.


It's low level, but still readable and, if necessary, editable. The problem I had was converting my visual code into a topologically sorted directed acyclic graph, with just vertices and arcs, where the arcs into and out of vertices are unordered. I found a way of doing this (thereby getting homoiconicity for free) and now can just store the textual representation of the graph, with each vertex on a separate line. I'm in the process of replacing the earlier representation, where I just stored lists of objects in an ad-hoc fashion. This works for pure dataflow programs or anything else which is equivalent to a DAG, but would not work for e.g. LabVIEW.
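Not FMJ's actual format, but a rough TypeScript sketch of the general technique: topologically sort the DAG, then emit one vertex per line (with its unordered incoming arcs) so that ordinary line-based diff and merge behave reasonably.

    // A vertex has an id, an operation name, and unordered incoming arcs (by id).
    interface Vertex {
      id: string;
      op: string;
      inputs: string[];
    }

    // Kahn-style sort: repeatedly emit vertices whose inputs have all been emitted.
    function topoSort(vertices: Vertex[]): Vertex[] {
      const emitted = new Set<string>();
      const remaining = [...vertices];
      const sorted: Vertex[] = [];
      while (remaining.length > 0) {
        const i = remaining.findIndex((v) => v.inputs.every((d) => emitted.has(d)));
        if (i === -1) throw new Error("cycle detected: not a DAG");
        const v = remaining.splice(i, 1)[0];
        emitted.add(v.id);
        sorted.push(v);
      }
      return sorted;
    }

    // One vertex per line; inputs are sorted so the text stays stable across edits.
    function serialize(vertices: Vertex[]): string {
      return topoSort(vertices)
        .map((v) => `${v.id} ${v.op} ${[...v.inputs].sort().join(" ")}`)
        .join("\n");
    }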


That’s interesting to hear, and I think I arrived at a similar homoiconic IR state for my tool too. I think there are some problems you might run into if you’re doing any sort of relational cascading, but this sounds sane and like a good approach. Thank you for sharing!


Hey, thanks for the analysis and questions!

I did check on FMJ many years ago when researching about Visual Programming Languages. I recall the very inspiring notes on the FMJ website (http://www.fmjlang.co.uk/fmj/FMJ.html).

(1) The type system is inspired by TypeScript. It has generics that are resolved when unit pins are connected to concrete types. Right now there’s no way to define new types. The hope is that type inference alone will be good enough for a start. Likewise, new interfaces cannot be created; only the system-defined interfaces can be exposed, such as channel `CH` and media stream `ST`. That is interesting because it influences the design of machines in terms of common interfaces and protocols for inputs and outputs. (A rough illustration of this kind of pin-type resolution follows after these answers.)

(2) The visual representation is a graph specification stored in JSON. I think JSON works well with current version control systems in terms of diff readability, but I don’t see why custom unit components couldn’t be created to show the diff as a graph, visually.

(3) The unit will stop the data flow and emit an “error” event. To catch this event in particular, there’s a `catch` unit.

(4) Yes, there are a few examples of recursive units, such as a `merge sort` in the open source system. The `editor` unit can’t create them from scratch right now though… but it can be done with a meta trick…

(5) The graph is loaded and saved as JSON. The JSON has to be read by an interpreter that will instantiate all the primitives and make all the connections. The Unit specification is language agnostic, but currently the project is all written in JavaScript (it works with the Web and NodeJS).
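Regarding (1), a tiny illustrative sketch in TypeScript of what resolving a generic pin against a concrete type on connection could look like. This is a toy version for discussion, not Unit's implementation; the `unify` function and the single-letter generic convention are invented here.

    // Pin types are either concrete ("number", "string", ...) or single-letter generics.
    type PinType = string;

    const isGeneric = (t: PinType) => /^[A-Z]$/.test(t);

    // When an output pin is wired to an input pin, unify their types: a generic
    // picks up the concrete type it is connected to; two concrete types must
    // match exactly or the connection is rejected.
    function unify(out: PinType, inp: PinType, bindings: Map<PinType, PinType>): PinType {
      const resolve = (t: PinType) =>
        isGeneric(t) && bindings.has(t) ? bindings.get(t)! : t;
      const a = resolve(out);
      const b = resolve(inp);
      if (a === b) return a;
      if (isGeneric(a)) { bindings.set(a, b); return b; }
      if (isGeneric(b)) { bindings.set(b, a); return a; }
      throw new Error(`cannot connect ${a} to ${b}`);
    }

    // Connecting the generic "T" output of an identity unit to a "number" input
    // binds T to number for every other pin that mentions T.
    const bindings = new Map<PinType, PinType>();
    unify("T", "number", bindings); // T -> number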



And https://unit.land for the demo


This is so eerily similar to a concept I've been thinking about for years. I want to base it on equations, not functions, though. Bidirectional value propagation. Figuring out how to make that work always stumped me.


I've thought about bidirectional value propagation too; except for cases where no information is lost (e.g. cons), I'm similarly stumped (e.g. append). I've got bidirectional type propagation working, though. There are bidirectional logic gates, e.g. the Fredkin gate, but they operate on individual bits.
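For what it's worth, a small TypeScript sketch of why the lossless case works: a cons/pair constraint can propagate in either direction, while append cannot. The `Cell` and constraint shapes here are invented for illustration, not taken from any of the systems discussed.

    // A cell holds a value that may not be known yet.
    class Cell<T> {
      value?: T;
      set(v: T) { this.value = v; }
      known(): boolean { return this.value !== undefined; }
    }

    // cons(a, b) = pair: information flows both ways because nothing is lost.
    // Given a and b we can build the pair; given the pair we can recover a and b.
    function consConstraint<A, B>(a: Cell<A>, b: Cell<B>, pair: Cell<[A, B]>) {
      if (a.known() && b.known()) {
        pair.set([a.value!, b.value!]);  // forward: parts -> pair
      } else if (pair.known()) {
        a.set(pair.value![0]);           // backward: pair -> parts
        b.set(pair.value![1]);
      }
      // append(xs, ys) has no such backward rule: a concatenated list does not
      // determine where xs ends and ys begins, so information is lost.
    }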


Can it do audio-rate stuff? It reminds me a lot of PureData (or MaxMSP) but with a cooler interface.


Yes, as soon as Web Audio units are released. Then Audio Generation and Audio Processing can run and be integrated with other technologies seamlessly, like UI elements, Canvas and P2P.


Thanks for sharing. Definitely interesting!

I have a number of thoughts on this area.

* We need graph paging: nodal editors have too much data to present for large systems. Navigating/panning/scrolling a large screen of data is painful. I started writing a C parser (completely incomplete) with the desire to render simple diagrams for the Postgres codebase. But you know the diagram would be too large to present every method and its relationships, so you need to paginate the graph. I was thinking of paging one method at a time.

* We can render graphs that are incomplete and additive: a grid of graphs, like tiles, each a live and interactable separate graph but also a subset of the available data. It's okay to present the same node again, with different children or connections.

* I've been playing around with what I call "movement programming", the idea that we step through the state of memory step by step and grab data to be fed into operators or functions; this generates the (inferred) instruction, and the next state is displayed. The goal is that btrees and quicksort could be implemented with point and click. We visualise context on the screen at all times: local variables and contextual data. (A rough sketch of this idea follows below.)
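In case it helps, here is my reading of the movement-programming idea as a minimal TypeScript sketch: the visible context is a table of named values, the user points at operands and an operator, and the tool records the inferred instruction and displays the next state. All names here are made up.

    // The visible context: every local variable and its current value.
    type State = Record<string, number>;

    interface Instruction {
      op: string;
      args: string[];
      target: string;
    }

    // One "movement" step: the user clicks two values and an operator, the tool
    // records the inferred instruction and produces the next state to display.
    function step(
      state: State,
      op: (x: number, y: number) => number,
      opName: string,
      left: string,
      right: string,
      target: string
    ): { next: State; instruction: Instruction } {
      const next = { ...state, [target]: op(state[left], state[right]) };
      const instruction = { op: opName, args: [left, right], target };
      return { next, instruction };
    }

    // e.g. clicking `i`, `+`, `one` and dropping the result back on `i`
    // records "add i one -> i" and shows the incremented state.
    const { next, instruction } = step({ i: 0, one: 1 }, (x, y) => x + y, "add", "i", "one", "i");
    console.log(instruction, next); // { op: "add", ... } { i: 1, one: 1 }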


Finally, something that allows you to have a personal repository again (after version control just stole that word). And not in Squeak. Amazing work.

(Does it support version control...?)


This is awesome! I have been working on a similar tool. My focus has been keeping as much as possible in a single binary (a benefit of using Go) so that local development and deployment to production are super easy. It can orchestrate different gRPC methods together, which is pretty neat. I have been digging using the Protobuf type system to express the data transfer between nodes.


What are you using for the GUI parts?


I feel like this only adds complexity to code.


We add complexity all the time. In the end it comes down to whether it's worth it.


Maybe as a teaching aid or for sharing/presenting ideas. But I wouldn't use it for day-to-day programming.


Yeah, I think this is fine for programming that you want to be highly discoverable and that generally stays below ~100 operations. Good for graphics and audio. Not good for any significant programming.


Reminds me of LabVIEW.


Which for me raises the question: is LabVIEW worth it? It's the only widely used Visual Programming Language I can think of.

Every time someone posts a visual programming language prototype the same claims are made over and over again: adds too much complexity, throws out the benefits of text editing, eats up too much screen space, etc.

I'm convinced that many of those issues can be overcome, but until we figure out how, I'd like to understand in which niches visual programming really makes sense.

If LabVIEW makes sense and is worth it, there ought to be more applications where it makes sense!


> It's the only widely used Visual Programming Language I can think of.

Unreal's Blueprints are also very widely used in production.


Yeah, is using a visual programming language worth it in that case?


Yes. Non-coders often find it more approachable. These systems are a good fit for games because games have many static assets that need to be referenced. Dragging and dropping to form an asset reference is preferable to using text IDs.


Historically, LabVIEW had two other key things going for it besides the low entry barrier of visual programming.

1. A GUI comes for free; in LabVIEW each program is two windows: each variable you declare shows up once on your code DAG and then again on your GUI as a switch, text box, slider, etc.

2. National Instruments also had a set of libraries and PCI cards for communicating with lab equipment like benchtop power supplies, voltmeters, and even up to big complex kit like oscilloscopes and exotic telecom protocol emulators.

Those two things enabled people to easily wire up a motley assembly of benchtop instruments and orchestrate them together for a unified test of hardware like circuit boards and so on.

Nowadays EE types have better coding skills, and those benchtop instruments all have Ethernet ports and REST APIs.

But SpaceX famously uses LabVIEW for some things, most notably the Dragon flight console.


I think the name might be a problem: not really distinguishable, especially in programming, and hard to google.


C and Go are hard to google too, but it didn't seem to hurt those languages.


As a counterexample, few people know about the programming language called "the" (pronounced "T-H-eh"). It was the first language that had shared pointers, trinary coroutines (where the same thread jumps between three different parts of the code), and a constraint solver in the compiler for resolving the ambiguous multiple base class problem.



