Rethinking Visual Programming with Go (divan.dev)
231 points by techplex on April 3, 2022 | 57 comments



This video about the Subtext programming language shows the best idea I know of for temporal programming in a visual way, and it has some distinct advantages over text:

https://vimeo.com/140738254

Having one more dimension to give meaning to is awesome, and there are many brilliant ideas in there. IMO it's a great shame it hasn't (afaik) been fully realized yet.


We've been adopting visual programming more and more over the years, we just don't notice it.

For example, the tree structure shown in the demo is essentially the navigation tree we're accustomed to in every IDE today. Such a view didn't exist 30 years ago, but we take it for granted now. A file system layout may not always reflect the actual dependencies, but it usually reflects categorical relationships. Most IDEs also provide a type hierarchy view now, which you can use as an alternative.

Similarly, there are now features for peeking at definitions, "teleporting between types", and laying out multiple windows in the same IDE. We're slowly getting better tools for traveling through source code.

At the same time, our need to grasp the full source code map has been becoming less important, too. Refactoring and package management tools are now prevalent and mostly do the work for you: they work the map and hide the irrelevant parts.

Not to mention tools like Excel, which have been visual since forever. For some reason, many advocates of visual programming are obsessed with a certain specific Johnny Mnemonic representation of it. But it doesn't have to be that way. It can coexist with our existing tools and evolve over time, which is what's been happening so far.


> Such a view didn't exist 30 years ago

I thought this was in Engelbart’s demo 54 years ago?


The OP is mostly about visually-enhanced reverse engineering of high-level software design, as opposed to the "visual programming" of something like LabVIEW. In a way, it raises the question of why not just use UML diagrammatic notation, which was literally designed for doing these things.


UML doesn't scale at all. Its best use is to produce slides. It's less clear than a freeform whiteboard diagram, and actively obstructive to accomplishing a task in practice.

OP's tool has at least the potential to scale to a real codebase in a way that UML never could.


> UML doesn't scale at all.

This is often the problem with visual tools. They work well for diagramming small things, but as complexity increases, the visualization becomes very difficult to follow.


The job of diagrammatic representations is not to "scale" but to effectively display and convey the most salient elements of a high-level design. This is just as true of what OP built as of UML itself.


No one said the job is to scale. It has to scale with codebase complexity, otherwise it's useless.


> Why VPLs failed?

But they absolutely didn't? There are entire industries based around VPLs. If you go into most music conservatories in France, you can take a course in computer music where you'll likely be learning Max/MSP (https://images.squarespace-cdn.com/content/v1/542ee0e5e4b0e1...), for instance, just like you'd learn bassoon or trumpet.

Likewise, every year you can look up the Twitter tag #nodevember, where artists present what they made, mostly in node-based VPLs like the ones provided by Blender, Houdini, etc.: https://nodevember.io/

There are 200k posts tagged #touchdesigner on Instagram: https://www.instagram.com/explore/tags/touchdesigner/ and that's only a small fraction of things.

There are more LabVIEW job postings when searching on LinkedIn in my country than Rust and Golang COMBINED, by quite a margin (2400 for LabVIEW, 300 for Rust and 800 for Golang).

It looks like there's a huge "dark matter programmers" effect at work here :)


I reckon most professional programming done by people whose main job is not programming is either Excel or VPLs.

It seems weird to me that the claim that visual programming has failed keeps being repeated. E.g. I think the film and game industries would be nowhere near the fidelity they have today without those tools.

Sure, it might be that VPLs are best used for scripting and not for implementing the core systems, but we are usually fine with such distinctions for "real" languages without thinking they have failed.


I read somewhere that the number of Excel "programmers" in the world is an order of magnitude larger than all the professional computer programmers combined.


Left out are the industrial programming languages for PLCs. The IEC 61131 languages include function block (FB), ladder, sequential function charts (SFC), and structured text (ST). The first three are visual. I personally preferred ST, as it's closest to Pascal and Ada, but FB is good for real-time debugging of PID loops, and SFC is good for state machines.

I wish we had more diversity in editing environments for different tasks, but all still in one IDE for debugging. Rockwell's was my favorite editing environment, but it's expensive and specialized.


The game engines Unity, Unreal, and Godot all have strong support for visual scripting as well.


It's also contextual to the crowd. Houdini folks don't live in OS/framework land, where cruft, legacy, and culture change the way people have to program things; they just dataflow everything naturally, and the tool is made for that (and the reward is extremely high).

It's a bit like C/SIMD vs APL.


Are node-based VPLs like Houdini Turing-complete? I've only used Grasshopper, but the sense I got was that it was not Turing-complete. If they're not Turing-complete, then I don't think "programming" is the right word. Computer-aided design would be more appropriate.


The one I work on, https://ossia.io has ways to express loops and conditions in its visual syntax, which can get pretty close.


Ossia looks really interesting - I've wondered for years how the "timeline vs node graph" distinction could be erased.

Who uses it on the whole? How big is the community?

(I'm a developer on Open Brush https://openbrush.app/ and I'm really interested in hearing how other open source creative tools do things)


:) thanks!

> Who uses it on the whole?

It mainly targets media artists. Here are a few shows / installations / artworks that used it: https://ossia.io/gallery.html

> How big is the community?

Not super big, haha; it's a fairly specific niche: people who got dissatisfied with e.g. Max and needed some form of timeline.


Are finite state automata Turing complete? No. But any normal person would say that programming them is indeed programming.


Creating something Turing-incomplete is not the same thing as using something Turing-incomplete. For instance, the people who created Houdini are programmers. The people who use Houdini are not. I can program a dishwasher using an FSA, but that doesn’t make people who use dishwashers programmers.
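
For illustration, a dishwasher controller really is just a table of states and events. A minimal Go sketch (the states, events, and transitions here are all made up):

    package main

    import "fmt"

    type State int

    const (
        Idle State = iota
        Washing
        Rinsing
        Drying
    )

    // transitions maps (state, event) to the next state; unknown events are ignored.
    var transitions = map[State]map[string]State{
        Idle:    {"start": Washing},
        Washing: {"timer": Rinsing},
        Rinsing: {"timer": Drying},
        Drying:  {"timer": Idle},
    }

    func main() {
        s := Idle
        for _, ev := range []string{"start", "timer", "timer", "timer"} {
            if next, ok := transitions[s][ev]; ok {
                s = next
            }
            // State prints as its numeric value; a String() method could pretty-print it.
            fmt.Println("event:", ev, "-> state:", s)
        }
    }

Writing that table is programming; pressing "start" is not.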


I guess I don't know for sure but I think Houdini is Turing complete. You can read, write, and branch on arbitrary data.

Shader graphs support loops, which should imply they're Turing complete as well?


> Are node-based VPLs like Houdini Turing-complete?

As a Houdini user and 20+ year professional software developer…

Yes, Houdini's node-based VPL is Turing-complete.


Why does Turing completeness discriminate an activity between CAD and programming? So when one writes CSS or Makefiles they are programming, but if they write systems in e.g. Datalog they are doing CAD?


> when one writes CSS or Makefiles they are programming

They're not? At least, that's not what I see it being called.


For GNU make, it depends on the makefile. https://okmij.org/ftp/Computation/#Makefile-functional:

The language of GNU make is indeed functional, complete with combinators (map and filter), applications and anonymous abstractions. Yes, GNU make supports lambda-abstractions. The following is one example from the Makefile in question: it is a rule to build a test target for the SCM Scheme system. The list of source code files and the name of the target/root-test-file are passed as two arguments of the rule:

    make-scmi= scm -b -l $(LIBDIR)/myenv-scm.scm \
               $(foreach file,$(1),-l $(LIBDIR)/$(file)) \
               -l $(2).scm
The rule returns the OS command to interpret or compile the target. It is to be invoked as

    $(call make-scmi,util.scm catch-error.scm,vmyenv)
As in TeX, the arguments of a function are numbered (it is possible to assign them meaningful symbolic names, too). Makefile's foreach corresponds to Scheme's map. The comparison with the corresponding Scheme code is striking:

    (define make-scmi
       (lambda (arg1 arg2)
            `(scm -b -l ,(mks LIBDIR '/ 'myenv-scm.scm)
               ,@(map (lambda (file) `(-l ,(mks LIBDIR '/ file))) arg1)
              -l ,(mks arg2 '.scm))))
(via https://stackoverflow.com/a/3480982, which gives a Fibonacci example)


CSS is a great example. Someone who only engages with CSS and HTML would best be described as a web designer.



On the other hand, ACL2 isn’t Turing complete but can express many interesting programs: https://www.cs.utexas.edu/users/moore/acl2/v8-4/combined-man...


Are you sure ACL2 isn't Turing complete? I can't seem to find a proof of this.


It’s embedded in a Turing-complete language, but you can’t prove theorems about programs written in a Turing-complete language (Rice’s Theorem), so the ACL2 language itself has to be limited to programs that can be determined to halt. The J-Bob language from the book The Little Prover might be a better example.

There’s a whole programming paradigm here of languages that aren’t Turing complete: https://en.m.wikipedia.org/wiki/Total_functional_programming


Whether or not it's Turing-complete is not all that important. The important part is whether Turing-completeness is required for what is being created. In the case of HTML and CSS, it almost always is not.


Turing completeness is not a requirement for any computational task that is guaranteed to complete, either. You only need it to add infinite loop bugs.

Just to be clear - the things you actually care about expressing in a program don't require Turing completeness about 99.99999% of the time. The main exceptions are things like unbounded searches where the code will only terminate if it finds a solution to something, and there's no a priori size bound on the solution space.

We use Turing-complete languages as a compromise. Proving your algorithm terminates to the satisfaction of a proof checker is often more work than it's worth. We've decided that allowing bugs a termination checker would prevent is less work to deal with than satisfying a termination checker. But for my daily work, Turing completeness is a practical compromise, not a necessity for expressing the algorithms I need.
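
To make the "unbounded search" exception concrete, here's a toy Go sketch (the predicate is arbitrary; this particular search happens to stop at 496, but convincing a termination checker of that is exactly the work we skip):

    package main

    import "fmt"

    // isPerfect reports whether n equals the sum of its proper divisors.
    func isPerfect(n int) bool {
        sum := 0
        for d := 1; d <= n/2; d++ {
            if n%d == 0 {
                sum += d
            }
        }
        return sum == n
    }

    func main() {
        // Unbounded search: no a priori bound on where (or whether)
        // the next solution lies, so the loop has no provable bound.
        for n := 29; ; n++ {
            if isPerfect(n) {
                fmt.Println(n) // finds 496
                return
            }
        }
    }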


In my opinion, the major disadvantage of visual programming environments is the inexpressibility of complex structures and programs. I did research on a VP myself and evaluated it with some users who were not programmers. They got used to the principles fast; however, they got stuck pretty quickly when trying to implement complex behavior. Scaling up a VP in those terms is a balancing act between complexity and simplicity (in terms of user interface design).


Yes, most complex abstractions are hard or even infeasible to represent visually. This is why VLSI hardware design actually went the other way, from visual schematics to purely textual languages as complexity and the role of abstraction increased.


VP has been making the mistake of using too many layers of abstraction, unnecessarily increasing complexity by trying to avoid code at all costs.


> Why Visual Programming Languages have failed...?

I wouldn't say that is the case, because Scratch is quite popular (for example) and hangs around 20th among programming languages on the TIOBE index. If you look at the RPA world, UiPath promotes itself as a visual programming tool, and it's quite popular as well.

However, some complain about UiPath's "boxes on top of boxes". It tends to obscure your ability to reason about the code and see important details, which get hidden under other boxes. And you have to do a lot of typing anyway.

The criterion for what it means to be a success as a VPL is arguably inappropriate if you are saying it must be in the top 10 of programming languages. VPLs can and have found various niches, which might be the best they can do, for now.

The problem with VPLs is often "visual overload": at a certain level of complexity, you are dealing with an entangled spider web, twisted spaghetti, or a straight-up clusterf*ck.

Also, I'm not quite sure what many creators' insistence on little connecting lines is all about; it only makes the devolution into eye-burning spaghetti faster. People usually don't want to deal with such a mess, and would rather read or type out the code so they can have greater clarity and readability. In addition, many of us are simply used to reading and typing.

Consequently, it might be argued that a collapsible outline is a better way to represent code. After all, we often do this when typing (books, PowerPoint, etc.). This is particularly true of the imperative and functional styles, which may lend themselves better to being represented that way, with names as headings. You probably also want to be able to easily collapse or expand to more complex representations.

It seems to me that no VPL will beat typing out code for the majority at any point in the near future. Still, at the simpler levels or for the young, Scratch shows that certain VPLs can be usable and find a place. Maybe the way forward is to build on what has had some success and figure out how to make it more visually manageable.


Scratch is really just a visual representation of a programming language's AST. It's like programming in LISP, but using drag-and-drop blocks as opposed to parentheses. And the more general your programming language becomes, the more uniform the corresponding graphical representation becomes as well: you get no help from the "block shapes" encoding different AST forms, and everything turns into a soup of equally-shaped blocks where "anything goes". So, pretty much the same as LISP with its soup of parentheses.


Absolutely brilliant essay and work - it's always extremely impressive when someone can both develop such depth in an area that interests them and write/blog so eloquently about it.

This is also a great example of where I can perfectly see VR/AR shining, and perhaps being indispensable, in this particular use case.


I have two issues with dataflow programming languages: 1) They generally do not support batch processing. In the past, when I needed to batch-rename variables in a dataflow, my solution was to export the flow into an XML file, do the batch rename in a text editor, and import the XML back. 2) They generally do not support macros (operators which would generate new operators). As a workaround, I was using Python scripts to generate the XMLs...
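
(The rename step is at least easy to script. A minimal Go sketch of that workaround; the flow.xml file name, the name="var_..." attribute convention, and the rename rule are all made up here:)

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Hypothetical exported flow where variables appear as name="var_..." attributes.
        data, err := os.ReadFile("flow.xml")
        if err != nil {
            panic(err)
        }
        // Batch rename: give every variable an "in_" prefix, then re-import the result.
        out := strings.ReplaceAll(string(data), `name="var_`, `name="in_var_`)
        if err := os.WriteFile("flow_renamed.xml", []byte(out), 0o644); err != nil {
            panic(err)
        }
        fmt.Println("wrote flow_renamed.xml")
    }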


The torch metaphor detracts from the point of the whole post. No, we're not cavemen with torches. You could make that same argument about anything vision-related, since that's just how human vision works.

Do you want to get a quick overview of the codebase?

    git log --pretty=format: --name-only | grep -v '^$' | sort | uniq -c | sort -rg | head -10

Now look at the top files and read them in their entirety, then read whatever important stuff they often reference. You now know 80% of what you need without having to bust out the research-grade diagrams.


“We spend 90% of the time reading code, and only 10% writing it, and that 90% happens to be my least favourite part.”

I don’t know - I like reading others’ code; I learn a lot from it, and it makes the code I do write a bit better.

Of course writing code is easier - you’re expressing what’s in your mind. Reading what others wrote is hard because it’s a reflection of what was on their mind, but it will help you understand them more.


Most of the time I read code is because I need to alter it, or I review someone else's proposed changes to it.


When starting to work on a new code base, a good folder structure with sub-README documents in the subfolders goes a long way in terms of understanding the different pieces. A high-level architecture document describing why the project is structured the way it is, is also essential.

Once I have a mental model of the parts I’m working on, fuzzy keyboard-based navigation (cmd+shift+o in VS Code) works best for me to jump between parts of the code base, but it admittedly takes some time (a day or two) to get to that point.

It gets much harder in projects that are unintuitively structured for historic reasons or because the maintainers never cared to pay attention to this.

A good example, in my opinion, is the Chromium code base, where I spent considerable time over the last couple of years while working on an Electron app; it has an excellent code navigator at https://cs.chromium.org.


I've been thinking about this a lot recently since this was posted:

https://news.ycombinator.com/item?id=30858311

TL;DR: 67% of time spent on a project is maintenance, and 60% of time spent on existing code (i.e. when doing said maintenance) is spent on understanding it. For a reasonably sized project, that is man-months of effort and a huge cognitive burden.

I've begun to wonder whether tools like literate programming, which have been with us for decades, are only just now starting to look like value for money. They require an upfront investment, but you win it back in spades by not having the massive technical debt of painfully unnavigable code that hampers even basic development tasks.

I go with literate programming because I don't think basic documentation cuts it. LP puts the code description front and centre and, as a result, is much easier to keep in step with the source output. Basic documentation, by contrast, relies on goodwill and trust, and is vulnerable to time pressures.


I think something akin to literate programming is a huge win for heavily-maintained tools.

The trick is that most tools aren't heavily used at all, and thus the large amount of time it takes to build in a more literate style winds up being largely wasted.

At least, that's how it looks from where I sit, as an idealist who loves readable code and plain-text, version-controlled literate-style documentation.


Right here is the #1 mistake in the whole idea:

> Why is it so? An old adage tells us “A picture is worth a thousand words” and intuitively we know that text is clearly not the best way to represent everything. All those bright minds working on the visual programming languages were clearly onto something, but seems like we are missing something essential to make visual programming a reality.

Yeah, that "missing something essential" is failing to see that written text IS VISUAL. The examples comparing graphs and text show how much BETTER the text side is.

“A picture is worth a thousand words”

is more about how much you talk when, bam, you could show instead. You can do that with TEXT. I will summarize EVERYTHING in a single char:

Done.

See? Programming uses text because text is VERY visual. Fonts come in different shapes, sizes, colors, bold, italic, etc.: that is a lot of visualization!

Combine that with ways to do hyperlinks and all that, and you get something very powerful.

---

The real problem, IMHO, is that until recently (with things like the Language Server Protocol, tree-sitter, errors with diagnostics, etc.) there was not much advance in how to leverage the rich information that a compiler infrastructure has, and archaic editors with SUBPAR environments (shell, vim, ...) are terrible places to do better!

And this critique accounts for the fact that most are not even on par with Turbo Vision or FoxPro for DOS!

And this combines with the fact that some languages (like C) are so bad that it is VERY hard to analyze them: if the compiler gets confused and outputs thousands of errors, then you can't make it better with fancy graphics.

In other words: you need to align everything to make the idea work great.

This is not novel: we already had much better implementations in the past, like HyperCard, Smalltalk, FoxPro, Borland Pascal, etc.


I'm on metered bandwidth right now, and I just burned so much of it on this site trying to see visual Go programming. The more I scrolled, the more loaded in at full size, and he still hadn't gotten to what I was hoping to see. I just had to close the tab and wince a little at what I'd done.


For more context, this was shared yesterday by the author in a thread about visual tools https://news.ycombinator.com/item?id=30891230


Programmers so often “diss” no-code and VPLs. No-code is the quiet revolution in software development and broadens access, which is a great thing. It doesn’t threaten programmers, but they often think it does. I remember many trends which expanded the IT industry even though many of those involved thought they would shrink their segment. Almost all tech trends have been additive; very few have resulted in another tech being made redundant. Any new VPL solution should be aimed at bringing non-programmers into the industry, so that programmers can over time concentrate on things which are not a best fit for VPL/no-code.


I don’t actually think visual coding is all that great (and this is coming from someone with significant node experience in Houdini), but I always appreciate a code base having a VISUAL representation before I dive in. I am hoping someone like Sourcegraph or GitHub comes up with a nice fancy way to do this, because I think I spend a significant amount of time doing exactly that in my head.


Von Neumann’s Computers and the Brain should be required reading for any programmer who is even slightly philosophically inclined.


A small tool I created for TypeScript files to visualize the call graph: TypeScript-Call-Graph

I thought it would be useful for me, especially when working in the functional programming paradigm.

https://github.com/whyboris/TypeScript-Call-Graph


The end result here is great - it reminds me of Hollywood "hacking scenes" where the techie person is flying through 3D space to find the critical piece of code.

But... it's actually useful! This is a big step forward for visualizing source code, in my opinion.


Is it useful? It doesn't look like it to me.


TL;DR: Programming is all about mental mapping, and text-based IDEs are not well suited for the task (agree).

Go is especially well suited for this activity (disagree but irrelevant).

We can leverage 3D modeling to better facilitate this mapping process. It could also be VR. And we could reverse-engineer these models from existing code (awesome idea - do it, please, and blend it with an IDE for when I drill down into nodes).


One tool that I think might be adjacent to what the author is trying to do is Sourcetrail. Unfortunately, the project is no longer maintained (and the repo is archived...), and they never got Go support.


This project is beautiful.



