
This has come up in comp sci every so often since the 80s, when the idea first gained traction. I think the short answer is that boxes and wires actually become harder to manage as the software increases in complexity. The "visual is simpler" idea only seems to hold up if the product itself is simple (there are exceptions).

To my mind, this is analogous to textual writing vs. drawing in general: text is an excellent way to represent dense, concise information.




UE4 Blueprints are visual programming, and are done very well. For a lot of things they are excellent. Everything has a very fine structure to it, you can drag off pins and get context-aware options, etc. You can also have sub-functions that are their own graph, so it is cleanly separated. I really like them, and use them for a lot of things.

The issue is that when you get into complex logic and number crunching, it quickly becomes unwieldy. It is much easier to represent logic or mathematics in a flat textual format, especially if you are working in something like K. A single keystroke contains much more information than having to click around on options, create blocks, and connect the blocks. Even in a well-designed interface.

Tools have specific purposes and strengths. Use the right tool for the right job. Some kind of hybrid approach works in a lot of use cases. Sometimes visual scripting is great as an embedded DSL; and sometimes you just need all of the great benefits of high-bandwidth keyboard text entry.


Exactly. Take as an example something from a side project of mine that I recently did:

https://blueprintue.com/blueprint/jt69wz7e/

Compare that with this bit of pseudocode:

  function inputAxisHandleTeleportRotation(x, y, actingController, otherController) {
    if(actingController.isTeleporterActive) {
      // Deactivate teleporter on axis release, minding controller deadzone around 0
      if(taxicabDistance2D_(x, y, 0, 0) < thumbstickReleaseDeadzone) {
        sendEvent(new inputTeleportDeactivate(actingController, otherController));
      }
      else {
        actingController.teleportRotation = getRotationFromInput(x, y, actingController);
      }
    }
  }  
Which one is more readable for someone who's even a little bit experienced in programming? Which one is faster to create and edit?


That nested if statement, in particular, looks especially awkward in the blueprint.


I use Unreal Engine Blueprint visual scripting a lot, and I like it too.

Regarding your point about complex logic and number crunching: that can be alleviated by using a 'Math Expression' node, which allows math to be written in textual form inside a node.
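
For example (this is from memory, so treat the exact syntax as approximate), instead of hand-wiring Multiply, Add, and Sqrt nodes you can type something like

  sqrt(x*x + y*y)

into a Math Expression node, and the editor generates input pins for x and y plus a float output pin, with the whole expression collapsed into a single node.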


Holy hell, I didn't know about Math Expression node. Thanks! Now I have a bunch of blueprints to clean up from half a screen of hand-dragged math each.


Blueprints gain so little from adding spatial movement to code blocks. A normal language has a clear hierarchy of flow from top to bottom, with embedded branches, which gives a stable backbone to what otherwise becomes a mess of wires. I think a DSL with a proper visual editor, like JASS in War3/StarCraft 2 or a visual Lua editor, works better in the long run because it "fails gracefully" into learning a programming language, which ultimately should be the growth curve & workflow for using a visual scripter.

Blueprints are great for the material editor, where everything terminates in a specially designed exit node and there is little branching logic, but even for basic level scripting it is messier than what the industry used to have.


I agree about using the right tool for the job. And I know UE4 Blueprints are popular so they must be doing something right. Personally I tried using them for a while and gave up. My own experience was:

1) It took me longer to create blueprints than it would have taken to write the equivalent code. I kept finding myself thinking "I could do this in 5 seconds with a couple of lines of code"

2) The blueprints quickly became unwieldy. A tangle of boxes and wires that I couldn't decipher. I find code easier to read, as a rule.

3) I didn't find it any simpler than writing code. A Blueprint is basically a parse tree. It maps pretty much 1:1 to code; it's just a visual representation of code.
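
To make that concrete with a made-up example: the one-liner below, as a Blueprint, is literally a Multiply node wired into an Add node wired into the return pin; the wires are the edges of the parse tree.

  // Hypothetical illustration, not anyone's real Blueprint:
  // as a graph this is Multiply(base, mult) --> Add(_, bonus) --> Return.
  float damage(float base, float mult, float bonus) {
      return base * mult + bonus;
  }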


> And I know UE4 Blueprints are popular so they must be doing something right.

I'd argue that 90% of it is just that they work out of the box. You download the engine bundle and you can start doing blueprints. Being able to write normal code requires you to compile UE from source, which is a non-trivial thing for any large C++ project.

I'm pretty sure if they embedded Lua or Lisp directly into the engine, and provided a half-decent editing environment in-editor, that it would meet with success just as well.
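
For what it's worth, the embedding side is genuinely small. A minimal sketch using the standard Lua C API (these calls are the real Lua 5.x API; wiring it to script assets and an in-editor environment is the hypothetical part):

  #include <cstdio>
  #include <lua.hpp>

  int main() {
      lua_State* L = luaL_newstate();   // create a Lua VM
      luaL_openlibs(L);                 // load the standard libraries
      // In an engine, this string would come from a script asset instead.
      if (luaL_dostring(L, "print('hello from embedded Lua')") != LUA_OK) {
          std::printf("Lua error: %s\n", lua_tostring(L, -1));
      }
      lua_close(L);
      return 0;
  }

The hard part, as you say, is the editing environment around it, not the VM itself.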


To come full circle, hardware today is mostly described textually instead of with diagrams of boxes and wires, exactly because at high enough complexity the schematics get totally unreadable. And even on the board/component level, the schematics for almost any piece of non-trivial hardware made after 1990 consist mostly of boxes with textual net labels.


I don't think it is an issue of complexity, since in both text and graphics you can encapsulate parts into a hierarchy of sub-blocks. The real problem is having to break things up from a single level into little pieces to fit screen sizes and letter/A4 paper.

In the 1970s and 1980s I did many projects with huge schematics that you would spread out on large tables and focus on details while seeing the whole thing at the same time. It is an experience that can't be captured at all by a series of small pages, each with one to four boxes and lots of dangling labeled wires going to the other sheets. At that point you might as well just have a textual netlist, which made structural VHDL/Verilog become popular replacements for schematics.

Perhaps VR/AR will bring back visual hardware design?


I think the complexity could be reduced in the same ways that we reduce complexity in text based code. We no longer write programs as one giant file after all.


That is exactly right. I don't understand the dichotomy that text-based programmers set up.

I have written and maintained LabVIEW applications that exceed 1,000 VIs. I would argue that such well-architected applications are actually easier to maintain, for all the reasons people praise functional languages for. The reason is that LabVIEW is a dataflow language, so all the benefits of immutability apply. Most data in LabVIEW is immutable by default (to the developer/user). So the reasons why people prefer languages like F#, Elixir, Erlang, Clojure, Haskell, etc. overlap with visual languages like LabVIEW. I can adjust one portion of the program without worrying about side effects, because I'm only concerned with the data flowing into and then out of the function I am editing.
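
As a rough textual analogy (my own sketch, nothing to do with LabVIEW's actual API): in dataflow style every stage only sees its inputs and returns fresh outputs, so editing the inside of one stage can't reach out and break another.

  #include <cmath>
  #include <cstdio>
  #include <vector>

  struct Reading { double volts; };

  // Pure stage: value in, new value out -- like a wire between nodes.
  Reading scale(Reading r, double gain) { return {r.volts * gain}; }

  // Pure stage: reads its input, mutates nothing outside itself.
  double rms(const std::vector<Reading>& xs) {
      double sum = 0.0;
      for (const Reading& r : xs) sum += r.volts * r.volts;
      return std::sqrt(sum / xs.size());
  }

  int main() {
      std::vector<Reading> raw = {{1.0}, {-2.0}, {2.0}};
      std::vector<Reading> scaled;
      for (const Reading& r : raw) scaled.push_back(scale(r, 0.5));
      std::printf("RMS = %f\n", rms(scaled));
      return 0;
  }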

Somehow people form the opinion that once you start programming in a visual language that you're suddenly forced, by some unknown force, to start throwing everything into a single diagram without realizing that they separate their text-based programs into 10s, 100s, and even 1000s of files.

Poorly modularized and architected code is just that, no matter the paradigm. And yes, there are a lot of bad LabVIEW programs out there written by people new to the language or undisciplined in their craft, but the same holds true for stuff like Python or anything else that has a low barrier to entry.


> Poorly modularized and architected code is just that, no matter the paradigm. And yes, there are a lot of bad LabVIEW programs out there written by people new to the language or undisciplined in their craft, but the same holds true for stuff like Python or anything else that has a low barrier to entry.

That’s very insightful, and I think nails part of my bias against specifically LabVIEW, as well as other novice-friendly languages. My few experiences with LabVIEW were early on in my EE undergrad. At that point I had been programming for about a decade and had learned a ton about modularity, clean code, etc. The LabVIEW files were provided to us by our professors and, still being a naive undergrad, I assumed our profs would be giving us pristine examples of what these tools could do; instead, to my programmer brain, what we had was an unstoppable dumpster fire of unmanageable “visual code” that made very little sense and had no logical flow. Turns out it’s just because our profs might be subject matter experts on a particular topic, and that topic wasn’t “clean long-term LabVIEW code”. Later on I ran into similar problems with MATLAB code that our profs handed out, but by that time I had clued into my realization. At one point I was accused by my Digicom prof of hardcoding answers because there’s no way my assignment should be able to run as quickly as it did. (I had converted a bunch of triply-nested loops into matrix multiplications and let the vectorization engine calculate them in milliseconds instead of minutes)

Just like LabVIEW, my bias against PHP comes from the same place: it’s obviously possible to write nice clean PHP, but every PHP project I’ve had to work on in my career has been a dumpster fire that I’ve been asked to try to fix. (I fully admit that I haven’t tried to do a greenfield PHP project since about 2001 or so and I’m told the ecosystem has improved some...)

I lucked out with Python and started using it “in anger” in 2004, when it was still pretty niche and there were large bodies of excellent code to learn from, starting with the intro tutorials. Most of the e.g. PHP tutorials from that era were riddled with things like SQL injections, and even the official docs had comment sections on each page filled with bad solutions.


What did the peer review process look like? This is part of the complaint about maintainability.


It looked like any other code review process. We used Perforce. So a custom tool was integrated into Perforce's visual tool such that you could right-click a changelist and submit it for code review. The changelist would be shelved, and then LabVIEW's diff tool (lvcompare.exe) would be used to create screenshots of all the changes (actually, some custom tools may have done this in tandem with or as a replacement of the diff tool). These screenshots, with a before and after comparison, were uploaded to a code review web server (I forgot the tool used), where comments could be made on the code. You could even annotate the screenshots with little rectangles that highlighted what a comment was referring to. Once the comments were resolved, the code would be submitted and the changelist number logged with the review. This is based off of memory, so some details may be wrong.

This is important because it shows that such things can exist. So the common complaint is more about people forgetting that text-based code review tools originally didn't exist and were built. It's just that the visual ones need to be built and/or improved. Perforce makes this easier than git in my opinion because it is more scriptable and has a nice API. Perforce is also built to handle binary files, which is also better than git's design which is built around the assumption that everything is text.

I think there's a lot of nice features to be had in visual programming languages with visual compares. Like I said in another comment, I view text-based programming as a sort of 1.5 dimensional problem, and I think that makes diff tools rather limited. If you change things in a certain way, many diff tools will just say you completely removed a block and then added a new one, when all you did was re-arrange some stuff, and there's actually a lot of shared, unchanged code between the before and after blocks. So it's not like text-based tools don't have issues.


It may be that I'm just not a visual person, but I'm currently working on a project that has a large visual component in Pentaho Data Integrator (a visual ETL tool). The top level is a pretty simple picture of six boxes in a pipeline, but as you drill down into the components the complexity just explodes, and it's really easy to get lost. If you have a good 3-D spatial awareness it might be better, but I've started printing screenshots and laying them out on the floor. I'm really not a visual person though...


In my experience with visual programming discussions, people tend to point at some example code, typically written in a visual programming tool not aimed at professional software engineers (tools aimed at musicians, artists, or non-programmer scientists, usually). Such code is generally badly factored and doesn't follow software engineering principles (like abstraction, DRY, etc.), because the non-programmers in question never learned those principles, and it gets held up as proof that visual programming is bad. People forget that the exact same problems exist (sometimes much worse, IMHO) when these same non-programmers use textual languages.

We're just used to applying our software engineering skills to our textual languages, so we take it for granted, but there exists plenty of terribly written textual code that is just as bad or sometimes worse than the terribly written visual code. That's why sites like thedailywtf.com exist!


Yes, the flexibility of visual programming becomes complex and hard to understand when you try to scale up and have more people involved.


You know, I've encountered various systems in my life that I thought were a silly way to do something.

And then I set out to replace them with something "better". And during the process of replacing, I came to understand and form a mental map of the original system in my mind. And at some point I realized that the original system was actually workable. And it was easier to just use the old system now that I understood it enough.


> I think the short answer is that boxes and wires actually become harder to manage as the software increases in complexity. The "visual is simpler" idea only seems to hold up if the product itself is simple

Why? Can you give concrete examples of this? I see this sentiment a lot but never any particular examples.

When you start off with a prototype in a text-based language for something "simple", you don't extend it by continually adding more and more input arguments and more and more complex outputs. You break out the functionality and organize it into modules, datatypes, and functions.

The same goes for a visual programming language like LabVIEW. When you start adding functionality and move beyond simple designs, you need to modularize. That reduces the wires and boxes and keeps things simple. In fact, I liken well-done LabVIEW to well-done functional code in Racket, Scheme, F#, SML, etc., where you keep things relatively granular and build things up. Each function takes in some input, does one thing, and returns outputs. State in LabVIEW is managed by utilizing classes.
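
A tiny made-up illustration of that (my sketch, not LabVIEW): give the growing data a type and keep each function doing one job.

  #include <cmath>

  struct StickInput { float x; float y; };
  struct Teleporter { float yaw; bool active; };

  // One job each; inputs in, fresh outputs out.
  bool inDeadzone(StickInput s, float deadzone) {
      return std::fabs(s.x) + std::fabs(s.y) < deadzone;   // taxicab distance
  }

  float yawFromStick(StickInput s) {
      return std::atan2(s.y, s.x);
  }

  Teleporter rotated(Teleporter t, float yaw) {
      return {yaw, t.active};
  }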



