Guys, I hope you keep in mind that this is an esolang: its intention is not to be practical, but interesting. And I think it succeeded at that. I also recommend a look into the repo containing the parser, which could be interesting. For me it was, because I had never really thought about how to parse diagrams.
No need for speculation. Go ahead and say it: 124 points on HN's front page objectively proves it was successful in being interesting [to more than a handful of people]. Exploration into minimalist languages sometimes leads to interesting, practical constructions as well, as seen with Lisp, Forth, and the lambda calculus. As far as time-wasters go, such experimentation has a higher chance of going somewhere than some others.
I get that a submission like this is not intended as a suggestion for a practical tool, but as a curiosity and invitation to explore.
It's great as a first attempt to delve into the strengths and shortcomings; you can't learn what won't work until you try. The author should explain the motivations behind this design, and could take hints from other similar languages to improve it.
IMHO the notation has several shortcomings that would make it very hard to use in practice, and it's not the best strategy for working with functions.
Maybe it's my fault for not seeing the purpose of this approach; but in my experience, free-form graph-based languages work best when they emphasize working on collections and data streams, rather than on scalar function parameters. For the kind of example problems shown in the article, a tree-based notation would work better.
Not really to explore; the intention is nothing but to be esoteric. Suggesting things to make this lang easier to use (i.e. less esoteric) is akin to suggesting adding names to brainfuck commands to make it more readable.
Timwi does have some other useful projects though (take a look at his github), all of which are basically centered around parsing. (an IL debugger, a C# parser, generic regular expressions...)
You're right, I didn't get that this was intended as esoteric. Actually, the parser for the language, if it works straight from the ASCII art and not an intermediate representation, must be awesome. This also explains why the visual syntax is much better suited to a computer than to human readers, though. :-)
I'm always torn when I see visual programming languages (e.g. DRAKON, LabView, Simulink...). Obviously, they don't represent the way a computer works very well. Simple visual programs are arguably easier to understand than their text-based counterparts, but large programs quickly become a complete and utter mess of tangled wires and hierarchical blocks.
The deal breaker for me (having worked on a large LabView program that controls machinery for a wind energy system) is the lack of reasonable source control support and the fact that it is very hard to structure a large program in a comprehensible manner.
While I don't disagree with your comment, it's a little ironic that hardware is designed visually. Instead of writing a netlist by hand, it's drawn as a schematic. The same applies to mechanical, construction or any traditional engineering fields. LabView targets engineers in those fields rather than those in software.
So, you are right that visual programming doesn't represent how computer software works. But it's exactly how computer hardware works.
Don't most use things like SPICE and Verilog? The reason a schematic comes into play is that there are physical and spatial constraints in hardware implementation. However, that's not at the logical level. Pretty sure you don't use any visual language for FPGAs.
"Pretty sure you don't use any visual language for FPGAs."
There are physical and spatial constraints in efficient implementation on FPGAs. Although I don't use them, I saw plenty of examples in my research of people looking at them visually and changing representations to make them more compact or something. Modifying the physical layout outperformed whatever the initial synthesis produced.
LabVIEW can also target FPGAs. This capability has been around for ~10 years now. Interestingly, LabVIEW and FPGA targets get along quite well; the structure of LabVIEW programs lends itself to being mapped to an FPGA's HDL (see http://www.ni.com/fpga/)
In fact, my sentiment extends to hardware design, which is why I've been toying (and building real stuff for work) with SKiDL lately, a Python extension that allows for text based schematic capture for printed circuit board design.
Of course, the actual routing of the traces still must be done by hand. However, during schematic capture, instead of drawing the same Zener diode input protection 100 times with a mouse, I just call the code that builds it in a loop.
I really meant circuit/physical design rather than high-level design, i.e. pre-tapeout. Also, PCBs are visually designed, though I learned about the Python approach above and need to check that out.
Those things are inherently mechanical, so there will always be a last visual step before production. That does not mean people don't try to push non-visual tools as far as they can.
We have better abstractions for textual programming because there has been much more research poured into them, but there's nothing preventing visual languages from being used at higher abstraction levels - they don't necessarily represent the computer circuit level.
The typical strongest point of visual languages is that you can represent relations by proximity. This way, you can directly indicate the connection between a caller and its callees with an anonymous connector line, whereas in text you would need to declare a variable name, pass it as a parameter, and assign it to a local variable. This makes moving data between components much more verbose in text than in graphs.
Textual programs try to approximate this relations-by-proximity with indentation levels, but the expressive possibilities in a visual language are stronger. This also makes them more difficult to lay out and organize, though. But I foresee that in the future, better support in IDEs will make it easier to mix visual and textual languages.
It is worse than that: we have 100k years invested in language and 5k years invested in writing language down. Pure visual communication went out of style with the cavemen and was never very effective beyond being used as supplementary material.
Visual has some large advantages as well as drawbacks. It uses a different part of the brain for reading that works more in parallel, and can actually augment your short term memory. However, making visual work requires taking advantage of its concreteness, which textual language styles can't do very well.
> arguably easier to understand than their text-based counterparts, but large programs quickly become a complete and utter mess of tangled wires and hierarchical blocks.
Or the other way around: When you write a tangled mess of code you can still navigate it with /bin/grep, this gets much harder when your code is a picture (though, when stored in SVG, it should be trivial to write a /bin/svg-code-grep). BUT: If you follow pure functional design principles, you rarely have to keep more than one screenful of diagrams in your head.
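A minimal sketch of what that hypothetical svg-code-grep might look like, assuming (purely for illustration) that node labels live in SVG `<text>` elements; the sample SVG and the function name are made up:

```python
import re
import xml.etree.ElementTree as ET

SVG = """<svg xmlns="http://www.w3.org/2000/svg">
  <g id="node1"><text>factorial</text></g>
  <g id="node2"><text>multiply</text></g>
</svg>"""

def svg_grep(svg_source, pattern):
    """Return the content of every <text> element whose label matches the pattern."""
    ns = "{http://www.w3.org/2000/svg}"
    root = ET.fromstring(svg_source)
    return [el.text for el in root.iter(ns + "text")
            if el.text and re.search(pattern, el.text)]

print(svg_grep(SVG, "fact"))  # ['factorial']
```

The point being: once the picture is stored in a structured format, text search over the labels is trivial; it's opaque bitmap/binary formats that kill grep.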
I think we have to differentiate between "general purpose visual programming languages" and "visual programming languages actually used".
DRAKON, or rather its visual aspect, was, afaik, initially developed to allow physicists and chemists to model their domain-specific knowledge in a much more intuitive way, and I think it overall succeeded there. The few snippets I saw were very convincing demos[1].
Of all the attempts at purely visual, general-purpose programming languages, I cannot remember one that did not fall into one of these two camps:
* academic exploration of visual programming languages - nobody expected tangible results but we got squeak, etoys and drakon
* attempts to make programming easier by people who know just enough of programming to understand it is hard[2] - they usually promise tangible results by yesterday but never deliver anything that could count as a practically usable language
I've also worked with LabView and completely agree with your assessments.
I'd also add that the line between a simple project, which reasonable people might agree is well suited for LabView, and a large project, destined to eventually become 'a complete and utter mess of tangled wires and hierarchical blocks', is not at all obvious or delineated... and that's when the suffering really begins.
Indeed, I guess. But really, that's the fault of the editors, not the concept. Xilinx's graphical editor downright crashed frequently. The professional developers used VHDL or Verilog instead, obviously (and even that would have been subpar because of a half-assed Linux port, I bet).
What I mean is, if the devs had to dogfood their own product, if LabVIEW were self-hosting, it could become a lot better. But in the background it's all just lowered to an intermediate programming language; the front-end is just grafted on.
It's a marketing tool, just look at the camel case name. I hate it with a passion every time it comes up.
Edit: I said "just" a lot, although apparently it's quite good for its intended purposes, so take that with a spoon of salt.
I had never thought about the importance of being self-hosting like that.
I missed the point about the camel case name. In any case, it's marketed as LabVIEW (VIEW standing for Virtual Instrument Engineering Workbench), not LabView.
Probably it's not intended as a full development environment, more like a scripting language.
The name just strikes me as unprofessional, or at least not very serious, just like the programming environment that it is, and I guess that's not even a bad thing if it's intended, considering the target group (technically inclined but uninitiated, or at least slow-with-a-keyboard, kind of lab rats).
The clear advantage of text-based programming languages for complex programs is that all the wires and arrows are represented by symbolic names. Instead of drawing a wire, you simply use an entity with the same name (be it a variable, a function, or a data type). This is much closer to natural languages than a tangled mess of wires which becomes unmanageable after a certain level of complexity.
Having recently finished playing MHRD I think there are contexts where the graphical representation is a lot more clear. Granted, that game's language is at a very low level where you write your wires kinda DOT-esque, but I basically had to draw everything from the ALU onwards on paper first.
i don't think it's obvious at all that visual programming languages don't represent how a computer works, but actually, i don't even see how that matters at all.
for labview, i feel large programs benefit tremendously from its dataflow nature. every labview project i have worked on has exceeded 1,000 VIs, and it's often very easy to componentize things such that local changes stay local.
labview works fine with source-code control. the diff and merge story could definitely be better, but they are at least usable, particularly the diff.
Source control shouldn't be a deal breaker, I feel. The on-disk representation would just need to be optimised for source control. For example, an AST in the form of s-expressions could represent the visual structure of the program and would be amenable to VCS manipulation.
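A hedged sketch of what such an on-disk form could look like, using a made-up tuple encoding of a small dataflow graph (all names here are invented for illustration):

```python
def to_sexpr(node):
    """Serialize a (op, child, child, ...) tuple tree as an s-expression string."""
    if isinstance(node, tuple):
        return "(" + " ".join(to_sexpr(child) for child in node) + ")"
    return str(node)

# A tiny dataflow graph: multiply(n, factorial(decrement(n)))
graph = ("multiply", "n", ("factorial", ("decrement", "n")))
print(to_sexpr(graph))  # (multiply n (factorial (decrement n)))
```

Because the serialized form is line-oriented text with stable structure, ordinary diff/merge tools can operate on it, even though the user only ever edits the picture.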
I really like to use DRAKON to visualize algorithms. When I had a course that went through algorithms last year, I wrote the ones we used up in DRAKON. I felt it gave a good overview of each algorithm.
>I'm always torn when I see visual programming languages (e.g. DRAKON, LabView, Simulink...). Obviously, they don't represent the way a computer works very well.
Citation needed. A graphical language can "obviously" represent anything a text-based language can.
One could just as validly make the much-overused "citation needed" call on the assertion in your second sentence - except that we can reasonably require a purely graphical response in this case.
What citation do you need? At the very least, a visual environment could always trivially display each syntax/AST node, which will be the same as seeing a plain text version of the code (but better -- closer to something like Apple's Dylan IDE or Smalltalk environments).
Text has been part of graphical representations for millennia.
The cool part is that you get to decide WHAT text you get to see -- you're not just left with seeing the lower level code.
It could be that, but it can also be boxes representing objects or functions, program modules, interacting programs, at any level of abstraction one likes.
Stripping the text out is a way to see how much information the graphical element conveys.
By allowing arbitrary text in so-called graphical languages, you have turned your original statement into a vacuous truth - an empty tautology - as all text can be diagrammed.
Exactly. A graphical representation and a syntactic representation of code will both have text in them. They just have different syntax for equivalent structure.
The fact that you have to stretch for that analogy, rather than name a visual language that offers more than just an incremental improvement to the process of reading text, suggests that the alleged power of visual programming is mostly hypothetical at this point.
This is almost timely, following the recent 'Why Does Visual Programming Suck' post [1]. Funciton did not get mentioned in the ensuing discussion.
My first impression, from the factorial function example, is disappointment - it does not seem to promise any improvement in clarity, though that may be due to its unfamiliarity.
Um... no, they haven't. Function block diagrams do not support recursion. Unlimited recursion basically means a graph's ability to modify itself at runtime -- i.e., change the wiring.
Generally not. PLCs are designed to be as reliable and fault tolerant as possible, so they don't allow recursion (along with a bunch of other things, like direct memory access with pointers[1]) since it's so easy to accidentally make something either take a really long time, or fail to terminate. You can fake it by piping an output into a variable which is also used as an input, but this makes it explicit that the last value is used then the next value stored, rather than multiple steps of recursion happening in one scan.
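A rough Python model of that scan-cycle workaround (the function name and structure are invented for illustration): each pass of the loop plays the role of one PLC scan, reading the value stored on the previous scan instead of recursing, so each scan does exactly one step and always terminates quickly:

```python
def factorial_scans(n):
    """Compute n! one 'scan' at a time: acc and i are retained variables,
    each loop iteration models a single PLC scan cycle (no recursion)."""
    acc, i = 1, n      # values that persist between scans
    scans = 0
    while i > 1:       # one loop body == one scan
        acc, i = acc * i, i - 1
        scans += 1
    return acc, scans

print(factorial_scans(5))  # (120, 4)
```

This makes the cost per scan explicit and bounded, which is exactly the property a reliability-focused PLC runtime wants, in contrast to a recursive call whose depth (and stack use) depends on the input.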
Generally the kind of software you develop in structured text doesn't need recursion anyway.
These copypastas (whether for visual programming languages or approaches to solving spam or whatever else), while witty the first time one sees them, become irrelevant very fast, as they only provide generic, blanket criticisms of a design with no consideration for the tradeoffs involved or why they were made.
In other words, you could post such a takedown for plenty of visual programming languages that have very satisfied users who are made very productive by their use. Does that mean they “won’t work”? Clearly not.
These text templates are very easy to post - all they require of you is to skim the page, add an X to the lines where it felt like there was sort of a match, even if you don't really know, and click "post" - but they provide little to no value.
In case that's not clear: this one was freshly made, specifically for this thread, tailored to the strengths and weaknesses of the Funciton language.
I chose the "flame form" format as a tribute to usenet, and for being eye-catching. It's an easy way to digest the cognitive dimensions framework applied to a practical case, which I wanted to spread. Hacker News is a good place to divulge relevant concepts to people interested in visual languages.