
'Visual Programming' failed (and continues to fail) simply because it is a lie; just because you surround my textual code with boxes and draw arrows showing the 'flow of execution' does not make it visual! This core misunderstanding is why all these 'visual' tools suck and don't help anyone do anything practical (read: practical = complex systems).

When I write code, for example a layout algorithm for a set of GUI elements, I visually see the data in my head (the GUI elements), then I run the algorithm and see the elements 'move' into position depending on their dock/anchor/margin properties (also taking into account previously docked elements' positions, the parent element's resize delta, etc). This is the visual I need to see on screen! I need to see my real data being manipulated by my algorithms and moving from A to B. I expect that with this kind of animation I could easily spot when things go wrong, seeing as visual processing happens with no conscious effort.
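For concreteness, here's a minimal sketch of the kind of dock pass I mean, in Python (the Rect/Element types and their fields are made-up illustrations, not any real toolkit's API; a real pass would also handle anchors and margins):

    from dataclasses import dataclass

    @dataclass
    class Rect:
        x: float
        y: float
        w: float
        h: float

    @dataclass
    class Element:
        dock: str            # "top", "bottom", "left", "right", or "fill"
        width: float
        height: float
        bounds: Rect = None  # computed position, filled in by the layout pass

    def dock_layout(parent, children):
        # Earlier children consume space first, which is why previously
        # docked elements affect later ones -- exactly the interaction
        # that is hard to see without watching the data itself move.
        free = Rect(parent.x, parent.y, parent.w, parent.h)
        for el in children:
            if el.dock == "top":
                el.bounds = Rect(free.x, free.y, free.w, el.height)
                free = Rect(free.x, free.y + el.height, free.w, free.h - el.height)
            elif el.dock == "bottom":
                el.bounds = Rect(free.x, free.y + free.h - el.height, free.w, el.height)
                free = Rect(free.x, free.y, free.w, free.h - el.height)
            elif el.dock == "left":
                el.bounds = Rect(free.x, free.y, el.width, free.h)
                free = Rect(free.x + el.width, free.y, free.w - el.width, free.h)
            elif el.dock == "right":
                el.bounds = Rect(free.x + free.w - el.width, free.y, el.width, free.h)
                free = Rect(free.x, free.y, free.w - el.width, free.h)
            else:  # "fill" takes whatever space is left
                el.bounds = Rect(free.x, free.y, free.w, free.h)

    parent = Rect(0, 0, 800, 600)
    kids = [Element("top", 0, 50), Element("left", 200, 0), Element("fill", 0, 0)]
    dock_layout(parent, kids)
    print(kids[2].bounds)  # Rect(x=200, y=50, w=600, h=550)

Watching each intermediate `free` rectangle shrink as the loop runs is precisely the animation I want, and no box-and-arrow view of the source gives it to me.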

Instead, visual programming assumes I want to see the textual properties of my objects in memory in fancy coloured boxes, which is not the case at all.




This seems pretty close to my experience. I remember thinking when I first encountered SQL Server Integration Services that this is 'visual programming'.

It didn't take very long to realise that, as a solution, SSIS was well suited to some simple tasks where the logical actions on screen matched those taking place in the database.

But as soon as those tasks became even slightly more complex, and this mirror was broken, the whole thing sort of fell apart. Then I was struggling to find ways to defeat the system to make it do what I wanted. It was with this realisation, i.e. that the solution was the problem, that I stopped using it.

But it's not just SSIS that suffered this way; ActiveBatch is another example, and I'm sure there are plenty more.


For fairly simple configuration tasks, IMO visual programming is a powerful tool for non-programmer end users to actually create programs that are not prone to syntax errors, misguided loop conditions, or the like. It also means one does not need to learn yet another syntax. True, the application domains are limited, but the node-based computational graph generation done in Blender and other visual software, for example, works quite well.
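Something like this toy evaluator is all a node graph is underneath (illustrative Python, not Blender's actual API): each node names its operation and its input nodes, and evaluation just walks the wires.

    # Each entry: node name -> (operation, inputs...)
    graph = {
        "value_a": ("const", 2.0),
        "value_b": ("const", 5.0),
        "sum":     ("add", "value_a", "value_b"),
        "scaled":  ("mul", "sum", "value_b"),
    }

    def evaluate(node):
        op, *args = graph[node]
        if op == "const":
            return args[0]
        vals = [evaluate(a) for a in args]
        return vals[0] + vals[1] if op == "add" else vals[0] * vals[1]

    print(evaluate("scaled"))  # (2 + 5) * 5 = 35.0

The end user only ever drags boxes and wires, so the syntax errors simply cannot happen: the structure is the program.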

It's not meant for professional programming, it's about empowering the end user with programmable tools.


But at what point does simple become complex? It's all well and good saying only use it for simple tasks, but when real-life complexity comes knocking, you end up with this sort of magic [1] that is far worse than any C++ I've seen in the wild.

[1] http://scriptsofanotherdimension.tumblr.com


People interested in VP really, really should learn about DRAKON[1], the Russian style guide for structured dataflows.

I've used it for real projects and it actually solves the spaghetti problem. Compare this [2] with this [3].

[1] https://en.wikipedia.org/wiki/DRAKON

[2] http://scriptsofanotherdimension.tumblr.com/image/1386972466...

[3] https://upload.wikimedia.org/wikipedia/commons/0/0f/Dutch_cr...


A DRAKON node does not seem to support multiple input values? That's why these visual graphs get convoluted: compute nodes have to deal with multiple input and output connections.


Good call. DRAKON is for control flow specification, not data flow specification. However, having a standard for splitting subsystems into separate modules and laying out components using regular rules would benefit any kind of visual graph.
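To make the distinction concrete (toy Python, illustrative names, not any real node API): a compute node is a function whose input ports can fan in from several upstream nodes and whose outputs can fan out, while a DRAKON diagram routes a single thread of control.

    def separate_rgb(color):       # one input port, three output ports
        r, g, b = color
        return r, g, b

    def brightness(r, g, b):       # three input ports fan in, one output
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    r, g, b = separate_rgb((0.5, 0.25, 1.0))  # three wires leave this node
    print(brightness(r, g, b))                # ~0.357

It's exactly those fan-in/fan-out edges that produce the crossings a single control-flow "skewer" never has to deal with.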


Visual programming excels at data wrangling (parsing structured data in an input/output pipeline that spits out the same filtered data in a new format and structure).
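The kind of thing I mean, as a toy sketch (made-up field names): read structured records, filter them, and emit the same data in a new shape.

    import csv, io, json

    raw = "name,age\nada,36\ngrace,12\n"
    rows = csv.DictReader(io.StringIO(raw))            # parse
    adults = (r for r in rows if int(r["age"]) >= 18)  # filter
    print(json.dumps([{"person": r["name"]} for r in adults]))
    # -> [{"person": "ada"}]

Each step is one box with one wire in and one wire out, which is exactly why visual tools handle this shape of program so well.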

You're right that until now its usage has been limited to domain-specific languages for automating applications. But with the emergence of functional programming in the mainstream and visual tools that follow that paradigm, I predict that soon there will be visual tools that allow quite general programming. (I've already seen working prototypes, and they deliver on most of the promises).


Watch Bret Victor's videos about Learnable Programming [1], in particular Stop Drawing Dead Fish [2]. Just because visual tools in the past were limited to putting properties in boxes doesn't mean that all visual tools have to be that way.

[1] http://worrydream.com/LearnableProgramming/

[2] https://vimeo.com/64895205


I have seen it, and while Bret Victor's 'research' is interesting, none of it actually scales to a practical level.


Because it is not engineered to industrial levels. It lacks encapsulation and packaging, but those can be added.

Have you seen the tools by Chris Granger (Light Table, Eve)? Although they are not as spectacular as Victor's, those show a lot of promise to become practical tools.


Bret Victor's ideas lack a lot more than that. Complex systems contain hundreds of subsystems, which in turn can contain hundreds of algorithms, which then contain many, many variables, all interacting in some way. His ideas simply don't scale to this level (he works with < 10 variables, enough for a nicely packaged demo).

Chris Granger himself stopped working on Light Table because he didn't believe in the project, and if you use it, the live editing gives minimal gains in productivity, as well as being buggy, slow, distracting, and potentially dangerous when blindly running half-complete IO operations.

As for Eve, well... his first release was a visual programming tool that inflated three-line SQL statements to take up a whole page with, you guessed it, boxes and arrows. Now he is working on a 'card wiki' idea which effectively tries to hide all the program logic behind pictures and hyperlinks. Neither is a good idea, and neither, once again, scales in any meaningful way.


Why do you think that doesn't scale? Hiding information is the basis of progressive disclosure, which is essential to handling the limitations of short-term human memory. Wikipedia is card-based,* and it is one of the five largest sites on the web. See Cognitive Dimensions [1] for ways to make visual systems scale.

As soon as you add encapsulation and abstraction, the system can scale; it's no different from hiding code logic behind function declarations and object schemas. If you add easy composition, like modern functional languages have, such systems could be more scalable than traditional imperative languages; code outliners like Leo follow the same structural principle, and they're great for programming.
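The composition primitive is tiny, whether you draw it or type it (a hedged sketch; this compose is hand-rolled, not from any standard library):

    from functools import reduce

    def compose(*fns):
        # Chain functions left to right: the output wire of one
        # becomes the input wire of the next.
        return lambda x: reduce(lambda acc, f: f(acc), fns, x)

    slugify = compose(str.strip, str.lower, lambda s: s.replace(" ", "-"))
    print(slugify("  Visual Programming  "))  # -> visual-programming

A visual tool only has to draw that same chain as three boxes in a row; the encapsulation story is identical to the textual one.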

[1] https://en.wikipedia.org/wiki/Cognitive_dimensions_of_notati...

* Ok, Wikipedia contains articles and not code, but my point is that the structure can handle lots of hierarchical content in multiple categories.


Wikipedia is also written in English, and made for human consumption. English is infinitely more scalable than any programming language, and the brain is far more capable of understanding than a computer. So comparing the two is unfair to say the least.

Additionally, simply adding 'encapsulation and abstraction' does not equal scale. Live programming, for instance, requires context and re-running of code on every edit (and we could be talking millions of lines of code that need to execute), re-reading of data files, re-running of prior user input, and so on. There's a reason it is only implemented for small systems (read: practically useless).


> English is infinitely more scalable than any programming language, and the brain is far more capable of understanding than a computer. So comparing the two is unfair to say the least.

IMHO programming a large system shouldn't be all that different from writing an encyclopedia, or rather a math textbook; the same combination of informal explanation and formal demonstrations could be the shape that "visual" programs take. Literate programming is written in English, and it provides many of the same advantages as the encyclopedia or technical book. Systems like IPython / Jupyter work in that direction, and they belong in the family of visual programming (although they use "classic" languages for the formal part, nothing stops them from also including graphical models as part of the code specification).

Programming languages should be made for human consumption, and one of the promises of visual languages is that they use secondary notation to convey the meaning of the program, so that parts that are not visible to the compiler are still relevant.

I believe that programming environments should converge towards a hybrid style of programming, using the best representation for each concept - like text (to represent abstract concepts), diagrams (to represent more physical properties - time, data flows, and dependencies between modules), and raw text for the human-readable comments that explain how everything is tied together. If you build a programming environment like that, my comparison is not unfair at all.

> Live programming, for instance, requires context and re-running of code on every edit (and we could be talking millions of lines of code that need to execute), re-reading of data files, re-running of prior user input, and so on. There's a reason it is only implemented for small systems (read: practically useless).

Modern IDEs are capable of doing a lot of that when in debug mode; I see no reason why having a language with a graphical syntax should make any difference - it just places a little more weight on the parser and the graphics card.


> Programming languages should be made for human consumption

The problem here is the "impedance mismatch" between humans and computers. Wikipedia is not a good example: it is exclusively for humans. If there is a "program" for the computer, it's minimal: "search this", "when I click here, take me to this page" (you'll notice everything more complex than that, even in Wikipedia, requires its own mini-language). These are the kinds of trivial systems which are easily encoded by "visual" programming.

The problem is that real computer systems need to be understood by the computer as well as by humans. In order for something to be understood by a computer, it must be specified in formal terms; English -- natural language, actually -- is terrible for this. Once you're working in a complex system with a formal language, the limitations of these "visual" tools become evident.


> Wikipedia is not a good example: it is exclusively for humans.

Not exclusively. Bots and Wikidata have shown themselves pretty capable of doing complex automated tasks on top of the structured data contained in the encyclopedia. And the wiki platform plus templates is a very general and flexible data structure that makes it easy for humans and software to collaborate.

> In order for something to be understood by a computer, it must be specified in formal terms

This doesn't need to be that way, at least not for 100% of the system. End-user development tools like spreadsheets and Programming By Example can be used to ease the impedance mismatch without requiring the human to specify every single step by hand in a formal language, so that partially ambiguous inputs can be used to specify valid programs.

Only the task of building system software should require the full technical approach; most business applications can benefit from looser approaches - in fact, being less strict can be a benefit in the requirements specification phase, just as User-centered design has shown us.

Combine that with logic solvers and deep learning, and in the future we could get some systems that can use very high-level commands to build fairly complex automated behaviors.


You have a point about wiki bots, I hadn't thought of that. But isn't the "Wiki platform plus templates" an example of a textual programming language with a precise syntax and rules?

As for the rest, I think it's a pipe dream except for the most basic, "build this out of predetermined blocks" apps.


Does the Visual Editor in Wikipedia count as a visual tool? :-P

My point is that the structure of a wiki, where each unit of content is a separate building block, can be used to build a visual system in the same way that you can build a textual one; the basic components of storage and APIs are the same, only the thin syntax layer is different. You are no longer dependent on a single stream of raw text following a strict syntax (the basis from which compilers generate binary code in traditional languages); you'd be able to build programs by arranging code and data in a network within a semi-structured platform, which is one of the characteristics of visual tools.

You're right that stuff like this has been a pipe dream for at least half a century, since the Mother of All Demos; but in recent years I've seen a lot of the required pieces falling into place, so I believe it to be possible.

If we apply the same amount of engineering work and refinement that went from Engelbart's demo to modern desktop systems, we could have a functioning system with this paradigm; the foundations are in place for it, and many of us have a vision of the benefits it could provide.

Of course, part of building a program may require describing some components using a textual representation, where it makes the most sense; but with a wiki-like "outliner" platform, parts of the working system could be programmed using visual code and integrated with those other components programmed in traditional languages. Heck, GUI builders and visual tools for database schemas already work that way; the only thing they lack is the capability to be expanded with new components, which this new family of visual tools has.


> Why do you think that doesn't scale?

Because it doesn't. Watch Bret Victor's last talk about Eve. All those ideas fail when you give them to novices and try to do anything more complex than describing a few static relationships.

Maybe there's a way, but I'm skeptical.


Yet end users are able to build fairly complex data structures and conditional processing using spreadsheets, so we know for sure that it can be done.

Of course experimental tools will be severely limited compared with industrial IDE systems that have been polished literally for decades; it will take years before we get these new tools into shape and learn what their best use cases are. But I'm not skeptical, because I've seen it happen before, and the conditions for it are now better than ever.


Spreadsheets IMHO are a solution space that hasn't been sufficiently explored. However, they are not particularly visual (especially if you try to do complex things with them), they are very much domain-specific, they are extremely computationally inefficient, they're very hard to version, and they become intractably complex much quicker than equivalent imperative or functional programs.

I've thought about what could be done to improve on their weaknesses (like strong typing, bridging the gap to databases, better table naming), and even wrote some code in this direction, but anything I can think of involves making them either less visual, less user-friendly, or both, and doesn't even begin to address the lack of generality.

Finally, I have to point out that some spreadsheets have been around for longer than many programming languages in current use, so they don't actually lack maturity; yet they have seen little improvement in over a decade.


Just because they haven't replaced coding in a text editor for all use cases doesn't mean they "failed". In my industry (PLC, building automation), visual programming languages are the norm and are extremely successful.

They significantly drop the barrier to entry and allow basic controlling of systems for non-programmers.
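A single rung of ladder logic, for example, is just a boolean expression over inputs driving an output coil; here's the classic start/stop motor latch as a toy Python sketch (illustrative names, not a real PLC runtime):

    io = {"start_button": True, "stop_button": False, "motor_running": False}

    def scan(io):
        # Rung: motor runs if (start pressed OR already running) AND stop
        # not pressed. The output feeding back into its own rung is the
        # "seal-in" latch.
        io["motor_running"] = (io["start_button"] or io["motor_running"]) \
                              and not io["stop_button"]
        return io

    print(scan(io)["motor_running"])  # True - the latch engages and holds

A technician can read that rung off the screen as wired contacts and coils, which is precisely what keeps the barrier to entry low.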



