A "data flow language" that only visualizes functions isn't. IMHO.
I agree with most of the perennial discussion that takes place here for all of these projects: text is good, but not for everything and everyone, yet graphical programming has never taken off... why?
Because if people want a data flow, they care more about the data than the flow. Spreadsheets have secured a rock-solid place in the "lingua franca" of userland, without ever visualizing the "connections," which are seldom enlightening.
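The spreadsheet point can be made concrete. Here is a minimal sketch (in Python; all names are my own invention, not any real spreadsheet's API) of cells that re-derive their values from data dependencies, with no visible "wiring" at all:

```python
# Minimal spreadsheet-style dataflow: each cell is either a value or a
# formula over other cells; reads recompute lazily from dependencies.
class Sheet:
    def __init__(self):
        self.cells = {}

    def set(self, name, value):
        """Store a constant, or a formula (a callable taking the sheet)."""
        self.cells[name] = value

    def get(self, name):
        cell = self.cells[name]
        return cell(self) if callable(cell) else cell

sheet = Sheet()
sheet.set("A1", 10)
sheet.set("A2", 32)
sheet.set("A3", lambda s: s.get("A1") + s.get("A2"))  # =A1+A2
print(sheet.get("A3"))  # 42

sheet.set("A1", 100)    # the "flow" is implicit: A3 just re-derives
print(sheet.get("A3"))  # 132
```

The user only ever sees the data; the dependency graph exists, but nobody asks to have it drawn.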
I would not criticize something for failing to achieve what it doesn't attempt, and FMJ may be fantastic for developing logic. But it calls itself a "visual dataflow language," and I'm not seeing any data. (Yes, code is data in lisp, that's not what I mean.)
To visualize code is to focus on the means to the end. I would respectfully disagree that "21st-century" programming will resemble the static modelling of systems represented by code in whatever form. Systems will be "live" as a rule. And that is what will improve the lives of programmers. Remember that "interactive computing" was once a radical idea, the realization of which required people to spend their lives and resources fighting for it (Licklider, Engelbart, and Bob Taylor, to name a few). Well, if you feel any resistance to the idea of relinquishing your rinse-and-repeat development cycle, I have two words for you: punch cards.
I hope I am not trolling here. Just this month I've been "sniped" by a data flow side-project that has me thinking about these things, and I'm extremely interested in how other people approach the subject (and their discussions here).
Edit: grammar, wording
> Remember that "interactive computing" was once a radical idea, the realization of which required people to spend their lives and resources fighting for it (Licklider, Engelbart, and Bob Taylor, to name a few).
Yeah, but the funny thing is that this radical idea only flared up for a while, and then was extinguished. The present world, with all the hip languages we use at our $dayjobs, is pretty far behind what Engelbart et al. were working on. Even the present Lisp (and Smalltalk) systems are mere shadows of the "interactive computing" of the past. People are slowly rediscovering the concept again, though, gently pushed by visionaries like Bret Victor.
I 100% agree with your comment about data. The most successful dataflow languages I've heard of were always tied to some particular data processing tasks - like music and general DSP. It's a good driving force that helps immediately verify the feasibility of the language and the environment.
Directed graphs are the most general data structure, and the language is homoiconic. So data definitely will be represented graphically: not just directed graphs, but other structures such as arrays.
But there's only one of me, and if I'd done this already, something else wouldn't have got done. However, it is high on my agenda, and will be done soon.
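In the meantime, the idea is easy to prototype. Below is a sketch (my own encoding in Python, not FMJ's actual representation) of a dataflow program as a directed graph, evaluated by pulling values through the edges:

```python
# A dataflow program as a directed graph: vertices are operations,
# edges carry values from a source vertex to an input port.
import operator

OPS = {"*": operator.mul, "+": operator.add}

# Vertex -> (operation, [input edges as (source, port)])
graph = {
    "mul": ("*", [("x", 0), ("y", 1)]),
    "add": ("+", [("x", 0), ("mul", 1)]),
}

def evaluate(graph, inputs, target):
    """Recursively pull values through the graph (memoization omitted)."""
    if target in inputs:
        return inputs[target]
    op, edges = graph[target]
    args = [evaluate(graph, inputs, src) for src, _port in edges]
    return OPS[op](*args)

print(evaluate(graph, {"x": 3, "y": 4}, "add"))  # 3 + 3*4 = 15
```

The point is that the graph structure itself is ordinary data, which is exactly what makes a homoiconic, graphically-represented language plausible.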
I tried to commercialize this technology in 2010, with Cloud Stenography (a joke name): https://github.com/rjurney/Cloud-Stenography https://vimeo.com/6032078
More recently, a company has done some similar stuff, albeit for Spark, in Seahorse: http://seahorse.deepsense.io/
My long-running personal project MFP looks at basically the same problem as FMJ -- visually representing data flow programs with a "real" language under the hood -- but using Python as the underlying language runtime and adopting the graphical conventions of Pure Data.
Great to see interest in dataflow programming. It really is a good way to turn your programming mind inside out.
To make a new system you need two things:
1- Completely rethink and redesign how things are done.
2- Modify all the tools in use today. This is the biggest part by far. The amount of work is so great that most people will just choose to keep evolving the current system instead.
E.g. to make your own visual language you not only have to develop the language itself, but also make the compiler visual, and the debugger visual; yet the tools you could modify, like LLVM, are text-based and terminal-based. Modifying LLVM for this is hell. And consider yourself lucky if it's LLVM you have to modify, because in the past it was gdb, with its monolithic design and no libraries.
An example of a redesign was the Xerox PARC GUI. Instead of a one-input, one-output development system in the terminal, they added 20 or 30 inputs and 20 or 30 outputs. For this they had to use a new programming concept, object-oriented programming (without it, handling 20 inputs and outputs at the same time becomes so hard it is in practice impossible), which in turn forced them to develop their own tools in order to see object-oriented data structures and methods.
Any modern UI development inherits from Xerox design.
Another paradigm change was the iPhone: it forced every smartphone to have hardware-accelerated video and a touchscreen, and used them as input methods.
People like Bret Victor and the MathBox creator are trying to redesign the whole concept using new tools like GPU programming and video displays. I agree with them. The current system is deeply wrong, even if it is the only thing we have.
There are graphical techniques, but they all work in very specific and limited domains. In the wild, even flowcharts that aren't meant to represent a process that has dynamic state are frequently tagged with annotations for explaining important details that are difficult or impossible to convey within the strictures of a flowchart's graphical language.
That's not to say that the idea of a visual programming language is not intriguing, or that it's a waste of time to explore them. But the reasons why it's an uphill battle go far beyond mere inertia.
After becoming a big Oberon and Smalltalk fan at university, I started collecting Xerox PARC papers and imagining what IT would have looked like if those ideas had managed to win the market the first time they tried to go commercial.
Instead we had to wait for Bret Victor and the MathBox creator, as you say, to bring those ideas back.
(A lesson from languages like Erlang or Scala would also be that having a company dedicated to marketing the living shit out of the language also helps.)
However, when you leave coder land and jump into user land, this turns on its head: overwhelming rejection of text-based programming languages and overwhelming embrace of visual programming languages.
In every piece of software that integrates an easy-to-learn scripting language like Python or Lua and later adds a visual programming language, users pick the visual language in the vast majority of cases.
A very powerful visual programming language is Unreal's Blueprints, which can completely replace the C++ API for creating games of any sophistication. As a matter of fact, as I have discovered, it's easier to find documentation on how to implement something with Blueprints than with C++.
It's just a matter of time until visual programming languages overtake text-based languages, because more and more apps prefer the visual coding route, since that is what users ask for.
Why? Because it's easier to learn.
It's easier to write, because there are no syntax or compiler errors.
But mostly it's easier to have fun with.
So if, for an experienced coder, a text-based language is all he wants, for the user the choice is a visual programming language any day.
C++ is a rather low bar.
> It's a just a matter of time ...
That sets the upper limit to infinity, so it's a non-statement.
> Because it's easier to learn
> It's easier to write, because there are no syntax or compiler errors.
To paraphrase Stephen Kell, Smalltalk lost because for whatever reason the people who hadn't yet "seen the light" continued to communicate and interoperate with each other.
Directed graphs look nice, but visual programming has an important limitation: the ratio of useful information to space is really small compared to text-based representations. What can be represented in 10 lines of code needs a lot of space on the screen when using visual programming.
With that said, I think there's a lot of room for improvement, and maybe a hybrid model, where the expressiveness of text can be extended with visual cues.
But I found it was really difficult to define the layout rules and constraints. Eventually you have to start doing some collapsing or shift things around into different dimensions (which is a kind of cool proposition). I did some other design docs and mockups along the way but was eventually discouraged by the actual compiler implementation: not really my area of expertise, and unfortunately I have another project taking up all of my time. Mainly I was interested in designing a language that would work well in VR. A nice property of a language like that is that it would work well with touch-screen interfaces (Touch Develop has done some interesting work here).
Not to say it's a thoroughly bad idea; I think there's a lot of room for improvement over the flat, plaintext canvas we work with.
 - https://en.wikipedia.org/wiki/Mathcad
Visual programs are a bit as if you were presented with a program that needs to be read bottom-up, right to left. Would that be somehow easier to understand? A visual language can presumably combine left-to-right, right-to-left, top-down, and bottom-up interpretations. While it's possible that such a model would be more expressive than conventional programs, that is not certain from the outset.
I'm trying to solve these problems with textgraph: http://thobbs.cz/tg/tg.html and while the ecosystem is still in diapers, I am finding that using a simple and LIMITED file format, which can do nothing more than use text-based edge labels and text-based vertex contents, makes development of the ecosystem much faster than if I were to go crazy with ecosystem features like edge colors, vertex box shapes, arrow shapes, etc.
Text is intrinsically one-dimensional, and that is perfectly fine for sequential execution of algorithms. But if you want to exploit MIMD concurrency, text limits you.
The way we work with text makes it easier to construct and layer abstractions. Having twenty thousand different boxes in a graphical language is not something we're used to in a way we can have twenty thousand different words in code (or in a sentence) - and if you start treating boxes as letters and grouping them, you're essentially inventing a textual language right there.
I'm not saying visual programming is a dead end - but I feel we need a much different style than just boxes with arrows. For one, text seems to exploit human ability to pattern-match much better than box diagrams; that's why we feel text is denser in information. Making diagrams more information-dense is not an area I recall to have seen explored in visual programming, but maybe we need to go there.
Anyway, best luck with the work on FMJ!
Guess what I miss the most when I use it?
And all the rest you get with text (versioning, global search & replace, static analysis...)
It was never my intention to develop a language that anyone could use. It's for programmers. You still need to write algorithms. It will attract different people, just like different text-based languages.
I am planning to incorporate automated testing into the language, but haven't decided upon the best way to do this yet.
Now that I think about this, it was exactly the same with a large Sharepoint installation (around 2009/2010?).
In my experience there is a world of difference (and a world of pain) between "this is a source file, you can compile or interpret it and if there is a mistake you will get an error message" and "this is the XML serialization of a complex hierarchy of objects and if you mistype a GUID somewhere you will only find out at runtime, usually because something not even remotely connected to your mistake starts behaving erratically".
I am not saying that your project has no merit or that it will fail, I am just commenting on why, in my opinion, visual programming never really became a serious contender for general purpose programming.
Typically developers care about this; business buyers don't, and they direct budgets and, in many ways, product prioritization. To a large extent the proprietary or obscure nature of integration DSLs never helped bootstrap any meaningful tooling for users.
That's changing though with developer-power driving prioritization and investment. In our case, we hope to make good progress with an upcoming project called Flogo. Currently in stealth and soon to be OSS.
Disclaimer: I work at TIBCO on integration
Trying and failing to go with option one is what leads to the XML mess you describe.
- "Find all places where I am using records of type X and replace that with a record of type Y"
- "where did variable RunningTotal gets initialized? when do we reset it"?
- "Can I write a simple script to verify which temporary variables are not removed from the dataflow after being introduced?"
(Debug in the more traditional sense works reasonably well in webMethods - but this was not the problem - the problem was that it is impossible to reason about a "program" as a series of instructions and analyze it as such, even if at the end of the day a webMethods flow service is just that: a series of steps that manipulate data in a tuple space).
So this is not a critique of webMethods (or FMJ) - I am explaining what the problems you encounter are when you try to use the graphical metaphor to work on reasonably complex programs.
Most of the successful examples cited so far seem to be mostly about audio manipulation. I suppose this means most of the time you compose smallish "scripts", you work alone and not in a team, logging is not an issue, building large, reusable components is not an issue, extending existing logic is not an issue, and so on.
Having also worked a lot with an ancient 4GL application (where the language was supposedly "more expressive") I can sum up all my doubts as follows:
Certain paradigms work well as long as the programs are written and maintained by a single developer to automate simple tasks but do not scale very well past that point
Anecdotally, I've also used webMethods, SSIS, and several other visual dataflow programming languages, and they look remarkably similar to FMJ.
> Each single graphical element had 2 or 3 different XML "source" files, distributed in different subdirectories
S-expressions with equivalent content are smaller and just as readable, and don't need an extra parser. XML is OK for text mark-up, but not as good for storing data (including programs).
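For a sense of the difference, here is the same record serialized both ways (the tags and field names below are illustrative, not FMJ's or webMethods' actual schema); the S-expression writer is a few lines, and the output is noticeably shorter:

```python
# One record, two serializations. S-expressions need only a trivial
# writer; the XML form repeats every tag name as a closing tag.
record = ("vertex", ("name", "add"), ("inputs", 2), ("outputs", 1))

def to_sexp(node):
    """Render a nested tuple as an S-expression."""
    if isinstance(node, tuple):
        return "(" + " ".join(to_sexp(n) for n in node) + ")"
    return str(node)

def to_xml(node):
    """Render the same tuple as XML: first element is the tag."""
    tag, *children = node
    body = "".join(
        to_xml(c) if isinstance(c, tuple) else str(c) for c in children
    )
    return f"<{tag}>{body}</{tag}>"

print(to_sexp(record))  # (vertex (name add) (inputs 2) (outputs 1))
print(to_xml(record))   # <vertex><name>add</name><inputs>2</inputs>...
```

Same information, but the XML version carries every tag twice, which is where the bulk comes from.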
I would be more than happy to explore viable alternatives to traditional text-based development but so far I haven't found anything that really impressed me.
On a side note I remember seeing this which is kind of a hybrid for clojure.
Or maybe because of the Deutsch limit (text is denser):
Interesting to me is that the Drakon editor was implemented in Tcl/Tk, a language I'm quite familiar with. I regard Tcl as a dialect of Lisp, which I've also used a fair amount, at least in the guise of Scheme.
As I'm not at all familiar with this new language, I can't say too much about it, though implementation in Lisp might be a plus. One issue with it, as with other efforts in this domain, may be that it's not as intuitive or obvious as its authors believe. IOW, in my admittedly limited experience with visual programming, there's more of a learning curve than advertised.
Visual "syntax" is sufficiently different from conventional text-based forms that people have learned to use, often at the cost of great effort and time. Translating the thought process to a visual metaphor isn't intuitive or easy as I've attempted to learn it.
What's odd is that I have ability and a fair amount of success in the visual art of printmaking, which one could say is an expression of a visual language itself, but one that doesn't map well at all to the visual coding of computer programs. No reason to think it necessarily would, but the observation points to an apparent multiplicity of "visual modes" embedded in our brains.
There are a TON of interesting '21st century' things being done with the "text-based languages" this website dismisses.
BTW, that website is so 20th century.
There were a lot of attempts at touch-based interfaces before someone finally got it right and everyone went "Oh, that's how that's supposed to work."
Edit: Too bad it hasn't been open-sourced. Not being open source is a death sentence for any tool.
Actually, I didn't always believe this. I saw, in the time around Longhorn, that Visual Studio was miles ahead of anything else, with its almost bug-free, very fast, easy-to-install tools that had every library preconfigured to provide autocompletion and documentation. I was surprised that the majority of Windows programmers I knew were using open source tools originally developed for Linux or the "Java ecosystem" on Windows. But that's the way it is nowadays. And Microsoft knows it too.
"Iteration uses two special linked vertices called start+ and repeat+. The start+ vertex has an equal number of inputs and outputs. When a value is received on every input of start+, each value is output on the corresponding output. The repeat+ vertex has the same number of inputs as start+, each of which has the same type. An edge connected to an input on repeat+ really sends its values to the corresponding input on start+.
So repeat+ is just an extension of start+, and has no independent existence. Its inputs can be treated as the inputs of the start+ vertex to which it is linked."
Reading this, I was curious how we disambiguate nests with multiple loops. For example, at http://web.onetel.com/~hibou/fmj/tutorials/DampedHarmonicOsc... the repeat+ on the right is connected to data from both start+ vertices. The types too are the same for both loops. How does the software decide which one to send data to? It must be using some information that isn't visible in the figure.
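For intuition, here is one procedural reading of the start+/repeat+ pair (my own interpretation of the quoted description, sketched in Python, and glossing over the multi-loop ambiguity raised above): start+ introduces the loop values, and anything sent to repeat+ becomes the next iteration's start+ inputs.

```python
# start+ receives the initial values; repeat+ feeds values back to the
# corresponding start+ inputs, so the loop body either repeats or exits.
def run_loop(initial, body):
    """body maps the current loop values to ('repeat', next_values)
    or ('exit', result)."""
    values = initial                 # values arriving at start+
    while True:
        action, payload = body(values)
        if action == "exit":
            return payload
        values = payload             # repeat+ sends them to start+ again

# Example: sum the integers 1..5, with loop variables (i, total).
def body(values):
    i, total = values
    if i > 5:
        return ("exit", total)
    return ("repeat", (i + 1, total + i))

print(run_loop((1, 0), body))  # 15
```

Under this reading, repeat+ is just a graphical alias for start+'s inputs, which matches the "no independent existence" phrasing.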
Edge colors are a crude indication of data type, e.g. Lists are red whatever their subtype, Booleans are green, and Integers blue.
There's no non-arbitrary connection between any of the colors and any of the ideas they represent, so if the user's just going to have to memorize them you might as well use symbols. At least then you can make use of more preexisting memorization and scale to whatever complexity is necessary.
Also, it's hard to predict how colors will be perceived. Between the renderer, the display, and the eye there's an unfair amount of variation. Patterns are better, but they have the same memorization problem as colors.
Does it store the underlying data in a textual representation so it's still GIT and Regexp friendly?
The underlying data is all binary, but LabVIEW has built-in diff tools that allow you to compare diagrams (the source code) between two VIs (LabVIEW files). As a result, you can use it with source control and coordinate multiple developers much like any other language. It isn't really necessary for the underlying data to be text. Search is also pretty decent.
The beauty of dataflow languages is that it frees the developer to think about what is really important while the runtime handles the nasty concurrency issues. Does this approach work for all problem domains? No, but it works fine for a lot of stuff. Nothing comes close to LabVIEW for making concurrency trivial.
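The principle in miniature (this is not LabVIEW's actual scheduler, just an illustration): two vertices with no edge between them have no ordering constraint, so a dataflow runtime is free to execute them concurrently, and the downstream vertex fires once both inputs have arrived.

```python
# Two independent "vertices" run concurrently; the consumer waits
# for both inputs, exactly as a dataflow join would.
from concurrent.futures import ThreadPoolExecutor

def double(x):
    return 2 * x

def square(x):
    return x * x

with ThreadPoolExecutor() as pool:
    # No edge between these two nodes, so no ordering constraint.
    a = pool.submit(double, 10)
    b = pool.submit(square, 10)
    # Downstream vertex: fires when both inputs are available.
    print(a.result() + b.result())  # 20 + 100 = 120
```

The developer only declared the data dependencies; the scheduling fell out for free.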
It's a pity that LabVIEW and programming languages like it haven't seen much mainstream adoption. In the right hands and in the right problem domain, a good LabVIEW dev can do things that would make other programmers' jaws drop.
Full Metal Jacket is a general purpose, pure dataflow language, supporting recursion, higher order functions, strong typing, type inference, and macros. It's entirely graphical, but it is possible to call Lisp.
It has a very simple, regular, syntax, with only vertices, edges, constants, and enclosures. No special syntax is required for conditions, iteration, or new type definitions. It is homoiconic, though I haven't yet made full use of that.
It is an alternative model of computation, potentially enabling different parts of an algorithm to run concurrently.
It is still under development.
It's implemented in Lisp, so the underlying data are Lisp data structures. Programs are stored as S-expressions, but they are not particularly easy to read or safe to modify.
I ended up doing roughly the same thing. The graph is based on text, stored in arrays in memory. I just convert to JSON to store on disk. You wouldn't want to mess with the JSON, but technically you could if you didn't make any mistakes.
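A sketch of that approach (field names are mine, not the actual project's): keep the graph as plain in-memory structures and round-trip it through JSON for storage.

```python
# Graph kept as ordinary dicts/lists in memory; JSON is just the
# on-disk serialization, and the round-trip is lossless.
import json

graph = {
    "nodes": [{"id": 0, "op": "const", "value": 2},
              {"id": 1, "op": "const", "value": 3},
              {"id": 2, "op": "add"}],
    "edges": [[0, 2], [1, 2]],   # [source node, destination node]
}

text = json.dumps(graph, indent=2)   # what goes on disk
restored = json.loads(text)
assert restored == graph             # nothing lost in the round-trip
```

As the comment says, you could hand-edit the JSON, but nothing about the format encourages it.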
You can't actually run the programs in parallel though, right? The structure is there but you'd need something on a chip to take advantage of it?
What you've built sounds very interesting, but this part is a problem you may need to solve before programmers will embrace it. If I can't use git with it, I can't develop any non-trivial software with it.
Technically, what git shows you is already a visual diff ;) and they like to draw the diff tree instead of describing it textually because it makes a lot more sense.
(a1 a2 (b1 b2))
Not available, apparently.
1. It reminds me of the graphical view that is present in the IDA disassembler in which functions are broken into "boxes" wherever there is a jump in control flow. If you're disassembling a C function, for example, this generally means that a box corresponds to some block of code that is delimited by curly braces. The point here is that you don't have a box for every single assembly instruction, because that would be ridiculous. Rather, the function is broken down into several "clauses" that express complex ideas, and then each clause is described textually by several lines of code.
For example, if I want to express something like:
(lambda (x y) (/ (* 2 x) (+ x y)))
2. Also, drawing inspiration from IDA's visual mode (as well as electronic circuit diagrams), lines should all travel at right angles. There should also be an auto-layout mode which attempts to lay out the graph so that there is a minimum number of edge crossings.
Text is universally understood: everybody knows at least how to type, and the essentials of how to read. Those constraints mean that one language can only be so different from another, making it easier to pick up multiple languages.
Most importantly, text has endured. ASCII was readable 20 years ago, and it will probably be readable in some form 20 years from now. There are many standard programs that can read and write the format. Just try reading an FMJ program if you don't have the code on hand, and the repo goes 404. Have fun reverse engineering.
Text-based languages can still be very different from one another. There's only the appearance of similarity now because most programming languages, and all mainstream ones, are descendants of ALGOL60, influenced to a greater or lesser extent by Smalltalk and Lisp. There's almost nothing in common between Prolog and C, for example.
How can you read any program without the (source) code on hand? The same problem occurs, for example, with Java byte code.
And programming as a directed graph isn't a bad idea, but there must be a canonical, easily readable, textual format, that is the DEFAULT, regardless of how you manipulate it.
I'd even argue that this is a prerequisite to making any visual language workable. No language that doesn't play nice with source control is going to get off the ground. If you also need to come up with a visual merge tool before anyone can use the language for anything complicated then you've got a minimum viable product that's infeasible.
One potential problem with this scheme that then pops to mind is a situation like the one that led to VB.NET's failure. If you've got two visually different languages that are otherwise functionally equivalent (for the most part), what's the use of learning both? Probably most people will just settle on the one you can't get by without knowing (in this case, the textual format) and not bother with the one that's not strictly necessary to know.
Most of these visual programming tools have long been using XML as a storage format, which is essentially equivalent to using S-expressions for it. You could edit the generated XML directly, but it's a horrible experience. It would make no difference to the end user, unless you generated S-expressions that were actually human-readable and editable Lisp code.
Even if XML isn't the cause of all the problems with the format's hand editing experience (it probably isn't), the decision to use XML is a clear indication that human edit/readability was, at best, an afterthought. Probably nobody spent any time worrying about making sure it was a good experience.
This is what I think we really need. A general-purpose programming language with first-class dual mode (visual and textual) editing. A Lisp/Scheme-like language would perhaps be best suited for this, although I've seen specific applications that did it with Java (GUI code generators) or SQL (query generators). It needs to have true two-way functionality so the programmer can switch back and forth at will--the visual editor has to generate code that a human can easily read and edit, and it also needs to be able to generate a readable visual representation for code that was written by a human. And if you make a syntax error while in textual mode, the IDE should warn you proactively that your code doesn't compile, instead of just waiting for you to switch back to graphical mode and breaking horribly, and so on.
Most of the characters are now in Unicode, but some have been deprecated in modern implementations because of the missing representations.
^ ~ | @ # * are not characters to generally be found in handwriting and not just natural inclusions in a character set.
The ^ and ~ characters are particularly hard to type on many European keyboards. My MacBook's Finnish/Swedish keyboard doesn't even show the tilde ~ character anywhere on the physical layout, and it requires three keypresses to type.
I'm working alone, without any backing.
Have you seen Clarity? Yours looks more interesting, but it's another visual programming language: http://www.clarity-support.co.uk/products/clarity/
There's a book on it too: https://www.amazon.com/Drawing-Programs-Schematic-Functional...
There might be some aspects that could be useful to your work.
Then this is not my cup of tea. I think the keyboard is still the best and most versatile interface to a computer.
In any case, most programming time is spent thinking. In comparison, very little is spent typing. If that's not the case, it's a sign the language is too verbose.