Full Metal Jacket: A Visual Programming Language Based on Lisp (fmjlang.co.uk)
166 points by vmorgulis on July 16, 2016 | hide | past | favorite | 104 comments

Where's the data?

A "data flow language" that only visualizes functions isn't. IMHO.

I agree with most of the perennial discussion that takes place here for all of these projects: text is good, but not for everything and everyone, yet graphical programming has never taken off... why?

Because if people want a data flow, they care more about the data than the flow. Spreadsheets have secured a rock-solid place in the "lingua franca" of userland, without ever visualizing the "connections," which are seldom enlightening.
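A toy sketch in Python of the spreadsheet model described above: cells are recomputed from the cells they depend on, while the dependency "wiring" stays invisible to the user. (Obviously not how any real spreadsheet engine works; just the idea.)

```python
# Toy spreadsheet: a cell is either a constant or a formula over other
# cells.  Recalculation follows the dependency graph, but the user only
# ever looks at the values, never at the "flow".
def recalc(cells):
    values = {}

    def get(name):
        if name not in values:
            cell = cells[name]
            values[name] = cell(get) if callable(cell) else cell
        return values[name]

    for name in cells:
        get(name)
    return values

sheet = {
    "A1": 10,
    "A2": 32,
    "B1": lambda get: get("A1") + get("A2"),  # =A1+A2
    "C1": lambda get: get("B1") * 2,          # =B1*2
}
print(recalc(sheet)["C1"])  # -> 84
```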

I would not criticize something for failing to achieve what it doesn't attempt, and FMJ may be fantastic for developing logic. But it calls itself a "visual dataflow language," and I'm not seeing any data. (Yes, code is data in lisp, that's not what I mean.)

To visualize code is to focus on the means to the end. I would respectfully disagree that "21st-century" programming will resemble the static modelling of systems represented by code in whatever form. Systems will be "live" as a rule. And that is what will improve the lives of programmers. Remember that "interactive computing" was once a radical idea, the realization of which required people to spend their lives and resources fighting for it (Licklider, Engelbart, and Bob Taylor et al., to name a few). Well, if you feel any resistance to the idea of relinquishing your rinse-and-repeat development cycle, I have two words for you: punch cards.

I hope I am not trolling here. Just this month I've been "sniped" by a data flow side-project that has me thinking about these things, and I'm extremely interested in how other people approach the subject (and their discussions here).

Edit: grammar, wording

Thanks for your comment - it's insightful, and definitely not trolling.

> Remember that "interactive computing" was once a radical idea, the realization of which required people to spend their lives and resources fighting for it (Licklider, Engelbart, and Bob Taylor et al., to name a few).

Yeah, but the funny thing is that this radical idea only sparked for a while, and then was extinguished. The present world, with all the hip languages we use at our $dayjobs[], is pretty far behind what Engelbart et al. were working on. Even the present Lisp (and Smalltalk) systems are mere shadows of the "interactive computing" of the past. People are slowly rediscovering the concept again, though, gently pushed by visionaries like Bret Victor.

I 100% agree with your comment about data. The most successful dataflow languages I've heard of were always tied to some particular data processing tasks - like music and general DSP. It's a good driving force that helps immediately verify the feasibility of the language and the environment.

You certainly aren't trolling, and your point is valid.

Directed graphs are the most general data structure, and the language is homoiconic. So data definitely will be represented graphically: not just directed graphs, but other structures such as arrays.

But there's only one of me, and if I'd done this already, something else wouldn't have got done. However, it is high on my agenda, and will be done soon.

PigPen had this right... it visualized syntax that you typed. You could technically code with the mouse but that wasn't the intention of the tool. It made it easier to get a gestalt view of things, without hiding the essential details of operations. It was just another window into your code in Eclipse. I think Microsoft's IDE has similar stuff for OO code? But I've never used it.


I tried to commercialize this technology in 2010, with Cloud Stenography (a joke name): https://github.com/rjurney/Cloud-Stenography https://vimeo.com/6032078

More recently, a company has done some similar stuff, albeit for Spark, in Seahorse: http://seahorse.deepsense.io/

There's also Apache NiFi

Really interesting! Much of the active "prior art" in data flow languages is in audio and music, where systems like Max/MSP [1] and Pure Data [2] have made several generations of musicians into programmers without even telling them they were programming!

My long-running personal project MFP [3] looks at basically the same problem as FMJ -- visually representing data flow programs with a "real" language under the hood -- but using Python as the underlying language runtime and adopting the graphical conventions of Pure Data.

Great to see interest in dataflow programming. It really is a good way to turn your programming mind inside out.

[1] https://en.wikipedia.org/wiki/Max_(software)

[2] http://puredata.info/

[3] https://github.com/bgribble/MFP

See also OpenMusic[1], also based on Lisp. Also related, because of the data flow approach: Faust[2].

[1] http://forumnet.ircam.fr/product/openmusic-en/

[2] http://faust.grame.fr/examples/2015/07/29/simple-faust-examp...

Some people say there is a reason text is used in programming. Of course there is a reason: the terminal, which evolved from the teletype, which evolved from the typewriter and Morse code.

To make a new system you need two things:

1- Completely rethink and redesign how things are done.

2- Modify all the tools in use today. This is the biggest part by far. The amount of work is so large that most people will just choose to keep evolving the current system.

E.g. to make your own visual language you not only have to develop the language itself, but also make the compiler visual, and the debugger visual, yet the tools you could build on, like LLVM, are text-based and terminal-based. Modifying LLVM for this is hell. And consider yourself lucky if it's LLVM you have to modify, because in the past it was gdb, with its monolithic design and no libraries.

An example of a redesign was the Xerox PARC GUI. Instead of the terminal's one-input, one-output development model, they added 20 or 30 inputs and 20 or 30 outputs. For this they had to adopt a new programming concept, object-oriented programming (without it, handling 20 inputs and outputs at the same time becomes so hard it is in practice impossible), which in turn forced them to develop their own tools to inspect object-oriented data structures and methods.

All modern UI development inherits from the Xerox design.

Another paradigm change was the iPhone: it forced every smartphone to have hardware-accelerated video and a touchscreen, and used the touchscreen as the input method.

People like Bret Victor and the creator of MathBox are trying to redesign the whole concept using new tools like GPU programming and modern displays. I agree with them. The current system is deeply wrong, even if it is the only thing we have.

It's not just about VT100s. It's that text is far and away the go-to format for recording any information that's abstract or has narrative structure. It has been for a very long time.

There are graphical techniques, but they all work in very specific and limited domains. In the wild, even flowcharts that aren't meant to represent a process that has dynamic state are frequently tagged with annotations for explaining important details that are difficult or impossible to convey within the strictures of a flowchart's graphical language.

That's not to say that the idea of a visual programming language is not intriguing, or that it's a waste of time to explore them. But the reasons why it's an uphill battle go far beyond mere inertia.

I have been coding since the mid-80s, and after my GNU/Linux zealot phase I went back to the GUI worlds of Mac and Windows, because although they have their own set of issues, they are closer to the Xerox PARC ideas than any UNIX clone.

After becoming a big Oberon and Smalltalk fan at university, I started collecting Xerox PARC papers and imagining how IT would have looked if those ideas had managed to win the market the first time they went commercial.

Instead we had to wait for Bret Victor and the creator of MathBox, as you say, to bring those ideas back.

> 2- Modify all the tools in use today. This is the biggest part by far. The amount of work you have to do is so much that most people will just choose to continue evolving over the current system.

One thing I learned from the explosion of the JavaScript ecosystem is that this is a bit simpler than one may think - you just need to create a stable minimal base with a good selling point, and let the fans do the rest. If it gains initial momentum, the rest will get developed simultaneously by the world at large.

(A lesson from languages like Erlang or Scala would also be that having a company dedicated to marketing the living shit out of the language also helps.)

Try to draw a diagram that conveys exactly what you have written above. Text is a powerful medium for conveying abstract thought.

The problem is that we are missing a sufficiently expressive visual language. Given such a language, my guess is that OP could have conveyed his thoughts more clearly and concisely.

Praise text-based programming languages all you want and you will be correct. The programming languages used by coders are overwhelmingly text-based.

However, when you leave coder land and jump to user land, this turns on its head: overwhelming rejection of text-based programming languages, and overwhelming embrace of visual ones.

Whenever software integrates an easy-to-learn scripting language like Python or Lua and later adds a visual programming language, users pick the visual language in the vast majority of cases.

A very powerful visual programming language is Unreal's Blueprints, which can completely replace the C++ API for creating games of any sophistication. As a matter of fact, I have found it easier to find documentation on how to implement something with Blueprints than with C++.

It's just a matter of time until visual programming languages overtake text-based ones, because more and more apps prefer the visual route - that is what their users ask for.

Why? Because it's easier to learn and easier to write: there are no syntax or compiler errors.

But mostly it's easier to have fun with.

So if a text-based language is all an experienced coder wants, for the user the choice is a visual programming language any day.

> Blueprints, which can completely replace the C++ API

C++ is a rather low bar.

> It's just a matter of time ...

That sets the upper limit to infinity, so it's a non-statement.

> Because it's easier to learn and easier to write: there are no syntax or compiler errors.

AST-constrained editing (e.g. Lamdu [1]) will be very useful. But the lowest common denominator cannot be "overtaken". That is why stdio is a binary stream, webapps are compiled to JavaScript, and programs are represented as text. Any advanced programming language that can't be represented and manipulated by ubiquitous tools will lose to one that can, so text will never--yes, never--be replaced as the common vernacular, for better or worse.

To paraphrase Stephen Kell[2], Smalltalk lost because for whatever reason the people who hadn't yet "seen the light" continued to communicate and interoperate with each other.

1: http://www.lamdu.org/

2: https://www.cl.cam.ac.uk/~srk31/research/papers/kell13operat...

For power users, text-based tools will always be preferred.

Directed graphs look nice, but visual programming has an important limitation: the ratio of useful information to screen space is really small compared to text-based representations. What can be expressed in 10 lines of code takes up a lot of screen space in a visual program.

With that said, I think there's a lot of room for improvement, and maybe for a hybrid model, where the expressiveness of text is extended with visual cues.


I once mocked up what a hybrid model might look like: https://cdn-images-1.medium.com/max/640/1*a65r5zYR2hIc8UwfmF...

But I found it was really difficult to define the layout rules and constraints. Eventually you have to start collapsing things or shifting them around into different dimensions (which is a kind of cool proposition). I did some other design docs and mockups along the way, but was eventually discouraged by the actual compiler implementation - not really my area of expertise, and unfortunately I have another project taking up all of my time. Mainly I was interested in designing a language that would work well in VR. A nice property of such a language is that it would also work well with touch-screen interfaces (Touch Develop has done some interesting work here).

Your mockup reminds me of Mathcad[0]. It's used to create documents that look like math papers, except the formulas and code there can be executed "in situ". I had a semester of numerical methods on it at my university, and one thing I realized while using it is that, while cool for math expressions, it starts to get problematic when you start implementing more complex algorithms. Programming is a slightly different mental exercise than maths, and the code doesn't lend easily to that kind of representation.

Not to say it's a thoroughly bad idea; I think there's a lot of room for improvement over the flat, plaintext canvas we work with.

[0] - https://en.wikipedia.org/wiki/Mathcad

I've got something similar: https://www.youtube.com/playlist?list=PLyvBXLgHYHy1AIK6i5uw3... but behind the scenes it's compiled to Clojure. I believe this is enough to build real-world programs.

Of course, as has been said, a picture is worth a thousand words, but there must be a reason why diagrammatic languages have not received acceptance as widespread as text-based languages have. Even in "obvious" cases like microcircuit design, engineers prefer text-based hardware definition languages over diagrams. One explanation could be that human intellect evolved in the process of perfecting means of communication and thought, of which a language based on symbols (words) proved to be the most efficient, particularly in conveying abstractions - which is exactly where pictures, being concrete, fall short.

Good point: programming is as much about communicating with other people as with the machine. Writing serially executable code is a bit like explaining to others and yourself what you are doing. If instead we have a "wire-frame" of a program, it presumably calls for some kind of explanation. Explanations need to be textual because humans communicate textually. Humans prefer to give each other instructions textually, not as graphs, because strictly serial text requires less interpretation.

Visual programs are a bit like a program that needs to be read bottom-up, right-to-left. Would that be somehow easier to understand? A visual language can presumably combine left-to-right, right-to-left, top-down and bottom-up readings. While it's possible that such a model would be more expressive than conventional programs, that is by no means certain from the outset.

Personally, I think that drag and drop is a really bad UI for doing this sort of thing. The lack of an effective means of editing graph-based information has been a big reason. The second reason is that all of these tools have used their own complicated serialization formats, rather than something more Unixy that you could build a real ecosystem around.

I'm trying to solve these problems with textgraph: http://thobbs.cz/tg/tg.html and while the ecosystem is still in diapers, I am finding that using a simple and LIMITED file format - one that can do nothing more than text-based edge labels and text-based vertex contents - makes development of the ecosystem much faster than if I were to go crazy with features like edge colors, box vertex and arrow shapes, etc.

I think part of the reason is that programmers simply aren't used to thinking solely in terms of dataflow. People have had no problem using flowcharts to represent flow of control.

Text is intrinsically one-dimensional, and that is perfectly fine for sequential execution of algorithms. But if you want to exploit MIMD concurrency, text limits you.

I disagree with that. I think programmers don't even like to use flowcharts very often, because they are too limiting. As you're perfectly aware, when you code you quickly stop thinking in terms of "this happens, then that happens, then maybe X or else Y". You're more likely to think "I need those two things combined and spliced into that third thing in a way that depends on Z".

The way we work with text makes it easier to construct and layer abstractions. Having twenty thousand different boxes in a graphical language is not something we're used to, the way we're used to twenty thousand different words in code (or in a sentence) - and if you start treating boxes as letters and grouping them, you're essentially inventing a textual language right there.

I'm not saying visual programming is a dead end - but I feel we need a much different style than just boxes with arrows. For one, text seems to exploit the human ability to pattern-match much better than box diagrams; that's why we feel text is denser in information. Making diagrams more information-dense is not an area I recall having seen explored in visual programming, but maybe we need to go there.

Anyway, best luck with the work on FMJ!

Well, first off, program code is not one-dimensional; it would be more correct to say it's "two-dimensional", which is why indentation helps at all. Also, saying "text is one-dimensional" is almost as superficial as saying that a book is just a stack of paper. It is more than that, of course, and so is text in general. When you are reading, do you imagine a one-dimensional world? Of course not, and that is how text's "dimensionality" should be measured - not by its appearance as a mere sequence of markings on paper or a computer screen.

Max/MSP has been very successful, though it's specialized for the music/visuals world. Given the text-based alternatives in that realm (csound, supercollider), it's not surprising.

I have some experience with a visual language (webMethods, currently owned by Software AG).

Guess what I miss the most when I use it?

diff -c

And all the rest you get with text (versioning, global search & replace, static analysis...)

Programming by using a GUI to generate an XML file was a 1990s trend that still hasn't died. The thinking was "we need to make a visual programming tool for non-programmers," which is sort of condescending to begin with, and then the people who use it run into all these real problems that normal text-based programming languages have already solved (automated testing, version control with diffing, merging, etc.)

Programs are stored on disk as lists of program elements in S-expression format, so they are still accessible to grep, diff, etc., and there's no reason why a version control system couldn't be used. Even with text-based languages, use of diff runs into problems when text is moved about or reformatted.

It was never my intention to develop a language that anyone could use. It's for programmers. You still need to write algorithms. It will attract different people, just like different text-based languages.

I am planning to incorporate automated testing into the language, but haven't decided upon the best way to do this yet.

In my experience (and, as I said before, this is confined to one specific product/language) while the whole stuff was, in fact, represented as XML files, trying to work at the XML level was a nightmare. Each single graphical element had 2 or 3 different XML "source" files, distributed in different subdirectories, and there was no support (official or unofficial) to operate at the source file level. In other words, any trivial mistake would probably corrupt your project in some subtle and not easily recognizable way.

Now that I think about this, it was exactly the same with a large Sharepoint installation (around 2009/2010?).

In my experience there is a world of difference (and a world of pain) between "this is a source file, you can compile or interpret it and if there is a mistake you will get an error message" and "this is the XML serialization of a complex hierarchy of objects and if you mistype a GUID somewhere you will only find out at runtime, usually because something not even remotely connected to your mistake starts behaving erratically".

I am not saying that your project has no merit or that it will fail, I am just commenting on why, in my opinion, visual programming never really became a serious contender for general purpose programming.

As someone working on TIBCO BusinessWorks, a competitor/alternative to webMethods, this sounds all too familiar. We have had major challenges ourselves in bridging this gap - semantic diff/merge on process/flow engines like BW, WM, etc. with visual tooling.

Typically developers care about this; business buyers don't, and they direct budgets and, in many ways, product prioritization. To a large extent, the proprietary or obscure nature of integration DSLs never helped bootstrap any meaningful tooling for users.

That's changing though with developer-power driving prioritization and investment. In our case, we hope to make good progress with an upcoming project called Flogo. Currently in stealth and soon to be OSS.

Disclaimer: I work at TIBCO on integration

I think the problem here is that if you want to go visual, you need to go visual all the way. There should never be a case you need to manually debug a serialized representation. If your environment can't support that to an acceptable level of reliability, then you need to explicitly stick with a double-form approach, where the visual form and serialized form are "equal citizens" and both are meant for the user to work on in parallel.

Trying and failing to go with option one is what leads to the XML mess you describe.

For what is worth, the reason we tried (and promptly abandoned) the idea to work at the XML level was the following: in a visual language some stuff like:

- "Find all places where I am using records of type X and replace that with a record of type Y"

- "where did variable RunningTotal gets initialized? when do we reset it"?

- "Can I write a simple script to verify which temporary variables are not removed from the dataflow after being introduced?"

(Debug in the more traditional sense works reasonably well in webMethods - but this was not the problem - the problem was that it is impossible to reason about a "program" as a series of instructions and analyze it as such, even if at the end of the day a webMethods flow service is just that: a series of steps that manipulate data in a tuple space).

So this is not a critique of webMethods (or FMJ) - I am explaining the problems you encounter when you try to use the graphical metaphor to work on reasonably complex programs. Most of the successful examples cited so far seem to be mostly about audio manipulation. I suppose this means most of the time you compose smallish "scripts", you work alone and not in a team, logging is not an issue, building large, reusable components is not an issue, extending existing logic is not an issue, and so on and so on.

Having also worked a lot with an ancient 4GL application (where the language was supposedly "more expressive"), I can sum up all my doubts as follows:

Certain paradigms work well as long as the programs are written and maintained by a single developer to automate simple tasks but do not scale very well past that point

That sounds absolutely horrendous, and there's no good reason why they had to be like that. It's unreasonable to assume all visual programming languages do things the same or a similar way.

But you said yourself that the s-exp storage format for your language is neither easy nor safe to modify, so how is it better than XML?

Anecdotally, I've also used webMethods, SSIS, and several other visual dataflow programming languages, and they look remarkably similar to FMJ.

I was referring to this:

> Each single graphical element had 2 or 3 different XML "source" files, distributed in different subdirectories

S-expressions with equivalent content are smaller and just as readable, and don't need an extra parser. XML is OK for text mark-up, but not as good for storing data (including programs).
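For what it's worth, the parsing burden really is tiny: a usable s-expression reader fits in about fifteen lines of Python. A sketch of the general technique (not FMJ's actual reader):

```python
# Minimal s-expression reader: returns nested lists of atom strings,
# with no dependencies beyond the standard library.
def parse_sexpr(text):
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos):
        if tokens[pos] == "(":
            items, pos = [], pos + 1
            while tokens[pos] != ")":
                item, pos = read(pos)
                items.append(item)
            return items, pos + 1      # skip the closing ")"
        return tokens[pos], pos + 1    # an atom

    expr, _ = read(0)
    return expr

print(parse_sexpr("(defun square (x) (* x x))"))
# -> ['defun', 'square', ['x'], ['*', 'x', 'x']]
```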

Granted. But keep in mind that both products were made by capable companies and were used to deliver value to customers.

I would be more than happy to explore viable alternatives to traditional text-based development but so far I haven't found anything that really impressed me.

This is a great point. When you think about it, his idea is being explained in words, which kind of shows that words are often the best way to describe complex ideas.

On a side note I remember seeing this which is kind of a hybrid for clojure.


I suspect that VR is going to open new ground here.

> ... but there must be a reason why diagrammatic languages have not received the acceptance as widespread as text-based languages have.

Or maybe because of the Deutsch limit (text is denser):


I think so. And related, I haven't seen a good way of managing abstractions in a way we do it with text. The dataflow paradigm feels a bit too weak for that (but maybe it's only my impression). With textual programming languages we're used to, there's - for better or worse - pretty much no upper limit for the stack of abstractions we can create.

This is vaguely reminiscent of Drakon, which has been around a long time, but AFAIK hasn't made much of a dent in the programming world, at least outside of Russia. A couple of years ago I was trying to do something with the visual language but can't say I successfully mastered it.

Interesting to me is that the Drakon editor was implemented in Tcl/Tk, a language I'm quite familiar with. I regard Tcl as a dialect of Lisp, which I've also used a fair amount, at least in the guise of Scheme.

As I'm not at all familiar with this new language can't say too much about it, though implementation in Lisp might be a plus. One issue with it, as with other efforts in this domain, may be that it's not as intuitive or obvious as its authors believe. IOW, in my admittedly limited experience with visual programming, there's more of a learning curve than advertised.

Visual "syntax" is sufficiently different from conventional text-based forms that people have learned to use, often at the cost of great effort and time. Translating the thought process to a visual metaphor isn't intuitive or easy as I've attempted to learn it.

What's odd is that I have ability and a fair amount of success in the visual art of printmaking, which one could say is an expression of a visual language itself, but one that doesn't map well at all to the visual coding of computer programs. No reason to think it necessarily would, but the observation points to an apparent multiplicity of "visual modes" embedded in our brains.

Visual languages have been superseded by abstract symbolic languages time and again throughout history. I don't really feel qualified to determine whether or not visual general-purpose programming languages could work, but I do think that anyone pursuing that goal needs to start by figuring out why natural languages progressed from visual to symbolic. Demonstrating that programming languages could progress in the other direction would probably be the primary goal of their endeavour, and when they present their work, it should probably focus on examples of idioms that are easy to express visually but difficult to express in text form. I am not convinced that those idioms even exist; but then, I have never used a visual programming language, so it could be that I am simply incapable of contemplating them in my current frame of mind.

This is an interesting, worthwhile project, but the headline, "text-based languages are so 20th century," struck me as naively arrogant.

There are a TON of interesting '21st century' things being done with the "text-based languages" this website dismisses.

BTW, that website is so 20th century.

And to be honest, the visual programming language fad for "actual" use was so 20th century as well. Visual programming has found a bit of a home with "programming for non-programmers" around kids'/intro stuff and some game tools.

I'm working on something similar and I agree with him. Writing is literally stone-age technology. It's adequate for many things, but it's only got one feature. We need something with more dimensions.

There were a lot of attempts at touch-based interfaces before someone finally got it right and everyone went "Oh, that's how that's supposed to work."

Looking at the tutorial now, and it is very intuitive. Nice ;)

Edit: Too bad it hasn't been open-sourced. Not being open source is a death sentence for any tool.

Microsoft, Wolfram Research, and MathWorks would like to have a word with you.

I could do with some paid work. I have three published papers, six patents, 35 years computing experience in lots of different areas, and no job.

Patents suck. In 90% of the cases, they cost everyone money, including the inventor. :(

Microsoft is currently in the process of open-sourcing their tools. Their tools were dying out before they did so.

Actually, I didn't always believe this. Around the Longhorn era, I saw that Visual Studio was miles ahead of anything else, with its almost bugless, very fast, easy-to-install tools that had every library preconfigured for autocompletion and documentation. I was surprised that the majority of Windows programmers I knew were using open-source tools originally developed for Linux, or the "Java ecosystem" on Windows. But that's the way it is nowadays. And Microsoft knows it too.


Here's how iteration works (http://web.onetel.com/~hibou/fmj/tutorials/Iteration.html):

"Iteration uses two special linked vertices called start+ and repeat+. The start+ vertex has an equal number of inputs and outputs. When a value is received on every input of start+, each value is output on the corresponding output. The repeat+ vertex has the same number of inputs as start+, each of which has the same type. An edge connected to an input on repeat+ really sends its values to the corresponding input on start+.

So repeat+ is just an extension of start+, and has no independent existence. Its inputs can be treated as the inputs of the start+ vertex to which it is linked."
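The semantics quoted above can be modeled in a few lines of Python: values sent to repeat+ are routed back to the inputs of the linked start+ for the next pass. (The names and the termination rule here are my own guesses for illustration; the tutorial excerpt doesn't say how loops exit, and this is not FMJ's implementation.)

```python
# Model of a linked start+/repeat+ pair: values arriving at start+ flow
# into the loop body; values the body sends to repeat+ are routed back
# to the corresponding inputs of the linked start+ for the next pass.
def iterate(body, *initial):
    values = initial
    while True:
        # 'done' is my own invention -- the excerpt doesn't describe
        # how a loop terminates.
        done, values = body(*values)
        if done:
            return values

# Example body: sum the integers n..1 by looping until n reaches 0.
def body(n, total):
    if n == 0:
        return True, (n, total)
    return False, (n - 1, total + n)

print(iterate(body, 5, 0))  # -> (0, 15)
```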

Reading this, I was curious how we disambiguate nests with multiple loops. For example, at http://web.onetel.com/~hibou/fmj/tutorials/DampedHarmonicOsc... the repeat+ on the right is connected to data from both start+ vertices. The types too are the same for both loops. How does the software decide which one to send data to? It must be using some information that isn't visible in the figure.

That's correct. The start+ and repeat+ vertices are linked to each other, so there's no possibility data will be sent to the wrong start+ after it is sent to a repeat+. If you hover over one, the one it's linked to is highlighted.

It seems like there must be some way to indicate that a particular start+/repeat+ pair is linked; even just an edge in a different color seems like it'd be a big help.

Thanks. I decided that layout, along with being able to hover over the vertex to see the one it's connected to would be sufficient, but I'm open-minded and may indicate it some other way as well if it becomes an issue.

Edge colors are a crude indication of data type, e.g. Lists are red whatever their subtype, Booleans are green, and Integers blue.

I've been working on using graphs to model logic as well and it seems like encoding information in color should be avoided. It's tempting, but it doesn't degrade gracefully or scale.

There's no non-arbitrary connection between any of the colors and any of the ideas they represent, so if the user's just going to have to memorize them you might as well use symbols. At least then you can make use of more preexisting memorization and scale to whatever complexity is necessary.

Also, it's hard to predict how colors will be perceived. Between the renderer, the display, and the eye there's an unfair amount of variation. Patterns are better, but they have the same memorization problem as colors.

I took a look at your profile and was intrigued. Check out http://akkartik.name/about and feel free to drop me a line!

How does it compare to other wire-up graphical programming languages like LabVIEW's G or Unreal's Kismet?

Does it store the underlying data in a textual representation so it's still GIT and Regexp friendly?

LabVIEW is a marvelous, highly productive programming language. It has been around longer than Java and has been through coherent, well-planned development since the late 80's.

The underlying data is all binary, but LabVIEW has built-in diff tools that allow you to compare diagrams (the source code) between two vi's (LabVIEW files). As a result, you can use it with source control and coordinate multiple developers much like any other language. It isn't really necessary for the underlying data to be text. Search is also pretty decent.

The beauty of dataflow languages is that it frees the developer to think about what is really important while the runtime handles the nasty concurrency issues. Does this approach work for all problem domains? No, but it works fine for a lot of stuff. Nothing comes close to LabVIEW for making concurrency trivial.

It's a pity that LabVIEW and programming languages like it haven't seen much mainstream adoption. In the right hands and in the right problem domain, a good LabVIEW dev can do things that would make other programmers' jaws drop.

LabVIEW is pretty mainstream in scientific instrument construction and control. I just wish more HN type people were exposed to it, because a well designed graphical language is unquestionably more productive than a classical programming language, at least in that problem space.

Yep, and in manufacturing test applications as well. BTW, I remember you from Pitt Physics, Scott, Cheers!

Hey there, mysterious Pitt alum!

Yeah, I'm surprised the developer didn't compare their language to LabVIEW. If I were writing about a bare-metal language, I'd compare it to C, or an embedded language to Lua.

Some time soon, I'll get around to adding videos to the tutorials, so that you can see how programs are edited, instead of just showing completed functions.


Full Metal Jacket is a general purpose, pure dataflow language, supporting recursion, higher order functions, strong typing, type inference, and macros. It's entirely graphical, but it is possible to call Lisp.

It has a very simple, regular, syntax, with only vertices, edges, constants, and enclosures. No special syntax is required for conditions, iteration, or new type definitions. It is homoiconic, though I haven't yet made full use of that.

It is an alternative model of computation, potentially enabling different parts of an algorithm to run concurrently.

It is still under development.

It's implemented in Lisp, so the underlying data are Lisp data structures. Programs are stored as S-expressions, but are not particularly easy to read or safe to modify.

You can use gifCam to make quick little animated gifs that are a lot easier to produce and more portable than videos. Here's an example from my project Howstr. The dictionary has a lot more. http://github.howstr.com/quickstart.html

I ended up doing roughly the same thing. The graph is based on text, stored in arrays in memory. I just convert to JSON to store on disk. You wouldn't want to mess with the JSON, but technically you could if you didn't make any mistakes.

You can't actually run the programs in parallel though, right? The structure is there but you'd need something on a chip to take advantage of it?

Programs are stored as S-expressions, but are not particularly easy to read or safe to modify.

What you've built sounds very interesting, but this part is a problem you may need to solve before programmers will embrace it. If I can't use git with it, I can't develop any non-trivial software with it.

You need version control and diffs, not git. In a visual system I think it makes more sense to use visual diffs, or algorithm generated summaries.

Technically, what git shows you is already a visual diff ;) and they like to draw the diff tree instead of describing it textually because it makes a lot more sense.

Something that could work with git is to use a map.


    (a1 a2 (b1 b2))

    parent  child
    root    list1
    list1   a1
    list1   a2
    list1   list2
    list2   b1
    list2   b2
It should work with diff too if the ids are preserved.

That's exactly how you store a tree in a relational database. Every row has a parent_id column. This language isn't just a tree though, it's a graph, so you would need a row per edge instead of a row per node. It would work with git as long as it's kept in sorted order and doesn't grow extremely large, but it isn't nice to view or edit directly; you would want a graph query language to interact with it.
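A minimal sketch of that idea (the function names and row layout here are illustrative, not FMJ's actual format): store the graph as a sorted list of (parent, child) edge rows, one per line, so line-oriented tools like git and diff produce small, stable patches as long as node ids are preserved:

```python
import json

def serialize(edges):
    rows = sorted(edges)  # stable sort order keeps unrelated diffs from touching
    return "\n".join(json.dumps(list(r)) for r in rows)  # one edge per line

def deserialize(text):
    return [tuple(json.loads(line)) for line in text.splitlines()]

edges = [("list1", "a2"), ("root", "list1"), ("list1", "a1"),
         ("list1", "list2"), ("list2", "b1"), ("list2", "b2")]
text = serialize(edges)
assert deserialize(text) == sorted(edges)  # lossless round-trip
```

With one edge per line and a canonical sort order, adding or removing a single edge shows up as a one-line diff, which is exactly what git-friendliness requires.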

That would work as long as the author isn't seriously abusing reader macros in the serialized format :).

Do you have any plans in the near future to open up the code to the world?

I would have liked to try it out, but the page has neither a download link nor a link to any source code.

Not available, apparently.

I like sites that look like it's the '90s. What is wrong with a simple site that contains information and nothing more? Information is what we came for, not some stupid theme that someone downloaded somewhere and serves only to distract, slow down, and hinder accessibility. I'm not blind or visually impaired, but I still like to use the tools built into my browser, such as the ability to enlarge text, without having a bunch of irrelevant div elements overlap each other.

I guess http://motherfuckingwebsite.com/ applies. ;)

A couple of suggestions:

1. It reminds me of the graphical view that is present in the IDA disassembler in which functions are broken into "boxes" wherever there is a jump in control flow. If you're disassembling a C function, for example, this generally means that a box corresponds to some block of code that is delimited by curly braces. The point here is that you don't have a box for every single assembly instruction, because that would be ridiculous. Rather, the function is broken down into several "clauses" that express complex ideas, and then each clause is described textually by several lines of code.

For example, if I want to express something like:

    (lambda (x y) (/ (* 2 x) (+ x y)))
then it would be nice if I could draw a box with input terminals for x and y and then type in the expression, rather than having to create a box for each of the three arithmetic operations. It would be faster to program this way, and the result would be easier to read.

2. Also, drawing inspiration from IDA's visual mode (as well as electronic circuit diagrams), lines should all travel at right angles. There should also be an auto-layout mode which attempts to lay out the graph with a minimum number of edge crossings.
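The first suggestion, an "expression box" holding a whole arithmetic clause instead of one box per primitive operation, could be sketched like this (a hypothetical Python model; the `ExprBox` class and its use of `eval` are purely illustrative):

```python
class ExprBox:
    """One node with named input terminals and a typed-in expression body."""
    def __init__(self, inputs, expr):
        self.inputs = inputs  # input terminal names, e.g. ["x", "y"]
        # Compile the typed expression into a callable over the terminals.
        self.fn = eval("lambda %s: %s" % (", ".join(inputs), expr))

    def fire(self, *args):
        # Fires like any other dataflow vertex: all inputs in, one output out.
        return self.fn(*args)

box = ExprBox(["x", "y"], "(2 * x) / (x + y)")
print(box.fire(4, 2))  # (2*4)/(4+2) = 8/6 ≈ 1.3333333333333333
```

The graph then contains one vertex for the whole clause, while the editor could still expand it into primitive operations on demand.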

Similar idea here, but I think we can't do that before VR is popular. A text-like form fits the first decades of the 21st century better: https://www.youtube.com/playlist?list=PLyvBXLgHYHy1AIK6i5uw3...

Fascinating, but there is a reason text-based languages have endured as long as they have.

Text is universally understood: everybody knows at least how to type, and the essentials of how to read. Those constraints mean that one language can only be so different from another, making it easier to pick up multiple languages.

Most importantly, text has endured. ASCII was readable 20 years ago, and it will probably be readable in some form 20 years from now. There are many standard programs that can read and write the format. Just try reading an FMJ program if you don't have the code on hand, and the repo goes 404. Have fun reverse engineering.

Text does have the advantage of incumbency. When programming languages were first developed, we had teletype terminals which only accepted a limited character set and no graphical input devices, and CPUs which could only execute a single instruction at a time. That constrained the languages people could develop. After adding decent graphics terminals and multi-core CPUs, and noting that most algorithms have instructions which can be executed in parallel, implementing programs as directed graphs makes a lot more sense.
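The parallelism claim can be made concrete with a small sketch (illustrative only; FMJ's scheduler presumably works differently): in a dataflow graph, vertices with no edge between them can fire concurrently, and a downstream vertex waits until all of its inputs have arrived:

```python
from concurrent.futures import ThreadPoolExecutor

def f(x):
    return x * 2      # one vertex

def g(x):
    return x + 10     # an independent vertex: no edge from f to g

with ThreadPoolExecutor() as pool:
    # f and g have no data dependency, so they may run in parallel.
    a = pool.submit(f, 3)
    b = pool.submit(g, 3)
    # h has edges from both f and g, so it fires only once both arrive.
    h = a.result() + b.result()

print(h)  # 6 + 13 = 19
```

A textual, sequential language makes the programmer serialize f and g by hand; the graph representation leaves the independence visible for the runtime to exploit.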

Text-based languages can still be very different from one another. There's only the appearance of similarity now because most programming languages, and all mainstream ones, are descendants of ALGOL60, influenced to a greater or lesser extent by Smalltalk and Lisp. There's almost nothing in common between Prolog and C, for example.

How can you read any program without the (source) code on hand? The same problem occurs, for example, with Java byte code.

That isn't my issue. My issue is that which you may see in APL. Even if you got APL source code, you wouldn't necessarily be able to read it. FMJ source isn't textual. If you don't have the FMJ runtime on hand, not only can you not run the source, you cannot READ it, either.

And programming as a directed graph isn't a bad idea, but there must be a canonical, easily readable, textual format, that is the DEFAULT, regardless of how you manipulate it.

Tangenting off your mention of parallelism, I wonder if one of the potential advantages of a graphical language is that it strongly encourages people to keep their code clean and maintainable. I imagine code with a tangled structure that relies heavily on side effects would be downright painful to look at in a graphical language - just a rat's nest of criss-crossing lines.

One worthwhile thing that might come from a language like this being more lispy is that it should be fairly easy to just use textual s-expressions as the storage format. So you should be able to get things working in a way that makes the visual editor and any standard text editor equally viable as ways to edit the code.
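To make that concrete, here is a minimal sketch of an s-expression writer and reader that could serve as such a storage format (the function names and the example program are illustrative, not FMJ's actual serialization):

```python
def write_sexp(node):
    """Serialize a nested list of symbols to s-expression text."""
    if isinstance(node, list):
        return "(" + " ".join(write_sexp(n) for n in node) + ")"
    return str(node)

def read_sexp(text):
    """Parse s-expression text back into nested lists of symbols."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    def parse(i):
        if tokens[i] == "(":
            items, i = [], i + 1
            while tokens[i] != ")":
                item, i = parse(i)
                items.append(item)
            return items, i + 1
        return tokens[i], i + 1
    return parse(0)[0]

prog = ["lambda", ["x", "y"], ["/", ["*", "2", "x"], ["+", "x", "y"]]]
text = write_sexp(prog)          # "(lambda (x y) (/ (* 2 x) (+ x y)))"
assert read_sexp(text) == prog   # lossless round-trip
```

As long as the round-trip is lossless, the visual editor and a plain text editor are both first-class ways to modify the same program.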

I'd even argue that this is a prerequisite to making any visual language workable. No language that doesn't play nice with source control is going to get off the ground. If you also need to come up with a visual merge tool before anyone can use the language for anything complicated then you've got a minimum viable product that's infeasible.

One potential problem with this scheme that then pops to mind is a version of the situation that led to VB.NET's failure. If you've got two visually different languages that are otherwise functionally equivalent (for the most part), what's the use of learning both? Probably most people will just settle on the one you can't get by without knowing (in this case, the textual format) and not bother with the one that isn't strictly necessary to know.

One worthwhile thing that might come from a language like this being more lispy is that it should be fairly easy to just use textual s-expressions as the storage format.

Most of these visual programming tools have long been using XML as a storage format, which is essentially equivalent to using S-expressions for it. You could edit the generated XML directly, but it's a horrible experience. It would make no difference to the end user, unless you generated S-expressions that were actually human-readable and editable Lisp code.

XML's human editing experience is typically bad. XML has a syntax that I can only describe as inhuman. I wouldn't want to be quick to take the badness of XML storage formats that are primarily meant to be edited with WYSIWYG editors as anything fundamental about the idea of having a language with a dual textual/graphical representation.

Even if XML isn't the cause of all the problems with the format's hand editing experience (it probably isn't), the decision to use XML is a clear indication that human edit/readability was, at best, an afterthought. Probably nobody spent any time worrying about making sure it was a good experience.

the idea of having a language with a dual textual/graphical representation.

This is what I think we really need. A general-purpose programming language with first-class dual mode (visual and textual) editing. A Lisp/Scheme-like language would perhaps be best suited for this, although I've seen specific applications that did it with Java (GUI code generators) or SQL (query generators). It needs to have true two-way functionality so the programmer can switch back and forth at will--the visual editor has to generate code that a human can easily read and edit, and it also needs to be able to generate a readable visual representation for code that was written by a human. And if you make a syntax error while in textual mode, the IDE should warn you proactively that your code doesn't compile, instead of just waiting for you to switch back to graphical mode and breaking horribly, and so on.

Well, that's exactly what I was saying: any visual language for which a text editor is non-viable is unacceptable. And it doesn't so much matter who learns what. What matters is that some time in the future, that data is extractable, and viewable, long after the visual environment has been lost to the world. Maybe it isn't easy, but it's doable.

APL is text based. You can't represent some older programs even with Unicode.

Most of the characters are now in Unicode but some have been deprecated in modern implementations because of the missing representations.


APL isn't recognizably text. Not ASCII, or really any encoding save its own. Besides, the creator of APL has solved that problem. Take a look at J & co, to see how to do APL in ASCII.

The point being that singling out ASCII is survivorship bias.

^ ~ | @ # * are not characters generally found in handwriting, nor natural inclusions in a character set.

Also, focusing on ASCII often means implicitly focusing on English keyboards to the detriment of other languages.

The ^ and ~ characters are particularly hard to type on many European keyboards. My MacBook's Finnish/Swedish keyboard doesn't even show the tilde ~ character anywhere on the physical layout, and it requires three keypresses to type.

I'm not sure that it's survivor bias. Besides, ASCII's long survival is exactly what makes it so important. ASCII has endured for so long, and is embedded in so much software and even some hardware, that even if ASCII is abandoned for a better encoding, there WILL be legacy support for it. You can never be sure that a data format will endure, but ASCII is about as close to that certainty as you can get.

I agree. I've created an ecosystem called textgraph to address this problem: http://thobbs.cz/tg/tg.html

This has popped up several times iirc, and there is nothing available to try out.

Yet. Before I release it, I want it to be a polished product which runs reliably, and has good documentation, so that people will continue to use it after downloading it.

I'm working alone, without any backing.

Full Metal Jacket seems really promising. Thanks for working on this.

Have you seen Clarity? Yours looks more interesting, but it's another visual programming language: http://www.clarity-support.co.uk/products/clarity/

There's a book on it too: https://www.amazon.com/Drawing-Programs-Schematic-Functional...

There might some aspects that could be useful to your work.

Have you considered what a full 3D version with touch controls might look like?

Do you accept donations?

Donations would be very helpful of course, but I don't yet have any mechanism set up to receive them. If you or anyone wishes to make them, please first get in touch by email (my email address is on http://web.onetel.com/~hibou/index.html). Then, I could arrange to receive donations, perhaps via Paypal.

> Programs are composed almost entirely with the mouse rather than keyboard

Then this is not my cup of tea. I think the keyboard is still the best and most versatile interface to a computer.

Totally agree. Even if a good graph representation of code existed (tree diagrams of ASTs or S-exprs), the keyboard would still be the most efficient way to navigate and edit them. Drag and drop is a rubbish system, too slow.

In Full Metal Jacket, mouse gestures are minimized. Often it's just: right-click on an output, which opens a canvas containing methods with an input of the output's type (or moves it to the front if it's already open); drag over the vertex you want and drop it, and its input is automatically connected to the output of the vertex you right-clicked on.

In any case, most programming time is spent thinking. In comparison, very little is spent typing. If that's not the case, it's a sign the language is too verbose.

Is there a github repo?
