
Fascinating, but there is a reason text-based languages have endured as long as they have.

Text is universally understood: everybody knows at least how to type, and the essentials of how to read. Those constraints mean that one language can only be so different from another, making it easier to pick up multiple languages.

Most importantly, text has endured. ASCII was readable 20 years ago, and it will probably be readable in some form 20 years from now. There are many standard programs that can read and write the format. Just try reading an FMJ program when you don't have the code on hand and the repo has gone 404. Have fun reverse engineering.




Text does have the advantage of incumbency. When programming languages were first developed, we had teletype terminals which only accepted a limited character set and no graphical input devices, and CPUs which could only execute a single instruction at a time. That constrained the languages people could develop. After adding decent graphics terminals and multi-core CPUs, and noting that most algorithms have instructions which can be executed in parallel, implementing programs as directed graphs makes a lot more sense.
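
To make the parallelism point concrete, here's a rough Python sketch (entirely my own toy, not anything from the article): once a program is expressed as a dependency graph, every node whose inputs are ready can run at the same time, with no extra annotation from the programmer.

    from concurrent.futures import ThreadPoolExecutor

    # toy dataflow graph: node name -> (function, names of input nodes)
    graph = {
        "a":    (lambda: 2,          []),
        "b":    (lambda: 3,          []),
        "sum":  (lambda x, y: x + y, ["a", "b"]),
        "prod": (lambda x, y: x * y, ["a", "b"]),
        "out":  (lambda s, p: s - p, ["sum", "prod"]),
    }

    def run(graph):
        done, pending = {}, dict(graph)
        with ThreadPoolExecutor() as pool:
            while pending:
                # nodes whose inputs are all computed are independent,
                # so they can be submitted to the pool simultaneously
                ready = {n: v for n, v in pending.items()
                         if all(d in done for d in v[1])}
                futures = {n: pool.submit(f, *[done[d] for d in deps])
                           for n, (f, deps) in ready.items()}
                for n, fut in futures.items():
                    done[n] = fut.result()
                    del pending[n]
        return done

    print(run(graph)["out"])   # (2 + 3) - (2 * 3) == -1

Here "sum" and "prod" run concurrently without the programmer saying so; that's the property a graph representation exposes directly and a linear text format tends to hide.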

Text-based languages can still be very different from one another. There's only the appearance of similarity now because most programming languages, and all mainstream ones, are descendants of ALGOL60, influenced to a greater or lesser extent by Smalltalk and Lisp. There's almost nothing in common between Prolog and C, for example.

How can you read any program without the (source) code on hand? The same problem occurs, for example, with Java byte code.


That isn't my issue. My issue is the one you see with APL: even if you have the APL source code, you can't necessarily read it. FMJ source isn't textual. If you don't have the FMJ runtime on hand, not only can you not run the source, you cannot READ it, either.

And programming as a directed graph isn't a bad idea, but there must be a canonical, easily readable textual format that is the DEFAULT, regardless of how you manipulate it.


Going off on a tangent from your mention of parallelism, I wonder if one potential advantage of a graphical language is that it strongly encourages people to keep their code clean and maintainable. I imagine code with a tangled structure that relies heavily on side effects would be downright painful to look at in a graphical language - just a rat's nest of criss-crossing lines.


One worthwhile thing that might come from a language like this being more lispy is that it should be fairly easy to just use textual s-expressions as the storage format. So you should be able to get things working in a way that makes the visual editor and any standard text editor equally viable as ways to edit the code.
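
For illustration, a storage format along those lines might look something like this (the node names and layout are purely hypothetical, not any existing tool's format):

    ;; one s-expression per node, edges written as references to node names,
    ;; so ordinary diff/merge tools still see meaningful line-level changes
    (defgraph average
      (node sum    (op +)      (in xs))
      (node count  (op length) (in xs))
      (node result (op /)      (in sum count)))

A visual editor would draw the boxes and arrows; a text editor and git would just see s-expressions.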

I'd even argue that this is a prerequisite to making any visual language workable. No language that doesn't play nice with source control is going to get off the ground. If you also need to come up with a visual merge tool before anyone can use the language for anything complicated, then your minimum viable product is too big to be feasible.

One potential problem with this scheme that then pops to mind is a version of the situation that led to VB.NET's failure. If you've got two visually different languages that are otherwise (for the most part) functionally equivalent, what's the use of learning both? Most people will probably just settle on the one you can't get by without knowing - in this case, the textual format - and not bother with the one that isn't strictly necessary to know.


One worthwhile thing that might come from a language like this being more lispy is that it should be fairly easy to just use textual s-expressions as the storage format.

Most of these visual programming tools have long been using XML as a storage format, which is essentially equivalent to using S-expressions. You could edit the generated XML directly, but it's a horrible experience. Switching to S-expressions would make no difference to the end user unless you generated S-expressions that were actually human-readable and editable Lisp code.
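
The structural equivalence is easy to demonstrate: every XML element maps mechanically onto a nested list. A quick Python sketch (the <node> element below is made up for illustration):

    import xml.etree.ElementTree as ET

    def to_sexpr(elem):
        # element name, then attributes, then text, then children
        parts = [elem.tag]
        parts += ['{}="{}"'.format(k, v) for k, v in elem.attrib.items()]
        if elem.text and elem.text.strip():
            parts.append('"{}"'.format(elem.text.strip()))
        parts += [to_sexpr(child) for child in elem]
        return '(' + ' '.join(parts) + ')'

    xml = '<node id="sum"><op>+</op><in>xs</in><in>ys</in></node>'
    print(to_sexpr(ET.fromstring(xml)))
    # (node id="sum" (op "+") (in "xs") (in "ys"))

Which is exactly the point: the output is no more pleasant to hand-edit than the XML unless someone actually designs it to be.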


XML's hand-editing experience is typically bad; it has a syntax I can only describe as inhuman. But I wouldn't be quick to take the badness of XML storage formats that are primarily meant to be edited with WYSIWYG editors as evidence of anything fundamental about the idea of having a language with a dual textual/graphical representation.

Even if XML isn't the cause of all the problems with the format's hand-editing experience (it probably isn't), the decision to use XML is a clear indication that human editability and readability were, at best, an afterthought. Probably nobody spent any time worrying about making sure it was a good experience.


the idea of having a language with a dual textual/graphical representation.

This is what I think we really need. A general-purpose programming language with first-class dual mode (visual and textual) editing. A Lisp/Scheme-like language would perhaps be best suited for this, although I've seen specific applications that did it with Java (GUI code generators) or SQL (query generators). It needs to have true two-way functionality so the programmer can switch back and forth at will--the visual editor has to generate code that a human can easily read and edit, and it also needs to be able to generate a readable visual representation for code that was written by a human. And if you make a syntax error while in textual mode, the IDE should warn you proactively that your code doesn't compile, instead of just waiting for you to switch back to graphical mode and breaking horribly, and so on.
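
The round-trip property is the crux. As a sketch of what "true two-way" means at the storage level (a toy parser and printer of my own, assuming an s-expression format like the one discussed above):

    import re

    def parse(text):
        """Read one s-expression into nested Python lists of atoms."""
        tokens = re.findall(r'\(|\)|[^\s()]+', text)
        def read(pos):
            if tokens[pos] == '(':
                node, pos = [], pos + 1
                while tokens[pos] != ')':
                    child, pos = read(pos)
                    node.append(child)
                return node, pos + 1
            return tokens[pos], pos + 1
        return read(0)[0]

    def render(node):
        """Print the nested lists back as canonical s-expression text."""
        if isinstance(node, list):
            return '(' + ' '.join(render(c) for c in node) + ')'
        return node

    src = '(node sum (op +) (in xs ys))'
    tree = parse(src)            # the structure a visual editor manipulates
    assert render(tree) == src   # the text a human (and git) edits and reads

The hard part isn't the mechanics; it's keeping the generated text idiomatic enough that a human is happy editing it, and the generated diagram tidy enough that a human is happy reading it.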


Well, that's exactly what I was saying: any visual language for which a text editor is non-viable is unacceptable. And it doesn't so much matter who learns what. What matters is that some time in the future, that data is extractable, and viewable, long after the visual environment has been lost to the world. Maybe it isn't easy, but it's doable.


APL is text-based. You can't represent some older programs even with Unicode.

Most of the characters are now in Unicode, but some have been deprecated in modern implementations because Unicode has no representation for them.

https://en.wikipedia.org/wiki/APL_(codepage)


APL isn't recognizably text: not ASCII, nor really any encoding save its own. Besides, the creator of APL solved that problem. Take a look at J & co. to see how to do APL in ASCII.


The point being that singling out ASCII is survivor bias.

^ ~ | @ # * are not characters generally found in handwriting, and they are not just natural inclusions in a character set.


Also, focusing on ASCII often means implicitly focusing on English keyboards to the detriment of other languages.

The ^ and ~ characters are particularly hard to type on many European keyboards. My MacBook's Finnish/Swedish keyboard doesn't even show the tilde ~ character anywhere on the physical layout, and it requires three keypresses to type.


I'm not sure that it's survivor bias. Besides, ASCII's long survival is exactly what makes it so important. ASCII has endured for so long, and is embedded in so much software and even some hardware, that even if ASCII is abandoned for a better encoding, there WILL be legacy support for it. You can never be sure that a data format will endure, but ASCII is about as close to that certainty as you can get.


I agree. I've created an ecosystem called textgraph to address this problem: http://thobbs.cz/tg/tg.html



