Text is universally understood: everybody knows at least how to type, and the essentials of how to read. Those shared constraints mean that one textual language can only be so different from another, which makes it easier to pick up multiple languages.
Most importantly, text has endured. ASCII was readable 20 years ago, and it will probably still be readable in some form 20 years from now, and there are plenty of standard programs that can read and write it. Now try reading an FMJ program when you don't have the code on hand and the repo has gone 404. Have fun reverse engineering.
Text-based languages can still be very different from one another. There's only the appearance of similarity now because most programming languages, and all mainstream ones, are descendants of ALGOL 60, influenced to a greater or lesser extent by Smalltalk and Lisp. There's almost nothing in common between Prolog and C, for example.
How can you read any program without the (source) code on hand? The same problem occurs, for example, with Java bytecode.
And programming as a directed graph isn't a bad idea, but there must be a canonical, easily readable textual format that is the DEFAULT, regardless of how you manipulate it.
I'd even argue that this is a prerequisite to making any visual language workable. No language that doesn't play nice with source control is going to get off the ground. If you also need to come up with a visual merge tool before anyone can use the language for anything complicated, then your minimum viable product is infeasible.
One potential problem that then pops to mind with this scheme is the kind of situation that led to VB.NET's failure. If you've got two visually different languages that are otherwise (for the most part) functionally equivalent, what's the use of learning both? Most people will probably just settle on the one you can't get by without knowing - in this case, the textual format - and not bother with the one that isn't strictly necessary.
Most of these visual programming tools have long been using XML as a storage format, which is essentially equivalent to using S-expressions for it. You could edit the generated XML directly, but it's a horrible experience. It would make no difference to the end user unless you generated S-expressions that were actually human-readable and editable Lisp code.
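To make the XML/S-expression equivalence concrete, here's a minimal sketch of the mapping: elements, attributes, and children all translate mechanically into nested lists. The `block`/`input` node names are hypothetical, just a stand-in for what a visual tool might emit.

```python
import xml.etree.ElementTree as ET

def to_sexpr(elem):
    """Render an ElementTree element as an S-expression string."""
    parts = [elem.tag]
    # Attributes become (key value) pairs, children recurse.
    parts += [f"({k} {v})" for k, v in elem.attrib.items()]
    parts += [to_sexpr(child) for child in elem]
    return "(" + " ".join(parts) + ")"

xml_src = '<block op="add"><input name="x"/><input name="y"/></block>'
print(to_sexpr(ET.fromstring(xml_src)))
# (block (op add) (input (name x)) (input (name y)))
```

The mapping is mechanical in both directions, which is the point: the choice of XML over S-expressions changes the surface syntax, not the information content.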
Even if XML isn't the cause of all the problems with hand-editing the format (it probably isn't), the decision to use XML is a clear indication that human readability and editability was, at best, an afterthought. Probably nobody spent any time worrying about making it a good experience.
This is what I think we really need: a general-purpose programming language with first-class dual-mode (visual and textual) editing. A Lisp/Scheme-like language would perhaps be best suited for this, although I've seen specific applications do it with Java (GUI code generators) or SQL (query generators). It needs true two-way functionality so the programmer can switch back and forth at will: the visual editor has to generate code that a human can easily read and edit, and it also has to produce a readable visual representation for code that was written by a human. And if you make a syntax error while in textual mode, the IDE should warn you proactively that your code doesn't compile, instead of just waiting for you to switch back to graphical mode and breaking horribly, and so on.
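The two-way requirement boils down to a lossless round trip between the textual form and the structured form the visual editor works on. A minimal Python sketch of that round trip for S-expressions (parser and printer are deliberately simplistic; a real implementation would also have to preserve comments and whitespace):

```python
import re

def parse(src):
    """Parse an S-expression string into nested lists of atoms."""
    tokens = re.findall(r"\(|\)|[^\s()]+", src)
    def read(pos):
        if tokens[pos] == "(":
            items, pos = [], pos + 1
            while tokens[pos] != ")":
                item, pos = read(pos)
                items.append(item)
            return items, pos + 1  # skip the closing paren
        return tokens[pos], pos + 1
    tree, _ = read(0)
    return tree

def unparse(tree):
    """Print the structured form back out as text."""
    if isinstance(tree, list):
        return "(" + " ".join(unparse(t) for t in tree) + ")"
    return tree

src = "(define (square x) (* x x))"
assert unparse(parse(src)) == src  # lossless round trip
```

The nested-list form in the middle is exactly what a graphical editor would manipulate; as long as `unparse(parse(x)) == x` holds, neither mode can corrupt the other's work.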
Most of the characters are now in Unicode, but some have been deprecated in modern implementations because there was no way to represent them.
^ ~ | @ # * are not characters you'd generally find in handwriting, and they're not natural inclusions in a character set.
The ^ and ~ characters are particularly hard to type on many European keyboards. My MacBook's Finnish/Swedish keyboard doesn't even show the tilde (~) anywhere on the physical layout, and typing it takes three keypresses.