Conventional programming languages are one answer. They associate programs with text. Some believe there is another way: associating programs with diagrams. A more abstract example: machine learning associates programs with parameters and weights.
In some weird way, I feel these are all skeuomorphisms. We choose text because that's how we comprehend literature. We choose diagrams because we are visual. We choose ML because we mimic how our brains work.
We don't, however, try to understand what "thought" is, and work backwards to form a representation of it.
For example, take representing programs textually. Text assumes a beginning, an end, and an ordered sequence between them. But in this small programming example, is there a well-defined ordering?
a = 0;
b = 1;
c = a + b;
Since the first and second lines can be switched, in some sense Text itself does not do Thought justice.
Visual representations like the one in this video also have their shortcomings. The most obvious being that a monitor is 2D. Thought is not 2D. Going to 3D won't help either. Thought is also not only spatial and temporal. For example, how would you represent concatenating strings visually?
I think the more interesting question is how we can accurately represent thought.
In pure functional languages, these expressions form a dependency graph, and the interpreter or compiler may choose an ordering and may cache intermediate results.
We may even represent the program itself as a graph, just as you suggest with ML programs, but as a general-purpose program.
Obviously we can't do this in imperative languages.
I think pure functional programming enables this future of thinking about programs as graphs and not as text.
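To make that concrete, here's a minimal Python sketch (purely illustrative, not any particular compiler's algorithm) of the three-line example as a dependency graph, where the evaluator is free to pick any valid ordering and caches intermediate results:

```python
# Purely illustrative: the three-line example as a dependency graph.
# Each entry lists the names it depends on; the evaluator may resolve
# them in any topological order and caches each result once.

exprs = {
    "a": ([], lambda: 0),
    "b": ([], lambda: 1),
    "c": (["a", "b"], lambda a, b: a + b),
}

cache = {}

def evaluate(name):
    """Evaluate `name`, resolving dependencies first, caching results."""
    if name not in cache:
        deps, fn = exprs[name]
        cache[name] = fn(*(evaluate(d) for d in deps))
    return cache[name]

print(evaluate("c"))  # 1, whether "a" or "b" is resolved first
```

Whether `a` or `b` is computed first is the evaluator's choice, not the program's, which is exactly the ordering freedom the text representation hides.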
I think the move towards functional programming, and putting the onus on developers to do the mental elbow grease of converting what are largely macro-style tasks (do this, do that) into functional code (feed this transform into this one) has done a great disservice to software engineering, especially with respect to productivity.
For a specific example: I use map() frequently with a use() clause or some other means of passing immutable variables to the inner scope. I have done the work of building that dependency graph by hand. But I should be able to use a mundane foreach() or even a dreaded for() loop, have the compiler examine my scope and see that I'm using my variables in an immutable fashion, and generate functional code from my imperative code.
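As an illustration in Python (names made up; a real compiler would need escape analysis and more), here's the rewrite I'd want the compiler to do for me:

```python
# Hypothetical sketch: the same transformation written imperatively and
# functionally. A compiler that can prove `scale` is never mutated
# inside the loop body could mechanically rewrite the first form into
# the second.

prices = [10, 20, 30]
scale = 3  # captured immutably, like a use() clause

# Imperative form: a mundane for loop.
scaled_imperative = []
for p in prices:
    scaled_imperative.append(p * scale)

# Functional form: the dependency graph made explicit via map().
scaled_functional = list(map(lambda p: p * scale, prices))

assert scaled_imperative == scaled_functional  # same result either way
```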
What I am getting at is that in the olden days we used synchronous macros do a series of tasks and even though it was mediocre at best, it gave tremendous leverage to the developer. Today the amount of overhead required to map-reduce things or chain promises and carry the mental baggage of every timeout and failure mode is simply untenable for the human brain beyond a certain complexity. What we really need is to be able to read and write code imperatively but have it executed functionally, with every side effect presented for us.
I realize there is a lot of contradiction in what I just said but as far as I can tell, complexity has only increased in my lifetime while productivity has largely slipped. Shifting more and more of the burden to developer proficiency is just exactly the wrong thing to do. I want more from a typical computer today that is 1000 times faster than the ones I grew up on.
I think you've got this exactly backwards. Functional programming lets you think at a higher level of abstraction (data flow) than imperative programming (control flow). The compiler then applies elbow grease to translate your high-level data flow transformations into low-level control flow constructs.
Let's translate your statement back a generation and see how it sounds: "I think the move towards structured programming, and putting the onus on developers to do the mental elbow grease of converting what are largely assembly-level tasks (branch, copy, add a value to a register) into structured code (if, while, for) has done a great disservice to software engineering, especially with respect to productivity."
Hopefully you can understand how silly that seems to a modern programmer.
If you only ever work on things that map well to functional programming then you'll naturally think it's superior to imperative programming. Likewise, if you only ever work on things that map well to imperative programming, then the functional programming approach seems a bit silly.
It will not always be easier, but it certainly provides more control over the execution flow.
That said, you can write spaghetti code in any language. =)
I’m to the point where I am thinking about rejecting all of this and programming in synchronous shell-scripting style in something like Go, to get most of the advantages of Erlang without the learning curve. If languages aren’t purely functional, then I don’t believe they offer strong enough guarantees for the pain of using them. And purely functional languages can’t offer the leverage that scripting can, because they currently can’t be transpiled from imperative ones. It’s trivial to convert from functional to imperative, but basically impossible to go the other direction. They do nothing for you regarding the difficult step (some may argue the only step) of translating human ideas to logic. I think that’s the real reason that they haven’t achieved mainstream adoption.
Swift is probably one of the worst examples when it comes to functional programming because it's still a C like language with some FP like things in the stdlib. So you get none of the advantages and some inconsistencies weighing it down.
DeepUI seems like yet another way to tackle implementing our goals in a different language, and thereby also gain understanding into how we 'normally' do it.
It is exactly that: an attempt to structure data in a way similar to how our thoughts are formed. I believe it was this video where he briefly explained that concept: https://www.youtube.com/watch?v=Bqx6li5dbEY
Although it might be a different video since Ted Nelson is all over the place with his documents and videos.
Yes, we do. ML/AI is a moving target that literally represents the current SOTA in doing just that, and even symbolic logic itself is the outcome of an older, a priori, way of doing that. Actually, analytic diagrams are also an outcome of one approach to that. So, all programming methods you mention come from some effort to model thought and make a representation of that model.
My real point is that thought is not visual or textual. Those things are simply ways of transmitting thoughts. When I have a thought, and I write it down, and you read it, I am simply hoping you are now having a thought related to the one I had. Some interaction in your brain is similar to the one in mine, when I had the thought. Civilization has spent a lot of time on mechanisms that correlate thoughts between people. Hence language. Hence literacy. Etc.
Now we are trying to create a shared language between humans and computers, where we both understand each other with minimal effort.
Even the word "language" is biased towards text, or at least an atomic symbolic representation which is probably verbal.
I agree this is unimaginative, and probably naive. But dataflow/diagrammatic systems tend to produce horrible messy graphs that are incredibly unwieldy for non-trivial applications. (My favourite anti-example is Max/MSP, which is used for programming sound, music, and visuals. It's popular with visual artists, but its constructs map so poorly to traditional code that using it when you're used to coding with text is a form of torture.)
I think it's hard to do better, because human communication tends to be implicit, contextual, somewhat error prone, and signals emotional state, facts, emotional desires, or more abstract goals.
Computer communication lacks almost all of the above. Programming is a profoundly unnatural pastime that doesn't come easily to most of the population.
The fact that written languages and code both use text is very misleading. They don't use text in anything like the same ways, and code systems are brittle, explicit, and poor cousins of the formal theorem description used in math.
So the domains covered have almost no overlap. Coding is machine design, and human thought mostly isn't. It's hard to see how they can work together with minimal effort unless the machines explicitly include a model of human mental, emotional, and social states, in addition to the usual representations of traditional formal logic.
The tricky part is that it needs to be a shared language between humans, computers, and other humans, if we want software to be maintainable.
I had to laugh here, because that is exactly how I designed ibGib over the past 15 years. It is built conceptually from the ground up, working in conflict (and harmony) with esoteric things like philosophy of mathematics and axiomatic logic systems, information theory, logic of quantum physics, etc. Anyway, like I said...I just had to laugh at this particular statement! :-)
> Visual representations like the one in this video also have their shortcomings. The most obvious being that a monitor is 2D. Thought is not 2D. Going to 3D won't help either. Thought is also not only spatial and temporal. For example, how would you represent concatenating strings visually?
> I think the more interesting question is how we can accurately represent thought.
In ibGib, I have created a nodal network that currently can be interacted with via a d3.js force layout. Each node is an ibGib that has four properties (the database has _only_ these four properties): ib, gib, data, and rel8ns (to keep it terse). The ib is like a name/id/quick metadata, the data is for internal data, the rel8ns are named links (think merkle links), and the gib is a hash of the other three.
The ib^gib acts as a URL in a SHA-256-sized space. So each "thought" is effectively a Gödelian number that represents that "thought".
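For illustration, here's roughly what that record could look like as a minimal Python sketch; I'm assuming SHA-256 over a canonical JSON encoding, and the actual encoding in ibGib may well differ:

```python
import hashlib
import json

def make_ibgib(ib, data, rel8ns):
    """Build the four-property record described above; `gib` is a hash
    of the other three, so ib^gib addresses the whole "thought"."""
    payload = json.dumps({"ib": ib, "data": data, "rel8ns": rel8ns},
                         sort_keys=True).encode()
    gib = hashlib.sha256(payload).hexdigest()
    return {"ib": ib, "gib": gib, "data": data, "rel8ns": rel8ns}

note = make_ibgib("note", {"text": "hello"}, {"ancestor": ["root^gib"]})
addr = note["ib"] + "^" + note["gib"]  # the URL-like ib^gib pointer
```

Because the gib is derived from the other three properties, two identical "thoughts" get the same address, and any change produces a new one.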
This is essentially the "state" part of it. The way that you create new ibGib is for any ibGib A to "contact" an ibGib B. Currently the B ibGibs are largely transform ibGibs that contain the state necessary to create a tertiary ibGib C. So each one, being an immutable datum with ib/gib/data/rel8ns, when combined with another immutable ibGib, acts as a pure function given the engine implemented. This pure function is actually encapsulated in the server node where the transformation is happening, so it's conceivable that A + B -> C on my node, where A + B -> D on someone else's node. So the "pure" part is probably an implementation detail for me...but anyway, I'm digressing a little.
I'm only starting to introduce behavior to it, but the gist of it is that any behavior, just like any creation of a "new" ibGib, is just sending an immutable ibGib to some other thing that produces a tertiary immutable ibGib. So you could have the engine on some node be in python or R or Bob's Manual ibGib manipulating service where Bob types very slowly random outputs. But in the visual representation of this process, you would do the same thing that you do with all other ibGib. You create a space (the rel8ns also form a dependency graph for intrinsic tree-shaking btw) via querying, forking others' ibGibs, "importing", etc. Then you have commands that act upon the various ibGib, basically like a plugin architecture. The interesting thing though is that since you're black-boxing the plugin transformation (it's an ibGib), you can evolve more and more complex "plugins" that just execute "their" function (just like Bob).
Anyway, I wasn't going to write this much...but like I said. I had to laugh.
The ones I've seen all seem to try to give you a whole bunch of pre-canned function-nodes for everything you might want to do. This is clearly not a feasible approach. As I see it, they really only need to implement three or four things to have the ideal solution. The first two are: function-nodes that take input which they operate on to produce output, and directed edges that make it possible to connect outputs to inputs.
And following from this the second two logically fall out: a low friction way of amassing (and sharing) a library of function nodes, and some clever UI trickery that makes it easy to black-box a 'canvas' of interconnected function-nodes so that it just becomes a single function-node on a 'higher-level' canvas (i.e. effortless encapsulation, composition and abstraction without loss of low-level control). Systems within systems within systems.
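As a toy Python sketch of that last point (all names hypothetical), black-boxing a chain of function-nodes into a single node is just composition:

```python
# Toy sketch: function-nodes are plain callables, edges connect outputs
# to inputs, and a wired-up canvas can itself be wrapped as one node.

def compose_canvas(*nodes):
    """Collapse a linear chain of function-nodes into a single node."""
    def canvas_node(x):
        for node in nodes:
            x = node(x)  # each edge feeds one output into the next input
        return x
    return canvas_node

double = lambda x: x * 2
increment = lambda x: x + 1

# The lower-level canvas becomes one node on a higher-level canvas.
double_then_increment = compose_canvas(double, increment)
print(double_then_increment(5))  # 11
```

The composed node can itself be composed again, giving the "systems within systems within systems" nesting without loss of low-level control.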
I honestly don't know if any of this makes sense or sounds like a good idea to anyone else. Admittedly I tend to think that our entire reality, from the cosmological to the microscopic, is just one big system composed entirely of other interconnected, lower-order systems. Everything is a system.
There's a problem though: every developer will understand those explanations in slightly different ways, making it difficult to communicate the reason why such a model is needed. What I miss in projects like ibGib or your comment above is grounding it in concrete examples of use cases: practical situations that are particularly interesting to the developer and which explain how their specific approach is better than traditional programming in that situation.
So, as I mentioned in the other reply (I even used the term "concrete"), I am currently just working on a really cool note-taking app. It's pretty much done(-ish), since it's totally usable, and I'm now on to a different concrete goal.
The other "practical situation" arose from ibGib's previous Android incarnation, where the pain was basically two-fold: 1) It was too expensive to create domain-targeted data structures (I was using EF, but any relational database would have the same issues). 2) Caching and cache invalidation.
IbGib is addressing both of these issues: 1) I now have an engine to create domain business objects without all the muck of having to "get it right" the first time. This is because it keeps track of the entire history of the process of creating the (class) data structure as well as the actual data implementing those structures. 2) As a corollary to this aspect, I now can update the client whenever any change occurs within an entire dependency graph, because both the data and the code-as-data are "indexed" by the ib^gib.
So caching in ibGib's current web app is basically about passing around "pointers", from what I understand this is very similar to how Docker handles layering hashes when building and rebuilding docker images.
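A minimal Python sketch of that hash-as-pointer caching idea (not ibGib's actual implementation, just the principle):

```python
import hashlib

# Toy sketch of hash-as-pointer caching: a store keyed by content hash
# never needs invalidation, because changed content gets a new key.
# This is the same property Docker exploits when layering image hashes.

store = {}

def put(content: bytes) -> str:
    """Store content under its hash and return the hash, i.e. the
    "pointer" that gets passed around instead of the data itself."""
    key = hashlib.sha256(content).hexdigest()
    store[key] = content  # idempotent: same content, same key
    return key

k1 = put(b"layer one")
k2 = put(b"layer one")  # identical content is deduplicated, not re-stored
```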
Also, I can't avoid saying a meta use case, which is this thread that we're having right now. In a forum, you have a linear view and that's pretty much it. With ibGib, you can have truly branching threads, with a linear time view being just one projection of the content of those branches. So, say for example with Slack, they have a "thread" feature that they've just implemented. But it's only one thread. With ibGib, it's n-threads. The linear view is one of my issues that I'm going to be tackling next (along with notifications). But it's slow going, cuz it's just me ;-)
Yeah sorry I didn't mean to imply that you don't have concrete goals (although I couldn't find them explicitly stated in your website), only that this kind of "rethinking computing/storage/interaction" projects are often hard to approach from the outside.
> IbGib is addressing both of these issues: 1) I now have an engine to create domain business objects without all the muck of having to "get it right" the first time. This is because it keeps track of the entire history of the process of creating the (class) data structure as well as the actual data implementing those structures.
That's cool! I've been looking for a platform that allows incremental persistent storage, to build my own note-taking-meets-programming tool. How easy is it to detach the engine from the user interface in ibGib? I'd like to create something less "bubbly" for myself, but I could learn about how you use your simple data model to build domain objects. I've also been following the Eve language and I like their computation model, but so far there's not much there in terms of persistence.
> I think it's really exciting :-O, since it actually ties together many many things fundamentally: logic, physics, mathematics, AI, religion...
Just curious, how does a data model touch religion? :-D
Ah, infer and imply - perfect for ibGib! I say this because I didn't make that inference about concrete goals. It was more like an event that prompts more attention to the concept of concreteness. As for the website, I hope it is quite obvious that it is a WIP! ;-) I'm not a great front-end person, as I am super abstract and backend-ish, which segues nicely into...
> How easy is it to detach the engine from the user interface in ibgib?
> I'd like to create something less "bubbly" for myself, but I could learn about how you use your simple data model to build domain objects.
To me, this is incredibly easy to do. But I'm not sure if DeepUI's HN thread is quite the right venue for such a thing. I would love to work with you (and anyone else interested) in a GitHub issue. I am holding off on doing my own Show HN because I want a couple more specific features before doing such a thing. (I wasn't planning on speaking this much, but the comment was just too perfect).
> Just curious, how does a data model touch religion? :-D
ibGib is less a data model and more a projection of a certain type of logic...it's a "meta-logic", which ends up being the logic. This is similar to how any Turing machine can emulate any other Turing machine. The point is that I've been developing this logic for just about my whole life. I was that kid who slept through class, did no homework, got 800 SAT/36 ACT math scores, etc. But axiomatic systems, and the process of actually applying math, and proofs, rigor, etc. all didn't sit well with me. Neither did religion. Now it does. But that's a perfect opportunity for a GitHub issue or an ibGib. I don't have notifications implemented yet, but collaborating is actually implemented in that you can add comments/pics/links to anyone's existing ibGib that you have a link to.
First, the notion of pre-canned functions for nodes: one of the really novel things about ibGib is that there is an infinite number of possible functions that can be "performed on" each node. Any node combined with any other node, and we're all nodes, our programs are nodes, etc. Currently, programmers micro-manage this in a locally addressed memory space. What I've discovered recently is that my design is actually like a universally sized Turing-complete language. One of the earlier conscious design decisions I made was that ib^gib are cheap and data itself is expensive. This is essentially the same decision as when dealing with pointers and memory...You "just" pass around pointers and the actual thing can be dereferenced to get the value (also it's immutable, also it maintains integrity, and more and more, I have to stop though or I'll keep going). So basically, my point is that dealing with the pre-canned function aspect is essentially just creating a new language...but why a new language, and what is different?
Which brings me to my second point about the "low friction way of amassing (and sharing) a library of function nodes...": My design also ends up coinciding in many ways with GitHub's hashes (which I only realized after the fact when explaining this system to a brother of mine). But fundamentally ibGib is unique! Whereas GitHub (and everything else like it) thinks in terms of files and folders, dealing with diffing files, ibGib works at the conceptual/semantic level, thinking of everything in terms of ibGib. You don't "create a new" ibGib or "new up" an ibGib. You fork an existing ibGib (any existing ibGib), and when you are forking a "blank", you are actually forking the "Root". This conceptually has profound implications, but the end product is that you are forking, mut8ing, and rel8ing at an atomic level, the atomicity part being totally up to the ibGib that is combining state. For now, that's just my ibGib engine on my server, but really it's anything that takes in an ibGib and outputs an ibGib. So imagine you went to GitHub and forked not just a library, but a single function/class in that library. ibGib's data structure keeps a complete dependency graph (in the form of rel8ns to the ib^gib pointers) for every single ibGib. So if you have to make a change to some aspect, you fork it and make the change, and now you're using the fork.
There are issues of complexity arising at this level of granularity though, which is partly why I'm working very slowly on concrete goals. The first one is the "Note Taking" app, which already is phenomenally useful (the videos I have don't really touch it). I'm dogfooding it every day, and though it obviously has limitations, it's extremely easy to use (as long as I don't hit my t2.micro limit, now upgraded to a small on AWS, hah). Also addressing the granularity is how easily this system incorporates AI/machine learning, etc. This is because it's essentially a Big Data architecture on top of everything else. You create nodes that operate on projections of nodes that create other nodes.
And I've already written so much, but I had to also mention that your "higher-level canvas" is a dependency graph projection of ibGib. Just today I've implemented tagging ibGib (which you can see on my github repo on my issue branch). Anyway, thanks for your response, and I'd love to talk more with you and anyone else about this because I think it's really exciting :-O, since it actually ties together many many things fundamentally: logic, physics, mathematics, AI, religion...the list goes on and on. Feel free to create an issue on my repo and we'll mark it as discussion, question, etc. After I've gotten a couple more features in place I plan on doing a Show HN.
Also I apologize to the DeepUI people as I'm using their thread to talk about ibGib (so I'm cutting it off here!). I have to mention that their video to me looks really awesome, and it reminds me of the MIT Scratch program (https://scratch.mit.edu/). But like others have mentioned on this thread, I also was totally confused as to how one would actually use it. But I love front end magic and polish, since that is what I severely lack!
Considering the complexity involved in developing an advanced IDE or similar, have you considered publishing an open-source community version? Similar to JetBrains with IntelliJ. They seem to be doing great.
Since we're on the subject of experimental UI concepts, I'll plug Bret Victor's Inventing on Principle talk. For me it was an instant classic.
He is my hero actually :)
- You can run the program as you wish, for any purpose.
- You can study how the program works, and change it so it does what you want.
- You can redistribute unmodified copies without fear of the publisher suing you.
- You can distribute copies of your modified versions to others without fear of the publisher suing you.
Who wouldn't want to have all that in a tool that will be the basis for your own projects?
In all your three examples, the product they create is detached from the creation platform before release, and that product (text, music, images) is compiled/played/displayed by a different application; and also there are many alternative tools where the product could be processed if the original tool ceased to exist.
So there's no fear with those that, if the tool creator begins imposing more and more restrictions, you'll be locked-in. But a unique programming language for which your code can't possibly be ported to a competing platform? One would be stupid to develop for it anything requiring ongoing commercial support, beyond some cool tech demos.
Anyway, not here looking to start a flame war. My point was that the product can be good or bad and it doesn't have anything to do with its source code being available or not.
My 5c. Everyone feel free to disagree.
A product can be good or bad without being OS (open source). So long as you can deploy what you make with it effectively.
Sometimes that means mindshare, like UDK and Source.
Sometimes that means complete decoupling tooling from code like Sublime and Atom.
Sometimes it is more about the deployment, like Visual Studio and Unity.
However, to stand out, this product needs one of the above.
It's new, and doesn't seem to be backed by a big name, so no mindshare yet.
So they need to somehow make devs want to use it for their code. A nice experience is usually not enough.
An OS community, or a fantastic cross-compilation strategy are the only two things that I, personally, have seen work.
OS seems to be the easiest of the two.
While I side with your central argument, there are numerous cases where the above is absolutely enough.
There might be a cost/benefit analysis where some closed platform is extremely far ahead in every other aspect, but you should be very well aware of the risks of being tied to that platform.
I'm happy with non-commercial use (including providing the source), but if someone wants to make money from software I write, it's only fair that they should pay me, and under the open source model they don't have to, and usually they won't.
If a commercial user likes my software they can pay for it or hire me. (I'm available). Is that too much to ask?
1) you become unavailable?
2) you don't have the physical capacity to develop everything that the client needs?
3) you don't want to implement some feature that the user needs, for reasons? (related to the software architecture, or the direction you want the project to take, or whatever).
Open source gives the commercial client the flexibility to adapt the code to their uses, in a way that being tied to a single provider will never achieve.
If they pay for it, they'd have the source, and would be able to modify it for their own needs but not release it.
But having access to the source, without permission to modify it or a perpetual license for using it, doesn't solve any of the problems I listed above. And if they have permission and are doing the modifications themselves, why should they pay the original author?
> if they have permission and are doing the modifications themselves, why should they pay the original author?
They should have paid the author for the right to use the software. That would normally include the right to adapt the software for their own purposes.
The author has the right to decide on the terms of the licence under which software is released. Any user who doesn't agree to the terms shouldn't be allowed to use the software.
I never said that this shouldn't happen, don't attribute me words that I didn't say. I said that typically it won't, since it doesn't make any sense to the users of the software.
> They should have paid the author for the right to use the software. That would normally include the right to adapt the software for their own purposes.
You didn't reply to my question, which was: why?
> The author has the right to decide on the terms of the licence under which software is released. Any user who doesn't agree to the terms shouldn't be allowed to use the software.
No one is arguing otherwise. What I'm trying to explain is that developers following that strategy will likely find themselves with very few users. In the long term, the developer who gives the users a product that better matches the user's needs will displace the one who doesn't, that's pure market behaviour.
Paying the author for the right to be locked in a closed software ecosystem is a terrible value proposition from the point of view of the client. Note that this argument applies primarily to software like the one in the article, which tries to be a development platform, not necessarily to applications.
Vimeo also behaves like this on mobile and it's far superior to Youtube, which often totally hides the fullscreen button.
Because you're doing other stuff while listening to the audio.
>Youtube, which often totally hides the fullscreen button.
It's in the bottom right.
Not in OP's video. There's literally no way to escape the full-screen without pressing the escape key. You can't even double-click. I'm actually impressed by how user-hostile that video is.
Because the way they are doing it there is no minimize button. A less technically inclined person may not know that they have to press Escape (and now, it seems, some laptops have no hardware Esc button at all). In addition, the first time I played it, for some reason, Esc did not work and I had to Alt-Tab in order to leave the video.
Presumably, this would use the usual mobile video player though, which does have a minimise button, so it's likely not a real issue (I'm not on an iPad right now so didn't check).
I think the project is trying to be a user friendly way of writing programs, and I think that's an awesome idea, but the actual product looks otherwise.
I finished the video and still have no idea what the hell was going on throughout the entire video.
If you believe their website it has been used in the Russian Space Program.
But I have to agree with you: this DeepUI looks like a PITA to work with in comparison to DRAKON.
Graphics are absolutely wonderful, in contrast, when they are able to stick to a domain abstraction, which is why we have a notion of application software at all. I have, in fact, aimed towards discovering what a "Pong synthesizer" would look like, so I have the domain knowledge to know that it does tend to lead to the digital logic breakdown if you aim to surface every possibility as a configurable patch point. As a result I started looking more deeply into software modularity ideas and rediscovered hierarchical structures (Unix, IP addressing, etc.) as the model to follow. I'm gradually incorporating those ideas into a functioning game engine, i.e. I'm shipping the game first and building the engine as I find time, and I do have designs for making a higher level editor of this sort at some point.
However, I also intend to have obvious cutoff points. There are certain things that are powerful about visual systems, but pressuring them to expose everything at the top level is more obscuring than illuminating. So my strategy is to instead have editors that compile down to the formats of the modular lower-level system, smoothing the "ladder of abstraction" and allowing people to work at the abstraction level that suits them.
Otoh, in a visual programming language it'd feel more natural to make the upper and lower edges of the playing field collidable (I'm sure there's a better word for that), so that moving the paddle is inherently limited by collision with the edges.
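To illustrate the two styles in Python terms (names made up): the textual version spells out the limit as a clamp, while the collidable-edges version just cancels a move that would overlap an edge.

```python
# Toy sketch: explicit clamping versus collision-limited movement.

FIELD_TOP, FIELD_BOTTOM, PADDLE_H = 0, 100, 20

def move_clamped(y, dy):
    """Textual style: the limit is an explicit clamp in the code."""
    return max(FIELD_TOP, min(y + dy, FIELD_BOTTOM - PADDLE_H))

def move_with_collision(y, dy):
    """Collidable-edges style: the move is simply blocked when the
    moved paddle would overlap an edge, as a physics engine would do."""
    ny = y + dy
    if ny < FIELD_TOP or ny + PADDLE_H > FIELD_BOTTOM:
        return y  # collision cancels the move
    return ny

print(move_clamped(70, 15))         # 80, clamped to the bottom edge
print(move_with_collision(70, 15))  # 70, the move is blocked
```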
Exactly! A short tutorial using screenshots to explain what stuff means, for example, would go a long way and doesn't actually require anything to be implemented as the screenshots could be mocked up.
Having to use a mouse to interact with your programming IDE graphically doesn't scale. It does make for a decent tool for hobby projects or prototyping, though.
But don't throw away text-based programming yet; the wiser move would be to combine the two.
Find use-cases where visual DeepUI-style programming shines and is vastly superior to text-based programming, but let me polish the details with old-school text source code.
There are apps which already do a lot of this, for example Unity - you can assemble your scenes and animations visually and tune it up with code.
Ultimately, though, work like this leads to needing to reinvent all of programming (unfortunately). For instance, I'm now having to build a graph database to handle Turing-complete systems that are being collaborated on in realtime (see http://gun.js.org). So prepare for a long haul of work ahead of you. Get to know people in the space, like me and the Eve team, etc.
If you persist long enough (don't let money or lack of money stop you) you'll make a dent. :)
I'd like to incorporate this coupling idea into my own visual dataflow language (http://web.onetel.com/~hibou/fmj/FMJ.html), but haven't yet decided how to implement it. My approach has been to design the language from the bottom-up, so that simple programs can be simply drawn, and there are higher level programming constructs which simplify more complex code, avoiding the complexity problem (the Deutsch limit) you've seen with LabVIEW.
It's a rule amongst good LabVIEW programmers that you keep your block diagram to where it fits on a single, reasonably sized/resolution monitor without scrolling. Simply adhering to that rule encourages good coding practice. Within my large systems, I am able to freely edit pieces with often no unintended consequences. Since reference-based data types are really only used for multi-threaded communication and instrument/file communication, you typically are operating on value-based data, which makes reliable code development quite easy.
And what you describe is equally applicable to any text-based language. Neither LabVIEW nor text-based languages have built-in precautions against horrific coding standards.
What's really annoying about LabVIEW is that its programming paradigm is kind-of functional, but it doesn't go all the way: it forces you to do things one expects to be abstracted away, and things become a mess. Let me explain my top pet peeve:
In LabVIEW the main concept is the so-called VI: Virtual Instrument. A VI consists of a number of inputs called "Controls", some logic in between, and outputs called "Indicators". Inside a VI you have the full range of programming primitives like loops (which, interestingly enough, can also work like list comprehensions through automatic indexing, but I digress) and "variables" (in the form of data flow wires), but no functions. VIs are what you use as functions. And if everything happens through VI inputs and outputs and you don't use global variables, feedback nodes or similar impure stuff, it's pretty much functional.
Somewhere your program has to start, i.e. there must be some kind of "main" VI. But VIs mostly behave like functions, so if you hit "run" for the main VI it will just follow its data flow until every input has reached what it's wired to and all subVI instances have executed, and that's it. That's perfect for a single-shot program, like you'd have on the command line or when serving an HTTP request; however, it's kind of the opposite of what you want for an interactive program with a visual UI. Sure, there is that "run continuously" mode, which will just loop VI execution. But all it does is re-evaluate and execute each and every input and subVI again and again and again. If you're using LabVIEW in a laboratory setting, which is its main use, you probably have sensors, actuators or even stuff like lasers controlled by this. And then you do not want them to execute whatever command again and again. There is a solution to this, of course, called "event structures". Essentially it's like a large "switch" statement that will dispatch exactly once for one event. Of course this caters only to input-manipulation events and some application execution-state events. And you cannot use it in "run continuously" mode without invoking all the other caveats. So what you do is place it in a while loop. How do you stop the while loop? Eh, splat a "STOP" button somewhere on the Front Panel (and don't forget to add a "Value Changed" event handler for the stop button, otherwise you'll click STOP without effect until you manipulate something else).
And then in the event structure you have to meticulously wire all the data flows not touched by whatever the event does through so-called "shift registers" in the while loop to keep the values around. If you forget or miswire one data flow, you have a bug.
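The "event structure in a while loop, stopped by a STOP button" pattern described above can be caricatured in a few lines of Python (the names and event strings here are hypothetical, not LabVIEW API):

```python
from queue import Queue

def run_event_loop(events):
    """Dispatch each queued event exactly once; exit on the STOP button's
    "Value Changed" event instead of blindly re-running everything."""
    q = Queue()
    for e in events:
        q.put(e)
    handled = []          # state that must survive iterations, like a shift register
    running = True
    while running and not q.empty():
        event = q.get()   # the event structure: dispatch exactly once per event
        if event == "stop:value_changed":
            running = False          # the STOP button's "Value Changed" handler
        else:
            handled.append(event)    # handle an input-manipulation event
    return handled
```

The point of the sketch is the shift-register complaint: any state you want to survive from one iteration to the next (here, the `handled` list) has to be explicitly threaded around the loop, and forgetting one such wire is a silent bug.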
What seriously annoys me about that is that, in principle, the whole dataflow paradigm of LabVIEW would allow for an immediate implementation of FRP (functional reactive programming): re-evaluation and execution of only those parts of the program that are affected by a change.
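A toy sketch of what such FRP-style evaluation could look like, in Python (the `Signal`/`Computed` names are invented for this example; this is not LabVIEW functionality):

```python
class Signal:
    """A source value; setting it marks everything downstream dirty."""
    def __init__(self, value):
        self.value = value
        self.dependents = []

    def set(self, value):
        self.value = value
        for d in self.dependents:
            d.invalidate()

    def get(self):
        return self.value


class Computed:
    """A derived node: re-evaluated only when an input actually changed."""
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs
        self.dependents = []
        self.dirty = True
        self.cache = None
        self.evals = 0        # counts real evaluations, to show the caching
        for i in inputs:
            i.dependents.append(self)

    def invalidate(self):
        self.dirty = True
        for d in self.dependents:
            d.invalidate()

    def get(self):
        if self.dirty:
            self.cache = self.fn(*(i.get() for i in self.inputs))
            self.evals += 1
            self.dirty = False
        return self.cache
```

Reading a `Computed` twice runs its function once; only a `set` on an upstream `Signal` triggers re-evaluation, which is exactly the "only the affected parts" behaviour the comment asks for, and a natural fit for a dataflow-wire diagram.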
The other thing that seriously annoys me is how poorly polymorphism is implemented in LabVIEW and how limited dynamic typing is. I wouldn't quite go as far as saying that LabVIEW does type inference, although at least for primitive types it covers a surprisingly large set of use cases. Connect numeric-type arrays to an arithmetic operation and it does it element-wise. Connect a single-element numeric type and an array and it again does things element-wise. Have an all-numeric cluster (the LabVIEW equivalent of a struct) and you can do element-wise operations just as well. So if we were to look at this like Haskell, there's a certain type class to which numeric arrays, clusters and single elements belong, and it's actually great and a huge workload saver! Unfortunately you can't expose that on the inputs/outputs of a VI. VI inputs/outputs always have to be of a specific type. Oh yes, there are variants, but they're about as foolproof to use as `void*` in C/C++. So the proper way to implement polymorphism in LabVIEW is to manually create variants of your VI for each and every combination of types you'd like to input and coalesce them in a polymorphic VI. And since you have to do it with the mouse, and VIs are stored in binary, this is not something you can easily script away. Gaaahhh…
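The "numeric type class" behaviour described above — one operation working element-wise across scalars, arrays and clusters — is easy to mimic in a dynamic language. A hedged Python sketch, using lists for arrays and dicts for clusters purely for illustration:

```python
def poly_add(x, y):
    """Element-wise addition over scalars, lists ("arrays") and
    dicts ("clusters"), recursing into nested structure."""
    if isinstance(x, list) and isinstance(y, list):
        return [poly_add(a, b) for a, b in zip(x, y)]   # array + array
    if isinstance(x, list):
        return [poly_add(a, y) for a in x]              # array + scalar
    if isinstance(y, list):
        return [poly_add(x, b) for b in y]              # scalar + array
    if isinstance(x, dict) and isinstance(y, dict):
        return {k: poly_add(x[k], y[k]) for k in x}     # cluster + cluster
    return x + y                                        # plain numerics
```

This is the behaviour LabVIEW's built-in arithmetic already has; the complaint is that user-defined VIs can't expose the same generic interface, so you hand-build one VI per type combination instead.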
It seems like it's great for things that have an on-screen spatial meaning, like the example Pong game. But what if I want to represent something abstract?
Like a tree (let's say a quad-tree, since this is for (2D?) games for now)? Or what if I want to implement AI logic (let's say I want some kind of decision-tree planner and path finding)? I'm having trouble visualising (I guess because the video didn't really explain) how any of this can be done, as opposed to "moving something around on the screen".
I assume this has been thought about. I just couldn't figure out any of the details from the video.
Although even in that case, I couldn't figure out what the symbols and lines in the video meant. The symbols especially seem cryptic. A mix between logic gates and something else?
* https://thegrid.io/ (seems like their site is having some difficulties)
Before yours, there have been many products that promise to enable programming for everyone; many of these have been scams, utter failures, or just overly marketed mediocre software packages.
The thing is, your video seems voiced to "sell" rather than "explain" what is going on, and that, together with the odd, or at least "unique", hacker aesthetic of the website, makes it look very dubious.
I have no idea how to showcase a product like this the "right" way, maybe there is no "right" way, but I can certainly say that the aesthetic, big claims, confusing video and ambitious future plans give the project a bad smell.
(Pro-tip for the site: screenshots and concrete descriptions of how things work.)
Initially, Torvalds wanted to call the kernel he developed Freax (a combination of "free", "freak", and the letter X to indicate that it is a Unix-like system), but his friend Ari Lemmke, who administered the FTP server where the kernel was first hosted for download, named Torvalds's directory linux.
Even if this wasn't spaghetti code, most of the LabVIEW icons are totally unreadable:
Simulink and LabVIEW are made for electrical engineers transitioning to developers. They are good tools for building and shipping products.
DeepUI, on the other hand, introduces a new paradigm for visual expression, and it looks to be more targeted towards computer scientists and experimental artists (and maybe hobby game developers with an aversion to traditional programming).
I agree that in all these visual programming languages the interface is always way too slow. They tend to completely ignore the keyboard which is by far the fastest input device.