Conventional programming languages are one answer. They associate programs with text. Some believe there is another way: associating programs with diagrams. A more abstract example: machine learning associates programs with parameters and weights.
In some weird way, I feel these are all skeuomorphisms. We choose text because that's how we comprehend literature. We choose diagrams because we are visual. We choose ML because we mimic how our brains work.
We don't, however, try to understand what "thought" is, and work backwards to form a representation of it.
For example, take the textual view of programming. Text assumes a beginning, an end, and an ordered sequence between them. But in this small programming example, is there a well-defined ordering?
a = 0;
b = 1;
c = a + b;
Since the first and second lines can be switched, in some sense Text itself does not do Thought justice.
Visual representations like the one in this video also have their shortcomings. The most obvious being that a monitor is 2D. Thought is not 2D. Going to 3D won't help either. Thought is also not only spatial and temporal. For example, how would you represent concatenating strings visually?
I think the more interesting question is: how can we accurately represent thought?
In pure functional languages, these expressions form a dependency graph, and the interpreter or compiler may choose an ordering and may cache intermediate results.
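As a sketch of that idea (the graph encoding and the topological-sort helper are my own illustration, not any particular compiler's internals), the three assignments above form a dependency graph, and any topological order is a valid evaluation order:

```python
# Sketch: treat the pure assignments a = 0; b = 1; c = a + b
# as a dependency graph and pick any valid evaluation order.
deps = {
    "a": set(),        # a = 0 depends on nothing
    "b": set(),        # b = 1 depends on nothing
    "c": {"a", "b"},   # c = a + b depends on both
}

def topo_order(deps):
    """Return one valid evaluation order (Kahn's algorithm)."""
    remaining = {k: set(v) for k, v in deps.items()}
    order = []
    while remaining:
        ready = sorted(k for k, v in remaining.items() if not v)
        for k in ready:
            order.append(k)
            del remaining[k]
        for v in remaining.values():
            v.difference_update(ready)
    return order

print(topo_order(deps))  # ['a', 'b', 'c'] here, but ['b', 'a', 'c'] would be equally valid
```

Swapping the first two lines of the original snippet changes nothing in this graph, which is exactly the sense in which text over-specifies the ordering.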
We may even represent the program itself as a graph, just as you suggest with ML programs, but for a general-purpose program.
Obviously we can't do this in imperative languages.
I think pure functional programming enables this future of thinking about programs as graphs and not as text.
I think the move towards functional programming, and putting the onus on developers to do the mental elbow grease of converting what are largely macro-style tasks (do this, do that) into functional code (feed this transform into this one) has done a great disservice to software engineering, especially with respect to productivity.
For a specific example: I use map() frequently with a use() clause or some other means of passing immutable variables to the inner scope. I have done the work of building that dependency graph by hand. But I should be able to use a mundane foreach() or even a dreaded for() loop, have the compiler examine my scope and see that I'm using my variables in an immutable fashion, and generate functional code from my imperative code.
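To make that wish concrete in Python (the `use()` clause mentioned is PHP's; `scale` below is my stand-in for an immutably captured variable, not code from the comment), the equivalence a compiler would need to detect looks like this:

```python
# Imperative version: a plain loop that only reads `scale` and
# appends a pure function of each element -- no mutation of scale,
# no state carried across iterations.
scale = 10
out = []
for x in [1, 2, 3]:
    out.append(x * scale)

# Functional version a compiler could derive mechanically, since
# each iteration above is independent of the others.
out_functional = list(map(lambda x: x * scale, [1, 2, 3]))

assert out == out_functional == [10, 20, 30]
```

The commenter's point is that the two forms carry the same dependency information, so the translation from the first to the second ought to be the machine's job, not the developer's.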
What I am getting at is that in the olden days we used synchronous macros to do a series of tasks, and even though it was mediocre at best, it gave tremendous leverage to the developer. Today the amount of overhead required to map-reduce things or chain promises and carry the mental baggage of every timeout and failure mode is simply untenable for the human brain beyond a certain complexity. What we really need is to be able to read and write code imperatively but have it executed functionally, with every side effect presented to us.
I realize there is a lot of contradiction in what I just said but as far as I can tell, complexity has only increased in my lifetime while productivity has largely slipped. Shifting more and more of the burden to developer proficiency is just exactly the wrong thing to do. I want more from a typical computer today that is 1000 times faster than the ones I grew up on.
I think you've got this exactly backwards. Functional programming lets you think at a higher level of abstraction (data flow) than imperative programming (control flow). The compiler then applies elbow grease to translate your high-level data flow transformations into low-level control flow constructs.
Let's translate your statement back a generation and see how it sounds: "I think the move towards structured programming, and putting the onus on developers to do the mental elbow grease of converting what are largely assembly-level tasks (branch, copy, add a value to a register) into structured code (if, while, for) has done a great disservice to software engineering, especially with respect to productivity."
Hopefully you can understand how silly that seems to a modern programmer.
If you only ever work on things that map well to functional programming then you'll naturally think it's superior to imperative programming. Likewise, if you only ever work on things that map well to imperative programming, then the functional programming approach seems a bit silly.
It will not always be easier, but it certainly provides more control over the execution flow.
That said, you can write spaghetti code in any language. =)
I’m to the point where I am thinking about rejecting all of this and programming in synchronous shell-scripting style in something like Go, to get most of the advantages of Erlang without the learning curve. If languages aren’t purely functional, then I don’t believe they offer strong enough guarantees for the pain of using them. And purely functional languages can’t offer the leverage that scripting can, because they currently can’t be transpiled from imperative ones. It’s trivial to convert from functional to imperative, but basically impossible to go the other direction. They do nothing for you regarding the difficult step (some may argue the only step) of translating human ideas to logic. I think that’s the real reason that they haven’t achieved mainstream adoption.
Swift is probably one of the worst examples when it comes to functional programming because it's still a C-like language with some FP-like things in the stdlib. So you get none of the advantages and some inconsistencies weighing it down.
DeepUI seems like yet another way to tackle implementing our goals in a different language, and thereby also gain understanding into how we 'normally' do it.
It is exactly that: an attempt to structure data in a way similar to how our thoughts are formed. I believe it was this video where he briefly explained that concept: https://www.youtube.com/watch?v=Bqx6li5dbEY
Although it might be a different video since Ted Nelson is all over the place with his documents and videos.
Yes, we do. ML/AI is a moving target that literally represents the current SOTA in doing just that, and even symbolic logic itself is the outcome of an older, a priori, way of doing that. Actually, analytic diagrams are also an outcome of one approach to that. So, all programming methods you mention come from some effort to model thought and make a representation of that model.
My real point is that thought is not visual or textual. Those things are simply ways of transmitting thoughts. When I have a thought, and I write it down, and you read it, I am simply hoping you are now having a thought related to the one I had. Some interaction in your brain is similar to the one in mine, when I had the thought. Civilization has spent a lot of time on mechanisms that correlate thoughts between people. Hence, language. Hence literacy. Etc.
Now we are trying to create a shared language between humans and computers, where we both understand each other with minimal effort.
Even the word "language" is biased towards text, or at least an atomic symbolic representation which is probably verbal.
I agree this is unimaginative, and probably naive. But dataflow/diagrammatic systems tend to produce horribly messy graphs that are incredibly unwieldy for non-trivial applications. (My favourite anti-example is Max/MSP, which is used for programming sound, music, and visuals. It's popular with visual artists, but its constructs map so poorly to traditional code that using it when you're used to coding with text is a form of torture.)
I think it's hard to do better, because human communication tends to be implicit, contextual, somewhat error prone, and signals emotional state, facts, emotional desires, or more abstract goals.
Computer communication lacks almost all of the above. Programming is a profoundly unnatural pastime that doesn't come easily to most of the population.
The fact that written languages and code both use text is very misleading. They don't use text in anything like the same ways, and code systems are brittle, explicit, and poor cousins of the formal theorem description used in math.
So the domains covered have almost no overlap. Coding is machine design, and human thought mostly isn't. It's hard to see how they can work together with minimal effort unless the machines explicitly include a model of human mental, emotional, and social states, in addition to the usual representations of traditional formal logic.
The tricky part is that it needs to be a shared language between humans, computers, and other humans, if we want software to be maintainable.
I had to laugh here, because that is exactly how I designed ibGib over the past 15 years. It is built conceptually from the ground up, working in conflict (and harmony) with esoteric things like philosophy of mathematics and axiomatic logic systems, information theory, logic of quantum physics, etc. Anyway, like I said...I just had to laugh at this particular statement! :-)
> Visual representations like the one in this video also have their shortcomings. The most obvious being that a monitor is 2D. Thought is not 2D. Going to 3D won't help either. Thought is also not only spatial and temporal. For example, how would you represent concatenating strings visually?
> I think the more interesting question is: how can we accurately represent thought?
In ibGib, I have created a nodal network that currently can be interacted with via a d3.js force layout. Each node is an ibGib that has four properties (the database has _only_ these four properties): ib, gib, data, and rel8ns (to keep it terse). The ib is like a name/id/quick metadata, the data is for internal data, the rel8ns are named links (think Merkle links), and the gib is a hash of the other three.
The ib^gib acts as a URL in a SHA-256-sized space. So each "thought" is effectively a Gödelian number that represents that "thought".
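For illustration only, here is a minimal Python sketch of the record shape described above. The four field names come from the comment, but the JSON canonicalization and the exact hashing scheme are my guesses, not ibGib's actual implementation:

```python
import hashlib
import json

def make_ibgib(ib, data=None, rel8ns=None):
    """Build an ibGib-like record: gib is a hash over the other three fields."""
    data = data or {}
    rel8ns = rel8ns or {}
    # Canonicalize so that identical content always hashes identically.
    payload = json.dumps({"ib": ib, "data": data, "rel8ns": rel8ns},
                         sort_keys=True)
    gib = hashlib.sha256(payload.encode()).hexdigest()
    return {"ib": ib, "gib": gib, "data": data, "rel8ns": rel8ns}

note = make_ibgib("note",
                  data={"text": "hello"},
                  rel8ns={"ancestor": ["root^gib"]})
address = f'{note["ib"]}^{note["gib"]}'  # the ib^gib "URL" into SHA-256 space
```

Because the gib is derived from the content, the same content always yields the same address, which is what makes the address usable as a stable "pointer" to a thought.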
This is essentially the "state" part of it. The way you create a new ibGib is for any ibGib A to "contact" an ibGib B. Currently the B ibGibs are largely transform ibGibs that contain the state necessary to create a third ibGib C. So each one, being an immutable datum with ib/gib/data/rel8ns, when combined with another immutable ibGib, acts as a pure function given the engine implementation. This pure function is actually encapsulated in the server node where the transformation happens, so it's conceivable that A + B -> C on my node, while A + B -> D on someone else's node. So the "pure" part is probably an implementation detail for me...but anyway, I'm digressing a little.
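A toy version of that A + B -> C step (entirely my illustration, not ibGib's engine; the `rename` operation is invented for the example): two immutable records go in, a new record comes out, and neither input is touched:

```python
# Sketch of "any ibGib A contacts a transform ibGib B, yielding C".
def apply_transform(a, b):
    """A pure function: two immutable records in, a new record out."""
    if b["data"].get("op") == "rename":
        # Build a fresh record rather than mutating `a`.
        return {**a, "ib": b["data"]["to"]}
    return a  # unknown transform: pass A through unchanged

a = {"ib": "note", "data": {}, "rel8ns": {}}
b = {"ib": "rename_transform",
     "data": {"op": "rename", "to": "memo"},
     "rel8ns": {}}
c = apply_transform(a, b)
assert c["ib"] == "memo" and a["ib"] == "note"  # A itself is unchanged
```

The "implementation detail" point above corresponds to the body of `apply_transform`: a different node could interpret the same transform record differently and produce D instead of C.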
I'm only starting to introduce behavior, but the gist is that any behavior, just like the creation of a "new" ibGib, is just sending an immutable ibGib to some other thing that produces a third immutable ibGib. So you could have the engine on some node be in Python or R or Bob's Manual ibGib Manipulating Service, where Bob very slowly types random outputs. But in the visual representation of this process, you would do the same thing you do with all other ibGib. You create a space (the rel8ns also form a dependency graph for intrinsic tree-shaking, btw) via querying, forking others' ibGibs, "importing", etc. Then you have commands that act upon the various ibGib, basically like a plugin architecture. The interesting thing, though, is that since you're black-boxing the plugin transformation (it's an ibGib), you can evolve more and more complex "plugins" that just execute "their" function (just like Bob).
Anyway, I wasn't going to write this much...but like I said. I had to laugh.
The ones I've seen all seem to try to give you a whole bunch of pre-canned function-nodes for everything you might want to do. This is clearly not a feasible approach. As I see it, they really only need to implement four things to have the ideal solution. The first two are: function-nodes that take input which they operate on to produce output, and directed edges that make it possible to connect outputs to inputs.
And following from this the second two logically fall out: a low friction way of amassing (and sharing) a library of function nodes, and some clever UI trickery that makes it easy to black-box a 'canvas' of interconnected function-nodes so that it just becomes a single function-node on a 'higher-level' canvas (i.e. effortless encapsulation, composition and abstraction without loss of low-level control). Systems within systems within systems.
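Those pieces can be sketched in a few lines of Python (all names here are mine): function-nodes are callables, edges wire one node's output to the next node's input, and black-boxing collapses a wired pipeline into a single node that can sit on a higher-level canvas:

```python
def compose(*nodes):
    """Wire nodes output-to-input, left to right, into one node.

    The returned callable is itself a function-node, which is the
    "effortless encapsulation" part: a wired canvas becomes one box.
    """
    def boxed(x):
        for node in nodes:
            x = node(x)
        return x
    return boxed

double = lambda x: x * 2   # a function-node
inc = lambda x: x + 1      # another function-node

# Low-level canvas: two nodes and one edge between them.
pipeline = compose(double, inc)

# Higher-level canvas: the boxed pipeline is just another node.
bigger = compose(pipeline, double)
assert bigger(3) == 14  # (3*2 + 1) * 2
```

Nothing about `bigger` reveals that one of its nodes is itself a whole canvas, which is the "systems within systems" property without loss of low-level control.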
I honestly don't know if any of this makes sense or sounds like a good idea to anyone else. Admittedly I tend to think that our entire reality, from the cosmological to the microscopic, is just one big system composed entirely of other interconnected, lower-order systems. Everything is a system.
There's a problem, though, that every developer will understand those explanations in slightly different ways, making it difficult to communicate why such a model is needed. What I miss in projects like ibGib or your comment above is grounding them in concrete examples of use cases: practical situations that are particularly interesting to the developer and which explain how their specific approach is better than traditional programming in that situation.
So, as I mentioned in the other reply (I even used the term "concrete"), I am currently just working on a really cool note-taking app. It's pretty much done(-ish), since it's totally usable, and I'm now on to a different concrete goal.
The other "practical situation" arose from ibGib's previous Android incarnation, which was basically two-fold: 1) Too expensive creating domain-targeted data structures (was using EF, but any relational-database would have the same issues). 2) Caching and cache invalidation.
IbGib is addressing both of these issues: 1) I now have an engine to create domain business objects without all the muck of having to "get it right" the first time. This is because it keeps track of the entire history of the process of creating the (class) data structure as well as the actual data implementing those structures. 2) As a corollary to this aspect, I now can update the client whenever any change occurs within an entire dependency graph, because both the data and the code-as-data are "indexed" by the ib^gib.
So caching in ibGib's current web app is basically about passing around "pointers"; from what I understand, this is very similar to how Docker handles layer hashes when building and rebuilding images.
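The Docker comparison is apt: in a content-addressed store, the hash is the cache key, so unchanged content is never stored or rebuilt twice. A toy sketch (my own illustration, not ibGib's or Docker's code):

```python
import hashlib

store = {}  # hash -> content: the content-addressed store

def put(content: bytes) -> str:
    """Store content under its own hash; identical content dedupes for free."""
    key = hashlib.sha256(content).hexdigest()
    store[key] = content
    return key  # callers pass this "pointer" around, not the content

layer1 = put(b"base layer")
layer2 = put(b"app layer")
# An "image" is just content made of its layers' hashes.
image = put(layer1.encode() + layer2.encode())

# Rebuilding with an unchanged base layer yields the same pointer: a cache hit.
assert put(b"base layer") == layer1
```

Cache invalidation becomes automatic in this scheme: changed content produces a different hash, so stale entries are simply never referenced again.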
Also, I can't avoid saying a meta use case, which is this thread that we're having right now. In a forum, you have a linear view and that's pretty much it. With ibGib, you can have truly branching threads, with a linear time view being just one projection of the content of those branches. So, say for example with Slack, they have a "thread" feature that they've just implemented. But it's only one thread. With ibGib, it's n-threads. The linear view is one of my issues that I'm going to be tackling next (along with notifications). But it's slow going, cuz it's just me ;-)
Yeah sorry I didn't mean to imply that you don't have concrete goals (although I couldn't find them explicitly stated in your website), only that this kind of "rethinking computing/storage/interaction" projects are often hard to approach from the outside.
> IbGib is addressing both of these issues: 1) I now have an engine to create domain business objects without all the muck of having to "get it right" the first time. This is because it keeps track of the entire history of the process of creating the (class) data structure as well as the actual data implementing those structures.
That's cool! I've been looking for a platform that allows incremental persistent storage, to build my own note-taking-meets-programming tool. How easy is it to detach the engine from the user interface in ibGib? I'd like to create something less "bubbly" for myself, but I could learn from how you use your simple data model to build domain objects. I've also been following the Eve language and I like their computation model, but so far there's not much there in terms of persistence.
> I think it's really exciting :-O, since it actually ties together many many things fundamentally: logic, physics, mathematics, AI, religion...
Just curious, how does a data model touch religion? :-D
Ah, infer and imply - perfect for ibGib! I say this because I didn't make that inference about concrete goals. It was more like an event that prompts more attention to the concept of concreteness. As for the website, I hope it is quite obvious that it is a WIP! ;-) I'm not a great front-end person, as I am super abstract and backend-ish, which segues nicely into...
> How easy is it to detach the engine from the user interface in ibGib?
> I'd like to create something less "bubbly" for myself, but I could learn about how you use your simple data model to build domain objects.
To me, this is incredibly easy to do. But I'm not sure if DeepUI's HN thread is quite the right venue for such a thing. I would love to work with you (and anyone else interested) in a GitHub issue. I am holding off on doing my own Show HN because I want a couple more specific features before doing such a thing. (I wasn't planning on speaking this much, but the comment was just too perfect).
> Just curious, how does a data model touch religion? :-D
ibGib is less a data model and more a projection of a certain type of logic...it's a "meta-logic", which ends up being the logic. This is similar to how any Turing machine can emulate any other Turing machine. The point is that I've been developing this logic for just about my whole life. I was that kid who slept through class, did no homework, got 800 SAT/36 ACT math scores, etc. But axiomatic systems, and the process of actually applying math, and proofs, rigor, etc. all didn't sit well with me. Neither did religion. Now it does. But that's a perfect opportunity for a GitHub issue or an ibGib. I don't have notifications implemented yet, but collaborating is actually implemented in that you can add comments/pics/links to anyone's existing ibGib that you have a link to.
First, the notion of pre-canned functions for nodes: one of the really novel things about ibGib is that there is an infinite number of possible functions that can be "performed on" each node. Any node can be combined with any other node, and we're all nodes, our programs are nodes, etc. Currently, programmers micro-manage this in a locally addressed memory space. What I've discovered recently is that my design is actually like a universally sized Turing-complete language. One of the earlier conscious design decisions I made was that ib^gib are cheap and data itself is expensive. This is essentially the same decision made when dealing with pointers and memory: you "just" pass around pointers, and the actual thing can be dereferenced to get the value (also it's immutable, also it maintains integrity, and more and more...I have to stop though or I'll keep going). So basically, my point is that dealing with the pre-canned function aspect is essentially just creating a new language...but why a new language, and what is different?
Which brings me to my second point about the "low friction way of amassing (and sharing) a library of function nodes...": My design also ends up coinciding in many ways to GitHub's hashes (which I only realized after the fact when explaining this system to a brother of mine). But fundamentally ibGib is unique! Whereas GitHub (and everything else like it) thinks in terms of files and folders, dealing with diffing files, ibGib works at the conceptual/semantic level thinking of everything in terms of ibGib. You don't "create a new" ibGib or "new up" an ibGib. You fork an existing ibGib (any existing ibGib), and when you are forking a "blank", then you are actually forking the "Root". This conceptually has profound implications, but the end product is that you are forking, mut8ing, and rel8ing at an atomic level, the atomicity part being totally up to the ibGib that is combining state. For now, that's just my ibGib engine on my server, but really it's anything that takes in an ibGib and outputs an ibGib. So imagine you went to GitHub and not just forked a library, but forked a single function/class in that library. ibGib's data structure keeps a complete dependency graph (in the form of rel8ns to the ib^gib pointers) for every single ibGib. So if you have to make a change to some aspect, you fork it and make the change and now you're using the fork.
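A toy sketch of fork-with-history (my invention, echoing the description above, not ibGib's code): forking copies a record while recording where it came from, so the dependency graph rides along in rel8ns rather than living in a separate version-control layer:

```python
def fork(record, new_ib):
    """Copy a record, appending its origin to a 'past' relation."""
    return {
        "ib": new_ib,
        "data": dict(record["data"]),  # copy, don't share mutable state
        "rel8ns": {**record["rel8ns"],
                   "past": record["rel8ns"].get("past", []) + [record["ib"]]},
    }

# Everything starts by forking the Root, never by "newing up" from nothing.
root = {"ib": "root", "data": {}, "rel8ns": {}}
note = fork(root, "note")
memo = fork(note, "memo")
assert memo["rel8ns"]["past"] == ["root", "note"]
```

Forking a single function out of a library then amounts to forking one record: its full lineage comes with it, and the original is untouched.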
There are issues of complexity arising at this level of granularity, though, which is partly why I'm working very slowly on concrete goals. The first one is the "Note Taking" app, which is already phenomenally useful (the videos I have don't really touch it). I'm dogfooding it every day, and though it obviously has limitations, it's extremely easy to use (as long as I don't hit my t2.micro limit, now upgraded to a small on AWS, hah). Also addressing the granularity is how easily this system incorporates AI/machine learning, etc. This is because it's essentially a Big Data architecture on top of everything else. You create nodes that operate on projections of nodes that create other nodes.
And I've already written so much, but I had to also mention that your "higher-level canvas" is a dependency graph projection of ibGib. Just today I've implemented tagging ibGib (which you can see on my github repo on my issue branch). Anyway, thanks for your response, and I'd love to talk more with you and anyone else about this because I think it's really exciting :-O, since it actually ties together many many things fundamentally: logic, physics, mathematics, AI, religion...the list goes on and on. Feel free to create an issue on my repo and we'll mark it as discussion, question, etc. After I've gotten a couple more features in place I plan on doing a Show HN.
Also I apologize to the DeepUI people as I'm using their thread to talk about ibGib (so I'm cutting it off here!). I have to mention that their video to me looks really awesome, and it reminds me of the MIT Scratch program (https://scratch.mit.edu/). But like others have mentioned on this thread, I also was totally confused as to how one would actually use it. But I love front end magic and polish, since that is what I severely lack!