
There is a deeper question undercutting this project (and Bret Victor's Drawing Dead Fish talk and related approaches). That question is: how can we represent computation in an intuitive and scalable way?

Conventional programming languages are one answer. They associate programs with text. Some believe there is another way: associating programs with diagrams. A more abstract example: machine learning associates programs with parameters and weights.

In some weird way, I feel these are all skeuomorphisms. We choose text because that's how we comprehend literature. We choose diagrams because we are visual. We choose ML because we mimic how our brains work.

We don't, however, try to understand what "thought" is, and work backwards to form a representation of it.

For example, take the textual representation of programming. Text assumes a beginning, an end, and an ordered sequence between them. But in this small programming example, is there a well-defined ordering?

a = 0; b = 1; c = a + b;

Since the first two assignments can be swapped without changing the result, in some sense text itself does not do thought justice.

Visual representations like the one in this video also have their shortcomings. The most obvious is that a monitor is 2D. Thought is not 2D. Going to 3D won't help either. Thought is also not merely spatial and temporal. For example, how would you represent concatenating strings visually?

I think the more interesting question is: how can we accurately represent thought?




> is there a well-defined ordering?

> a = 0; b = 1; c = a + b;

In pure functional languages, these expressions form a dependency graph, and the interpreter or compiler may choose an ordering and may cache intermediate results.

We may even represent the program itself as a graph, just like you suggest with ML programs, but for a general-purpose program.

Obviously we can't do this in imperative languages.

I think pure functional programming enables this future of thinking about programs as graphs and not as text.
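
To make that concrete, here's a minimal sketch in Python (all names hypothetical) of treating the three assignments above as a dependency graph: each binding declares what it depends on, the evaluation order is derived rather than written down, and intermediate results are cached.

    # Each binding lists its dependencies; the order of entries is irrelevant.
    program = {
        "a": ((), lambda: 0),
        "b": ((), lambda: 1),
        "c": (("a", "b"), lambda a, b: a + b),
    }

    cache = {}  # intermediate results, computed at most once

    def evaluate(name):
        if name not in cache:
            deps, fn = program[name]
            cache[name] = fn(*(evaluate(d) for d in deps))
        return cache[name]

    print(evaluate("c"))  # 1, no matter how a and b are ordered above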


Back in the late 80s/early 90s, after I learned C, I remember wondering in awe how in the world compiler optimizations worked. But they do the same thing: they build (often intricate) dependency graphs. In the end, if a human can translate between imperative and functional programming, then there's no reason a machine can't.

I think the move towards functional programming, and putting the onus on developers to do the mental elbow grease of converting what are largely macro-style tasks (do this, do that) into functional code (feed this transform into this one) has done a great disservice to software engineering, especially with respect to productivity.

For a specific example: I use map() frequently with a use() clause or some other means of passing immutable variables to the inner scope. I have done the work of building that dependency graph by hand. But I should be able to use a mundane foreach() or even a dreaded for() loop, have the compiler examine my scope and see that I'm using my variables in an immutable fashion, and generate functional code from my imperative code.
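
As a hypothetical sketch of the idea (Python stand-ins here rather than the use()-style closures above), the same transformation written both ways; a compiler that could prove the loop uses its variables immutably could, in principle, derive the second form from the first:

    xs = [1, 2, 3]
    scale = 10

    # imperative: "do this, do that"
    out = []
    for x in xs:
        out.append(x * scale)

    # functional: the dependency graph built by hand
    out2 = list(map(lambda x: x * scale, xs))

    assert out == out2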

What I am getting at is that in the olden days we used synchronous macros to do a series of tasks, and even though it was mediocre at best, it gave tremendous leverage to the developer. Today the amount of overhead required to map-reduce things or chain promises and carry the mental baggage of every timeout and failure mode is simply untenable for the human brain beyond a certain complexity. What we really need is to be able to read and write code imperatively but have it executed functionally, with every side effect presented for us.

I realize there is a lot of contradiction in what I just said but as far as I can tell, complexity has only increased in my lifetime while productivity has largely slipped. Shifting more and more of the burden to developer proficiency is just exactly the wrong thing to do. I want more from a typical computer today that is 1000 times faster than the ones I grew up on.


> I think the move towards functional programming, and putting the onus on developers to do the mental elbow grease of converting what are largely macro-style tasks (do this, do that) into functional code (feed this transform into this one) has done a great disservice to software engineering, especially with respect to productivity.

I think you've got this exactly backwards. Functional programming lets you think at a higher level of abstraction (data flow) than imperative programming (control flow). The compiler then applies elbow grease to translate your high-level data flow transformations into low-level control flow constructs.

Let's translate your statement back a generation and see how it sounds: "I think the move towards structured programming, and putting the onus on developers to do the mental elbow grease of converting what are largely assembly-level tasks (branch, copy, add a value to a register) into structured code (if, while, for) has done a great disservice to software engineering, especially with respect to productivity."

Hopefully you can understand how silly that seems to a modern programmer.


I don't think that data flow is a higher level of abstraction to control flow. They're just different types of abstraction, and each has its strengths and weaknesses.

If you only ever work on things that map well to functional programming then you'll naturally think it's superior to imperative programming. Likewise, if you only ever work on things that map well to imperative programming, then the functional programming approach seems a bit silly.


Functional code can represent an imperative program (using temporal logic or state monads), so it can be used for domains where you would use imperative code.

It will not always be easier, but it certainly provides more control over the execution flow.
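
A minimal sketch of that idea in Python (not a real state monad, just the explicit state-threading behind one; all names hypothetical): each "imperative" step is a pure function from a state to a (result, new state) pair, and a runner threads the state through.

    def increment(state):
        return None, state + 1

    def read(state):
        return state, state

    def run(steps, state):
        result = None
        for step in steps:
            result, state = step(state)
        return result, state

    value, final = run([increment, increment, read], 0)
    assert (value, final) == (2, 2)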


Implementing an algorithm in a functional language almost always requires less code than implementing the same algorithm in an imperative language. That is direct evidence that functional languages are more abstract.


I enjoy functional programming because it's easier to reason about the code. Often it's easier to write it, but it's true that sometimes it's harder. I find that when I need to read and understand that code later, though, functional programming is usually a win. The same factors that make it sometimes harder to write - state must be passed around explicitly, idiomatic control flow constructs are less general, mutability is explicit and discouraged - make it much easier to understand later, because the interactions between different parts of the system are very clear. Compilers can certainly transform imperative code into the same form in many cases, but the benefit of functional programming is for my ability to reason, not the compiler's.

That said, you can write spaghetti code in any language. =)


Ya I'm not knocking functional programming (I prefer it as well) but I find it frustrating that so much of it breaks from the conventions we are accustomed to in C-style languages. Others around me can’t easily grok what I’ve written. Functional logic is the solution to the problem being tackled, but currently it has to be generated by hand, often in a write-only style. We are effectively writing functional assembly.

Take Swift for example (neat handle by the way!), it's probably the most pedantic language I have ever used. Personally I don't believe that it can be written without compiler assistance, especially when dealing with real-world data like JSON where pretty much anything can be optional. It's a language that makes me feel like anything I try will be rejected outright. It gets us halfway to functional programming with "let" and passing variables as immutable to callbacks by default, but then breaks too far from the contextual assumptions that we've built up in languages like C and javascript to "just work" when we try things. I feel misgivings about Rust for precisely these same reasons.

At this point the only functional language I've found that's somewhat approachable from an imperative background is probably ClojureScript, which is basically just Scheme running in one-shot mode, switching to Javascript to get the next piece of data instead of forcing the developer to use monads. It’s not even close to how I would design a functional language, but it’s substantially more approachable than say, Haskell.

I’m to the point where I am thinking about rejecting all of this and programming in synchronous shell-scripting style in something like Go, to get most of the advantages of Erlang without the learning curve. If languages aren’t purely functional, then I don’t believe they offer strong enough guarantees for the pain of using them. And purely functional languages can’t offer the leverage that scripting can, because they currently can’t be transpiled from imperative ones. It’s trivial to convert from functional to imperative, but basically impossible to go the other direction. They do nothing for you regarding the difficult step (some may argue the only step) of translating human ideas to logic. I think that’s the real reason that they haven’t achieved mainstream adoption.


This is a more fundamental issue though: we (the software industry as a whole) need to throw off the shackles of languages like C.

Swift is probably one of the worst examples when it comes to functional programming, because it's still a C-like language with some FP-like things in the stdlib. So you get none of the advantages and some inconsistencies weighing it down.


Yes, I agree that Swift is 'too C-like', and besides, I was looking at Kotlin first, and it seems almost exactly the same syntactically. I like the J programming language, which allows for very small programs due to the composing and abstracting of functions. J's use of high-level abstraction, using ASCII characters to represent functions (operators - verbs, adverbs, nouns...), seems to scare a lot of people away from it. The irony is of course the move towards array-computing hardware, GPUs and FPGAs, which is a perfect match for array-based languages like APL/J/K/Kona and others, and yet we bolt array/vector libraries or patches onto the C-style languages to enable programming GPUs.

People get comfortable with their PLs like their native tongue. It's why I would hear Westerners think a Chinese child was being particularly whiny compared to their own children or other Western children, when in reality the Cantonese-speaking child was saying the same type of things. Being American and understanding some of what the child was saying in Cantonese allowed me to make that observation, and to fully realize how our comforts and preconceptions shape how we perceive others and the world. This is why I try to be multilingual in PLs and spoken languages.

DeepUI seems like yet another way to tackle implementing our goals in a different language, and thereby also gain insight into how we 'normally' do it.


I also think you have it backwards: FP is the higher-level abstraction; pure FP is essentially pure math.


This. Well put.


There's someone who definitely spent a lot of time on this: check out Ted Nelson's ZigZag structure: http://xanadu.com/zigzag/

It is exactly that: an attempt to structure data in a way similar to how our thoughts are formed. I believe it was this video where he briefly explained that concept: https://www.youtube.com/watch?v=Bqx6li5dbEY

Although it might be a different video since Ted Nelson is all over the place with his documents and videos.


> We don't, however, try to understand what "thought" is, and work backwards to form a representation of it.

Yes, we do. ML/AI is a moving target that literally represents the current SOTA in doing just that, and even symbolic logic itself is the outcome of an older, a priori, way of doing that. Actually, analytic diagrams are also an outcome of one approach to that. So, all programming methods you mention come from some effort to model thought and make a representation of that model.


Perhaps I spoke too broadly. You're right, many disciplines try to understand thought. From CS to philosophy to neuroscience to psychology etc.

My real point is that thought is not visual or textual. Those things are simply ways of transmitting thoughts. When I have a thought, and I write it down, and you read it, I am simply hoping you are now having a thought related to the one I had. Some interaction in your brain is similar to the one in mine, when I had the thought. Civilization has spent a lot of effort on mechanisms that correlate thoughts between people. Hence language. Hence literacy. Etc.

Now we are trying to create a shared language between humans and computers, where we both understand each other with minimal effort.


Developer tools have always been biased towards text manipulation, and - in a Sapir-Whorf kind of a way - that has influenced which ideas are imaginable in computational languages.

Even the word "language" is biased towards text, or at least an atomic symbolic representation which is probably verbal.

I agree this is unimaginative, and probably naive. But dataflow/diagrammatic systems tend to produce horrible messy graphs that are incredibly unwieldy for non-trivial applications. (My favourite anti-example is Max/MSP, which is used for programming sound, music, and visuals. It's popular with visual artists, but its constructs map so poorly to traditional code that using it when you're used to coding with text is a form of torture.)

I think it's hard to do better, because human communication tends to be implicit, contextual, somewhat error prone, and signals emotional state, facts, emotional desires, or more abstract goals.

Computer communication lacks almost all of the above. Programming is a profoundly unnatural pastime that doesn't come easily to most of the population.

The fact that written languages and code both use text is very misleading. They don't use text in anything like the same ways, and code systems are brittle, explicit, and poor cousins of the formal theorem description used in math.

So the domains covered have almost no overlap. Coding is machine design, and human thought mostly isn't. It's hard to see how they can work together with minimal effort unless the machines explicitly include a model of human mental, emotional, and social states, in addition to the usual representations of traditional formal logic.


> Now we are trying to create a shared language between humans and computers, where we both understand each other with minimal effort.

The tricky part is that it needs to be a shared language between humans, computers, and other humans, if we want software to be maintainable.


String example: if you have "foo" and "bar", each is a list of characters. Now, "bar" has a beginning represented by a handle, and you drag that handle to the end of "foo". Very briefly, something like that. Of course, not everything is set in stone, and we need to try multiple approaches to see which one is the fastest.
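
In code terms, the gesture would just denote list concatenation; a trivial Python sketch of what the drag "means":

    foo = list("foo")
    bar = list("bar")
    result = foo + bar        # drag bar's handle to the end of foo
    print("".join(result))    # foobar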


> We don't, however, try to understand what "thought" is, and work backwards to form a representation of it.

I had to laugh here, because that is exactly how I designed ibGib over the past 15 years. It is built conceptually from the ground up, working in conflict (and harmony) with esoteric things like philosophy of mathematics and axiomatic logic systems, information theory, logic of quantum physics, etc. Anyway, like I said...I just had to laugh at this particular statement! :-)

> Visual representations like the one in this video also have their shortcomings. The most obvious is that a monitor is 2D. Thought is not 2D. Going to 3D won't help either. Thought is also not merely spatial and temporal. For example, how would you represent concatenating strings visually?

> I think the more interesting question is: how can we accurately represent thought?

In ibGib, I have created a nodal network that currently can be interacted with via a d3.js force layout. Each node is an ibGib that has four properties (the database has _only_ these four properties): ib, gib, data, and rel8ns (to keep it terse). The ib is like a name/id/quick metadata, the data is for internal data, the rel8ns are named links (think merkle links), and the gib is a hash of the other three.

The ib^gib acts as a URL in a SHA-256-sized space. So each "thought" is effectively a Gödelian number that represents that "thought".
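
Roughly, as a sketch in Python (the real engine is in Elixir, and the exact serialization and hash-input details of this toy are simplifications, not the actual implementation):

    import hashlib, json

    def make_ibgib(ib, data, rel8ns):
        payload = json.dumps({"ib": ib, "data": data, "rel8ns": rel8ns},
                             sort_keys=True)
        gib = hashlib.sha256(payload.encode()).hexdigest()
        return {"ib": ib, "gib": gib, "data": data, "rel8ns": rel8ns}

    note = make_ibgib("note", {"text": "hello"}, {"ancestor": ["root^gib"]})
    print(note["ib"] + "^" + note["gib"])  # the ib^gib "URL"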

This is essentially the "state" part of it. The way that you create a new ibGib is for any ibGib A to "contact" an ibGib B. Currently the B ibGibs are largely transform ibGibs that contain the state necessary to create a third ibGib C. So each one, being an immutable datum with ib/gib/data/rel8ns, when combined with another immutable ibGib, acts as a pure function given the engine's implementation. This pure function is actually encapsulated in the server node where the transformation happens, so it's conceivable that A + B -> C on my node, while A + B -> D on someone else's node. So the "pure" part is probably an implementation detail for me...but anyway, I'm digressing a little.

I'm only starting to introduce behavior to it, but the gist of it is that any behavior, just like any creation of a "new" ibGib, is just sending an immutable ibGib to some other thing that produces a third immutable ibGib. So you could have the engine on some node be in Python or R or Bob's Manual ibGib Manipulating Service, where Bob very slowly types random outputs. But in the visual representation of this process, you would do the same thing that you do with all other ibGib. You create a space (the rel8ns also form a dependency graph for intrinsic tree-shaking, btw) via querying, forking others' ibGibs, "importing", etc. Then you have commands that act upon the various ibGib, basically like a plugin architecture. The interesting thing though is that since you're black-boxing the plugin transformation (it's an ibGib), you can evolve more and more complex "plugins" that just execute "their" function (just like Bob).

Anyway, I wasn't going to write this much...but like I said. I had to laugh.


This sounds very close to what I spent a year or two searching in vain for. The closest thing I could find was node-red (http://nodered.org/), but that still fell far short. In a weird way, the problem is that every attempt at implementing a functional/dataflow programming environment, at least from what I've seen, invariably tries to do way too much.

The ones I've seen all seem to try to give you a whole bunch of pre-canned functions/nodes for everything you might want to do. This is clearly not a feasible approach. As I see it, they really only need to implement four things to have the ideal solution. The first two are: function-nodes that take input which they operate on to produce output, and directed edges that make it possible to connect outputs to inputs.

And following from this, the second two logically fall out: a low-friction way of amassing (and sharing) a library of function-nodes, and some clever UI trickery that makes it easy to black-box a 'canvas' of interconnected function-nodes so that it just becomes a single function-node on a 'higher-level' canvas (i.e. effortless encapsulation, composition and abstraction without loss of low-level control). Systems within systems within systems.
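
A toy sketch of those primitives in Python (all names hypothetical): a function-node is just a callable, wiring an output to an input chains the callables, and black-boxing collapses a wired-up canvas into a single node usable on the next canvas up.

    def compose(*nodes):
        # a straight-line "canvas": each node's output feeds the next;
        # the result is itself usable as a single higher-level node
        def boxed(x):
            for node in nodes:
                x = node(x)
            return x
        return boxed

    double = lambda x: x * 2
    inc = lambda x: x + 1

    pipeline = compose(double, inc)  # black-boxed canvas
    print(pipeline(5))               # 11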

I honestly don't know if any of this makes sense or sounds like a good idea to anyone else. Admittedly I tend to think that our entire reality, from the cosmological to the microscopic, is just one big system composed entirely of other interconnected, lower-order systems. Everything is a system.


It largely makes sense, and there are many developers trying to find this "holy grail" model of efficient computation that will be the basis of future software, overcoming the limits of the Von Neumann machine (which has been the default model for the last >60 years).

There's a problem though: every developer will understand those explanations in slightly different ways, making it difficult to communicate the reason why such a model is needed. What I miss in projects like ibGib or your comment above is grounding in concrete examples of use cases, practical situations that are particularly interesting to the developer and which explain how their specific approach is better than traditional programming in that situation.


I didn't "set out" to find a "holy grail" and certainly not to overcome the Von Neumann architecture...ibGib in its current incarnation is just the current manifestation of my approach to address some of those concrete examples, mixed with my background like many of understanding and unifying things such as physics, maths, etc.

So, as I mentioned in the other reply (I even used the term "concrete"), I am currently just working on a really cool note-taking app. It's pretty much done(-ish) since it's totally usable, and I'm now on to a different concrete goal.

The other "practical situation" arose from ibGib's previous Android incarnation, which was basically two-fold: 1) Too expensive creating domain-targeted data structures (was using EF, but any relational-database would have the same issues). 2) Caching and cache invalidation.

IbGib is addressing both of these issues: 1) I now have an engine to create domain business objects without all the muck of having to "get it right" the first time. This is because it keeps track of the entire history of the process of creating the (class) data structure as well as the actual data implementing those structures. 2) As a corollary to this aspect, I now can update the client whenever any change occurs within an entire dependency graph, because both the data and the code-as-data are "indexed" by the ib^gib.

So caching in ibGib's current web app is basically about passing around "pointers"; from what I understand, this is very similar to how Docker handles layering hashes when building and rebuilding Docker images.
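
As a toy illustration of both points in Python (again a simplification, with hypothetical example data, not the actual engine): a mut8 never edits in place; it forks a new record whose rel8ns point back at the previous ib^gib, so the full history is kept, and anything cached under an ib^gib key never goes stale, because new state means a new key.

    import hashlib, json

    def gib_of(ib, data, rel8ns):
        return hashlib.sha256(json.dumps(
            {"ib": ib, "data": data, "rel8ns": rel8ns},
            sort_keys=True).encode()).hexdigest()

    # version 1 of a "person" structure
    v1 = {"ib": "person", "data": {"name": "Ada"}, "rel8ns": {"past": []}}
    v1["gib"] = gib_of(v1["ib"], v1["data"], v1["rel8ns"])

    # a mut8: fork a new record that remembers where it came from
    v2 = {"ib": "person",
          "data": {"name": "Ada", "email": "ada@example.com"},
          "rel8ns": {"past": ["person^" + v1["gib"]]}}
    v2["gib"] = gib_of(v2["ib"], v2["data"], v2["rel8ns"])

    # caching by ib^gib: a new version is a new key, never a stale entry
    cache = {r["ib"] + "^" + r["gib"]: r for r in (v1, v2)}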

Also, I can't avoid saying a meta use case, which is this thread that we're having right now. In a forum, you have a linear view and that's pretty much it. With ibGib, you can have truly branching threads, with a linear time view being just one projection of the content of those branches. So, say for example with Slack, they have a "thread" feature that they've just implemented. But it's only one thread. With ibGib, it's n-threads. The linear view is one of my issues that I'm going to be tackling next (along with notifications). But it's slow going, cuz it's just me ;-)


> So, as I mentioned in the other reply (I even used the term "concrete"), I am currently just working on a really cool note-taking app. It's pretty much done(-ish) since it's totally usable, and I'm now on to a different concrete goal.

Yeah sorry I didn't mean to imply that you don't have concrete goals (although I couldn't find them explicitly stated in your website), only that this kind of "rethinking computing/storage/interaction" projects are often hard to approach from the outside.

> IbGib is addressing both of these issues: 1) I now have an engine to create domain business objects without all the muck of having to "get it right" the first time. This is because it keeps track of the entire history of the process of creating the (class) data structure as well as the actual data implementing those structures.

That's cool! I've been looking for a platform that allowed incremental persistent storage, to build my own note-taking-meets-programming tool. How easy is it to detach the engine from the user interface in ibgib? I'd like to create something less "bubbly" for myself, but I could learn about how you use your simple data model to build domain objects. I've also been following the Eve language and I like their computation model, but so far there's not much there in terms of persistence.

> I think it's really exciting :-O, since it actually ties together many many things fundamentally: logic, physics, mathematics, AI, religion...

Just curious, how does a data model touch religion? :-D


> Yeah sorry I didn't mean to imply that you don't have concrete goals (although I couldn't find them explicitly stated in your website), only that this kind of "rethinking computing/storage/interaction" projects are often hard to approach from the outside.

Ah, infer and imply - perfect for ibGib! I say this because I didn't make that inference about concrete goals. It was more like an event that prompts more attention to the concept of concreteness. As for the website, I hope it is quite obvious that it is a WIP! ;-) I'm not a great front-end person, as I am super abstract and backend-ish, which segues nicely into...

> How easy is it to detach the engine from the user interface in ibgib?

The UI/web app is totally just a view into the engine (which is itself just the current expression of the "concept of ibGib"). It allows us to explore these abstract concepts more concretely. The plan is to have a CLI, an API, and possibly an isomorphic secondary implementation that allows for occasionally disconnected scenarios. The POC was attempted in isomorphic client/server javascript/typescript, but the complete parallel/concurrent aspect of it was too unwieldy. Elixir (Erlang and the BEAM(!)) turned out to be ridiculously well-suited for it, and their community and documentation are stellar.

> I'd like to create something less "bubbly" for myself, but I could learn about how you use your simple data model to build domain objects.

To me, this is incredibly easy to do. But I'm not sure if DeepUI's HN thread is quite the right venue for such a thing. I would love to work with you (and anyone else interested) in a GitHub issue. I am holding off on doing my own Show HN because I want a couple more specific features before doing such a thing. (I wasn't planning on speaking this much, but the comment was just too perfect).

> Just curious, how does a data model touch religion? :-D

ibGib is less a data model and more a projection of a certain type of logic...it's a "meta-logic", which ends up being the logic. This is similar to how any Turing machine can emulate any other Turing machine. The point is that I've been developing this logic for just about my whole life. I was that kid who slept through class, did no homework, got 800 SAT/36 ACT math scores, etc. But axiomatic systems, and the process of actually applying math, and proofs, rigor, etc. all didn't sit well with me. Neither did religion. Now it does. But that's a perfect opportunity for a GitHub issue or an ibGib. I don't have notifications implemented yet, but collaborating is actually implemented in that you can add comments/pics/links to anyone's existing ibGib that you have a link to.


You make so many salient points, I'm like a kid in a candy shop thinking of what to speak to. But in order to avoid writing a book, I'll just address your two main points.

First, the notion of the pre-canned functions for nodes: one of the really novel things about ibGib is that there is an infinite number of possible functions that can be "performed on" each node. Any node combined with any other node, and we're all nodes, our programs are nodes, etc. Currently, programmers micro-manage this in a locally addressed memory space. What I've discovered recently is that my design is actually like a universally sized Turing-complete language. One of the earlier conscious design decisions I made was that ib^gib are cheap and data itself is expensive. This is essentially the same decision when dealing with pointers and memory...You "just" pass around pointers and the actual thing can be dereferenced to get the value (also it's immutable, also it maintains integrity, and more and more, I have to stop though or I'll keep going). So basically, my point is that dealing with the pre-canned function aspect is essentially just creating a new language...but why a new language, and what is different?

Which brings me to my second point about the "low friction way of amassing (and sharing) a library of function nodes...": My design also ends up coinciding in many ways to GitHub's hashes (which I only realized after the fact when explaining this system to a brother of mine). But fundamentally ibGib is unique! Whereas GitHub (and everything else like it) thinks in terms of files and folders, dealing with diffing files, ibGib works at the conceptual/semantic level thinking of everything in terms of ibGib. You don't "create a new" ibGib or "new up" an ibGib. You fork an existing ibGib (any existing ibGib), and when you are forking a "blank", then you are actually forking the "Root". This conceptually has profound implications, but the end product is that you are forking, mut8ing, and rel8ing at an atomic level, the atomicity part being totally up to the ibGib that is combining state. For now, that's just my ibGib engine on my server, but really it's anything that takes in an ibGib and outputs an ibGib. So imagine you went to GitHub and not just forked a library, but forked a single function/class in that library. ibGib's data structure keeps a complete dependency graph (in the form of rel8ns to the ib^gib pointers) for every single ibGib. So if you have to make a change to some aspect, you fork it and make the change and now you're using the fork.

There are issues of complexity arising at this level of granularity though, which is partly why I'm working very slowly on concrete goals. The first one is the "Note Taking" app, which already is phenomenally useful (the videos I have don't really touch it). I'm dogfooding it every day, and though it obviously has limitations, it's extremely easy to use (as long as I don't hit my t2.micro limit, now upgraded to a small on AWS, hah). Also addressing the granularity, though, is how easily this system incorporates AI/machine learning, etc. This is because it's essentially a Big Data architecture on top of everything else. You create nodes that operate on projections of nodes that create other nodes.

And I've already written so much, but I had to also mention that your "higher-level canvas" is a dependency graph projection of ibGib. Just today I've implemented tagging ibGib (which you can see on my github repo on my issue branch). Anyway, thanks for your response, and I'd love to talk more with you and anyone else about this because I think it's really exciting :-O, since it actually ties together many many things fundamentally: logic, physics, mathematics, AI, religion...the list goes on and on. Feel free to create an issue on my repo and we'll mark it as discussion, question, etc. After I've gotten a couple more features in place I plan on doing a Show HN.

Also I apologize to the DeepUI people as I'm using their thread to talk about ibGib (so I'm cutting it off here!). I have to mention that their video to me looks really awesome, and it reminds me of the MIT Scratch program (https://scratch.mit.edu/). But like others have mentioned on this thread, I also was totally confused as to how one would actually use it. But I love front end magic and polish, since that is what I severely lack!



