Hacker News
Show HN: DeepUI Programming Studio – A different approach to programming (deepui.io)
186 points by Naeron on Feb 12, 2017 | 128 comments



There is a deeper question underlying this project (and Bret Victor's Stop Drawing Dead Fish talk and related approaches). That question is: how can we represent computation in an intuitive and scalable way?

Conventional programming languages are one answer. They associate programs with text. Some believe there is another way: associating programs with diagrams. A more abstract example: machine learning associates programs with parameters and weights.

In some weird way, I feel these are all skeuomorphisms. We choose text because that's how we comprehend literature. We choose diagrams because we are visual. We choose ML because we mimic how our brains work.

We don't, however, try to understand what "thought" is, and work backwards to form a representation of it.

For example, take thinking of programming textually. Text assumes a beginning, an end, and an ordered sequence between them. But in this small programming example, is there a well-defined ordering?

a = 0; b = 1; c = a + b;

Since the first and second lines can be switched, in some sense Text itself does not do Thought justice.

Visual representations like the one in this video also have their shortcomings. The most obvious being: a monitor is 2D. Thought is not 2D. Going to 3D won't help either. Thought is also not only spatial and temporal. For example, how would you represent concatenating strings visually?

I think the more interesting question is: how can we accurately represent thought?


> is there a well-defined ordering?
> a = 0; b = 1; c = a + b;

In pure functional languages, these expressions form a dependency graph, and the interpreter or compiler may choose an ordering and may cache intermediate results.

We may even represent the program itself as a graph, just like you suggest with ML programs, but this one is a general-purpose program.
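
As a toy illustration (my own sketch, not anyone's actual compiler), the three bindings above can be written down as an explicit dependency graph and evaluated in any order that respects the edges, with each node computed once and cached:

    type Expr = { deps: string[]; eval: (env: Record<string, number>) => number };

    const program: Record<string, Expr> = {
      a: { deps: [], eval: () => 0 },
      b: { deps: [], eval: () => 1 },
      c: { deps: ["a", "b"], eval: (env) => env.a + env.b },
    };

    // Evaluate in dependency order; each node is computed once and cached.
    function run(prog: Record<string, Expr>): Record<string, number> {
      const env: Record<string, number> = {};
      const visit = (name: string) => {
        if (name in env) return;         // already evaluated
        prog[name].deps.forEach(visit);  // dependencies first
        env[name] = prog[name].eval(env);
      };
      Object.keys(prog).forEach(visit);
      return env;
    }

    console.log(run(program)); // { a: 0, b: 1, c: 1 }

Swapping the entries for "a" and "b" changes nothing, which is the grandparent's point about ordering.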

Obviously we can't do this in imperative languages.

I think pure functional programming enables this future of thinking about programs as graphs and not as text.


Back in the late 80s/early 90s after I learned C, I remember wondering in awe how in the world compiler optimizations worked. But they do the same thing: they build (often intricate) dependency graphs. In the end, if a human can translate between imperative and functional programming, then there's no reason a machine can't.

I think the move towards functional programming, and putting the onus on developers to do the mental elbow grease of converting what are largely macro-style tasks (do this, do that) into functional code (feed this transform into this one) has done a great disservice to software engineering, especially with respect to productivity.

For a specific example: I use map() frequently with a use() clause or some other means of passing immutable variables to the inner scope. I have done the work of building that dependency graph by hand. But I should be able to use a mundane foreach() or even a dreaded for() loop, have the compiler examine my scope and see that I'm using my variables in an immutable fashion, and generate functional code from my imperative code.
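
Something like this (a made-up example in TypeScript; the names prices and taxRate are just for illustration), where the loop only reads captured variables and never mutates them, so a sufficiently clever compiler could in principle derive the map() form on its own:

    const prices = [10, 20, 30];
    const taxRate = 0.2;

    // Imperative form: a loop that only reads outer variables and never mutates them.
    const withTaxLoop: number[] = [];
    for (const p of prices) {
      withTaxLoop.push(p * (1 + taxRate));
    }

    // The functional form a compiler could, in principle, derive from the loop above,
    // since taxRate is captured immutably and every iteration is independent.
    const withTaxMap = prices.map((p) => p * (1 + taxRate));

    console.log(withTaxLoop, withTaxMap); // same result either way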

What I am getting at is that in the olden days we used synchronous macros to do a series of tasks, and even though it was mediocre at best, it gave tremendous leverage to the developer. Today the amount of overhead required to map-reduce things or chain promises and carry the mental baggage of every timeout and failure mode is simply untenable for the human brain beyond a certain complexity. What we really need is to be able to read and write code imperatively but have it executed functionally, with every side effect presented for us.
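
async/await is arguably one place where a slice of this already exists: you write the imperative-looking version and the compiler lowers it into the chained, continuation-passing form. A small sketch, with hypothetical fetchUser/fetchPosts stubs standing in for real async work:

    // Hypothetical stand-ins for real async work.
    const fetchUser = async (id: number) => ({ name: `user${id}` });
    const fetchPosts = async (name: string) => [`${name}'s first post`];

    // Explicit chaining: the developer threads every step and failure mode by hand.
    function postsChained(id: number): Promise<string[]> {
      return fetchUser(id)
        .then((user) => fetchPosts(user.name))
        .catch((err) => { console.error(err); return []; });
    }

    // Imperative-looking source that the compiler rewrites into the same
    // continuation-passing machinery behind the scenes.
    async function postsImperative(id: number): Promise<string[]> {
      try {
        const user = await fetchUser(id);
        return await fetchPosts(user.name);
      } catch (err) {
        console.error(err);
        return [];
      }
    }

    postsImperative(1).then(console.log); // [ "user1's first post" ]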

I realize there is a lot of contradiction in what I just said but as far as I can tell, complexity has only increased in my lifetime while productivity has largely slipped. Shifting more and more of the burden to developer proficiency is just exactly the wrong thing to do. I want more from a typical computer today that is 1000 times faster than the ones I grew up on.


> I think the move towards functional programming, and putting the onus on developers to do the mental elbow grease of converting what are largely macro-style tasks (do this, do that) into functional code (feed this transform into this one) has done a great disservice to software engineering, especially with respect to productivity.

I think you've got this exactly backwards. Functional programming lets you think at a higher level of abstraction (data flow) than imperative programming (control flow). The compiler then applies elbow grease to translate your high-level data flow transformations into low-level control flow constructs.

Let's translate your statement back a generation and see how it sounds: "I think the move towards structured programming, and putting the onus on developers to do the mental elbow grease of converting what are largely assembly-level tasks (branch, copy, add a value to a register) into structured code (if, while, for) has done a great disservice to software engineering, especially with respect to productivity."
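
To make the analogy concrete, here's the same trivial loop written with a structured while and then "de-structured" into explicit jumps, roughly the way an assembly-level view sees it (purely illustrative, in TypeScript):

    // The same count-to-n sum, structured and "de-structured".
    function sumStructured(n: number): number {
      let total = 0;
      let i = 1;
      while (i <= n) {
        total += i;
        i++;
      }
      return total;
    }

    function sumUnstructured(n: number): number {
      let total = 0, i = 1, pc = 0;           // pc plays the role of the instruction pointer
      while (true) {
        switch (pc) {
          case 0: pc = i <= n ? 1 : 3; break; // conditional branch
          case 1: total += i; pc = 2; break;  // loop body
          case 2: i++; pc = 0; break;         // jump back to the test
          case 3: return total;               // fall out of the loop
        }
      }
    }

    console.log(sumStructured(5), sumUnstructured(5)); // 15 15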

Hopefully you can understand how silly that seems to a modern programmer.


I don't think that data flow is a higher level of abstraction than control flow. They're just different types of abstraction, and each has its strengths and weaknesses.

If you only ever work on things that map well to functional programming then you'll naturally think it's superior to imperative programming. Likewise, if you only ever work on things that map well to imperative programming, then the functional programming approach seems a bit silly.


Functional code can represent an imperative program (using temporal logic or state monads), so it can be used for domains where you would use imperative code.

It will not always be easier, but it certainly provides more control over the execution flow.
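
For anyone who hasn't seen the idea, here's a minimal sketch of a state monad in TypeScript (illustrative names, not a real library): a pure value that describes an imperative-looking sequence of reads and writes, which only "runs" when you feed it an initial state.

    type State<S, A> = (s: S) => [A, S];

    const pure = <S, A>(a: A): State<S, A> => (s) => [a, s];

    const bind = <S, A, B>(m: State<S, A>, f: (a: A) => State<S, B>): State<S, B> =>
      (s) => {
        const [a, s1] = m(s);
        return f(a)(s1);
      };

    const get = <S>(): State<S, S> => (s) => [s, s];
    const put = <S>(s: S): State<S, void> => () => [undefined, s];

    // An "imperative" program: increment a counter twice, then read it.
    const program: State<number, number> =
      bind(get<number>(), (n) =>
        bind(put(n + 1), () =>
          bind(get<number>(), (m) =>
            bind(put(m + 1), () =>
              get<number>()))));

    console.log(program(0)); // [ 2, 2 ]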


Implementing an algorithm in a functional language almost always requires less code than implementing the same algorithm in an imperative language. That is direct evidence that functional languages are more abstract.


I enjoy functional programming because it's easier to reason about the code. Often it's easier to write it, but it's true that sometimes it's harder. I find that when I need to read and understand that code later, though, functional programming is usually a win. The same factors that make it sometimes harder to write - state must be passed around explicitly, idiomatic control flow constructs are less general, mutability is explicit and discouraged - make it much easier to understand later, because the interactions between different parts of the system are very clear. Compilers can certainly transform imperative code into the same form in many cases, but the benefit of functional programming is for my ability to reason, not the compiler's.

That said, you can write spaghetti code in any language. =)


Ya I'm not knocking functional programming (I prefer it as well) but I find it frustrating that so much of it breaks from the conventions we are accustomed to in C-style languages. Others around me can’t easily grok what I’ve written. Functional logic is the solution to the problem being tackled, but currently it has to be generated by hand, often in a write-only style. We are effectively writing functional assembly.

Take Swift for example (neat handle by the way!), it's probably the most pedantic language I have ever used. Personally I don't believe that it can be written without compiler assistance, especially when dealing with real-world data like JSON where pretty much anything can be optional. It's a language that makes me feel like anything I try will be rejected outright. It gets us halfway to functional programming with "let" and passing variables as immutable to callbacks by default, but then breaks too far from the contextual assumptions that we've built up in languages like C and javascript to "just work" when we try things. I feel misgivings about Rust for precisely these same reasons.

At this point the only functional language I've found that's somewhat approachable from an imperative background is probably ClojureScript, which is basically just Scheme running in one-shot mode, switching to Javascript to get the next piece of data instead of forcing the developer to use monads. It’s not even close to how I would design a functional language, but it’s substantially more approachable than say, Haskell.

I’m to the point where I am thinking about rejecting all of this and programming in synchronous shell-scripting style in something like Go, to get most of the advantages of Erlang without the learning curve. If languages aren’t purely functional, then I don’t believe they offer strong enough guarantees for the pain of using them. And purely functional languages can’t offer the leverage that scripting can, because they currently can’t be transpiled from imperative ones. It’s trivial to convert from functional to imperative, but basically impossible to go the other direction. They do nothing for you regarding the difficult step (some may argue the only step) of translating human ideas to logic. I think that’s the real reason that they haven’t achieved mainstream adoption.


This is a more fundamental issue though: we (the software industry as a whole) need to throw off the shackles of languages like C.

Swift is probably one of the worst examples when it comes to functional programming because it's still a C-like language with some FP-like things in the stdlib. So you get none of the advantages and some inconsistencies weighing it down.


Yes, I agree that Swift is 'too C-like', and besides, I was looking at Kotlin first, and it seems almost exactly the same syntactically. I like the J programming language, which allows for very small programs due to the composing and abstracting of functions. J's use of high-level abstraction using ASCII characters to represent functions (operators - verbs, adverbs, nouns...) seems to scare a lot of people away from it. The irony is of course the move towards array-computing hardware, GPUs and FPGAs that lend themselves to a perfect match with array-based languages like APL/J/K/Kona and others, and yet we mold array/vector libraries or patches onto the C-style languages to enable programming GPUs.

People get comfortable with their PLs like their native tongue. It's why I would hear Westerners think a Chinese child was being particularly whiny compared to their own children or other Western children, when in reality the Cantonese-speaking child was saying the same type of things. Being American and understanding some of what the child was saying in Cantonese allowed me to make that observation, and to fully realize how our comforts and preconceptions operate on how we perceive others and the world. This is why I try to be multilingual in PLs and spoken languages.

DeepUI seems like yet another way to tackle implementing our goals in a different language, and thereby also gain understanding into how we 'normally' do it.


I also think you have it backwards: FP is the higher-level abstraction; pure FP is essentially pure math.


This. Well put.


Someone has definitely spent a lot of time on this: check out Ted Nelson's ZigZag structure: http://xanadu.com/zigzag/

It is exactly that: an attempt to structure data in a way similar to how our thoughts are formed. I believe it was this video where he briefly explained that concept: https://www.youtube.com/watch?v=Bqx6li5dbEY

Although it might be a different video since Ted Nelson is all over the place with his documents and videos.


> We don't, however, try to understand what "thought" is, and work backwards to form a representation of it.

Yes, we do. ML/AI is a moving target that literally represents the current SOTA in doing just that, and even symbolic logic itself is the outcome of an older, a priori, way of doing that. Actually, analytic diagrams are also an outcome of one approach to that. So, all programming methods you mention come from some effort to model thought and make a representation of that model.


Perhaps I spoke too broadly. You're right, many disciplines try to understand thought. From CS to philosophy to neuroscience to psychology etc.

My real point is that thought is not visual or textual. Those things are simply ways of transmitting thoughts. When I have a thought, and I write it down, and you read it, I am simply hoping you are now having a thought related to the one I had: some interaction in your brain is similar to the one in mine when I had the thought. Civilization has spent a lot of effort on mechanisms that correlate thoughts between people. Hence language. Hence literacy. Etc.

Now we are trying to create a shared language between humans and computers, where we both understand each other with minimal effort.


Developer tools have always been biased towards text manipulation, and - in a Sapir-Whorf kind of a way - that has influenced which ideas are imaginable in computational languages.

Even the word "language" is biased towards text, or at least an atomic symbolic representation which is probably verbal.

I agree this is unimaginative, and probably naive. But dataflow/diagrammatic systems tend to produce horribly messy graphs that are incredibly unwieldy for non-trivial applications. (My favourite anti-example is Max/MSP, which is used for programming sound, music, and visuals. It's popular with visual artists, but its constructs map so poorly to traditional code that using it when you're used to coding with text is a form of torture.)

I think it's hard to do better, because human communication tends to be implicit, contextual, somewhat error prone, and signals emotional state, facts, emotional desires, or more abstract goals.

Computer communication lacks almost all of the above. Programming is a profoundly unnatural pastime that doesn't come easily to most of the population.

The fact that written languages and code both use text is very misleading. They don't use text in anything like the same ways, and code systems are brittle, explicit, and poor cousins of the formal theorem description used in math.

So the domains covered have almost no overlap. Coding is machine design, and human thought mostly isn't. It's hard to see how they can work together with minimal effort unless the machines explicitly include a model of human mental, emotional, and social states, in addition to the usual representations of traditional formal logic.


> Now we are trying to create a shared language between humans and computers, where we both understand each other with minimal effort.

The tricky part is that it needs to be a shared language between humans, computers, and other humans, if we want software to be maintainable.


String example: if you have "foo" and "bar", both are a list of characters. Now, "bar" has a beginning represented by a handle and you drag that handle to the end of "foo". Very briefly something like that. Of course, not everything is set in stone and we need to try multiple approaches to see which one is the fastest.


> We don't, however, try to understand what "thought" is, and work backwards to form a representation of it.

I had to laugh here, because that is exactly how I designed ibGib over the past 15 years. It is built conceptually from the ground up, working in conflict (and harmony) with esoteric things like philosophy of mathematics and axiomatic logic systems, information theory, logic of quantum physics, etc. Anyway, like I said...I just had to laugh at this particular statement! :-)

> Visual representations like the one in this video also have their shortcomings. The most obvious being, a monitor is 2D. Thought is not 2D. Going to 3D won't help either. Thought also not only spatial and temporal. For example, how would you represent concatenating strings visually?

> I think the more interesting question is how we can accurately represent thought?

In ibGib, I have created a nodal network that currently can be interacted with via a d3.js force layout. Each node is an ibGib that has four properties (the database has _only_ these four properties): ib, gib, data, and rel8ns (to keep it terse). The ib is like a name/id/quick metadata, the data is for internal data, the rel8ns are named links (think merkle links), and the gib is a hash of the other three.

The ib^gib acts as a URL in a SHA-256-sized space. So each "thought" is effectively a Goedelian number that represents that "thought".
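
If I'm reading the description right, the record shape is roughly the following (my own TypeScript sketch, not ibGib's actual code; the field names are taken from the comment above):

    import { createHash } from "crypto";

    interface IbGib {
      ib: string;                         // name / id / quick metadata
      gib: string;                        // hash of the other three fields
      data: Record<string, unknown>;      // internal data
      rel8ns: Record<string, string[]>;   // named (merkle-style) links to other ib^gib addresses
    }

    function makeIbGib(
      ib: string,
      data: Record<string, unknown>,
      rel8ns: Record<string, string[]>
    ): IbGib {
      const gib = createHash("sha256")
        .update(JSON.stringify({ ib, data, rel8ns }))
        .digest("hex");
      return { ib, gib, data, rel8ns };
    }

    const note = makeIbGib("note", { text: "hello" }, { ancestor: ["note^ROOT"] });
    console.log(`${note.ib}^${note.gib}`); // the "ib^gib" address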

This is essentially the "state" part of it. The way that you create new ibGib is for any ibGib A to "contact" an ibGib B. Currently the B ibGibs are largely transform ibGibs that contain the state necessary to create a tertiary ibGib C. So each one, being an immutable datum with ib/gib/data/rel8ns, when combined with another immutable ibGib, acts as a pure function given the engine implemented. This pure function is actually encapsulated in the server node where the transformation is happening, so it's conceivable that A + B -> C on my node, where A + B -> D on someone else's node. So the "pure" part is probably an implementation detail for me...but anyway, I'm digressing a little.

I'm only starting to introduce behavior to it, but the gist of it is that any behavior, just like any creation of a "new" ibGib, is just sending an immutable ibGib to some other thing that produces a tertiary immutable ibGib. So you could have the engine on some node be in python or R or Bob's Manual ibGib manipulating service where Bob types very slowly random outputs. But in the visual representation of this process, you would do the same thing that you do with all other ibGib. You create a space (the rel8ns also form a dependency graph for intrinsic tree-shaking btw) via querying, forking others' ibGibs, "importing", etc. Then you have commands that act upon the various ibGib, basically like a plugin architecture. The interesting thing though is that since you're black-boxing the plugin transformation (it's an ibGib), you can evolve more and more complex "plugins" that just execute "their" function (just like Bob).

Anyway, I wasn't going to write this much...but like I said. I had to laugh.


This sounds very close to what I spent a year or two searching in vain for. The closest thing I could find was node-red (http://nodered.org/), but that still fell far short. In a weird way, the problem is that every attempt at implementing a functional/dataflow programming environment, at least from what I've seen, invariably tries to do way too much.

The ones I've seen all seem to try to give you a whole bunch of pre-canned functions/nodes for everything you might want to do. This is clearly not a feasible approach. As I see it, they really only need to implement three or four things to have the ideal solution. The first two are: function-nodes that take input which they operate on to produce output, and directed edges that make it possible to connect outputs to inputs.

And following from this the second two logically fall out: a low friction way of amassing (and sharing) a library of function nodes, and some clever UI trickery that makes it easy to black-box a 'canvas' of interconnected function-nodes so that it just becomes a single function-node on a 'higher-level' canvas (i.e. effortless encapsulation, composition and abstraction without loss of low-level control). Systems within systems within systems.
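
For what it's worth, here's a toy TypeScript sketch of those primitives (all names invented): function-nodes, edges wiring outputs to inputs, and a blackBox() that collapses a whole canvas into a single node usable on a higher-level canvas.

    type FnNode = (inputs: number[]) => number;

    const add: FnNode = ([x, y]) => x + y;
    const double: FnNode = ([x]) => x * 2;

    // A "canvas": nodes plus directed edges from outputs to inputs.
    // edges[target] lists, per input slot, either an external input ("$0", "$1", ...)
    // or the name of the node whose output feeds that slot.
    interface Canvas {
      nodes: Record<string, FnNode>;
      edges: Record<string, string[]>;
      output: string; // which node's output the canvas exposes
    }

    // Collapsing a canvas into a single node: encapsulation/composition
    // without losing the lower level.
    function blackBox(c: Canvas): FnNode {
      return (externalInputs) => {
        const memo: Record<string, number> = {};
        const evalNode = (name: string): number => {
          if (name.startsWith("$")) return externalInputs[Number(name.slice(1))];
          if (!(name in memo)) {
            memo[name] = c.nodes[name](c.edges[name].map(evalNode));
          }
          return memo[name];
        };
        return evalNode(c.output);
      };
    }

    // (a + b) * 2, now usable as one node on a higher-level canvas.
    const addThenDouble = blackBox({
      nodes: { sum: add, out: double },
      edges: { sum: ["$0", "$1"], out: ["sum"] },
      output: "out",
    });

    console.log(addThenDouble([3, 4])); // 14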

I honestly don't know if any of this makes sense or sounds like a good idea to anyone else. Admittedly I tend to think that our entire reality, from the cosmological to the microscopic, is just one big system composed entirely of other interconnected, lower-order systems. Everything is a system.


It largely makes sense, and there are many developers trying to find this "holy grail" model of efficient computation that will be the basis of future software, overcoming the limits of the Von Neumann machine (which has been the default model for the last >60 years).

There's a problem though: every developer will understand those explanations in slightly different ways, making it difficult to communicate the reason why such a model is needed. What I miss in projects like ibGib, or in your comment above, is grounding them in concrete examples of use cases: practical situations that are particularly interesting to the developer and which explain how their specific approach is better than traditional programming in that situation.


I didn't "set out" to find a "holy grail" and certainly not to overcome the Von Neumann architecture...ibGib in its current incarnation is just the current manifestation of my approach to address some of those concrete examples, mixed with my background like many of understanding and unifying things such as physics, maths, etc.

So, as I mentioned in the other reply (I even used the term "concrete"), I am currently just on a really cool note taking app. It's pretty much done(-ish) since it's totally usable, and I'm on now to a different concrete goal.

The other "practical situation" arose from ibGib's previous Android incarnation, which was basically two-fold: 1) Too expensive creating domain-targeted data structures (was using EF, but any relational-database would have the same issues). 2) Caching and cache invalidation.

IbGib is addressing both of these issues: 1) I now have an engine to create domain business objects without all the muck of having to "get it right" the first time. This is because it keeps track of the entire history of the process of creating the (class) data structure as well as the actual data implementing those structures. 2) As a corollary to this aspect, I now can update the client whenever any change occurs within an entire dependency graph, because both the data and the code-as-data are "indexed" by the ib^gib.

So caching in ibGib's current web app is basically about passing around "pointers"; from what I understand, this is very similar to how Docker handles layering hashes when building and rebuilding Docker images.

Also, I can't avoid saying a meta use case, which is this thread that we're having right now. In a forum, you have a linear view and that's pretty much it. With ibGib, you can have truly branching threads, with a linear time view being just one projection of the content of those branches. So, say for example with Slack, they have a "thread" feature that they've just implemented. But it's only one thread. With ibGib, it's n-threads. The linear view is one of my issues that I'm going to be tackling next (along with notifications). But it's slow going, cuz it's just me ;-)


> So, as I mentioned in the other reply (I even used the term "concrete"), I am currently just on a really cool note taking app. It's pretty much done(-ish) since it's totally usable, and I'm on now to a different concrete goal.

Yeah sorry I didn't mean to imply that you don't have concrete goals (although I couldn't find them explicitly stated in your website), only that this kind of "rethinking computing/storage/interaction" projects are often hard to approach from the outside.

> IbGib is addressing both of these issues: 1) I now have an engine to create domain business objects without all the muck of having to "get it right" the first time. This is because it keeps track of the entire history of the process of creating the (class) data structure as well as the actual data implementing those structures.

That's cool! I've been looking for a platform that allowed incremental persistent storage, to build my own note-taking-meets-programming tool. How easy is it to detach the engine from the user interface in ibGib? I'd like to create something less "bubbly" for myself, but I could learn about how you use your simple data model to build domain objects. I've also been following the Eve language and I like their computation model, but so far there's not much there in terms of persistence.

> I think it's really exciting :-O, since it actually ties together many many things fundamentally: logic, physics, mathematics, AI, religion...

Just curious, how does a data model touch religion? :-D


> Yeah sorry I didn't mean to imply that you don't have concrete goals (although I couldn't find them explicitly stated in your website), only that this kind of "rethinking computing/storage/interaction" projects are often hard to approach from the outside.

Ah, infer and imply - perfect for ibGib! I say this because I didn't make that inference about concrete goals. It was more like an event that prompted more attention to the concept of concreteness. As for the website, I hope it is quite obvious that it is a WIP! ;-) I'm not a great front-end person, as I am super abstract and backend-ish, which segues nicely into...

> How easy is it to detach the engine from the user interface in ibgib?

The UI/web app is totally just a view into the engine (which is itself just the current expression of the "concept of ibGib"). It allows us to explore these abstract concepts more concretely. The plan is to have a CLI, an API, and possibly an isomorphic secondary implementation that allows for occasionally disconnected scenarios. The POC was attempted in isomorphic client/server JavaScript/TypeScript, but the completely parallel/concurrent aspect of it was too unwieldy. Elixir (Erlang and the BEAM!) turned out to be ridiculously well-suited for it, and their community and documentation are stellar.

> I'd like to create something less "bubbly" for myself, but I could learn about how you use your simple data model to build domain objects.

To me, this is incredibly easy to do. But I'm not sure if DeepUI's HN thread is quite the right venue for such a thing. I would love to work with you (and anyone else interested) in a GitHub issue. I am holding off on doing my own Show HN because I want a couple more specific features before doing such a thing. (I wasn't planning on speaking this much, but the comment was just too perfect).

> Just curious, how does a data model touch religion? :-D

ibGib is less a data model and more a projection of a certain type of logic...it's a "meta-logic", which ends up being the logic. This is similar to how any turing machine can emulate any other turing machine. The point is that I've been developing this logic for just about my whole life. I was that kid who slept through class, did no homework, got 800 SAT/36 ACT math scores, etc. But axiomatic systems, and the process of actually applying math, and proofs, rigor, etc. all didn't sit well with me. Neither did religion. Now it does. But that's a perfect opportunity for a GitHub issue or an ibGib. I don't have notifications implemented yet, but collaborating is actually implemented in that you can add comments/pics/links to anyone's existing ibGib that you have a link to.


You make so many salient points, I'm like a kid in a candy shop thinking of what to speak to. But in order to avoid writing a book, I'll just address your two main points.

First, the notion of the pre-canned functions for nodes: one of the really novel things about ibGib is that there is an infinite number of possible functions that can be "performed on" each node. Any node can be combined with any other node, and we're all nodes, our programs are nodes, etc. Currently, programmers micro-manage this in a locally addressed memory space. What I've discovered recently is that my design is actually like a universally sized Turing-complete language. One of the earlier conscious design decisions I made was that ib^gib are cheap and data itself is expensive. This is essentially the same decision you make when dealing with pointers and memory... You "just" pass around pointers and the actual thing can be dereferenced to get the value (also it's immutable, also it maintains integrity, and more and more; I have to stop though or I'll keep going). So basically, my point is that dealing with the pre-canned function aspect is essentially just creating a new language... but why a new language, and what is different?

Which brings me to my second point about the "low friction way of amassing (and sharing) a library of function nodes...": My design also ends up coinciding in many ways to GitHub's hashes (which I only realized after the fact when explaining this system to a brother of mine). But fundamentally ibGib is unique! Whereas GitHub (and everything else like it) thinks in terms of files and folders, dealing with diffing files, ibGib works at the conceptual/semantic level thinking of everything in terms of ibGib. You don't "create a new" ibGib or "new up" an ibGib. You fork an existing ibGib (any existing ibGib), and when you are forking a "blank", then you are actually forking the "Root". This conceptually has profound implications, but the end product is that you are forking, mut8ing, and rel8ing at an atomic level, the atomicity part being totally up to the ibGib that is combining state. For now, that's just my ibGib engine on my server, but really it's anything that takes in an ibGib and outputs an ibGib. So imagine you went to GitHub and not just forked a library, but forked a single function/class in that library. ibGib's data structure keeps a complete dependency graph (in the form of rel8ns to the ib^gib pointers) for every single ibGib. So if you have to make a change to some aspect, you fork it and make the change and now you're using the fork.

There are issues of complexity arising at this level of granularity though, which is partly why I'm working very slowly on concrete goals. The first one is the "Note Taking" app, which is already phenomenally useful (the videos I have don't really touch it). I'm dogfooding it every day, and though it obviously has limitations, it's extremely easy to use (as long as I don't hit my t2.micro limit, now upgraded to a small on AWS, hah). Also addressing the granularity is how easily this system incorporates AI/machine learning, etc. This is because it's essentially a Big Data architecture on top of everything else. You create nodes that operate on projections of nodes that create other nodes.

And I've already written so much, but I had to also mention that your "higher-level canvas" is a dependency graph projection of ibGib. Just today I've implemented tagging ibGib (which you can see on my github repo on my issue branch). Anyway, thanks for your response, and I'd love to talk more with you and anyone else about this because I think it's really exciting :-O, since it actually ties together many many things fundamentally: logic, physics, mathematics, AI, religion...the list goes on and on. Feel free to create an issue on my repo and we'll mark it as discussion, question, etc. After I've gotten a couple more features in place I plan on doing a Show HN.

Also I apologize to the DeepUI people as I'm using their thread to talk about ibGib (so I'm cutting it off here!). I have to mention that their video to me looks really awesome, and it reminds me of the MIT Scratch program (https://scratch.mit.edu/). But like others have mentioned on this thread, I also was totally confused as to how one would actually use it. But I love front end magic and polish, since that is what I severely lack!


Some feedback on the website: IMO, it's obnoxious and disrespectful to play full screen video like you're doing. That just made me close it immediately and leave after reading through the text.

Considering the complexity involved in developing an advanced IDE or similar, have you considered publishing an open source community version? Similar to JetBrains with IntelliJ. They seem to be doing great.

Since we're on the subject of experimental UI concepts, I'll plug Bret Victor's Inventing on Principle [0] talk. For me it was an instant classic.

[0] https://vimeo.com/36579366


Actually more like Stop Drawing Dead Fish by Bret Victor: https://vimeo.com/64895205

He is my hero actually :)


Why does EVERYTHING have to be open source these days to be considered acceptable?


Because with open source software:

- You can run the program as you wish, for any purpose.

- You can study how the program works, and change it so it does what you want.

- You can redistribute unmodified copies without fear of the publisher suing you.

- You can distribute copies of your modified versions to others without fear of the publisher suing you.

Who wouldn't want to have all that in a tool that will be the basis for your own projects?


I understand when we're talking about crypto or security. You want your data to be safe. You want to know that it is what they're selling. But this? A guy is building something that might be cool one day. Why should he make his engine/IDE open source? He can, but he doesn't have to. I don't see people screaming about why SublimeText, Ableton Live, Photoshop or many other tools aren't open source. There is still that thing called commercial software, and some people make a living out of it.


SublimeText, Ableton Live or Photoshop are not platforms where you create an automated software artifact that runs on top of the tool. On top of a programming language, however, you depend on the tool being always available without major changes for as long as you need it to remain stable. This is not guaranteed on a proprietary development platform. They are simply not in the same class of products at all.

In all your three examples, the product they create is detached from the creation platform before release, and that product (text, music, images) is compiled/played/displayed on a different application; and also there are many alternative tools where the product could be processed if the original tool ceased to exist.

So there's no fear with those that, if the tool creator begins imposing more and more restrictions, you'll be locked-in. But a unique programming language for which your code can't possibly be ported to a competing platform? One would be stupid to develop for it anything requiring ongoing commercial support, beyond some cool tech demos.


Stupid as all those people building games with Unity? Or all those people who used to use Unreal/UDK before v4 while there was no source code available?

Anyway, I'm not here looking to start a flame war. My point was that the product can be good or bad, and it doesn't have anything to do with its source code being available or not.

My 5c. Everyone feel free to disagree.


Both Unity and UDK recognised the benefits of open source, and have OS components.

A product can be good or bad without OS. So long as you can deploy what you make with it effectively.

Sometimes that means mindshare, like UDK and Source.

Sometimes that means completely decoupling tooling from code, like Sublime and Atom.

Sometimes it is more about the deployment, like Visual Studio and Unity.

However, to stand out, this product needs one of the above.

It's new, and doesn't seem to be backed by a big name, so no mindshare yet.

So they need to somehow make devs want to use it for their code. A nice experience is usually not enough.

An OS community, or a fantastic cross-compilation strategy are the only two things that I, personally, have seen work.

OS seems to be the easiest of the two.


> So they need to somehow make devs want to use it for their code. A nice experience is usually not enough.

While I side with your central argument, there are numerous cases where the above is absolutely enough.


Games are one kind of software where you usually won't expect to keep your code base maintained 10 or 20 years from now. So much for the "stable platform" argument, where you as a user wouldn't want anything less than open source code, unless forced to use proprietary code for external reasons.

There might be a cost/benefit analysis where some closed platform is extremely far ahead in every other aspect, but you should be very well aware of the risks of being tied to that platform.


Even though I love open source, I can't argue with this point.


But what if you write it?

I'm happy with non-commercial use (including providing the source), but if someone wants to make money from software I write, it's only fair that they should pay me, and under the open source model they don't have to, and usually they won't.


That argument is convenient for you, but how is it convenient for the people using your code?


The only people who would be inconvenienced by it are those who can make money out of it. Anyone who agrees not to use it commercially can still receive it, source included, gratis, when it's ready for release.

If a commercial user likes my software they can pay for it or hire me. (I'm available). Is that too much to ask?


But what if:

1) you become unavailable?

2) you don't have the physical capacity to develop everything that the client needs?

3) you don't want to implement some feature that the user needs, for reasons? (related to the software architecture, or the direction you want the project to take, or whatever).

Open source gives the commercial client the flexibility to adapt the code to their uses, in a way that being tied to a single provider will never achieve.


Why do you think companies should have software developed for them by third parties for free?

If they pay for it, they'd have the source, and would be able to modify it for their own needs but not release it.


Who said development should be free? Paying the person who knows the software best in the world to tailor it to your needs makes all the sense. Paying for a copy, in a world where making digital copies is essentially free, does not.

But having access to the source, without permission to modify it nor a perpetual license for using it, doesn't solve any of the problems I listed above. And if they have permission and are doing the modifications themselves, why should they pay the original author?


What you appear to be arguing is that developers shouldn't be compensated for their efforts unless they write custom software for commercial organizations as contractors or employees, and that if they develop novel software no one thinks to ask for, that should just be given away free for others to profit from.

> if they have permission and are doing the modifications themselves, why should they pay the original author?

They should have paid the author for the right to use the software. That would normally include the right to adapt the software for their own purposes.

The author has the right to decide on the terms of the licence under which software is released. Any user who doesn't agree to the terms shouldn't be allowed to use the software.


> What you appear to be arguing is that developers shouldn't be compensated for their efforts unless they write custom software for commercial organizations as contractors or employees, and that if they develop novel software no one thinks to ask for, that should just be given away free for others to profit from.

I never said that this shouldn't happen; don't attribute to me words that I didn't say. I said that typically it won't, since it doesn't make any sense to the users of the software.

> They should have paid the author for the right to use the software. That would normally include the right to adapt the software for their own purposes.

You didn't reply to my question, which was: why?

> The author has the right to decide on the terms of the licence under which software is released. Any user who doesn't agree to the terms shouldn't be allowed to use the software.

No one is arguing otherwise. What I'm trying to explain is that developers following that strategy will likely find themselves with very few users. In the long term, the developer who gives the users a product that better matches the user's needs will displace the one who doesn't, that's pure market behaviour.

Paying the author for the right to be locked in a closed software ecosystem is a terrible value proposition from the point of view of the client. Note that this argument applies primarily to software like the one in the article, which tries to be a development platform, not necessarily to applications.


So that companies which haven't made any contributions to the code or its development can profit from it, presumably.


Because too many projects fail or get abandoned and I will not invest in a project that may not exist next week.


I actually disagree about the fullscreen video. Why would I ever want to watch video not fullscreen?

Vimeo also behaves like this on mobile and it's far superior to Youtube, which often totally hides the fullscreen button.


>Why would I ever want to watch video not fullscreen?

Because you're doing other stuff while listening to the audio.

>Youtube, which often totally hides the fullscreen button.

It's in the bottom right.


>It's in the bottom right.

Not in OP's video. There's literally no way to escape the full-screen without pressing the escape key. You can't even double-click. I'm actually impressed by how user-hostile that video is.


> Why would I ever want to watch video not fullscreen?

Because the way they are doing it there is no minimize button. A less technically inclined person may not know that they have to press Escape (and now, it seems, some laptops have no hardware Esc button at all). In addition, the first time I played it, for some reason, Esc did not work and I had to Alt-Tab in order to leave the video.


I didn't check what this looks like on mobile, but I do the majority of my browsing on an iPad and while I use a physical keyboard with it, it doesn't have an escape button.

Presumably, this would use the usual mobile video player though, which does have a minimise button, so it's likely not a real issue (I'm not on iPad right now so didn't check).


There is an alternative non-fullscreen link available but Amazon CloudFront cache hasn't been updated yet.


It's just a youtube video: https://youtu.be/Gy5m091fOTU


Is it just me? Or does this look 10 times geekier than writing actual code?

I think the project is trying to be a user friendly way of writing programs, and I think that's an awesome idea, but the actual product looks otherwise.

I finished the video and still have no idea what the hell was going on throughout the entire video.


Same here, the video is way too long without giving actual information I was looking for. I guess they should create two videos, one for programmers and one for everybody else.


There is a plan for a lengthier explanation video. The intention of the video was to show that it can be done. Of course, some kind of explanation and training is needed for everything, I don't dispute that.


My problem with the video is that it doesn't show me what can be done. It shows me that the ball can follow the line and with some magical symbols and lines "other logic" can be "somehow" added. It didn't do anything to tell me what other logic is possible, nor did it explain what the symbols and lines mean. So I still don't really know _what_ can be done (besides a pong game) and I've no idea what most of the on screen stuff even is or means.


I think they pursue a similar approach to programming, or to implementing programming logic, as DRAKON does.

If you believe their website, it has been used in the Russian space program.

But I have to agree with you: this DeepUI looks like a PITA to work with in comparison to DRAKON.

http://drakon-editor.sourceforge.net/


Visual languages are very diverse. Drakon and DeepUI have very little in common beside both being visual.


The video was trying to do two things at once: explain the goals and high level ideas as well as demonstrate "syntax" (the clicks and drag and drops). Since these are both new to people coming to your site, it's hard to digest at the same time. I'd love a video of just explaining what the demo is doing, since I didn't really understand how to program in the new approach you described.


The exact method is not that important; the goal itself is. But of course, you have a point. An explanation is planned, but I've had no time for it yet.


As soon as I saw a logic gate implemented for a single keypress I was "noping" out of there. Visual methods, to a one, break down quickly when they reproduce low-level digital logic. At that point, you have a software circuit board, and this is a thing that your CPU can represent just fine by coding in assembly and possibly adding an abstraction on top for a dataflow representation.

Graphics are absolutely wonderful, in contrast, when they are able to stick to a domain abstraction, which is why we have a notion of application software at all. I have, in fact, aimed towards discovering what a "Pong synthesizer" would look like, so I have the domain knowledge to know that it does tend to lead to the digital logic breakdown if you aim to surface every possibility as a configurable patch point. As a result I started looking more deeply into software modularity ideas and rediscovered hierarchical structures(Unix, IP addressing, etc.) as the model to follow. I'm gradually incorporating those ideas into a functioning game engine, i.e. I'm shipping the game first and building the engine as I find time, and I do have designs for making a higher level editor of this sort at some point.

However, I also intend to have obvious cutoff points. There are certain things that are powerful about visual systems, but pressuring them to expose everything at the top level is more obscuring than illuminating. So my strategy is to instead have editors that compile down to the formats of the modular lower-level system, smoothing the "ladder of abstraction" and allowing people to work at the abstraction level that suits them.


The logic gate's primary function seemed to be limiting the paddle so it can't move outside of the playing field.

Otoh, in a visual programming language it'd feel more natural to make the upper and lower edges of the playing field collidable (I'm sure there's a better word for that), so that moving the paddle is inherently limited by collision with the edges.
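
In ordinary code the two styles would look something like this (field dimensions and names made up), which is really the same constraint expressed as an explicit clamp vs. a collision test:

    const FIELD_TOP = 0;
    const FIELD_BOTTOM = 400;
    const PADDLE_HEIGHT = 60;

    // "Logic gate" style: combine the comparisons into an explicit limit.
    function movePaddleClamped(y: number, dy: number): number {
      const next = y + dy;
      return Math.min(Math.max(next, FIELD_TOP), FIELD_BOTTOM - PADDLE_HEIGHT);
    }

    // "Collidable edges" style: the move is simply rejected if it would overlap an edge.
    function movePaddleColliding(y: number, dy: number): number {
      const next = y + dy;
      const hitsEdge = next < FIELD_TOP || next + PADDLE_HEIGHT > FIELD_BOTTOM;
      return hitsEdge ? y : next;
    }

    console.log(movePaddleClamped(380, 50), movePaddleColliding(380, 50)); // 340 380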


Like so many projects on the internet: Lots of big words and ideas, but no content. Show us some actual tech and code and I might throw my money your way. Or better: Release your code under an open license and I might even throw my time your way!


I need the money to develop it fully. If it was usable at this point, I would release it, no doubt. I want something out of the door ASAP, that's why it is focusing on 2D games first. That is feasible but still hard. This is really not a technology in the traditional sense. The runtime itself is nothing new. The real thing is the UX, that is how programming can be made more efficient to do. I believe if someone supports this, he/she supports the goal of this project, not the concrete implementation.


I love the idea of it... but the video makes it very hard to see how it works. I believe it's possible to show how this works without it actually working right now. I think you might be focusing too much on making it look good and polished, you need to just make it work, bare bones!


> I believe it's possible to show how this works without it actually working right now.

Exactly! A short tutorial using screenshots to explain what stuff means, for example, would go a long way and doesn't actually require anything to be implemented as the screenshots could be mocked up.


There is this old idea that programs are somehow "limited" by their textual representation, and that a 2D graphical syntax would unleash more possibilities. Never worked very well so far, unfortunately, except for a bunch of very specific niches.


Isn't this just UML for a generation that wasn't programming during Dot Com?


From what I understand after watching the video, I think this abstraction could work for extremely simple implementation details, but once the implementation gets even slightly complex, the complexity of coding it with this system balloons.


That's the exact problem I encountered when I worked on a similar product for microcontrollers.

Having to use a mouse to interact with your programming IDE graphically doesn't scale. It does make for a decent tool for hobby projects or prototyping, though.


Yeah, then you have LabVIEW, and nobody wants that.


I like it, and I think this is the general direction that creating applications will take in the future.

But don't throw away text-based programming yet; the wiser move would be to combine the two.

Find use-cases where visual DeepUI-style programming shines and is vastly superior to text-based programming, but let me polish the details with old-school text source code.

There are apps which already do a lot of this, for example Unity - you can assemble your scenes and animations visually and tune it up with code.


I worked on Accelsor, which is a tactile-spatial programming language, and I think the ideas here are actually really good (so don't let the HN haters get you down).

Ultimately, though, work like this leads to needing to reinvent all of programming (unfortunately). For instance, I'm now having to build a graph database to handle Turing-complete systems that are being collaborated on in realtime (see http://gun.js.org). So prepare for a long haul of work ahead of you. Get to know people in the space, like me and the Eve team, etc.

If you persist long enough (don't let money or lack of money stop you) you'll make a dent. :)


I still have to eat though :)


This seems quite similar to LabVIEW. So I imagine it'll have similar pros and cons: LabVIEW is great for putting together quick prototypes for e.g. data collection or visualization, but it quickly becomes unmanageable as the complexity increases; you need to 'tidy up' the placement of the various operators or it ends up being a rat's nest.


No, they explain that LabVIEW and other visual languages are just different ways of representing the code. The idea here, I think, is that there's far greater coupling between the output of the program and the program itself. It's similar to some of Bret Victor's ideas: https://www.youtube.com/watch?v=PUv66718DII

I'd like to incorporate this coupling idea into my own visual dataflow language (http://web.onetel.com/~hibou/fmj/FMJ.html), but haven't yet decided how to implement it. My approach has been to design the language from the bottom-up, so that simple programs can be simply drawn, and there are higher level programming constructs which simplify more complex code, avoiding the complexity problem (the Deutsch limit) you've seen with LabVIEW.


i have written very large systems in labview, and your viewpoint is simply not accurate for a good labview programmer. just like any coding discipline, you keep your VIs, classes, libraries, etc. small and suited for a single purpose. what you end up with is a collection of VIs that basically have a REPL automatically built in (i.e., just run the VI). and when i say large systems, i mean multiple projects with greater than 1,000 VIs and many tens of classes.

it's a rule amongst good labview programmers that you keep your block diagram to where it fits on a single, reasonably sized/resolution monitor without scrolling. simply adhering to that rule encourages good coding practice. within my large systems, i am able to freely edit pieces with often no unintended consequences. since reference-based data types are really only used for multi-threaded communication and instrument/file communication, you typically are operating on value-based data which makes reliable code development quite easy.

and what you describe is equally applicable to any text-based language. neither labview nor text-based languages have built-in precautions against horrific coding standards.


If it were "only" for the spatial relationship between "variables" and logic, LabVIEW wouldn't be such a pain to use.

What's really annoying about LabVIEW is that its programming paradigm is kind-of functional, but it doesn't go all the way and forces you to do things which one kind of expects to be abstracted away, and things become a mess. Let me explain my top pet peeve:

In LabVIEW the main concept are so called VIs: Virtual Instruments. A VI consists of a number of inputs called "Controls", some logic in between and outputs called "Indicators". Inside a VI you have the full range of programming primitives like loops (which interestingly enough can also work like list comprehensions through automatic indexing, but I digress) "variables" (in the form of data flow wires) but no functions. VIs are what you use as function. And if everything happens through VI inputs and outputs and you don't use global variables, feedback nodes or similar impure stuff it's pretty much functional.

Somewhere your program has to start, i.e. there must be some kind of "main" VI. But VIs mostly behave like functions, so if you hit "run" for the main VI it will just follow its data flow until every input has reached what it's wired to and all subVI instances have executed, and that's it. That's perfect for a single-shot program, like you'd have on the command line or executing to serve an HTTP request; however, it's kind of the opposite of what you want for an interactive program that has a visual UI.

Sure, there is that "run continuously" mode, which will just loop VI execution. But all it does is re-evaluate and execute each and every input and subVI again and again and again. If you're using LabVIEW in a laboratory setting, which is its main use, you probably have some sensors, actuators or even stuff like lasers controlled by this. And then you do not want to have them execute whatever command again and again.

There is a solution to this of course, which is called "event structures". Essentially it's like a large "switch" statement that will dispatch exactly once for one event. Of course this caters only toward input manipulation events and some application execution state events. And you cannot use it in "run continuously" mode without invoking all the other caveats. So what you do is, you place it in a while loop. How do you stop the while loop? Eh, splat a "STOP" button somewhere on the Front Panel (and don't forget to add a "Value Changed" event handler for the stop button, otherwise you'll click STOP without effect until you manipulate something else).

And then in the Event structure you have to meticulously wire all the data flows not touched by whatever the event does through so-called "shift registers" in the while loop, to keep the values around. If you forget or miswire one data flow, you have a bug.
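
For non-LabVIEW readers, the pattern being described translates to roughly this (my own TypeScript sketch, not LabVIEW code): an event loop where every piece of state has to be threaded through each iteration by hand, whether or not the current event touches it.

    type PanelEvent =
      | { kind: "SetLaserPower"; value: number }
      | { kind: "SetSampleRate"; value: number }
      | { kind: "Stop" };

    interface State { laserPower: number; sampleRate: number; running: boolean }

    // The "event structure": dispatch exactly once per event.
    function handle(state: State, ev: PanelEvent): State {
      switch (ev.kind) {
        case "SetLaserPower":
          // Only laserPower changes, but sampleRate still has to be carried over
          // explicitly -- that's the shift-register wiring. Forget one field and
          // you get the silent bug described above.
          return { ...state, laserPower: ev.value };
        case "SetSampleRate":
          return { ...state, sampleRate: ev.value };
        case "Stop":
          return { ...state, running: false };
      }
    }

    // The while loop around the event structure, with STOP handled as an event.
    function mainVI(events: PanelEvent[]): State {
      let state: State = { laserPower: 0, sampleRate: 100, running: true };
      for (const ev of events) {
        if (!state.running) break;
        state = handle(state, ev);
      }
      return state;
    }

    console.log(mainVI([{ kind: "SetLaserPower", value: 5 }, { kind: "Stop" }]));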

What seriously annoys me about that is, that in principle the whole dataflow paradigm of LabVIEW would allow for immediate implementation of FRP (functional reactive programming): re-evaluation and execution of only those parts of the program that are affected by the change.

The other thing that seriously annoys me is how poorly polymorphism is implemented in LabVIEW and how limited dynamic typing is. I'd not even go as far as saying that LabVIEW does type inference, although at least for primitive types it covers a surprisingly large set of use cases. Connect numeric type arrays to an arithmetic operation and it does it element wise. Connect a single element numeric type and an array and it again does things element wise. Have an all numeric cluster (LabVIEW equivalent of a struct) and you can do element wise operations just as well. So if we were to look at this like Haskell there's a certain type class to which numeric element arrays, clusters and single elements belong and it's actually great and a huge workload saver! Unfortunately you can't expose that on the inputs/outputs of a VI. VI inputs/outputs always have to be of a specific type. Oh yes, there are variants, but they're about as foolproof to use as `void*` in C/C++. So the proper way to implement polymorphism in LabVIEW is to manually create variants of your VI for each and every combination of types you'd like to input and coalesce them in a polymorphic VI. And since you have to do it with the mouse and VIs are stored in binary this is not something you can easily script away. Gaaahhh…
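
The element-wise behaviour being described is roughly what a hand-rolled numeric "type class" looks like in code (an illustrative sketch only, nothing LabVIEW-specific): one add that handles scalars, broadcasts a scalar over an array, and zips two arrays. The complaint is that LabVIEW gives you this behaviour on built-in operations but won't let you expose it on a VI's own inputs and outputs.

    type Num = number | Num[];

    function add(a: Num, b: Num): Num {
      if (typeof a === "number" && typeof b === "number") return a + b;     // scalar + scalar
      if (typeof a === "number") return (b as Num[]).map((y) => add(a, y)); // broadcast scalar over array
      if (typeof b === "number") return a.map((x) => add(x, b));            // broadcast the other way
      return a.map((x, i) => add(x, (b as Num[])[i]));                      // zip two arrays element-wise
    }

    console.log(add(1, 2));               // 3
    console.log(add(10, [1, 2, 3]));      // [ 11, 12, 13 ]
    console.log(add([1, 2], [10, 20]));   // [ 11, 22 ]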


That auto full screen thing is infuriating.


Just came here to say the same thing; it sounds interesting, but I bounced as soon as I couldn't view it outside of fullscreen.


Agreed, that was very strange. Right-clicking and choosing "Copy video link" leads me to https://www.youtube.com/watch?v=Gy5m091fOTU


As a quick fix, I've put there a direct link to that video.


I think you should just turn off the forced full screen. No reason to have it at all.


I'm on it!


I like the idea of a physical analog to game logic, e.g. tripping a certain condition based on collision mechanics. This could be useful for something akin to Game Maker, and I'd be curious to see how it would translate to other mediums. I could imagine front end programming would be well suited for this style, especially when creating interactive prototypes.


Yes, those would work too.


I found the video quite confusing.

It seems like it's great for things that have an on-screen spatial meaning, like the Pong example game[1]. But what if I want to represent something abstract?

Like a tree (let's say a quad-tree, since this is for (2D?) games for now)? Or what if I want to implement AI logic (let's say some kind of decision-tree planner and pathfinding)? I'm having trouble visualising (I guess because the video didn't really explain it) how any of this can be done, as opposed to "moving something around on the screen".

I assume this has been thought about. I just couldn't figure out any of the details from the video.

[1] Although even in that case, I couldn't figure out what the symbols and lines in the video meant. The symbols especially seem cryptic: a mix of logic gates and something else?


Looks very similar to blueprints in UE4, with more visual integration of what actually happens.


Actually, that presents the code in a visual way while this allows you to work on the thing itself. Similar to Stop Drawing Dead Fish by Bret Victor: https://vimeo.com/64895205


Looks interesting. I would like a more detailed look at the visual language being used here. How is logic projected out into the physical world? Is it simply a matter of making variables into nodes?


Looks completely detached from reality. It's like a digital revival of electronics?


This looks like a complete scam.


Why do you say that?


To be fair, your site looks like one of those weird, overly ambitious, overpromising products, like:

* http://madmaniak.github.io/pro/
* https://thegrid.io/ (seems like their site is having some difficulties)

Before yours, there have been many products that promised to enable programming for everyone; many of these have been scams, utter failures, or just overly marketed, mediocre software packages.

The thing is, the way your video seems voiced to "sell" rather than "explain" what is going on, as well as the odd, or at least "unique", hacker aesthetic of the website, makes it all look very dubious.

I have no idea how to showcase a product like this the "right" way (maybe there is no "right" way), but I can certainly say that the aesthetic, big claims, confusing video, and ambitious future plans give the project a bad smell.


Can't watch the video right now. Can anyone summarize it? What is this / how does it work (from a user standpoint)?

(Pro-tip for the site: screenshots and concrete descriptions of how things work.)


Cute idea. Why no Linux?


Maybe because GNU/Linux users are allergic to money for desktop software.


But we do. And I do. That's all I'll say to you. I don't want any part of this vitriol.


Out of respect for Linus, it should be called Linux, and not by some made-up name.


A lot of people worked on the user space, too. I understand why both camps want to get some of the credit.


Linux is a made-up name.


No, Linus created a piece of software and called it Linux. It is his creation and he named it.


Funnily enough, he didn't :)

Initially, Torvalds wanted to call the kernel he developed Freax (a combination of "free", "freak", and the letter X to indicate that it is a Unix-like system), but his friend Ari Lemmke, who administered the FTP server where the kernel was first hosted for download, named Torvalds's directory linux.


[flagged]


An important part of this community is that when we disagree with other members we don't attack them or call names like this, so please don't.


Just recently we had the author of Octave thinking about giving up development because of exactly this problem, so excuse me if you don't think my remark was mature enough.

https://news.ycombinator.com/item?id=13603575

Do you at least pay for the distribution you use? I do.


Your comment attacked an entire user base, their collective crime being that they don't share your particular (weak) moral values about the distribution of software intended for use on the Linux desktop.

Worst of all, you now insinuate that a problem suffered by one author of one application that is used on the GNU/Linux desktop is a problem for all.

Not everybody can "pay for" (support monetarily) free software. I personally don't see why it is any of your damn business what I do and do not "pay" for. My distribution does not make a huge deal over asking for donations but I do support them both monetarily and otherwise. Frankly, I think the 'other' ways that I support the free software movement are more valuable than whatever money I can chuck at a guy whose program I like.

Plenty of free software developers and maintainers would agree with your line of reasoning, which is part of the beauty of free software. But few of those people would argue that yours is a decent reason to attack their entire user base.


I wish this wasn't a deeply nested comment attached to a somewhat unrelated article because I think this is a really interesting discussion that deserves its own thread. There are some genuinely interesting ethical questions here.

FWIW, I feel that people who have the money to do so should support the artists, engineers, and other folks who create the things they use and enjoy. I buy books rather than borrow them from the library for this reason: I want to give the authors more money, so that hopefully they'll keep writing the books I love to read.

I know not everyone shares my perspective, though.


Because users like yourself feel entitled to have access to the work of others without regard for how we manage to pay our bills.

All nice and good when it is possible to sell books or consulting services, or to hide the software behind a SaaS paywall that helps pay the bills; that is absolutely not the case for desktop software, unless it is a web-based application behind such a paywall.

This prevents any kind of long-term business model targeting the GNU/Linux desktop.

Which is yet another reason why many would rather target app stores nowadays.


Where is this bitterness and hostility coming from? How do you know what kind of user I am, and what kinds of works I may or may not be personally responsible for within the free software community?

I happen to know the struggle you speak of first-hand. I paid my bills early on in my journey by teaching courses related to the subject matter my project touched upon. Free software has _no opinions_ on the adequate income model for developers who involve themselves in its world. The beauty of free software is that it is agnostic to your kind of ethics-mongering, which is why I have chosen to be so intimately involved in it myself.

We all need to make a living, and I don't have any presumptions about how someone chooses to do it. However, I think you would do better not to force your particular difficulties and decisions on this front onto the entire GNU/Linux user base.


Why shouldn't one feel entitled to use software that is legally available for free? Do you feel guilty for using HN without paying YC for it? What a weird concept.

Yes, there's a lack of money in Free Software. Yes, there are consequences of that, like the abandonment of certain projects. That doesn't mean non-paying users are somehow guilty of something; that's a poisonous attitude.


I am not sure how much more maturely he could have put it, but it is definitely true. You can verify it by looking at statistics for different categories of software: games, CAD apps, etc.


This question has been around since I connected to the internet. The response has always been to use a reasonably decent OS.


A mashup of flowcharts and FRP with Hollywood aesthetics?


FRP seems about right, and I wanted it to look cool :). But no flowcharts; those represent steps. If you are talking about the logic symbols, they are not step-based at all; they're more like physically implemented logic circuits.
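
To illustrate the difference (just a rough Python analogy on my part, not how DeepUI is implemented): a flowchart is an ordered list of steps, while a circuit is a set of relations that hold for whatever the current inputs are:

    def paddle_circuit(ball_x, paddle_x, paddle_w):
        """Outputs are pure functions of the current inputs, like gate outputs."""
        over_paddle = paddle_x <= ball_x <= paddle_x + paddle_w
        return {"bounce": over_paddle, "lose_point": not over_paddle}

    print(paddle_circuit(ball_x=12, paddle_x=10, paddle_w=5))  # {'bounce': True, 'lose_point': False}
    print(paddle_circuit(ball_x=30, paddle_x=10, paddle_w=5))  # {'bounce': False, 'lose_point': True}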


Very promising. The site doesn't mention Unity 3D; have you considered building a plugin for it?


No, I haven't considered it yet. To be frank, my plate is full right now.


Many people did not understand the video, so I am going to create a detailed explanation.


The fullscreen issue has been fixed, but the old version of the website is still cached.


Is it something like Wolfram's approach? Candidly asking.


I watched the video but have no idea what I'm looking at. Is it supposed to be a Pong game of some sort?


So they've reinvented MATLAB Simulink or LabVIEW with a dorky interface and 0.5% of the functionality.


I feel like 90% of the problem with LabVIEW is the interface, so I wouldn't discount interface improvements.

Even if this weren't spaghetti code, most of the LabVIEW icons are totally unreadable:

http://www.ni.com/cms/images/devzone/pub/nrjsxmfm91216399872...


Simulink and LabVIEW are to DeepUI what C is to Lisp.

Simulink and LabVIEW are made for electrical engineers transitioning to developers. They are good tools for building and shipping products.

DeepUI, on the other hand, introduces a new paradigm for visual expression, and it looks to be targeted more towards computer scientists and experimental artists (and maybe hobby game developers with an aversion to traditional programming).

I agree that in all these visual programming languages the interface is always way too slow. They tend to completely ignore the keyboard, which is by far the fastest input device.


"Purpose" is misspelled.


Fixed, though the old version is still cached.


Good luck.



