Show HN: Skov – A visual programming environment (skov.software)
142 points by nicolas-p on Dec 2, 2016 | 62 comments



This is coming close to the way I envision we’ll be programming in the future.

This goes from traditional 1-dimensional text-based programming to 2-dimensional visual programming.

What I envision is a 3-dimensional augmented-reality programming language where program elements will be floating around in 3d space around us and we use an augmented-reality interface to interact with them.

It’ll also enable us to do IoT programming in a very literal sense.

For example, if you look at an air conditioner, light switch, or some other network-connected appliance in front of you in the real world, the augmented-reality display will overlay the interface exposed by that device.

Let's say you're looking at your phone (not the screen but the actual object). Since this is a device with a GPS chip, your AR interface will indicate that you can do location-based programming with it.

Then you create the equivalent of an if-condition specifying a 10m radius around your current location.

if (phone is within 10m of current location) {

}

Now you look at a lightbulb in the room and your interface shows you that the lightbulb has a method for turning on.

You draw a line from the then-branch of the if condition to the light bulb.

You’ve created a program that turns on the light bulb in your room whenever you’re within 10m of this room.


I've been using Unreal Blueprints for the past few months. It's probably as practical and large-scale as any visual programming system that has ever been designed. There are many good things about it; for example, it "compiles" instantly. There is even some visual debugging. You can change a variable name and it gets refactored automatically. It gives you definitive errors right away instead of making you wait for compilation. It has nice abstractions for subroutines and macros.

However, I have to disagree that visual programming will ever become anything more than a programming mechanism for less savvy users. As soon as a Blueprint gets bigger (equivalent to maybe 200 lines of code), it becomes absolutely unwieldy. You find yourself clicking all over the place all day long. It becomes very hard to parse giant graphs, and hard to keep track of what is where. It simply doesn't scale. Compare this to even a lousy typing speed of 40 WPM, the ability to write thousands of lines of code and be perfectly at peace with everything, and the ability to quickly copy-paste, refactor, and move around with blazing-fast keyboard navigation, as opposed to just two buttons on a mouse.

I was using a HoloLens the other day and created a bunch of objects around my room. It became overwhelming after just a dozen objects, and my hands were getting tired from expansive gesturing all over the place. Code that would fit on a 13" display probably takes up a significant portion of 3D space, because each "if()", "while()", etc. must be represented by space-consuming graphical objects and a forest of connections between them. Humans are good at absorbing a small graph, but as soon as the number of nodes and connections starts climbing, they get frustrated. This is why small toy examples look good in visual programming but no one seems to write 10,000 lines of code in these systems.

So set your expectations accordingly. Visual programming is good for people who don't want to be full-time developers but whose job entails writing maybe 100 lines of code every other week.


So I think programming will eventually become a tool used by everyone rather than something only for programmers and other tech-savvy individuals. It'll be something everyone uses, the same way basic arithmetic is a tool everyone uses these days.

Right now programming is reserved for the few, because in order to program you have to learn a language with a very specific grammar, and the level of abstraction is still quite low. The underlying operations expressed by current languages are not themselves intuitively hard to understand, but the average person is prevented from understanding them by the language barrier. The level of abstraction the average programmer works at is moving continuously higher: there was a time when the average programmer worked at the level of assembly language, but today the average programmer works at much higher levels of abstraction.

There are obvious disadvantages to current visual programming systems, as you've described, but it may be that we just haven't developed the right abstractions and interfaces for visual programming yet.


I doubt that will happen.

There is a distinction between incidental complexity - complex tools and languages - and inherent complexity - the problem itself, its domain, and the emergent properties of it all combined with a Turing-complete machine.

You can't really do much to reduce the inherent complexity, and even professional programmers are struggling with the inherent complexity in all but the smallest problems.

Some programmers struggle more than others, which suggests to me that programmers need to be a bit above average IQ to be efficient, and there might even be a lower limit below which nothing useful happens at all.

What could potentially happen is that we invent AI agents that help us tackle some of the inherent complexity, and maybe non-professionals could string together intents that are interpreted by an AI, but that won't happen soon.

Edit:

The insights about incidental and inherent complexity are not mine. They were popularised in the great book The Mythical Man-Month by Fred Brooks, which has many insights applicable to software development in general, beyond those specific to writing large operating systems.

As the book is 40+ years old and the follow-up essay "No Silver Bullet" - which expands on the complexity topic - is 30 years old, there is little reason we should still be trying to invent the silver bullet.

I guess it could even be described as a failure of the educational systems, because I'm sure there is a lot of research in information theory that has been done, or at least should have been done, that touches on these topics.


Things have changed since Fred Brooks wrote his book: accidental complexity has increased considerably and is now at least an order of magnitude more than essential complexity. I estimate you can take any non-trivial system and reduce it to around a tenth of the size, without removing essential functionality, without changing languages. Chuck Moore goes even further: http://web.archive.org/web/20071009222710/http://www.colorfo...

I've written on the topic here: http://web.onetel.com/~hibou/blog/NoSilverBullet.html


In many cases, what differentiates a programmer who is better at dealing with inherent complexity from another comes down to the ability to apply the right patterns, tools, or strategies under the right circumstances.

This ability is largely shaped by having been exposed to similar problems in the past and having applied various strategies to solve them. One reason a lot of programmers struggle is that our current educational systems do not expose students to computational thinking at an early age. They've simply not had enough exposure to that kind of thinking.

Even though computational thinking draws on computer science as a formal discipline, the insights are applicable to various other domains, and virtually everyone would benefit from being able to apply the same kind of thinking to other problems in their personal and professional lives. Without having to solve any problems specific to computer science, people can still acquire the ability to apply computational thinking effectively by solving other problems in their lives, because the inherent complexities of those problems have some similarity to the inherent complexities of designing software systems.

Now some people may already have highly developed computational thinking abilities without ever having touched a computer because intuitively they've understood how to solve analogous problems in a different domain. Such people may be far better at dealing with inherent complexity than some or most professional programmers but they could never apply those abilities in this domain due to the barrier to entry posed by the incidental complexity associated with the tools and languages.

A friend of mine, a physicist, is an example of the above. Her approach to dealing with complex problems is already far better than many developers I know, but she wouldn't be a good programmer simply because she doesn't understand the tools. However, if she were to invest significant time to understand the tools then she'd be a better developer than most. Not only that, if she had familiarity with the tools then her already developed computational thinking skills would mature much further. If she were to have access to tools that allowed her to apply her ability to deal with complexity without getting in her way then that would be the ideal outcome. What I'm trying to say is that we are a very long way away from minimizing the incidental complexities introduced by our tools and languages.

EDIT: Sal Khan gave a relevant TED talk on the limitations of our educational systems: https://www.youtube.com/watch?v=-MTRxRO5SRA

He mentions that if you were to ask a literate individual in a past society with a 10 percent literacy rate what the maximum possible literacy rate could be, they would've said something like 20 percent (i.e., 80 percent of the population is incapable of overcoming the inherent complexity of gaining literacy). But our educational systems have progressed far enough to achieve 99 percent literacy, which would've been impossible for someone in a 10-percent-literacy society to even consider as a possibility. Our estimates of where the boundary between incidental and inherent complexity lies are generally going to be strongly biased.


I wholeheartedly agree with almost everything you say, especially that everyone is probably able to learn some programming in a suitable language, just as they can learn to read and do math - and it would be good for them if it were taught well, not least because it gives you a tool for solving problems that you might not otherwise acquire. But I am certain that not everyone, perhaps not even the majority, has the skills required to pursue a professional career.

I doubt, however, that there is much to gain from simplifying the languages, and especially not by going the graphical programming route - it's a chimera occluding the actual problems.

Graphical programming could be used as a learning tool, like a set of training wheels, but like them it would eventually become a hindrance: both make simple problems simple and everything else much more complex or virtually impossible.

The tools and languages can of course be improved, but I do not think there will be any groundbreaking improvements, just as there probably will not be any groundbreaking simplifications in mathematics, or in the process of learning it.

As a side note, I personally wonder why more work hasn't been done on incorporating first-class support for entity-relationship models and/or graphs in languages - which would probably be useful when modeling actual real-world problems. But as it is still possible to build them using generic language constructs, no one seems to care.

Or maybe the problem is that the educational system is so bad that too many people are still struggling not to topple over on their bicycles, even after some years at university.


> Graphical programming could be used as a learning tool, like a set of training wheels, but like them it would eventually become a hindrance: both make simple problems simple and everything else much more complex or virtually impossible.

For visual languages such as Squeak and Skov, the idea is indeed to make programming easier for people.

However, visual programming has another important raison d'être. MIMD algorithms are more naturally expressed as dataflow, so that task scheduling and load balancing become almost trivial, and dataflow is more naturally expressed graphically.

Now that we have multicore machines, it is difficult to make optimal use of resources with mainstream imperative programming languages.

Graphs are, as you point out, a very flexible data structure, and there ought to be something to be gained in representing programs as graphs, and having programs operate on graphs. Lisp and Prolog benefit from homoiconicity.
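
To make the scheduling claim concrete, here is a toy dataflow evaluator (a Python sketch of the general idea only - not how Skov, Max, or any real system is implemented): a vertex fires as soon as all of its inputs have values, so ready vertices can be thrown at a thread pool with no scheduling logic in the program itself.

  from concurrent.futures import ThreadPoolExecutor

  # Each vertex is (function, names of its inputs, name of its output).
  GRAPH = [
      (lambda: 3,           [],           "a"),
      (lambda: 4,           [],           "b"),
      (lambda a: a * a,     ["a"],        "a2"),
      (lambda b: b * b,     ["b"],        "b2"),
      (lambda x, y: x + y,  ["a2", "b2"], "sum"),
  ]

  def run(graph):
      values, pending = {}, list(graph)
      with ThreadPoolExecutor() as pool:
          while pending:
              # A vertex fires as soon as all of its inputs have values.
              ready = [v for v in pending if all(k in values for k in v[1])]
              pending = [v for v in pending if v not in ready]
              futures = [(out, pool.submit(fn, *[values[k] for k in ins]))
                         for fn, ins, out in ready]
              for out, fut in futures:
                  values[out] = fut.result()
      return values

  print(run(GRAPH)["sum"])  # 25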


Developers who have significant experience with functional programming understand that many (or most) problems would be easier and more reliable to solve in a functional language than in an imperative one. But the majority of developers have little to no experience with functional programming. In almost every conceivable way, widespread adoption of functional programming would make everyone's job significantly better. And these days mainstream imperative languages are slowly starting to incorporate functional concepts, because people are starting to realize their value.
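
A small illustration of the kind of concept being absorbed (a Python sketch, since Python is one of the imperative languages that has picked these up):

  from functools import reduce

  orders = [("alice", 30), ("bob", 15), ("alice", 25)]

  # Imperative style: mutate an accumulator step by step.
  total = 0
  for name, amount in orders:
      if name == "alice":
          total += amount

  # Functional style: a pipeline of pure functions, no mutation.
  total_fp = reduce(lambda acc, o: acc + o[1],
                    filter(lambda o: o[0] == "alice", orders), 0)

  assert total == total_fp == 55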

Right now we have far superior tools available to us than the ones used by the average programmer, but due to a combination of historical reasons, poor education, and sheer inertia of past commitment to inferior tools, we don't yet see widespread adoption of the superior tools that are already available.

The visual languages that I've seen so far are a kind of adaptation of existing languages to make them simpler and more accessible to beginners. What I'm talking about is not merely simplification of existing languages, but more a shift to an entirely different kind of paradigm. A shift that would require the development of new abstractions that are uniquely suited for visual programming that are not just adaptations of existing abstractions.

But as with all tools there are limitations. Certainly visual languages will not be the best tool for every job, but there are certain classes of problems that are going to be easier to tackle with a visual language. There is also the possibility of solving a particular problem (or different parts of it) with both visual and text-based programming concurrently. First-class support for data structures like graphs is easier to provide in a visual language. I use Neo4j for graphs, and the Cypher query language is quite good, but its effectiveness comes from the fact that it incorporates visual elements within the text. Something like MATCH (n)-[:REL]->(y) is a query for matching a part of the graph, but you're still limited by the one-dimensional structure of the language. You could imagine a visual query language for graphs being vastly more powerful in ways that a text-based language couldn't possibly be.
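
For what it's worth, here is how that pattern looks in practice from Python (a sketch; the URI and credentials are placeholders):

  from neo4j import GraphDatabase

  # Connection details are placeholders.
  driver = GraphDatabase.driver("bolt://localhost:7687",
                                auth=("neo4j", "password"))

  # The ASCII-art pattern "draws" the subgraph to match: nodes in
  # parentheses, the relationship in brackets along an arrow.
  with driver.session() as session:
      for record in session.run("MATCH (n)-[:REL]->(y) RETURN n, y LIMIT 10"):
          print(record["n"], record["y"])

  driver.close()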


There are a few other games out there like this, but what you described reminded me of Glitchspace [0]. There was a post about Zachtronics [1] some time ago that listed similar games.

[0] http://store.steampowered.com/app/290060/ [1] http://www.zachtronics.com/


While in my optimistic moments I imagine we'll be programming visually in the not too distant future, and am doing all I can to bring this about, I still would prefer to turn on light bulbs by flicking switches, which simply close electrical circuits rather than call methods. For that, some might consider me a Luddite.


Right, I'm totally with you on this. I only used the light bulb to illustrate the basic idea.

This need not be limited to network-connected devices either. You can define interfaces for any object even if they aren't electronic.

For example, you can define the interface for a bottle as something that can be opened. You can define processes based on the interfaces of multiple objects. These process definitions can then be transferred to a robot, which will be able to interact with the real world based on them.
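
A rough sketch of the idea (Python, with invented names; a real system would bind these interfaces to perception and actuation):

  from typing import Protocol

  class Openable(Protocol):
      def open(self) -> None: ...

  class Bottle:
      def open(self) -> None:
          print("twisting the cap off")

  # A "process" is a plan written against interfaces, not devices.
  def serve(container: Openable) -> None:
      container.open()

  # The same plan could later be handed to a robot whose Bottle
  # implementation drives grippers instead of printing.
  serve(Bottle())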


One way to do this would be the Reality Editor from MIT. http://fluid.media.mit.edu/realityeditor


It's quite impressive that you've got so far so quickly.

There are some similarities to my own Full Metal Jacket (http://web.onetel.com/~hibou/fmj/FMJ.html), which I'm still actively developing, but I suspect there are significant differences as well. So I have a few questions.

(1) Is the computation model dataflow, i.e. do vertices execute asynchronously, when all their inputs have values?

(2) If not, is Skov a visual equivalent of Factor or some other textual language?

(3) Is it statically or dynamically typed?

(4) How is iteration done?

(5) Is there a lambda?


(1) & (2) The code is compiled using the Factor compiler, so it really works like Factor. (3) Dynamically typed; type errors will be caught at runtime. (4) You use "while" and "until"; you can see an example in the last screenshot on the web page. (5) There is a lambda. There are several examples of this on the web page, including a lambda inside a lambda.


It's quite different then, despite the superficial similarities.

(1) Full Metal Jacket is explicitly dataflow - that's how the interpreter works. How to compile it (and to what) is an open problem for me, but there are a number of options. I won't release it until I have a working compiler.

(2) It's not a visual version of any existing text-based language. It is implemented in Lisp, and you can mix the two languages, but it's nothing like Lisp.

(3) It's very strongly statically typed, with type inference. Type errors are prevented by the editor. Run time errors are simply unacceptable.

(4) Iteration is done two different ways: using a feedback mechanism, and using emitters and collectors.

(5) Lambdas, including non-local capture of values, are built into the language.

More information here, including papers and tutorials: http://web.onetel.com/~hibou/fmj/FMJ.html


Back when I was studying for my degree, we used visual programming with an application called Max/MSP [1]. Having a quick glance at Skov, it looks very similar, although Max (and its open-source brethren PureData [2]) probably has a much more mature ecosystem (it's been around for quite a long time). I believe you could even program extensions in Java that you could drop in if you needed a performant algorithm that wasn't supplied by the 'standard library'.

The main difference is that Max and PureData are focused on creating audio and graphics, but they're both perfectly suitable for general-purpose programming. You could even build quite complex GUIs that were just a shim over the application logic.

However, AFAIK, there wasn't a concept of 'building'/'compiling'. You had to ship your application with the full runtime, which required the user to do a separate install, with no option for a statically built 'fat' executable.

[1] https://cycling74.com/products/max/

[2] https://puredata.info/


I used Max/MSP (well, max4live) for a while and I loved it. I found not having to name stuff (variables, functions, etc. in other languages) especially liberating when working experimentally towards a solution. Once the design solidified, I could go back and name things nicely.

I also found that while Max gets a bad rap for "literal spaghetti code" (I mean, Google image search for Max/MSP and what you'll find is a mess), my code really didn't reflect this. I put it down to the fact that most users of Max are musicians and artists, not programmers who have learned basic software engineering principles like abstraction and separation of concerns. When you compartmentalize logic into self-contained little blocks, the code turns out super clear and actually kind of beautiful.

My main complaint with Max is that its data structures are very... lacking. As far as I remember, you couldn't even do nested lists (or any kind of references), so things like trees were out. You basically had their built-in types and nothing else.

I'd love to see a visual programming language very similar to Max/MSP but with better support for user-defined types/data structures, unit testing and other basic things lacking from Max but present in modern programming languages (or their tooling).

This was a few years ago though, so perhaps some of these things are now "fixed".


Awesome, great work! Personally I'm not a fan of "visual" because it requires too much mouse work and navigating around. I tried to tackle these challenges by coming up with the concept of a "tactile-spatial" programming language, designed foremost for coding from a phone, on the go, or on touch-based devices.

Here is an animated GIF that shows integrated live testing while you code on the phone in a rapid T9 or calculator style: http://giphy.com/gifs/ast-tactile-gundb-l0HlRxRHUg0WpF3kk


Nice, I like its simplicity. I'm also working on a visual programming environment, but for building chatbots. Check it out - I bet you can get some ideas for yours.

I just released the beta: https://talkbot.io

How do you avoid wires overlapping other wires on more complex "flows"?


Just a quick note:

"1-minute away to get your bot alive" .. maybe it should be "bot live"? Not sure what your intention was here, but one would more likely use "live" in this context.

"Use it, is free" should be "Use it, it is free" or "User it. It's free". English always requires a pronoun before the verb (with exception)


Thanks for the feedback! Sometimes I have difficulties with English grammar; Spanish is my native language.


I think it would be hugely beneficial to allow people to code with both text and the visual interface. LabVIEW has a graphical editor as well; for bigger projects it would have been nice to have both formats available.


Shameless plug -- something similar for Android: http://flowgrid.org


I watched Uncle Bob's talk today, and one of the key messages was that software hasn't advanced much since the '50s:

https://www.youtube.com/watch?v=ecIWPzGEbFc

Visual programming may or may not be the next big step, but I have to applaud anyone who attempts it in an open-source manner. Unreal Engine, for example, executed it really well, and the code it generates runs in god knows how many games.


I applaud the attempt, and it does look nice, but my impression of visual programming environments is that they never scale, and it might be an inherent issue.

I have never met a single person who, given more than 5 minutes, prefers the "distance computation" as given in the Skov example to the simple mathematical (x1-x2)^2 + (y1-y2)^2
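
Even in a fairly verbose language, the textual version is a one-liner; a Python sketch for comparison:

  import math

  def distance(x1, y1, x2, y2):
      # Euclidean distance between two points.
      return math.hypot(x1 - x2, y1 - y2)

  print(distance(0, 0, 3, 4))  # 5.0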

Lamdu [0], which has been discussed on HN before, is more Haskell-esque, and I have also not heard from anyone that it is actually useful (beyond very early Haskell teaching).

Personally, I'm at the other end (preferring the tersest practical language, K) but I understand the appeal of a visual programming language - it's just that I have never seen an example that delivers.

[0] http://www.lamdu.org/


K is an example of a language I can code in on my phone. It works because you only need a small screen of code to see a lot of functionality, and because it demands less input and more thinking. Most modern visual programming attempts come from the drive to work on small screens without real keyboards, and languages like K (APL-likes) and Forth-likes (like the one Skov is based on) already work there, because the code is very terse and you don't have all the issues with copy/paste and scrolling that you do in most languages. I experiment quite a bit with programming while on the move (even walking), and I am close to having something that works about as well as normal coding. It takes ideas from K (oK) and various Forth implementations.


What K compiler do you have on your phone?

I have the awesome J Android app[0] on my phone, but J doesn't seem as polished as K.

0: http://code.jsoftware.com/wiki/Guides/JAndroid


There's John Earnest's oK on GitHub (I'm on my phone right now or I would link it). It's a JavaScript implementation that runs everywhere.

K is not more polished, it is differently polished. It is much more minimal than J, which makes the syntax simpler, and thus the shortest program is often longer and slower in K; however, in my experience the length and performance are typically on par.


One of the problems with text-based programming is that it scales rather too well. I've had to deal with C functions which were 1000 lines long. You simply can't abuse a visual language like that. As you approach the Deutsch Limit, the complexity of what you've drawn becomes immediately apparent and you have to think about ways of reducing it, by structuring your code better, e.g. splitting up complex functions into two or more simpler ones.


I can assure you that there's an analogous person who can do the 1000 lines drawing (or whatever the equivalent monstrosity is). And if that is impossible, it means the language is too limited to be of real use.

We've had many programming paradigms over the years; most delivered, and we can argue about their merits. To the best of my knowledge the only somewhat successful visual programming systems are Simulink and LabVIEW, which are extremely limited, and any nontrivial use does venture into the "dreaded" textual.

I would love to be proven wrong, but I suspect it is inherent. A picture might be worth a thousand words, but movie scripts are still written as text, and graphic novels are a minuscule part of the market.

In my opinion it is similar to the misguided notion some people have that if only math notation (or physics notation, or music notation, etc.) were more graphic/elaborate/readable, it would be easier and more accessible. This is wrong: the notation is just the tip of the iceberg you see above the water. The underlying complexity is the real issue, and it won't go away with a different notation.


Movie scripts and novels, graphic or otherwise, contain a narrative which is inherently sequential, and the best way to capture that is in text.

Programs in general are non-sequential (MIMD). They are normally expressed sequentially because early hardware was inherently sequential.

Music notation is graphical. So are Feynman diagrams, circuit diagrams, maps, blueprints, structural formulae, etc. Not everything is best expressed as text.

Other successful visual dataflow systems include Max, Reaktor, and Flow-based programming. The field is in its infancy and we can't be sure yet what works and what doesn't.


Most computing is actually sequential, or a small set of sequential computations carried out in parallel, often reactively based on an input. So do you agree that in cases well described this way (e.g., a word processor, a web browser, or an Excel spreadsheet) text is a better description of the computation? Specifically, re: Excel, I'm talking about describing Excel itself, not a specific spreadsheet (which may sometimes have a reasonable graphical representation).

Music notation is graphical, and it still gets complaints for being "hard to read" from people who haven't mastered it. My intention in bringing it up was not "see, it's not working", but rather "see, it doesn't make things simpler".

Circuit diagrams are graphical only for very small and simple circuits, which illustrates my point about scaling - there is no circuit diagram for any circuit with more than a few tens of elements anymore. Take a simple processor, for example: there are textual descriptions (Verilog, VHDL, iHDL, ...) which are compiled into logic gates and/or netlists (textual or binary), and the result of layout (which is graphical, but not for human consumption). There's a block diagram, yes, and a small number of those blocks can be zoomed into another block diagram or a circuit - but by and large, with the exception of manual layout tweaking, it is textual - and that's in an area that started with diagrams and still champions them in its courses.

I disagree that the field is in its infancy - we've had pen-and-paper diagrams for everything in computing since forever, none of which has so far scaled to a complete system (some are OK for the top level, some are OK at the really low level, none work in the middle), including flow charts, data flow diagrams, UML and its tens of different diagrams, etc.

I would love to be proven wrong, but personally I have given up on this kind of thing.


> ...my impression of visual programming environments is that they never scale, and it might be an inherent issue.

Eventually they won't need to.


Can you elaborate?


I think visual programming calls for a dramatically different style of programming (along with the dramatically different tooling) in order to take advantage of it well. And I think part of that is working well with many small snippets of code rather than much more monolithic functions.

Though I'm not sure quite what it'll take for it to make sense for people to collectively give visual programming the attention it needs to improve beyond being a mostly useless oddity.


Use a caret for your "2", you confused me with that equation.


I'm persistently getting an error "Skov is damaged and can't be opened". I extracted the folder from the disk image to my Desktop. I tried right-click->open with the same result (and Gatekeeper gives a different error anyway).

I'm still on El Capitan. Could that be the problem?


I have just tried it on my older Mac that still runs El Capitan. I downloaded the image from the website, opened it, and copied the folder to the desktop. I got the same error as you ("Skov is damaged and can't be opened"). Then I went to System Preferences > Security & Privacy and saw that "Allow applications downloaded from" was set to "Mac App Store and identified developers". I selected "Anywhere" and tried again: it worked. So the problem really was Gatekeeper.


Shouldn't right-clicking and choosing "Open" work? It doesn't, and I don't know if I want to disable Gatekeeper.


Other people have helped me identify the cause of the problem (a quarantine attribute on the app folder). I've uploaded a new dmg image. If you download it again, it should work this time (I hope).


Doesn't work. Same problem.

But thanks for the tip about quarantine. I removed it from the folder and the app with xattr, and now I can start it.
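
(For anyone else hitting this before grabbing the fixed dmg, the removal is a one-liner in Terminal - adjust the path to wherever you put the app:)

  xattr -dr com.apple.quarantine /path/to/Skov.app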


"Each dot is actually a plus button in disguise."

Why disguise it, though? This seems like a decision that creates obfuscation and makes the design less intuitive, in exchange for something that is only marginally simpler to look at.


Nothing Real's Shake was one of my favorite programs back in the early 2000s.

http://community.avid.com/cfs-filesystemfile.ashx/__key/Comm... http://deliveryimages.acm.org/10.1145/510000/509455/4743f2.j...

That was mainly for 2D image compositing (aka 2D matrices).

If you dig earlier, you'll find SideFX Houdini (the successor of PRISM), which isn't tied to a dimension - 1D, 2D, 3D, geometry. You swing between all of these to create whatever you want. It's like interactive math and physics right before your eyes, taken to an extreme (at least until the 2010s). You can then lift parameters from the graph to "package" it as a user-made function, to get things like city generators:

https://www.youtube.com/watch?v=4QT_-Sws_nI

Loads of fun


For anyone curious about the Factor language mentioned,

http://factorcode.org/


Factor is a great language for playing around with. When Slava was still pushing it, it felt more alive, but it is an impressive effort still.


This might be cool for something like Elm? :) maybe even implement it in Elm with something like the Electron project?


Another powerful visual programming language is VVVV, which you can find at https://vvvv.org/ I think they will be shortening it to VL with the next release. I experimented with it back in 2011, trying to make a poor man's multi-touch interface for the PC using an iPad and TouchOSC: https://www.youtube.com/watch?v=DYEtkJSvZCk It has a pretty clean/minimalist UI and one of the more interesting implementations of visual programming.


I wonder if this visual schema will work for more complex projects where one needs to zoom in and out of scope (e.g. Reaktor). But the potential is excellent, and even if it doesn't take off as a hardcore programming environment, it will make an excellent educational tool.


This is wonderful for folks who can understand concepts presented to them visually. If I had access to this as a child, I could have definitely grokked functional programming faster :)

I can think of only one major improvement - add the concept of synchronicity/asynchrony. Perhaps a visualization illustrating differences in function completion times, or race conditions, would help!


Is the author Danish? I believe "Skov" means "forest" in Danish. And there's a picture of a tree on the page.


No, I just looked at the translations for "tree" and "forest" in many languages and I chose the one I preferred.


Looks great, like a visually pleasing version of Prograph.

I remember my arms starting to ache after a few days of using the mouse constantly when working with Prograph. So some keyboard shortcuts would be recommended.

http://www.andescotia.com/examples/


I don't know why, but I thought it would be a visual language for developing on mobile. On desktop, I don't think this is useful. I once tried to teach using Scratch, and the IDE was not the problem; the logic and the way of thinking about the problem and solution were.


I'd expect to see something like this in the FAQ section: how is it possible to diff/merge Skov program source trees?

Just asking; it's definitely possible. I used to work at a CAD company; they did some dev work to do this with building models...


Ideally, Skov would need an integrated version control system that understands the visual code.

But for now, all you can do is export your code to text files (Ctrl+S) and use traditional text-based diff/merge/version control tools.


This is the right answer, at least for visual dataflow. It should be done by diffing/merging graphs, displayed visually, not by converting to a text-based format and using something like git, Perforce, or CVS. Dataflow can be expressed textually, but you'd have a hard time following the code.
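
Conceptually the graph diff itself is simple; a Python sketch with graphs as edge sets (the hard parts are matching unnamed nodes between versions and presenting the result visually):

  old = {("a", "b"), ("b", "c")}
  new = {("a", "b"), ("a", "c")}

  # Edges present in only one version are the diff.
  print("added:  ", new - old)   # {('a', 'c')}
  print("removed:", old - new)   # {('b', 'c')}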


On Windows 10 Home, the Windows version gives me

"This app can't run on your PC

To find a version for your PC, check with the software publisher."


I don't really know Windows 10 but apparently it's available on ARM processors. The version available on the website will only work on x86-64 processors. Could that be the reason why it doesn't work?


I don't think so - I'm running 32-bit Windows on an x64-based processor.


You've reinvented block diagrams, only badly.


I quite like the digraph's organic look - the visuals are _nice_. But I don't think this is very practical, even if Factor itself seems interesting.



