I also wrote a short explanation of some little experiments I did there when I visited a couple years ago:
> The Dynamicland researchers are not developing a product. The computers that the researchers build are models: not for distribution, but for studying and evaluating. The goal of the research is not to provide hardware or software to users, but to discover a new set of ideas which will enable all people to create computational environments for themselves, with no dependence on Dynamicland or anyone else, and to understand how these ideas can be taught and learned. The end result of the research is fundamentally intended to be a form of education, empowering communities to be self-sufficient and teach these ideas to each other.
Same with gathering user feedback -- the fact that we have such ridiculously unusable basic UI elements on mobile especially (people tend to NOT find basic UI elements of apps for months, sometimes years - how the f* is that even possible) is just one consequence of the fact that even if say 1000 users intend to do something and fail, the authors of the app never learn about it. We get Clubhouse to listen to one more type of radio, but we never get "userhouse" to get an instant stream of people's complaints about an app (and a special physical button ON THE smartphone itself to launch that "instant feedback to the app author" mode, so it's part of the base experience of being a user of a smartphone)...
one can dream.
This has been done before more than once, including at PARC (which is listed as an inspiration in TFA).
I learned how to program on an Apple IIe, which had a key sequence to drop to a BASIC prompt and allow manipulating the running program (if it was written in BASIC, which many were). My first programming was hacking my high scores over my brother's.
The accessibility is what made programming incredibly fun and alluring to me. I'm happy that there are many systems carrying on these ideals today, but I wish there were far more.
The reason you can't do it the rest of the time is because developers would have to support it if they allowed it, and then they'd never be able to change anything ever again. Apps also aren't set up to allow this because of Conway's law, or maybe I have it backwards there.
The now-discontinued Blender game engine was an example of building a complex system by connecting up boxes with arrows. You could easily get to a few square meters of program, and then finding anything was tough.
Dynamicland PR: "Programs are flexible, and compose readily." So what do you compose? A functional block with ins and outs? What's the equivalent of a subroutine? That's where graphical systems usually fall down.
Most visual programming languages have nodes with ins and outs that contain other nodes, just like you’re suggesting. And yes, I’d consider them the equivalent of subroutines.
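For anyone who hasn't used one of these tools, the subroutine analogy is small enough to sketch: a group of wired-up nodes can itself be evaluated as a single node. A toy Python model (all names here are made up, not any real tool's API):

```python
# A tiny dataflow evaluator: each node names a function and the
# nodes (or graph inputs) feeding it. Wrapping a whole subgraph as
# one callable is the visual equivalent of a subroutine.

def make_graph(nodes, output):
    """nodes: {name: (fn, [input names])}; names not in `nodes`
    are treated as the graph's own inputs (its 'parameters')."""
    def run(**inputs):
        cache = dict(inputs)
        def eval_node(name):
            if name not in cache:
                fn, srcs = nodes[name]
                cache[name] = fn(*(eval_node(s) for s in srcs))
            return cache[name]
        return eval_node(output)
    return run

# A small subgraph: scale then offset. Grouped, it acts like one
# reusable node with inputs x, k, b.
scale_offset = make_graph(
    {"scaled": (lambda x, k: x * k, ["x", "k"]),
     "out":    (lambda s, b: s + b, ["scaled", "b"])},
    output="out")

print(scale_offset(x=10, k=3, b=5))  # 35
```

The point of the sketch is only that "a box containing boxes" composes the same way a function calling functions does.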
Programmers seem to talk about visual programming languages as if they’ve failed, but outside of software engineering they appear to me to be more popular than text-based languages. Some examples are VFX, compositing, audio/MIDI, prototyping, and automation.
My own working hypothesis is that plain-text based version control is why most software engineering uses text files. The big differentiator between software engineering and other fields is that there's very little collaboration on visual programming language projects; they're almost always made by one person.
That lack of an ecosystem makes most devs nervous and we tend to be (rightfully) distrustful of all-in-one ecosystems since your program (probably your livelihood) is then tied to the continued support of that ecosystem.
It's one reason this movement is interesting to me, if we can establish a common communication medium, specifically a non-proprietary one, then tool builders can come along and start building out that ecosystem - likely as inefficient hacks initially but ones that can be refined as the platform gets more focus.
If there were a highly compelling and productive general purpose visual language, storing the data as text, and even providing tools to aid in things like merge conflicts, would be minor details. But when you actually dig into things like (the excellent) Prograph and its descendants, you find out that shapes and lines are actually less intuitive than text for nontrivial code bases. Contrary to what one might expect, a picture is not worth a thousand words.
Perhaps a new visual paradigm will be introduced that changes things, but there's really nothing today that competes with even your least favorite popular text based language.
The discussion of "general purpose" programming languages doesn't really make sense with respect to visual programming languages because visual programming languages necessarily exist within a GUI, and GUIs themselves don't scale to general purpose. So what we actually have is specialized visual programming languages that double as (or exist within) specialized GUI applications.
A system that appears to be working quite well in creating much more efficient systems for non-programmers to accomplish programmatic tasks. This kind of poor mapping of textual programming languages to certain problems is why the Stripe globe is state of the art text-based rendering (https://stripe.com/blog/globe), when people are doing algorithmic architecture in Houdini (https://mvsm.com/project/imac-pro). It's not even comparable, text-based languages are getting absolutely decimated outside of software engineering by better tools that leverage visual programming languages.
And comparing something developed by a design studio with god-knows-how-much rendering time to something designed in-house that runs live in your browser seems a little strange.
For context, "Grasshopper" has been a nodes-and-boxes plugin for Rhino 3D—a 3D modeling app—for many years. By "new" they mean that it has been promoted from plugin to just part of Rhino itself. But if you look for people doing things with Grasshopper, you'll find a ton of examples.
I'd love to find more impressive examples using text-based programming languages though, those are harder to find.
I agree that comparing a video to something that runs live obviously isn't fair, but that's not really the axis I'm concerned about here. Plain-text code seems to struggle at the artistic level vs. visual tools, not the performance level. E.g., the counterargument I'm looking for would be impressive visuals created purely in code that's either pre-render (e.g., as a video), or running live, I don't really care, I just want to see visually impressive work.
Developing isn't just writing (drawing) new code. It is an iterative process of juggling many pieces of functionality so they work well together. When you have to refactor some logic, blocks suddenly move to different places, and their relative positions change. Whenever related blocks change relative orientation, your internal model for navigation becomes obsolete. The human brain is not used to this, and it creates discomfort and frustration.
Traditional programming is done with text files, in which functions have a 1D organization. Because 1D has fewer spatial relations (just above and below), code reorganization is cognitively easier. It's easier to find a good place for a piece of code, and it's easier to remember where it lives.
I think a truly usable visual programming tool should not allow users to place blocks at arbitrary positions, but should automatically (and predictably) arrange pieces on screen. Structural text editors (for example ) are a step in the right direction, while connected blocks seem to me like a dead end.
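The "predictable placement" idea can be sketched: derive each block's column from the dependency graph itself (say, the longest path from any source), so refactoring can't scramble the layout. A hypothetical toy, not any real editor's algorithm:

```python
# Each block's column is a pure function of the graph, so moving
# code around produces the same layout every time.

def layout(edges):
    """edges: {block: [blocks it depends on]} -> {block: column}."""
    col = {}
    def depth(b):
        if b not in col:
            deps = edges.get(b, [])
            col[b] = 1 + max((depth(d) for d in deps), default=-1)
        return col[b]
    for b in edges:
        depth(b)
    return col

g = {"load": [], "parse": ["load"], "stats": ["parse"],
     "plot": ["parse", "stats"]}
print(layout(g))  # {'load': 0, 'parse': 1, 'stats': 2, 'plot': 3}
```

Because the positions are derived rather than hand-placed, a refactor changes the layout deterministically instead of leaving stale hand-arranged positions behind.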
Another problem is structuring the code - the 2D space is simply too crowded to represent layers of abstraction in a flattened way. The hyperbolic plane would help here. Not every bit of the whole design can be shown at once (this would result in information overload), but the editor could show what is in focus as well as one or two levels of context.
As to the success of existing visual languages, I'd argue that they are all targeted at non-technical people, and they all run within a sandbox environment developed with traditional text code. Why aren't their creators dogfooding? My own answer to this is that existing visual languages cannot support the necessary amount of complexity.
This is worth stating, but I think it really means that expressions take up much more space.
This can be confronted from two directions.
First, don't use nodes and noodles for expressions, because it doesn't help clarity and hurts density. Use nodes at a higher level instead of searching for a silver bullet.
Second, of course one flat work space does not scale. Node based tools already have ways of grouping nodes as well as making groups that can be instanced instead of duplicated.
> connected blocks seem to me like a dead end.
Connected blocks of any sort mirror how programs work in the first place. Automatic organization is typically pretty helpful though.
> Another problem is structuring the code - the 2D space is simply too crowded to represent layers of abstractions in flattened way. The hyperbolic plane would help here .
I'm not sure what this means, but I'm guessing it is also solved by grouping nodes and making components by groups that can be instanced.
> As to success of existing visual languages, I'd argue that they are all targeted at non-technical people and they all run within sandbox environment developed with traditional text code. Why aren't their creators dog-fooding? My own answer to this is that existing visual languages cannot support the necessary amount of complexity.
This is definitely not true. There are many extremely technical people using visual languages because they are huge speed ups in productivity - more interactive and less error prone by a huge degree. Shader writers use nodes instead of text all the time. Lots of tools transitioned from being text based to being graphs. The difference is that they are domain specific and not general purpose programming languages. It is the general purpose nut that hasn't been cracked.
Anything that needs to be sequential is not done well with nodes, and neither are branching and looping. Also, programs like Nuke, Houdini, etc. are basically limited to one data type (images) or a few data types (images, geometry, 1D channels). Shaders are limited to their primitive data types as well.
The advantages are the ability to see output at every stage, work in real time (including seeing errors in real time), etc.
You also get to see descriptions, limitations and special interfaces for each parameter. Not only that, but parameters can be switched between constants, expressions and arrays (channels) easily for debugging. Overall there is a lot more information going on.
I don't think source control has anything to do with it. Anyone can save more versions of a file and many are text. A lot of times version control is used with text fragments that make up reusable groups of nodes.
There's a large category of conventional wisdom on HN that's categorically false yet constantly repeated—the idea that "visual programming languages failed and are now dead" is one of those. It's... irritating :-)
In the Dynamicland I experienced, a piece of paper represented a function written in a text-based programming language. You could choose to have inputs and outputs based on proximity, but you didn't really "code" using the blocks. You "code" using a text-based programming language. The sheet of paper shows the output of its text-based code. If you want to edit that code, you grab a keyboard and point it at that sheet, and then you can toggle between showing the text-based code or showing its output projected on the sheet.
I didn't see many examples that composed all that well. I saw a few, like a plant->bunny->wolf simulation. Plants spawn randomly, bunnies eat plants, wolves eat bunnies. That was written in a text-based programming language represented by one sheet of paper.
Another sheet of paper had text based code that would graph a value over time. If you pointed its input at the sim it would show a graph of plant population, or bunny population, or wolf population. I don't remember how you chose which output to graph. It might have been based on which edge of the sim paper you connected to.
Another sheet of paper had text based code that would output a value based on its orientation and draw that value on the sheet. In other words it implemented a knob. Lines were coming out of the simulation sheet so if you put the knob so the line from the simulation sheet touched the knob sheet the knob would adjust some parameter of the simulation like plant spawn rate or wolf running speed.
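If I understood the knob correctly, it's just a linear map from the sheet's detected rotation onto a parameter range. A toy sketch (the sweep angle and parameter ranges are made-up numbers, not Dynamicland's actual values):

```python
# The vision system reports the sheet's rotation; the knob maps
# that angle linearly onto a parameter range.

def knob_value(angle_deg, lo, hi, sweep=270.0):
    """Map a rotation (0..sweep degrees) linearly onto [lo, hi]."""
    t = max(0.0, min(angle_deg, sweep)) / sweep
    return lo + t * (hi - lo)

# Half a turn of a 270-degree knob controlling plant spawn rate 0..2:
print(knob_value(135, 0.0, 2.0))  # 1.0
```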
But you can quickly see that if you wanted to be able to adjust six parameters (plant birth rate, bunny spawn rate, bunny reproduction rate, bunny speed, wolf spawn rate, wolf speed) and you wanted to graph the populations of all 3 types, you'd quickly run out of space.
Further, writing anything more complex than simple programs you could write in 30 minutes seemed problematic. In other words, as a learning environment or a toy it was super interesting, but at the time they were pitching this not as an educational thing but as an experiment in "computable surfaces", and it was hard to see it as more than something you bring kids to on a field trip, where one or two experiences is the end of it.
They said that this particular implementation (paper, projectors) was also not the end goal. You can imagine doing the entire thing with AR glasses (so no projectors needed).
I think the goal of Dynamicland is to build computational paradigms other than procedure-oriented programming. Individual cards can call Lua subroutines, but the emergent behavior between cards is not the same type of composition as connecting subroutines.
It reminds me of the gameplay in "Baba is You": the "program" is the set of rules currently in play, and the emergent behavior is the set of moves allowed by those rules.
Yes, same problem, and same solution, as the Blender game engine. ROS, the Robot Operating System, which is really a message passing library that runs on Linux, is something like that, too, with blocks connected by one-way connections.
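The "one-way connections" pattern is basically publish/subscribe. A toy version (topic names invented; this is a simplification, not the real ROS API):

```python
# Minimal pub/sub bus: publishers push messages to named topics,
# subscribers react. Blocks never call each other directly -- they
# only share one-way message streams, like ROS nodes and topics.

class Bus:
    def __init__(self):
        self.subs = {}

    def subscribe(self, topic, callback):
        self.subs.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        for cb in self.subs.get(topic, []):
            cb(msg)

bus = Bus()
log = []
bus.subscribe("wolf_count", log.append)
bus.publish("wolf_count", 7)
print(log)  # [7]
```

The loose coupling is the point: a publisher doesn't know or care how many subscribers exist, which is why this style suits "lots going on in parallel" systems.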
> Emergent behavior between cards is not the same type of composition as connecting subroutines.
Good point. You can start to see where this model works. Things where there's a lot going on in parallel, and loosely coupled modules need some interconnection.
People keep re-inventing this idea, but so far, it tends to get really messy as it scales up. There's a second level of concepts needed here, something comparable to the invention of modules or classes or traits in programming.
Blender's interface problems are about their execution of them, not the fundamental ideas.
Ever heard of a kitschy dive bar having the bathroom door labels on the other door with an arrow pointing to the actual door? |Men ->| _ _ |<- Women|
That's a basic description of Blender's interface. It seems like a fever dream of someone who has never actually had to use it or explain it to anyone.
The positives are many: I think it's awesome to conceive of computers outside the actual computer boxes, especially in an educational setting. I think the notion of "collaboration" is way more engaging when it's kids standing shoulder-to-shoulder and actually pushing pieces of paper around and arguing about things out loud, rather than "collaborating" on a Google Doc, sitting at different computers. I also think that this can be enlarged: all sorts of working meetings could be improved by people standing shoulder-to-shoulder at a table and pushing around tangible objects, with simple programming about how they'll interact.
My negatives are what I took away from this, and may be incorrect or out-of-date, so feel free to jump in if I'm wrong, but I felt that Bret Victor was very much a purist regarding his vision, and had no desire to help spawn clones or variants of Dynamicland anywhere else. In many ways this may be laudable, but it felt like he was protecting his baby from going out into the wild, which has meant that the spread of ideas and possibilities has been greatly curtailed. It seems like it will only ever be destined to be a tiny playground for Bret and the few friends working with him.
I've worked at a company (~30 people) that used a physical kanban blackboard, with slips of paper and magnets.
The slips started out hand-written. Then someone made a printer. Then someone realized there was still too much information being held on Redmine. Then someone connected the printer with Redmine. Then we decided to keep the long descriptions in Redmine, but priorities and assignments on kanban. Then someone decided we need to keep ticket priorities and progress on Redmine as well, because computers are actually better when you need to sort and filter a mass of tickets. Then someone noticed it's difficult to locate either the physical representation of a ticket, or its copy on Redmine, to keep both in sync. Before I left, we were throwing around ideas like printing QR codes on the tickets, or using CV/OCR. The printer would also get jammed, the paper tickets got lost, we never had enough magnets, and I hate chalk.
We've had a very unusual (in my experience) policy of no remote / no WFH, I didn't mind but I wonder how much more of an obstacle it would have been if remote work was more common. It would certainly make zero sense in the pandemic world, but I didn't stick around long enough to find out.
"Protecting his baby" might be a very wise decision at this point, if only because of how the reception generally goes; pigeonholing the project into something like "an AR coding environment" or "visual programming with projectors" is a very real risk that could damage the project's aims - even "clones" such as https://paperprograms.org/ make it abundantly clear that they are not attempting to be an "opensource Dynamicland".
It's only been 3 years since it was founded. While that might be generally considered an eternity in tech-time, I feel it's barely enough to get one's feet wet given the scale of the project, which seems to aim to be decades long. Besides, the roadmap they've got on their website mentions 2022 as the year they go public - so, I personally am stoked for what that will bring.
People will just make assumptions about what the project is based on the photos/videos they see, but won’t absorb the deeper meaning because they won’t get to actually use it.
A movie projector projects onto an otherwise dark screen, but this thing projects onto a surface in a working room: your black level is going to be whatever ambient light people need for "seeing".
The projectors have to be pretty bright, outside light has to be controlled, and the sensor array would have to be pretty robust.
A good installation would be pretty expensive, but there must be a $1000 version that's possible.
1. Start with an analog synth from the 70s that's just a bunch of big, bulky metal modules connected by patch cords.
2. Remove the guts of each module and shrink them down to lego size
3. Print an English word name on each lego-sized module
4. Connect them up with patch cords.
Now I have a physical artifact that represents a DSP graph in the visual diagram interface of a program like Pure Data.
Question-- what is the easiest/cheapest way to continually pipe the topology data of the physical artifact into my laptop? It wouldn't be too hard to just have each module shouting an ultrasonic signal at my laptop and instantiating the corresponding object in the software when it's detected. But that doesn't cover the interconnections.
I could take video input of the artifact but that would be crappy UX with the user constantly having to "show" the artifact to the camera from a non-ambiguous angle.
I feel like there's some simple solution lurking out there with something like tinfoil and a 9v battery...
also, you might enjoy Eurorack modular synthesis. I find that it constantly makes me reconsider which things are virtual and which things are analogue as I reconfigure my synthesizer.
some microcontroller in each brick with a bunch of uarts would be ideal. then somewhere you need a usb-to-whatever link.
(maybe somebody really clever could put the right set of passive parts in each brick so that every topology of devices would be distinguishable by some sort of analog probing from the periphery. i'm not that clever.)
Not too hard. 1-Wire, a very low end LAN from Dallas Semiconductor, would be good for this. The parts are cheap, low-power, and powered over the connection cable.
(1-wire requires 3 wires. You could use stereo phone jacks.)
There are probably musician applications for this sort of thing. Some people like cables.
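For the topology question upthread, the useful property of a 1-Wire-style bus is that one master can discover every device ID on a shared wire by branching whenever two devices disagree on the next bit. Here's a toy model of that search (short made-up IDs; real 1-Wire uses 64-bit ROM codes and open-drain signalling, and this glosses over the electrical details):

```python
# Toy model of 1-Wire-style enumeration: at each bit position, all
# devices matching the prefix answer at once; the master branches
# when both a 0 and a 1 are present on the wire.

def bits_present(devices, prefix):
    """What the master 'hears' at the next bit position."""
    n = len(prefix)
    return {d[n] for d in devices if d.startswith(prefix) and len(d) > n}

def enumerate_bus(devices, width=4):
    found, stack = [], [""]
    while stack:
        prefix = stack.pop()
        if len(prefix) == width:
            found.append(prefix)
            continue
        for b in sorted(bits_present(devices, prefix)):
            stack.append(prefix + b)
    return sorted(found)

bus = {"0110", "1011", "1010"}
print(enumerate_bus(bus))  # ['0110', '1010', '1011']
```

This only discovers *which* bricks are present, not how they're patched together -- the interconnection question would still need per-port sensing of some kind.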
A similar form of fakery is seen in DJ systems where you have vinyl records that contain not music, but time code. The DJ can do DJ turntable stuff as if playing analogue records. They're just sending time code info to the control unit, which has the audio in memory, and the output is the appropriate audio for the time code.
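The trick is easy to model: the record carries only a position signal, and the software plays whatever stored audio the decoded position points at, so scratching backwards simply reads earlier samples. (Toy model, illustrative values only.)

```python
# Toy model of timecode vinyl: the record stores a position track,
# not audio. Whatever the DJ does to the record, the software just
# plays the sample the decoded position points at.

AUDIO = [f"sample@{t}" for t in range(10)]  # stored audio, indexed by time

def render(decoded_positions):
    """Turn decoded timecode positions into output audio frames."""
    return [AUDIO[p] for p in decoded_positions]

# Scratching back and forth over the first few frames:
print(render([0, 1, 2, 1, 0, 1]))
```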
Perhaps you're imagining less restrictive requirements for it than I am.
> 1-Wire, a very low end LAN from Dallas Semiconductor, would be good for this.
i'm aware of 1-wire. it is master-slave. can you outline exactly how you plan to use it in an unknown and reconfigurable topology, and how many separate master and slave interfaces you expect to have in each brick?
Dozens of companies and projects will spring from this. Just give it time.
We'll be back to using real objects instead of keyboards at some point in the near future. Bret's right that a lot of the difficulty in actually using computers to produce things is in the translation of human interactions into a medium the computer can understand, and translating the data a computer outputs into a medium humans understand. We aren't two-dimensional creatures. The state of the art cannot remain in pictures and text forever.
I'm not sold that everything will be done this way - some things will always need the precision of automated, textual input (see tool assisted speed runs in video-gaming if you think human beings can match pre-programmed precision inputs).
First of all, Dynamicland's goal of "agency, not apps" resonates with me. I'm in favor of things that make programming more accessible to normal people.
However, I worry that Dynamicland will be a step backward for people with disabilities, particularly blind people and people with impaired mobility. For us (I'm legally blind myself), I believe the virtual worlds of today's computers aren't imprisoning but liberating. Consider that a blind person can't see that "fully functional" scrap of paper, and a person who can't use their hands can't write on that paper or manipulate the other physical objects in Dynamicland.
Do you have a plan to solve this problem while holding to your goal of "No screens, no devices"? It seems to me that there's no way to reconcile these conflicting requirements without making an exception for people who can't work directly with the paper and other objects.
Or have you decided that it's better to undo the equalizing effect of computers for people with disabilities, for the good of everyone else? I would obviously be disappointed if that's the case. But I understand that everything's a trade-off, and perhaps it's not reasonable to confine the majority to an inhumane way of working for the benefit of a few. So I don't mean this as an accusation; I really want to know.
But I think you baked into your question a false dichotomy. "Dynamicland" does not necessarily have to _undo_ the "equalizing effect of computers", nor does its absence mean that the majority is confined to an _inhumane_ (wow...) way of working.
That was Bret Victor's characterization of being confined to a screen, not mine.
The medium is limited but interesting.
Perhaps this would be more portable if there were a projector underneath a table with a transparent glass top, and a camera on top to view inputs.
1. Mapping (Interactive maps, interactive routing, real-time weather, etc)
2. Collaborative art
3. Engineering CAD and CAM modeling and design.
Portable, multiplayer--and comes with massive improvements in FOV and contrast due to its design.
And you get the advantage of having optional private workspaces on top of that.
AR glasses could hypothetically share a networked virtual environment, so I don't see why they couldn't be collaborative.
I guess this isn't a new project, but I'm glad it's been reposted.
I do worry that this format has some pretty sharp limits, just due to the spatiality and trying to cram functionality into a limited area of a room, amongst other people. Some form of code storage might need to be designed, so one could stack those little code papers on top of one another. Who knows.
Very thought-provoking project.
I'm not sure if darklang is going to succeed, but that seems like a far more fruitful direction to approach the problem from. It's a very hard problem, and they are attempting to make programming a little bit more visual while not removing any power from the user, instead of making it entirely visual and almost completely dis-empowering the user. Importantly, it's actually accessible to people all over the world, and they can use it to achieve real world goals.
Will they end up building some product that will be adopted by droves of customers and offers a completely new paradigm for interaction? Probably not.
But I'd argue that's hardly where "esoteric" research like this ends up going, and in my book that's OK. Bret Victor, who is behind Dynamicland, never shipped the full drawing app from his interactive visualizations talk as far as I know. Neither did anyone ever get to buy or download the editor he shows in "Inventing on Principle" as a new IDE that offers incredibly great feedback while programming.
Nevertheless, his talks are among the most inspiring things I and I think many others have ever seen in the area of HCI, and at least in my case are responsible for a large part of my renewed interest in the field.
Will anything out of Dynamicland capture people's imagination and enthusiasm like that as well? Maybe. Maybe not. But the point is to explore, and I applaud those who do.
That's why Apple is rich. They do the hard part.
I've visited Dynamicland also, and the vision of this more tangible, visible computation is bigger than the incarnation/progress last shown.
I'm a fan of Darklang's goals too. But I do think Dynamicland is targeting more approachable, teachable, and communal computing. Darklang is going to be more for professionals making backends and doesn't target any user-facing audio/visuals whatsoever. It could in the end, of course, so I hope it does well as an effort too.
That is the plan, might be a while away though.
I'm building something visual to prove (or disprove) this concept, but I'm also thinking that power users would want a textual language for faster input, as GUIs are better for output whereas TUIs are better for input.
Some types of code, hopefully less rather than more, are just not very flow-based and don't have a concrete visual representation.
Graphs are great for showing relationships and high-level information at a glance. Humans can recognize visual shapes (this can apply to text as well; many editors now show text minimaps).
But past a certain scale, the best we have for relationships is probably hypertext links and other URI references.
I don't see what the "almost completely dis-empowering the user" is about either; if anything, Dynamicland seems the more empowering of the two, whereas Dark's endgame is more efficient development of a fundamentally conventional type of software.
It feels rather dishonest; the comparisons are sweeping and vague, with very little substance as to an actual criticism.
"2022: Dynamicland meets the world, in the form of new kinds of libraries, museums, classrooms, science labs, arts venues, and businesses. We will empower these communities to build what they need for themselves, to design their own futures."
Why do you think that making something visual completely disempowers the user?
However, I'd guess that new products and ideas will proliferate if we somehow develop incredibly small and powerful projectors. Something that could be attached to a table, or even to your smartphone, and result in the same experience. I'd certainly want that :)
But after a minute I realized I was just thinking of the Bistromathic Drive in Hitchhiker's Guide, which is a spaceship you pilot by ordering things at an Italian restaurant.
They fit together like puzzle pieces. Kind of a neat visual that could translate to physical medium.
Maybe have a board with Lego slots for blocks that represent flow, and maybe a little bit of interfacing with a computer to assign graphics to an object, say a sprite; then the blocks tell it how to move. You could maybe have blocks that represent data that you can name, but the data stored is just JSON, so no real schema, for simplicity...
You could create interactive stories from a board of Legos, or a game, like having a squirrel escape a yard without running into the dog, etc...