It was super fun implementing this. I ended up using OpenCV.js, which had only just been released; reimplementing an OpenCV algorithm and learning about computer vision; learning about OffscreenCanvas, which is so experimental that it crashes periodically (a bug I was able to isolate and submit to the Chrome dev team); learning how to mount a projector facing downward; and so on.
Few things are as satisfying as a piece of paper rolling out of a printer literally onto the floor, where it starts executing. And then cutting it up with a pair of scissors to change its shape.
I hope you have as much fun with this as I do!!
PS: If you want more inspiration for building cool stuff with projectors and cameras, check this out: http://tangible.media.mit.edu/project/pico/ :) They have a ton more interesting projects like that!
I do very little of my coding on a computer. Instead, I build up all the discrete parts of my code using sticky notes that constantly get re-organised while I'm writing.
So my desk ends up as a mess of sticky notes that get pushed and pulled around, which seems a lot like what you can do with this - except you can also run each part of this code, instead of relying solely on your memory.
Of course, it's nothing groundbreaking just yet, but I can't help but get excited at the possibilities of this sort of medium.
So even when playing just with the papers, you can be programming.
You can sketch a UI element on the paper (e.g. a rectangle and a token for a numerical slider), then go back and code the interactivity. Units in the code are inches on the table. Subtle things like that break you free from the laptop.
When you want to "commit" your code changes, you print a new version.
IMHO, the ability to create new physical interfaces for editing code might allow new programming paradigms to emerge, or better interfaces for people to develop logic in their own domain. Someone could combine a calculator interface with a context-sensitive variable picker to write arithmetic logic using an interface/"syntax" they already understand, and debug it all the way through with a testbed program that lets them set variables to specific values and step through how the math works. Yet another program could describe a program as a flowchart and let people edit it that way.
The thing is, as you make the IDE physical, you invite people to interact with it more - rather than staring at a screen, suddenly you're livecoding, and multiple people can work on it at once. "Ah, but doesn't that fail if you do this?" (sets some variables on the testbed), or "I can fix the UI while you update that calculation."
And then, when your accountant has something on a piece of paper that works, they could point a "make this an app on my computer" program at it and bring it out of Dynamicland, into their workplace.
So, how is it not a clone? If I had to describe it to someone based on what I learned from the website, I would say "Paper Programs is a browser-based JS clone of Dynamicland's paper programming system." Is that incorrect?
I'm excited about Paper Programs. I got to spend an afternoon in Dynamicland, and after exploring a couple hours I was sure that I wanted access to the system back home. Their response was "wait a couple years." Not everybody can make it to Oakland!
Dynamicland focuses on in-person collaboration. Maybe Paper Programs will explore tangible computing with remote collaboration.
Imagine students exploring gravity by placing spheres on a "magic" table that makes them orbit each other. Students can adjust the mass or gravitational constant and see how that affects the system, or even come up with their own laws of gravity.
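The simulation behind such a table could be prototyped in a few lines. Here's a minimal sketch in plain JavaScript (everything here - the body objects, the `step` function, the starting values - is hypothetical, not from Paper Programs or Dynamicland): each detected sphere becomes a point mass, and `G` is the knob students would turn.

```javascript
// Hypothetical sketch: detected spheres as point masses under Newtonian
// gravity, with a gravitational constant G that students could dial around.
const G = 1.0; // the tweakable "law of gravity" parameter

function step(bodies, dt) {
  // Accumulate pairwise attraction: F = G * m1 * m2 / r^2
  for (const b of bodies) { b.ax = 0; b.ay = 0; }
  for (let i = 0; i < bodies.length; i++) {
    for (let j = i + 1; j < bodies.length; j++) {
      const a = bodies[i], b = bodies[j];
      const dx = b.x - a.x, dy = b.y - a.y;
      const r2 = dx * dx + dy * dy;
      const r = Math.sqrt(r2);
      const f = (G * a.m * b.m) / r2;
      a.ax += (f * dx) / (r * a.m); a.ay += (f * dy) / (r * a.m);
      b.ax -= (f * dx) / (r * b.m); b.ay -= (f * dy) / (r * b.m);
    }
  }
  // Simple forward-Euler integration, good enough for a demo table.
  for (const b of bodies) {
    b.vx += b.ax * dt; b.vy += b.ay * dt;
    b.x += b.vx * dt; b.y += b.vy * dt;
  }
}

// A heavy "sun" and a light sphere given sideways velocity for a rough orbit.
const bodies = [
  { x: 0, y: 0, vx: 0, vy: 0, m: 1000 },
  { x: 100, y: 0, vx: 0, vy: Math.sqrt((G * 1000) / 100), m: 1 },
];
for (let t = 0; t < 1000; t++) step(bodies, 0.01);
console.log(bodies[1]);
```

The projector would just redraw the spheres' positions each frame; the table's camera supplies the initial positions and masses.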
Lots of ideas: https://twitter.com/Dynamicland1
I'm just not sure how you would do the "coding" with the paper?
Maybe the paper could represent certain functions or subroutines, and somehow they could be "connected" together (kinda like a flow programming system)?
GraphObj => input: numbers => output: rendered lines
PlusObj => input: two numbers => output: one number
MulObj => ...etc...
x-obj => a series of numbers
ForObj => loops over numbers => sends to graph, etc.
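As a sketch of how those paper objects could compose, here's a toy dataflow evaluator in plain JavaScript (all names - `node`, `forObj`, `graphObj`, etc. - are hypothetical, just mirroring the list above): each "paper" is a node whose output is computed from the nodes wired into it.

```javascript
// Toy flow-programming sketch: a node holds a function plus the nodes
// wired into its inputs; running a node recursively runs its inputs.
const node = (fn, ...inputs) => ({ fn, inputs });
const run = (n) => n.fn(...n.inputs.map(run));

// x-obj: a series of numbers
const xObj = node(() => [1, 2, 3, 4]);
// PlusObj: two numbers in, one number out
const plus = (a, b) => node((x, y) => x + y, a, b);
const constant = (v) => node(() => v);
// ForObj: loops over the series, sending each value through a sub-flow
const forObj = (series, fn) => node((xs) => xs.map(fn), series);
// GraphObj: numbers in, "rendered lines" out (here: an ASCII bar chart)
const graphObj = (input) => node((ys) => ys.map((y) => "#".repeat(y)), input);

// Wire the papers up: add 1 to every x, then graph the result.
const chart = graphObj(forObj(xObj, (x) => run(plus(constant(x), constant(1)))));
console.log(run(chart).join("\n"));
// ##
// ###
// ####
// #####
```

Physically rearranging the papers would just be rewiring which node feeds which - the "program" is the arrangement.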
like Yugioh / duel monsters but in real life
However - I'm slightly disappointed that the programs are not literally controlled by symbols on the pieces of paper; rather, the paper just holds identifiers that point to a program on a server.
I wonder if you could extend the computer vision system to pick up actual programs from the paper - either by OCRing actual text (perhaps limited to capital letters only, and/or particular shapes for letters such as I-for-India, to make it easier?) or a combination of symbols (each corresponding to a function in the API) and numbers (parameters).
That way you could do really interesting things: modifying somebody's program by literally pasting a scrap of paper on it containing different symbols or parameters, or having programs which interact with other programs (either by treating their program code as input data, or by piping data between the programs).
On the other hand, it's probably easier to write interesting programs with an actual keyboard :)
I wonder if you could create a set of symbols distinctive enough that you could hand-draw them and still stand a chance of the computer recognising them. Difficulty level increasing again...
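One crude way to attack that (not how Paper Programs actually works - this is just a nearest-neighbour sketch in plain JavaScript, with made-up symbols): rasterize the drawn mark onto a tiny grid and match it against known symbols by Hamming distance. A real system would more likely use OpenCV contour or shape matching.

```javascript
// Hypothetical sketch: recognize a hand-drawn symbol by comparing its
// rasterized 3x3 bitmap against known prototypes, picking the one with
// the fewest differing cells (Hamming distance).
const SYMBOLS = {
  plus:  [0, 1, 0, 1, 1, 1, 0, 1, 0],
  minus: [0, 0, 0, 1, 1, 1, 0, 0, 0],
  pipe:  [0, 1, 0, 0, 1, 0, 0, 1, 0],
};

const hamming = (a, b) => a.reduce((d, v, i) => d + (v !== b[i] ? 1 : 0), 0);

function recognize(grid) {
  let best = null, bestDist = Infinity;
  for (const [name, proto] of Object.entries(SYMBOLS)) {
    const d = hamming(grid, proto);
    if (d < bestDist) { bestDist = d; best = name; }
  }
  return best;
}

// A wobbly hand-drawn "plus" with one stray cell still matches:
console.log(recognize([0, 1, 0, 1, 1, 1, 0, 1, 1])); // "plus"
```

This is exactly why the symbols would need to be "sufficiently distinctive": the further apart the prototypes are, the sloppier the drawing can be before it flips to the wrong match.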
It's easy to live-edit the code in a piece of paper. Once you start to get used to it, it's really enjoyable to jump between altering physical materials, arranging papers to alter communication between the pages, and editing code within the pages.
I work on software for public transit agencies, and Dynamicland immediately made me think of a workshop that Jarrett Walker (humantransit.org) sometimes runs, which involves physical paper and pieces of coloured strings. So I tried to build something inspired by that (https://twitter.com/JanPaul123/status/923037721760735232).
But that experiment didn't involve having programs interact with each other, which is far more powerful: anyone looking at the table can then rearrange the programs, which itself becomes a form of programming (when your programs each do small enough things).
So anyway, no specific use case yet, I think it's too early for that.
Dynamicland looks awesome. I wish I could visit, but I live on the wrong coast, and it doesn't seem to be open to the public yet anyway.
One fascinating aspect of this project was compiling OpenCV to WebAssembly. I find it difficult and frustrating to compile OpenCV for my Mac, never mind compiling it to WebAssembly and running it in a browser. I love the idea of doing that; I'll have to try it.
(OpenCV = Open Computer Vision, to save a few people a google.)
Doing all of that is easy and natural, sometimes you even discover cool things by mistake.
Trying to channel the researchers at Dynamicland, I think the answer would be that they aren't investigating AR because doing so isn't in line with their design principles. The mission statement for Dynamicland includes this phrase "incubating a humane dynamic medium" - and since most humans aren't equipped with AR hardware, I'm guessing that is why AR isn't an area of research for this particular technology.
I was similarly enchanted when I first saw Dynamicland (well, on Twitter, the actual place being on a different continent), and I actually started my own experiments (but never finished them :( ). Glad to see someone follow through; I'm definitely going to be watching you closely!