Show HN: Paper Programs – Run JavaScript on pieces of paper (paperprograms.org)
235 points by janpaul123 6 months ago | 52 comments



Hi! I was asked to repost this (a la https://news.ycombinator.com/item?id=11662380), so here goes. I'm JP, and I created this project after visiting Dynamicland a couple of times. I wanted to capture some of the magic there in my living room, and decided to implement the "programs running on pieces of paper" part.

It was super fun implementing this. I ended up using OpenCV.js, which has only just been released, reimplementing an OpenCV algorithm and learning about computer vision, learning about OffscreenCanvas, which is so experimental that it crashes periodically (a bug I was able to isolate and submit to the Chrome dev team), learning about how to mount a projector facing downward, and so on.

Few things are as satisfying as a piece of paper rolling out of a printer literally onto the floor, where it starts executing. And then cutting it up with a pair of scissors to change its shape.

I hope you have as much fun with this as I do!!

PS: If you want more inspiration for building cool stuff with projectors and cameras, check this out: http://tangible.media.mit.edu/project/pico/ :) They have a ton more interesting projects like that!


I've been seeing folks get excited about this, and I haven't yet been able to see why it is so interesting. However, the people behind it (Bret Victor et al.) and the people I've seen playing with it are too interesting for me to ignore it...


Seeing the Dynamicland tweets made me rethink all of Bret Victor's previous work, and I wrote up why it's interesting from a design perspective here: http://vitor.io/on-dynamicland


I'm excited, because it fits my current workflow.

I do very little of my coding on a computer. Instead, I build up all the discrete parts of my code using sticky notes that get constantly reorganised while I'm writing.

So my desk ends up as a mess of sticky notes that get pushed and pulled around, which seems a lot like what you can do with this - except you can also run each part of this code, instead of relying solely on your memory.


It's a very different kind of interaction than with code on a screen, and it fits the way one thinks in a much more concrete way. It's an embodied metaphor, one people are intimately familiar with. The immediacy of it lets you iterate quickly because you get direct feedback, and the whole structure is made visible for everyone to see, enabling a kind of collaboration unusual for computer programming (imagine the dynamic of a group of people building a tent together, compared to a group of people all staring at a static view of an IDE).

Of course, it's nothing groundbreaking just yet, but I can't help but get excited at the possibilities of this sort of medium.


I still don't see how this is true, either in this case or in Dynamicland - neither seems to OCR programs from the piece of paper. So you still do classical programming alone, and only get to play with the results. A piece of paper may contain the actual program behind the system's reaction to it, or it may contain a picture of a cat - the contents are irrelevant; it's the dots that matter.


If you write small programs that interact with other programs, then rearranging them on the floor/table becomes a form of programming (similar to how combining UNIX-style programs in shell scripts is a high-level form of programming).

So even when playing just with the papers, you can be programming.
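As a rough illustration of that idea (all names here are invented, not the actual Paper Programs or Realtalk API): small programs that communicate only through a shared space compose by mere co-presence, so rearranging which papers are on the table changes the overall behavior.

```javascript
// Hypothetical sketch: two tiny "paper" programs interact only through a
// shared table of claims, so which papers are physically present (and in
// what combination) determines what the system does. Names are invented.
const tableClaims = [];

// Paper 1: publishes a claim while it is on the table.
function temperaturePaper() {
  tableClaims.push({ temperature: 21 });
}

// Paper 2: reacts to whatever claims other papers have published.
function displayPaper() {
  const claim = tableClaims.find((c) => "temperature" in c);
  return claim ? `It is ${claim.temperature} degrees` : "no data";
}

temperaturePaper();
console.log(displayPaper()); // "It is 21 degrees"
```

Remove the first paper and the second one degrades gracefully to "no data" - composition by arrangement, much like piping UNIX programs together.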


There's also the interplay of arranging papers, and live-editing the code in a particular piece of paper. In Dynamicland, the projector highlights printed code in green/red to visualize diffs.

You can sketch a UI element on the paper (eg a rectangle and token for a numerical slider), then go back to code the interactivity. Units in the code are inches on the table. Subtle things like that break you free from the laptop.

When you want to "commit" your code changes, you print a new version.


One of the more interesting things that I've seen in Dynamicland is that it allows you to create programs that modify other programs - basically the beginnings of a physical IDE. You can use a scrubber or dial to modify a constant, as a very basic first step.

IMHO, the ability to create new physical interfaces for editing code might allow new programming paradigms to pop up, or better interfaces for people to develop logic in their own domain. Someone could combine a calculator interface with a context-sensitive variable-selection interface to write arithmetic logic using an interface/"syntax" they already understand, and debug it all the way through with a testbed program that lets the programmer set variables to specific values and work through the math. Yet another program could describe a program as a flowchart and let people edit it that way.

The thing is, as you make the IDE physical, you invite people to interact with it more - rather than staring at a screen, suddenly you're livecoding, and multiple people can work on it at once. "Ah, but doesn't that fail if you do this? *sets some variables on the testbed*", or "I can fix the UI while you update that calculation".

And then, when your accountant has something on a piece of paper that works, they could point a "make this an app on my computer" program at it and bring it out of Dynamicland, into their workplace.


I wouldn't be surprised if tangible coding turned out to be a much better environment for debugging. Real physical space makes it much easier for me to follow the flow of things than scrolling on a screen.


What project are you referring to that Bret Victor is involved in? I don't see his name anywhere.


Talk from 2014 that previewed the vision: http://worrydream.com/SeeingSpaces/



Respect for the fact that you give generous credit to Dynamicland while also saying honestly how your project is differentiated. It would have been easy not to.


Hi, I'm honestly confused by the statement "Paper Programs is not a clone of Dynamicland". All the ideas about how it works seem to come from Dynamicland, and everything it does seems to be designed to work exactly the way it does in Dynamicland.

So, how is it not a clone? If I had to describe it to someone based on what I learned from the website, I would say "Paper Programs is a browser-based JS clone of Dynamicland's paper programming system." Is that incorrect?


I think that section is clear. It's not a clone because Dynamicland is a building in Oakland. An open-source software project can port some of the experience, but to clone it you would need to establish a large physical space.

I'm excited about Paper Programs. I got to spend an afternoon in Dynamicland, and after exploring a couple hours I was sure that I wanted access to the system back home. Their response was "wait a couple years." Not everybody can make it to Oakland!

Dynamicland focuses on in-person collaboration. Maybe Paper Programs will explore tangible computing with remote collaboration.


Also, Dynamicland uses Realtalk, a programming language / operating system / protocol for physical computing that is more general than projectors/cameras/papers.


The title made me think it was JS running on printed ARM chips: https://venturebeat.com/2015/01/21/thinfilm-teams-with-xerox...


Projector added to my Amazon basket immediately! Thanks for sharing.


Do you have a specific product link?


Seconding; choosing an appropriate projector (resolution, luminance, throw distance) is not a trivial task. I'd like to hear what people have found that fits.


I am eagerly awaiting the Bobby Tables (https://xkcd.com/327/) version for eval()


I work in an elementary school and I'm trying hard to think about how to possibly take advantage of this in my environment. The interaction model is really rich and perfect for the children, but I'm so far coming up empty on the sorts of things we could/should do with it.


How about a space game where you move cargo ships around and manufacture and trade goods? You could teach coordinate systems and math. Players need to figure out the optimal things to haul with their available funds. Different players could have different capabilities, and teams need to figure out how to accomplish more together than they could on their own. The game space shows what the cargo is and how far a ship can move with its current load.


It would be great if it could be connected to something like the MIT Media Lab inFORM table and/or some kind of under-the-table system of magnets. That way physical objects could be moved, instead of the projector just displaying an image of the cargo ship. Right now the physical manipulation seems one-way, from the physical world to the digital. It is "easy" for the system to motion-track an object as a student moves it, but the system doesn't really have a way to move the object itself.

Imagine students exploring gravity by placing spheres on a "magic" table, and then the table causes the spheres to orbit around each other. Students could adjust the mass or the gravitational constant and see how that affects the system, or even come up with their own laws of gravity.


Here's a demo of an interactive book about symmetry: https://twitter.com/mandy3284/status/951589414911721472

Lots of ideas: https://twitter.com/Dynamicland1


I wonder if something like this could be modified or extended in some manner for LOGO programming; graphical turtle on the floor (or wall) - no need for a physical robot turtle.

I'm just not sure how you would do the "coding" with the paper?

Maybe the paper could represent certain functions or subroutines, and somehow they could be "connected" together (kinda like a flow programming system)?


You're talking elementary school: teach the kids how to do their math homework using Paper Programs - not doing the math like a calculator, but doing it the way kids do: adding/subtracting columns of numbers to arrive at the answer. Just implementing that as Paper Programs will inspire a host of related projects tied to a need to 'count things'.


Your comment made me think about our math curriculum, and a lot of the work that we do with the younger kids deals with just establishing some basic numeracy; for instance, having small blocks that represent a unit, and then having a different type of block that represents 10, etc. Having a camera recognize these as the children play with them could be absolutely perfect. Thanks for the inspiration.


Even at the high school level, I spend a lot of time trying to develop number sense with my students (especially with units). It would be great if I could somehow show students how numbers and units “flow” through a science problem.


Simple number / function / inputter-outputter => graphs would work fine.

GraphObj => input #'s, output => rendered lines.

PlusObj => input two #'s, output => one number.

MulObj => ...etc...

x-obj => series of numbers

ForObj => loops over numbers => sends to graph etc...
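A minimal sketch of how those objects might compose in plain JavaScript (all names invented for illustration, mirroring the list above):

```javascript
// Invented names, for illustration only: each "paper" is a small function,
// and arranging papers corresponds to composing the functions.
const numberObj = (n) => n;                       // a constant
const plusObj = (a, b) => a + b;                  // input two #'s, output one number
const mulObj = (a, b) => a * b;                   // likewise for multiplication
const xObj = (from, to) =>                        // a series of numbers
  Array.from({ length: to - from + 1 }, (_, i) => from + i);
const graphObj = (xs, fn) => xs.map((x) => [x, fn(x)]); // #'s in, points out

// "ForObj => loops over numbers => sends to graph": y = 2x + 1 over x = 0..3
const points = graphObj(xObj(0, 3), (x) => plusObj(mulObj(2, x), numberObj(1)));
console.log(points); // [[0, 1], [1, 3], [2, 5], [3, 7]]
```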


a card game with real life trading cards and animated monsters!

like Yugioh / duel monsters but in real life

yeah!!!


Very interesting interaction model. I love the idea of a program being able to read the shape of the paper - lots of cool possibilities.

However - I'm slightly disappointed that the programs are not literally controlled by symbols on the pieces of paper; rather, the paper just holds an identifier that points to a program on a server.

I wonder if you could extend the computer vision system to pick up actual programs from the paper - either by OCRing actual text (perhaps limited to capital letters only, and/or particular shapes for letters such as I-for-India, to make it easier?) or a combination of symbols (each corresponding to a function in the API) and numbers (parameters).

That way you could do really interesting things: modifying somebody's program by literally pasting a scrap of paper on it containing different symbols or parameters, or having programs which interact with other programs (either by treating their program code as input data, or by piping data between the programs).

On the other hand, it's probably easier to write interesting programs with an actual keyboard :)


This is how Dynamicland currently works, also, but their goal is what you've stated. It's a hardware limitation.


I'm not sure how large the paper needs to be to be recognized. Maybe you could have snippets that hold only a single identifier or syntax element and programs would be arranged by puzzling them together. That should be an appropriate granularity for manually placing (no fidgeting with tiny individual letters) while still allowing full programmability with minimal trips to the printer (only to create some new identifier).


Something like Scrabble tiles, perhaps?

I wonder if you could create a set of symbols sufficiently distinctive that you could hand-draw them and stand a chance of the computer recognising them. Difficulty level increasing again...


If you have a 1080 HD projector and camera and 100" diagonal table, that's 22ppi in and out... OCR would be tough.
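The ~22ppi figure follows from the geometry (assuming a 16:9 projected image that exactly fills the table):

```javascript
// Pixels per inch for a projector filling a table of a given diagonal.
// Assumes the projected image and table share the same aspect ratio (16:9).
function pixelsPerInch(horizontalPixels, diagonalInches, aspectW = 16, aspectH = 9) {
  const diagonalUnits = Math.hypot(aspectW, aspectH);        // 16:9 -> ~18.36
  const widthInches = diagonalInches * (aspectW / diagonalUnits);
  return horizontalPixels / widthInches;
}

console.log(pixelsPerInch(1920, 100).toFixed(1)); // ~22.0 ppi
```

At 22ppi a typical printed character spans only a handful of camera pixels, which is why OCR is impractical at this scale.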

It's easy to live-edit the code in a piece of paper. Once you start to get used to it, it's really enjoyable to jump between altering physical materials, arranging papers to alter communication between the pages, and editing code within the pages.


This is awesome work. Great job.

Dynamicland looks awesome. I wish I could visit. I both live on the wrong coast and it doesn't seem to be open to the public yet.

One fascinating aspect of this project was compiling OpenCV[1] to WebAssembly. I find it difficult and frustrating to compile OpenCV for my Mac, never mind compiling it to WebAssembly and running it in a browser. I love the idea of doing that; I'll have to try it.

[1] Open Computer Vision to save a few people a google



Are the dots essentially a "barcode" that map to a particular program on the server? How does the projector factor in? Do you adjust the projected image to ensure the result falls in the respective paper's area? Interesting project - did you have a use case in mind for it?


Yeah, exactly what you said.

I work on software for public transit agencies, and Dynamicland immediately made me think of a workshop that Jarrett Walker (humantransit.org) sometimes runs, which involves physical paper and pieces of coloured strings. So I tried to build something inspired by that (https://twitter.com/JanPaul123/status/923037721760735232).

But that experiment didn't involve having programs interact with each other, which is way more powerful: then anyone looking at the table can rearrange the programs, which itself becomes a form of programming (when your programs do small enough things).

So anyway, no specific use case yet, I think it's too early for that.
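To make the dots-as-barcode idea concrete, here is a rough sketch (illustrative names only, not the actual Paper Programs internals):

```javascript
// Illustrative sketch, not the real Paper Programs code: corner dots encode
// only a numeric ID; a server maps that ID to stored JavaScript source,
// runs it, and the projector warps the output into the paper's area.
const programsById = new Map([
  [12, "api.drawRect({ x: 0, y: 0, w: 4, h: 2 })"],
]);

const drawnRects = [];
const projectorApi = {
  drawRect(rect) { drawnRects.push(rect); }, // stand-in for real projection
};

function runDetectedPaper(dotId, corners) {
  const source = programsById.get(dotId);
  if (!source) return; // unrecognized paper: its printed contents don't matter
  // In the real system the drawing is warped into the quadrilateral formed
  // by the paper's detected corners before being projected.
  new Function("api", "corners", source)(projectorApi, corners);
}

runDetectedPaper(12, [[0, 0], [8, 0], [8, 5], [0, 5]]);
console.log(drawnRects); // [{ x: 0, y: 0, w: 4, h: 2 }]
```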


Wondering if this could be done with AR (e.g. Hololens or Glass)?


The benefit of physical objects (like paper) is that you don't need to simulate reality in AR. Using physical objects lets you discover things you wouldn't have otherwise in a simulation. One of the neat things about the paper at Dynamicland is that you can "stretch" a paper by just tearing it in half (or into quarters) and moving the pieces apart.

Doing all of that is easy and natural, sometimes you even discover cool things by mistake.


Augmented reality headsets can in principle project onto a real piece of paper just like a projector does (in fact, a projector is a form of AR). They don't mask out reality the way a VR headset does; they have no need (and indeed no capability) to simulate a different reality.


Ah. I see what you're saying. I thought you were proposing to replace paper with AR ("this desk is now a screen", etc)

Trying to channel the researchers at Dynamicland, I think the answer would be that they aren't investigating AR because doing so isn't in line with their design principles. The mission statement for Dynamicland includes this phrase "incubating a humane dynamic medium" - and since most humans aren't equipped with AR hardware, I'm guessing that is why AR isn't an area of research for this particular technology.


I'm pretty sure Bret Victor has been asked this before and responded that it wasn't tangible or collaborative enough. For their purposes, it wouldn't work, but it seems like something that could work for another project.


I love the idea of having multiple AR display techniques to 'hand off' between experiencing AR on glasses and AR from a projector. I can imagine fatigue from wearing glasses, and projector AR could facilitate longer AR use. I wrote a technical report and developed a prototype of this hand-off characteristic, if anyone is interested in reading more.


That's been my main question about this project as well. It does seem gratifying to a certain extent to use all physical objects—but I wonder if that can make up for the tradeoff of not being able to spontaneously generate any kind of object from nothing, the way we're used to when writing software.


I guess it could, but everyone in the room would need one.


Yes!

I was similarly enchanted when I first saw Dynamicland (well, on Twitter, the actual place being on a different continent), and I actually started my own experiments (but never finished them :( ). Glad to see someone follow through; I'm definitely going to be watching you closely!


One of the coolest things I've seen in a while. GJ!


Really interesting, thanks for sharing!



