It's not OCR. Those colored dots along the periphery are fiducials. They're how Realtalk recognizes a program.
Every object that you can interact with in Realtalk has a unique set of fiducials. When the camera sees one, it looks up its behavior and projects it accordingly.
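As a rough mental model (this is a made-up sketch, not Realtalk's actual code; every name here is hypothetical), you can think of it as a registry keyed by fiducial ID, consulted once per camera frame:

```python
# Hypothetical sketch of fiducial-based lookup, NOT Realtalk's real API.
# Each physical object carries a unique fiducial ID; the camera pipeline
# resolves that ID to a registered behavior and runs it at the object's
# position, producing something to project back onto the table.

registry = {}  # fiducial ID -> behavior function

def register(fiducial_id, behavior):
    """Associate a fiducial pattern with a projected behavior."""
    registry[fiducial_id] = behavior

def on_camera_frame(detections):
    """detections: list of (fiducial_id, position) seen this frame."""
    projected = []
    for fid, pos in detections:
        behavior = registry.get(fid)
        if behavior is not None:  # unknown fiducials are simply ignored
            projected.append(behavior(pos))
    return projected

register(42, lambda pos: f"draw flow lines at {pos}")
print(on_camera_frame([(42, (10, 20)), (99, (0, 0))]))
# → ['draw flow lines at (10, 20)']
```

The point of the sketch is just that the paper itself holds no code; the fiducials are a key into behavior stored elsewhere.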
Correct, that doesn't work in the system as implemented. But my understanding is that there has been a goal from the start to _eventually_ get physical code editing to work. I recall hearing the researchers talking about wanting to eventually get handwriting recognition working, so you could edit code by hand.
Cameras good enough to do that kind of work aren't really available right now. It does still make the code more accessible by making it immediately visible on the object, so if you want to know how a thing works, you can just pick it up.
... so in the intro video at like 2:37, it says that a program can be in language, or an arrangement of components, or a hand-drawn diagram, because anything you can write an interpreter for is a program. The "arrangement of components" and the "hand-drawn diagram" shots show stuff drawn on a whiteboard, which seems like it can be edited by redrawing. Are you saying that a hand-drawn diagram program can be freely edited by changing the diagram, but a language-based program cannot be edited by changing the representation of its code? That seems ... worse.
I would think that you'd want a fiducial marker on code printouts to identify that a piece of code has a language and a namespace perhaps (e.g. "this page contains a kotlin definition that lives in com.acme.foo"), but that the contents should be modifiable. Otherwise, what's the point? If to edit the code you're gonna pull up a keyboard and screen (or project an editor window) ... then this seems like what we already have, plus you have to print stuff out.
It's been 6 years since I've been there. I'm sure they've made developments since then.
I also think these comments along the lines of "what about X" are unhelpfully reductivist/dismissive. The whole point is that it's a research project. If you can think of a better way to make ephemeral room-sized computing work - cool, let's try that! Just because it worked some way when I was there in 2018 or some way in the video doesn't mean that's the end vision for how it will always be.
This isn't a product. It's a vision for the future.
> The whole point is that it's a research project.
> It's a vision for the future.
Neither of these means we can't or shouldn't discuss whether parts of it are good or bad, or make more or less sense. Is it a vision of a future you'd want to work with/in?
> I also think these comments along the lines of "what about X" are unhelpfully reductivist/dismissive.
My first statement was "I think the overall idea here is really cool". The intent is not to be dismissive. But if you think the only acceptable reaction is unalloyed praise ... then why even have it on a discussion-oriented site?
I think the way of working being demonstrated seems like a great fit for some kinds of work and that trying to awkwardly shoehorn software-development to happen in their system detracts rather than adds to it.
> If you can think of a better way to make ephemeral room-sized computing work
... I think an IDE, a keyboard, and a projector are better than printing code blocks at a specific revision which is identified by a computer-readable id, and which must be given a new ID and a new printed page every time you want to try executing a new version.
Yours wasn't the only comment along those lines, and I was replying to the class of them rather than yours specifically.
I don't mean to curtail discussion or say that only praise is allowed. I just want to steer away from "gotcha" energy on a research project.
Ideas/discussion/critique are welcome! "This project is dumb because it does things differently than I'm used to" or "because it currently only supports digital changes to the physical paper" are less helpful. Part of the fun of a research project is trying weird stuff to see what feels better and what doesn't.
Again, none of this is directed at you personally or about your specific comment. I just noticed a trend of comments about the code editing experience that felt more like trying to dunk on the concept than promoting curious discussion.
> I think an IDE, a keyboard, and a projector are better than printing code blocks at a specific revision which is identified by a computer-readable id, and which must be given a new ID and a new printed page every time you want to try executing a new version.
You're making several incorrect assumptions here:
1. That you can't interactively try out the code as you're editing it.
2. That the system as implemented is the final vision of how the system ought to work forever.
From https://youtu.be/5Q9r-AEzRMA?t=150 "Anyone can change any program at any time and see the changes immediately", which demonstrates live editing the code and seeing the projected flow lines change color. So you can keep editing and iterating on the program, trying out your changes without ever having to print anything. Once you are satisfied with your improvements, you then "commit" the code, which results in the system printing out a new page with its new identifier.
And if any part of your expectations isn't how things work, it's likely because this is a research project and nobody has written the code to make it behave the way you'd like. Since Realtalk is built on itself, one would only need to make the appropriate changes to improve the system.
For those unfamiliar, the founder is Bret Victor. He made a name for himself working on human interfaces at Apple in the Steve Jobs iPad era. In 2012, he gave a couple of influential talks: Inventing on Principle, and Stop Drawing Dead Fish.
Bret's take on being a visionary/futurist is fascinating. He imagines the near-future world he wants to live in, prototypes enough of it to write a talk about, and gives the talk with the hopes that someone in the audience will be inspired to make it a reality. He gives ideas away with the hope that he'll be paid back with a world where those ideas have been realized.
I assume his point is that he's not going to be able to execute on all his ideas anyway, so why not make them free to people who might? If the idea succeeds, the benefit to him is a better world, while the cost is nothing. It's a positive-sum proposition, what's not to like?
While turning ideas into products isn't the benchmark for a successful idea, there are countless product folk who have definitely been inspired by Bret's work.
For example, this is Vlad Magdalin, one of the founders of Webflow:
> But I won’t claim credit that it was some magical insight that I had. It was a specific video that I saw that I think every maker and every creator should see called “Inventing on Principle” by Bret Victor. Seeing that talk, it’s a maybe 50-minute talk around creating games and doing animation and this broader concept of direct manipulation, but more importantly the principle behind why you do the work you do, what drives you.
> Seeing that video and being a designer and a 3D animator and a developer all at once, it just sparked that idea of, “holy crap.” The kinds of tools that we can have in animation land, the kind of tools we already have for game design and level design, the tools we have in digital publishing, all those things can be married together to front end and back end development and make it a much more human type of interface. That’s when it was boom, this has to be a product and a thing.
Some ideas—key novel concepts and conceptual frameworks, for example—absolutely have value, but they're not valuable in the way a business is valuable. You won't become business-owner rich just by coming up with the right concept, but you can have a successful research career, get enough funding to run a small research group, win a Nobel or similar prize, etc. But that says more about how our society and economy are organized than it does about the inherent value of ideas qua ideas.
It's pretty hard for one person or even one small team to both (a) do advanced green-field research in whichever uncertain direction they feel most excited to explore, and (b) make a complete and polished saleable product which best meets the needs of a well-defined set of customers.
The skills, personalities, organizing principles, and methods involved are substantially different, and focusing on making a product has a tendency to cut off many conceptually valuable lines of inquiry based on financial motivations.
Notice that Bret Victor's goal (like most researchers) is not to become as rich as possible.
Whether researchers or product developers ultimately have more leverage is something of a chicken-and-egg question. To make an analogy, it's like asking who was more influential, Karl Marx or Otto von Bismarck.
The world needs powerful new ideas and people willing to bring those ideas to everyone through creation of products and services. We need BOTH but it's getting harder to build a career focusing on inventing and discovering new ideas nowadays unfortunately.
The question is, what do you want more: to live in a good world, or be rich?
As the ancients said:
A wise man plants trees he won’t live to see fully grown, and a greedy man overfertilizes trees in a mad dash to get the tree purchased by a faceless investor before anyone notices it’s sick.
No... teams of people had to execute on those ideas first, usually with far more valuable ideas of their own, as well as teams of engineers and employees to make them a physical and profitable reality. He didn't simply manifest his ideas from the aether like an arcane sorcerer.
He has delivered on many things, and not delivered on many other ideas. It’s just that the things he didn’t deliver (the vision) helped him sell stocks as well as cars, via fame.
> He imagines the near-future world he wants to live in, prototypes enough of it to write a talk about, and gives the talk with the hopes that someone in the audience will be inspired to make it a reality.
Every time I try to quit hacker news, I get pulled back in by threads like this one. Seriously, thanks for sharing your knowledge! Talk about a trump card in any argument, lol
Substantively: more people should do that. Especially the wildly-insanely-unimaginably rich ones.
I had the good fortune of taking a field trip there in 2018.
The video is a very good overview of the project.
One interesting artifact of "the real world simulates itself" is version control. At Dynamicland, each version of a program is a sheet of paper (with a unique set of fiducials along the edges). If you want to edit a program, you grab a keyboard and point it at the program. A text editor comes up; you make your changes, and hit commit. When you do, it spits out a new piece of paper with your changes. Put it in the view of the camera to use the new version. Take it away and use the old paper to roll the change back.
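A toy model of that commit-as-printout flow (entirely hypothetical naming; the real system is far richer) is that committing allocates a fresh page ID for the new source, while old pages keep resolving to their old version:

```python
# Toy model of paper-based version control, NOT the real implementation.
# "Committing" an edit prints a new page with a fresh fiducial ID; the
# old page still resolves to the old source, so rolling back just means
# putting the old piece of paper back under the camera.

import itertools

_next_page_id = itertools.count(1)
pages = {}  # page ID -> immutable program source for that printout

def commit(source):
    """'Print' a new page holding this version; return its page ID."""
    page_id = next(_next_page_id)
    pages[page_id] = source
    return page_id

v1 = commit("When /this/ is pointed at, highlight it.")
v2 = commit("When /this/ is pointed at, highlight it in red.")

assert pages[v1] != pages[v2]  # the old paper still runs the old version
```

The physical stack of superseded pages effectively becomes the revision history.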
Anything on the table is open to reference by anything else on the table afaik. Direct references are usually done as cursors projected off the page so you can "point" and "click" on other objects to control precisely what you reference or select with the other tool.
You could also implement input and output piping between programs in a more organized way where their physical orientation isn't as critical as most page references seem to be. eg: Put a tag on a sticky note that represents a pipe, put it close enough to the 'output' of one page then stick it to the input of another compatible page.
Not sure how protected the individual pages are from outside modification because real details are quite thin on the ground. I think right now you could probably turn every display page into a rick roll if you wanted.
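One way to picture that sticky-note pipe (purely a speculative sketch of the idea above; none of these names come from the actual system): the tag just records a producer/consumer binding, so precise physical orientation stops mattering once it's stuck down.

```python
# Speculative sketch of a "pipe" sticky note, not actual Realtalk code.
# Placing the tag near one page's output binds that page as producer;
# sticking it on another page's input binds that page as consumer.

class Page:
    def __init__(self, value=None):
        self.value = value     # what this page emits, if anything
        self.received = None   # last value piped in from another page

    def emit(self):
        return self.value

    def receive(self, value):
        self.received = value

class PipeTag:
    def __init__(self):
        self.producer = None
        self.consumer = None

    def stick_near_output(self, page):
        self.producer = page

    def stick_on_input(self, page):
        self.consumer = page

    def tick(self):
        """Each camera frame, carry the producer's output to the consumer."""
        if self.producer and self.consumer:
            self.consumer.receive(self.producer.emit())

tag = PipeTag()
source, sink = Page("temperature: 21C"), Page()
tag.stick_near_output(source)
tag.stick_on_input(sink)
tag.tick()
print(sink.received)  # → temperature: 21C
```

The tag is the only thing holding the connection, so moving either page around the table wouldn't break the pipe.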
Right, I guess that's why it's called Realtalk; it's sort of message-based like Smalltalk, but with real stuff. Infinite recursions and resource conflicts must be tough to handle.
I suspect each piece of paper, if examined with a good enough camera, has a unique fingerprint, like a snowflake, and perhaps this could be used in the future for an "Isomer Addressed Filesystem". In other words, all pieces of paper ship with a UUID already, woven into their atoms.
I would suggest instead convincing every printer manufacturer to embed in every printer a routine that encodes a unique identifier on every print and then reading that using more typical cameras. The hard part has already been done.
I'm pretty sure that I mentioned the printer tracking dots to the researchers at the lab and certainly mentioned DataGlyphs. So they were aware of alternatives. The trick is to get a workable system with cameras that have the resolution to pick out those details from a dozen feet away, as well as a software stack that can recognize them at ~60fps.
That said, and this is purely my opinion, the system works well enough as it is, and there is so much fun stuff to build on top of what works, that it's hard to prioritize a better object recognition system over the myriad of other interesting things to be done.
I imagine it would be very difficult to read these dots from a distance and dynamically. I just mention it because most printed documents already have identifiers printed on them that don't require seeing individual fibers.
Fun fact: Otavio Good, who led the winning team, learned about the printer dots on this very site. As I recall, he said that the dots were like a map that let them reconstruct the shredded documents.
I would have thought, even then, that this was not only commonly known, but also the sole exclusion/rule in a generally "rules-free" competition, having been rather obvious, especially to the audience these competitions attract.
Thank you for reminding me, and others, how immediate and obvious success can be.
I'm not sure if there's a source online as I learned about it from Otavio directly. The slightly longer story, as I recall it, is that their team basically built a "game" to help humans unshred documents, and was using that approach until Otavio happened to read about the printer dots on HN. He updated his code to take those into account and was astonished at how helpful they were.
Not entirely; it also uses bits of color ink to make the black-and-white page look better, at least according to the printer companies, who conveniently also benefit by selling you more ink.
Black ink isn't perfectly black. Adding other colors of ink makes it darker. If you don't really care about how dark the result is, the term you're probably looking for in settings is "rich black."
If you are happy to live with B&W printing for your untraceable printing needs, maybe you can fill the yellow cartridge with clear ink (not sure if water is OK).
But then, how do B&W laser printers allow for tracing?
> But then, how do B&W laser printers allow for tracing?
They don't, because they don't need to. The point of the yellow dots is to prevent the printing of counterfeit currency, which itself tends to require the use of more ink colors than only black. It's probably possible to refill a "black" ink/toner cartridge with the exact shade of green (or whatever) to replicate the color of a currency note, but if it was easy to do then there'd probably be a lot more counterfeit bills floating around.
The ink/toner is also half the battle. The other half is the paper, since obviously the US Treasury doesn't use ordinary printer paper to print $20 bills. The usual trick is to take a $1 or $5 bill, bleach it (or otherwise remove the existing printing), and print a $20 bill design onto it - but that's easier said than done, due to both the ink/toner color issue mentioned above and due to the difficulty of getting the donor bill exactly aligned (and doing so again, in the exact same way, for the other side of the bill).
I’ll make it n=2. I was pretty disappointed by the build quality of my System76 Galago. And it wasn’t very repair-friendly either, because parts are very difficult to get.
Make it three. I'm on the third battery for my Darter Pro: twice so far, the battery has swelled up and made the keyboard buckle. System76 support consists of selling me replacement batteries at a serious markup.
I've decided not to install the third battery, so I have more of a desktop now.
FWIW, the Ars reviewer installed COSMIC on a Framework and seemed to really enjoy it. They even sell a mini case that transforms their laptop into a desktop.
The post hat-tips their "design system" - a term that didn't exist in '99, but it sounds like a lot of the same type of work that Eazel would have been going through to invent a core app for a nascent OS.
Sounds like the victim got lucky that the thieves were in a relatively small town. I can't imagine SFPD taking you seriously if you said "someone stole my thing and here is the pin on my map where they are."
I feel like fucking with the Post Office is one of those things that sounds minor but can be a big fucking deal. Your petty theft just became a literal federal case, complete with a police force whose whole job is to protect the integrity of the mail.
Google Home was kind of magical when it launched, but as its reliability has degraded over time, its appeal has worn off. It is still nice to be able to ask it to play music or set an alarm or whatever without engaging with a screen.
I haven't used a HomePod, but I suspect it's Apple's version of that experience.
The Sonos patent wars sucked. Things are somewhat back to OK, but man, it's just absurd & sad seeing basic networked AV get ravaged by lawyers like so.
I'm super glum about audio casting at the moment. It feels like fewer and fewer speakers have audio casting built in. For a while there was a spate of speakers, some even battery-powered, but it's turned into a trickle. There are also incredibly few options for amplifiers with built-in audio casting; what the Nexus Q originally did! So frustrating.
This is an ecosystem I am super bought into, and it feels like it's fading and there's not a replacement (especially with Sonos's recent enshittification seppuku).
I ended up just getting normal speakers and attaching a dedicated box to them. I hate how it looks, though: so many more wires, and you'd better hope your speakers are close enough to a wall outlet to route a long USB cable for power.
I don't know if it's similar, only because I don't really use Google products. You can ask the HomePod to turn on the Apple TV and play something, or to read out your messages (depending on your settings and whatnot). I don't really have an issue with it doing anything but playing music. I use it for HomeKit controls constantly, and love my HomePod minis.