Neat! I could probably do something similar with my 3D printer, although I'd make it more friendly - instead of using coordinates (e.g. MOVE X0 Y0 Z0) I'd probably just do something like MOVE LEFT and set it to a fixed distance.
I'd want it to steer the commands towards actual creation, so I'd make a few commands for the printer like MAKE CIRCLE, MAKE SQUARE, MAKE LEFT LINE, etc., and after each command, move the printer up one layer.
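The idea above could be sketched as a small translator from chat commands to G-code. Everything here is invented for illustration - the command names, the 5 mm step, and the layer height aren't from any real Twitch printer bot:

```python
# Hypothetical translator from chat commands to G-code-style relative moves.
# STEP and LAYER_HEIGHT are made-up illustration values.
STEP = 5.0          # fixed move distance in mm
LAYER_HEIGHT = 0.2  # Z lift applied after each MAKE command

MOVE_DELTAS = {
    "MOVE LEFT":  (-STEP, 0.0),
    "MOVE RIGHT": ( STEP, 0.0),
    "MOVE UP":    (0.0,  STEP),
    "MOVE DOWN":  (0.0, -STEP),
}

def command_to_gcode(cmd):
    """Translate one chat command into a list of G-code lines."""
    cmd = cmd.strip().upper()
    if cmd in MOVE_DELTAS:
        dx, dy = MOVE_DELTAS[cmd]
        return ["G91", f"G1 X{dx} Y{dy}"]    # G91 = relative positioning
    if cmd == "MAKE SQUARE":
        side = STEP
        return ["G91",
                f"G1 X{side}", f"G1 Y{side}",
                f"G1 X{-side}", f"G1 Y{-side}",
                f"G1 Z{LAYER_HEIGHT}"]       # up one layer after the shape
    return []  # ignore unrecognized chatter
```

Unrecognized chatter just returns an empty list, which is probably what you want when thousands of people are typing at once.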
I think it'd be cute to keep the absolute coordinate system, but create a simple web app which lets you point and click where you want the brush to go. For the Z coordinate, you could have a meter which goes up and down from 0-60 while you hold the mouse, and registers when you let go (similar to many mini-game mechanics).
It could be done with the current Twitch stream in maybe 2-3 hours of dev work, thanks to Twitch using IRC for the chat backend.
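Since Twitch chat really is plain IRC, the chat-reading side is just message framing. Here's a sketch of the parsing half, assuming the standard Twitch chat endpoint (irc.chat.twitch.tv) - the socket plumbing and OAuth token are left out:

```python
# Sketch of the Twitch-IRC chat side. Only message framing is shown;
# connecting a socket and supplying a real OAuth token are up to you.

def login_lines(token, nick, channel):
    """IRC lines to send after connecting to Twitch chat."""
    return [f"PASS {token}", f"NICK {nick}", f"JOIN #{channel}"]

def parse_privmsg(line):
    """Extract (user, message) from a PRIVMSG line, or None for other traffic.

    Twitch chat lines look like:
    :user!user@user.tmi.twitch.tv PRIVMSG #channel :the message
    """
    if " PRIVMSG " not in line:
        return None
    prefix, rest = line.split(" PRIVMSG ", 1)
    user = prefix.lstrip(":").split("!", 1)[0]
    message = rest.split(" :", 1)[1]
    return user, message
```

Feed each parsed message into whatever command handler you like; anything that isn't a PRIVMSG (PINGs, JOINs) comes back as None.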
> create a simple web app which lets you point and click where you want the brush to go
A lot of the "fun" and challenge of these TwitchPlaysPokemon / TwitchInstallsLinux type projects comes from using people typing into Twitch chat as the only input.
Both of the projects I mentioned earlier would have been much easier with a custom UI outside of chat too, but then it's not really a "TwitchDoesX" project.
I considered those control schemes, as well as relative coordinates. I think I ultimately chose absolute coords because of the 10-15 second stream delay. Absolute coords mean that if the group has a consensus, they can just spam those coords to be persistent. I wanted to go for the Surgeon Simulator type appeal.
High-level commands defeat the purpose of crowdsourcing intent.
If we consider Twitch Plays Pokemon, "beat the game" would be a very high-level command, albeit a useless one. "Travel to Lavender Town" is less high-level and still encourages actual gameplay, but it also destroys some of the novelty.
HA! Well, that's hilarious. I was just talking to @eliot at Hackaday supercon about making this with WaterColorBot and CNCserver at his suggestion, I guess if it's out there it's out there. Definitely a different setup than I would have done, but it seems like it might work.
Really though, this is just asking for people to write interfaces and use Twitch IRC chat as a terrible API. I'm forced to think that only when it's unpopular (but just popular enough) will anything resembling what a user intends get created. At least in the end you're guaranteed to get art.
And joshu was too! Talked to him about drawing algos. He's too smart for his own good ;)
Oh, I would probably do something along the lines of adding simple shapes (circle with x/y & radius, etc.), auto re-inking of the last color for long lines... and who knows what else. I ran out of time to work on it.
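A circle command could reduce to a closed polyline of absolute points the bot can already draw. A minimal sketch, assuming a hypothetical `CIRCLE x y r` command (the function name and segment count are my own):

```python
import math

# Hypothetical CIRCLE x y r command: approximate the circle as a closed
# polyline of absolute (x, y) points the existing move commands can reach.
def circle_path(cx, cy, r, segments=24):
    """Return a list of (x, y) points tracing a circle, closed at the end."""
    pts = []
    for i in range(segments + 1):          # +1 closes the loop
        theta = 2 * math.pi * i / segments
        pts.append((round(cx + r * math.cos(theta), 2),
                    round(cy + r * math.sin(theta), 2)))
    return pts
```

Squares and line segments would be even simpler cases of the same "expand one chat command into a move list" pattern.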
Abstracted color getters! This seems to be the biggest issue with users. You have to wet the brush and wiggle on the color to actually get anything on the brush. They can't seem to get that far.
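An abstracted color getter could hide that whole wet-and-wiggle ritual behind one command. This is only a guess at what it might look like for a WaterColorBot-style machine - the well coordinates, wiggle amplitude, and move-tuple format are made-up illustration values:

```python
# Guessed-at color getter for a WaterColorBot-style machine: wet the brush,
# wiggle on the paint well, lift. All coordinates here are invented.
WATER = (0, 0)
PAINT_WELLS = {"red": (0, 10), "blue": (0, 20), "green": (0, 30)}

def get_color(color, wiggles=3, amp=2):
    """Return the move sequence that loads the brush with `color`."""
    if color not in PAINT_WELLS:
        raise ValueError(f"unknown color: {color}")
    wx, wy = PAINT_WELLS[color]
    moves = [("move", *WATER), ("dip",)]          # wet the brush first
    moves.append(("move", wx, wy))
    for _ in range(wiggles):                      # wiggle to pick up paint
        moves.append(("move", wx + amp, wy))
        moves.append(("move", wx - amp, wy))
    moves.append(("lift",))
    return moves
```

Chat users would then only ever type something like COLOR RED, and the machine does the part they can't seem to get right.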
Ah, I think the concept I'm going for is that this is not a painting robot (despite the title), but that it is a set of things I have given the internet to see what they will do with it.
Streaming robots doing stuff... seems like a big entertainment area in the future. I could see a company that rents out 'teams' of robots/drones or whatever to subscriber groups, and then these groups 'duke it out' or participate in whatever other contest remotely. The actual dangerous physical elements would happen in some remote desert/warehouse and be streamed to the users/controllers.
There's a group in Waterloo called LabForge who set up actual physical lab experiments for undergrads to interact with over the internet. I think they may have pivoted since then, but there's a demo of it on YouTube:
I'm with a robotics company, and we've also periodically toyed with setting up "robot playpen" where you could log in, pay by the hour, and run your algorithms or whatever on actual hardware. The reality is, though, that for a lot of the interesting problems in robotics (perception, mapping, localization), a recorded dataset is more than enough, and for the rest (controls, guidance), a sufficiently high-fidelity simulation is better for development anyway.
By the time you've made all those software investments, you probably own hardware anyway, so the total market for an "AWS of robots" is probably not really all that large.
I'd love to implement your work in RoboPaint, might even be able to pull some color information to make portraits from black sharpie outline with your implementation and watercolor fill. I've completely failed at any attempts to make decent raster -> paint vector so far.