See  for another, older version of this concept that also supports another Daft Punk song, Technologic, although this new one is definitely novel for showing the keyboard layout and not being in Flash. I do like the Flash version's use of shift as a meta key to shift registers, though!
I like how my old blog, ilictronix, is still there in the links section :) I remember when I first emailed him asking if we could link swap - that was pretty big for our traffic. I haven't contributed in about 3 years, but a couple people still run it. Good times - it's where I learned web development.
I have no specialty in signed language linguistics, but I would say that perhaps Location/Handshape/Movement could be considered similar to phonemes, or at least similar to subconcepts like 'place of articulation'.
Similar to spoken languages, the individual sounds/shapes may or may not mean something on their own, but they tend to be regular enough to be described in a writing system:
http://en.wikipedia.org/wiki/Stokoe_notation (note that Stokoe notation is described as a 'phonemic script' here).
Perhaps this is just rephrasing ianawilson's reply, but it seems that if Crowsnest eventually supports fifty different cameras (or switches or thermostats or what have you), then (judging by the tumblr demo) Crowsnest would be an abstraction layer over the internal functions of all of the cameras.
That way, if I wanted to, say, build an app that does mood lighting depending on how many people are in a room, I could use Crowsnest as my middleman, and my app would support fifty different cameras and fifty different switches (hypothetically), instead of just the one of each that I happen to own and test on. I could then swap out devices or distribute my code to others without having to worry so much about hardware integration. That sounds valuable from my personal perspective.
At least, I think that's how it works, from browsing the site and demos. Feel free to correct me, ianawilson. :)
The idea is that Crowsnest organizes what devices can do into capabilities, and anyone can build a plugin for anything using our device integration framework. A plugin maps between what Crowsnest knows as capabilities and the actual calls that need to be made to the device. Of course, we can't build something for every device, so this framework will be open source, with the idea that anyone can use plugins already created for existing devices; hopefully we can also engage the community to contribute and maintain plugins as new devices are created. As soon as we release this, we're going to seed the community with a handful of integrations that we'll maintain. And if there is anything of particular interest to our users, we'd love to support that.
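In spirit, a plugin looks something like this simplified sketch (to be clear, every name and endpoint here is made up for illustration, not our real API): the app speaks only in capabilities, and the plugin translates them into whatever the device actually understands.

```python
# Hypothetical sketch of the capability/plugin idea. Class names,
# endpoints, and the "Acme" device are all invented for illustration.

class SwitchPlugin:
    """Base class: a plugin declares which capabilities it handles."""
    capabilities = {"power_on", "power_off"}

    def invoke(self, capability, **kwargs):
        raise NotImplementedError

class AcmeSwitchPlugin(SwitchPlugin):
    """Maps the generic capabilities onto a made-up vendor HTTP API."""
    def __init__(self, host):
        self.host = host

    def invoke(self, capability, **kwargs):
        # In a real plugin these would be actual HTTP calls to the device;
        # here we just return the request we would have made.
        endpoint = {"power_on": "/relay?state=1",
                    "power_off": "/relay?state=0"}[capability]
        return f"GET http://{self.host}{endpoint}"

# The hub routes a capability call to whichever plugin owns the device,
# so fifty brands of switch all look identical to the app.
hub = {"living_room": AcmeSwitchPlugin("10.0.0.5")}
print(hub["living_room"].invoke("power_on"))
```

The point is that an app never touches `AcmeSwitchPlugin` directly; it only ever asks for `power_on`, and swapping hardware means swapping which plugin is registered.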
We've also been toying with the idea of using some device discovery so that devices can tell Crowsnest what they are capable of without needing to build a formal plugin for it.
Although it's hard to tell from the images presented with the article, the face generation looks like it could be similar to the techniques used in Nishimoto et al., 2011, which used a similar library of learned brain responses, though for movie trailers:
Their particular process is described in the YouTube caption:
The left clip is a segment of a Hollywood movie trailer that the subject viewed while in the magnet. The right clip shows the reconstruction of this segment from brain activity measured using fMRI. The procedure is as follows:
1) Record brain activity while the subject watches several hours of movie trailers.
2) Build dictionaries (i.e., regression models) that translate between the shapes, edges and motion in the movies and measured brain activity. A separate dictionary is constructed for each of several thousand points at which brain activity was measured.
(For experts: The real advance of this study was the construction of a movie-to-brain activity encoding model that accurately predicts brain activity evoked by arbitrary novel movies.)
3) Record brain activity to a new set of movie trailers that will be used to test the quality of the dictionaries and reconstructions.
4) Build a random library of ~18,000,000 seconds (5000 hours) of video downloaded at random from YouTube. (Note these videos have no overlap with the movies that subjects saw in the magnet). Put each of these clips through the dictionaries to generate predictions of brain activity. Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average these clips together. This is the reconstruction.
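To make the search-and-average step concrete, here's a toy sketch in plain Python. Everything in it is invented for illustration: random numbers stand in for real clip features and fMRI voxel responses, a fixed random projection stands in for the fitted regression dictionaries, and (unlike the real study) the "test" second is just reused from the library for simplicity.

```python
# Toy sketch of the library-search step: predict activity for every
# library clip, rank clips by similarity to the observed activity,
# then average the top 100. All data here is random stand-in data.
import random

random.seed(0)
N_CLIPS, N_VOXELS, N_FEATS = 1000, 30, 10

def model(features):
    # Stand-in for the fitted "dictionaries": predicts activity at each
    # voxel from a clip's features via a fixed random projection.
    return [sum(f * w for f, w in zip(features, row)) for row in WEIGHTS]

FEATURES = [[random.gauss(0, 1) for _ in range(N_FEATS)] for _ in range(N_CLIPS)]
WEIGHTS = [[random.gauss(0, 1) for _ in range(N_FEATS)] for _ in range(N_VOXELS)]

# Predicted brain activity for every clip in the library.
predicted = [model(f) for f in FEATURES]

# "Observed" activity for one test second (here, clip 42 reused for
# simplicity; in the study the test clips were outside the library).
observed = model(FEATURES[42])

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Rank library clips by how closely their predicted activity matches,
# then average the features of the top 100: the "reconstruction".
ranked = sorted(range(N_CLIPS), key=lambda i: dist(predicted[i], observed))
top100 = ranked[:100]
reconstruction = [sum(FEATURES[i][k] for i in top100) / 100 for k in range(N_FEATS)]
```

Since the observed activity here literally is clip 42's predicted activity, that clip ranks first, and the averaged result is pulled toward clips that "look" similar under the model, which is the whole trick.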
I'll give the disclaimer that this paper isn't in my field, and I'm merely an observer. However, I'll do my best to explain, since it's a little unclear.
As I understand it, there were three sets of videos:
1) The several hours of "training" video, that they used to learn how the test subject's brain acted based on different stimuli. (The paper (which I've only skimmed) says 7,200 seconds, which is two hours)
2) 18,000,000 individual seconds of YouTube video that the test subject has never seen.
3) The test video, aka the video on the left.
So, the first step was to have the subject watch several hours of video (1), and watch how their brain responded.
Then, using this data, they built a model predicting how the brain would respond to each of eighteen million separate one-second clips sampled randomly from YouTube (2). The subject never actually watched these; the responses were only predictions.
As an interesting test of this model, they decided to show the test subject a new set of videos that was not contained in (1) or (2), the video you see in the link above, (3). They read the brain information from this viewing, then compared each one second clip of brain data to the predicted data in their database from (2).
So, they took the first one second of the brain data, derived from looking at Steve Martin from (3), then sorted the entire database from (2) by how similar the (predicted) brain patterns were to the one generated by looking at Steve Martin.
They then took the top 100 of these 18M one second clips and mixed them together right on top of each other to make the general shape of what the person was seeing. Because this exact image of Steve Martin was nowhere in their database, this is their way to make an approximation of the image (as another example, maybe (2) didn't have any elephant footage, but mix 100 videos of vaguely elephant shaped things together and you can get close). They then did this for every second long clip. This is why the figure jumps around a bit and transforms into different people from seconds 20 to 22. For each of these individual seconds, it is exploring eighteen million second-long video clips, mixing together the top 100 most similar, then showing you that second long clip.
Since the "predicted video" for each second is reconstructed independently, just from the test subject's brain data, the video is not exact, and the figures created don't necessarily 100% resemble each other from second to second. However, the figures are in the correct area of the screen, and definitely seem to have a human quality to them, which means their technique for matching the videos in (2) is much better than random: they are able to generate approximations of novel video by analyzing brain signal alone.
Sorry, that was longer than I expected. :)
Edit: Also, if you see the paper, Figure 4 has a picture of how they reconstructed some of the frames (including the one from 20-22 seconds), by showing you screenshots whence the composite was generated.
I used to do something similar to this (though less sophisticated) every time a new Chrome update came out: changing a single byte in the binary so that I could restore the http:// at the front of URLs.
I considered making a website to publish the proper offset to change for each version, but I got complacent after a while.
I haven't done it in a year or two, so it took me a minute to figure out the basics again.
The short version is that I crawled through the source of Chromium for a while until I found the flag that controls it.
Then, since FormatUrlType was a uint32, and I assumed the storage of constants would be close together, I did a little trial and error searching through the binary in Hex Fiend until I found the value for kFormatUrlOmitAll. Then I would change this value from a 7 to a 5, which would remove the kFormatUrlOmitHTTP flag (or sometimes to a 1, to see if I liked trailing slashes on bare hostnames).
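For anyone who wants to try the same trick without a hex editor, the search-and-overwrite step can be sketched in a few lines. To be clear, this is my own toy version: the values 7 and 5 are from my memory of that old build, offsets will differ in any current binary, and in practice you'd hit several occurrences of the constant and have to trial-and-error through them (hence the `occurrence` parameter).

```python
# Toy sketch: find a little-endian uint32 constant in a binary blob and
# overwrite it. The specific values (7 -> 5) are from memory, unverified.
import struct

def patch_constant(data: bytes, old: int, new: int, occurrence: int = 0) -> bytes:
    """Replace the Nth occurrence of `old` (as a uint32) with `new`."""
    needle = struct.pack("<I", old)  # uint32, little-endian
    idx = -1
    for _ in range(occurrence + 1):
        idx = data.find(needle, idx + 1)
        if idx == -1:
            raise ValueError("constant not found")
    return data[:idx] + struct.pack("<I", new) + data[idx + 4:]

# Tiny demo on a fake "binary" containing the constant 7:
blob = b"\x00\x01" + struct.pack("<I", 7) + b"\xff"
patched = patch_constant(blob, 7, 5)
```

With the real binary you'd read the file with `open(path, "rb")`, patch, and write it back out; the hard part was never the patching, it was guessing which occurrence was the right one.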
Of course, since Chrome autoupdates, I had to do this every few times I restarted the browser, until I just got too lazy. :) I can't seem to find the offset this time, though, so I very well might be missing a step!
Do we know what fraction of active users has over 1000 karma? As someone currently at forty-two karma who comments only rarely, it's a bit scary to know my comments will face moderation before being posted, although it will surely increase the substance/message ratio, which has seemed to be decreasing somewhat.
It's not so much that I care about the karma, as I'd post more if I did, but more that if someone asks a question that not many other users care about, but I happen to have unique insight, I'd hope that my message can get through to them. :)
This is especially troublesome to me with regards to posts that quickly drop off the first page. Will there be enough page views by users with karma > 1000 on posts like that to get any comments approved?
I can't speak for other longtime HN users, but I wrote my own news reader and regularly browse threads that have disappeared off the front page. Anybody with an RSS reader or other similar thingy would do the same.
It's also not that hard of a game to find, even with the destruction: AtariAge rates it a 1 out of 10 ("Common") in rarity, and I know I own at least two copies.
Also, it's interesting to note that HSW (Howard Scott Warshaw, who coded the whole game in 5.5 weeks) has said at least once that he doesn't believe the landfill incident actually occurred.
Quoth HSW: "I had many friends all over Atari; if the company was burying all these carts someone would have told me. And the moment they did, I would have immediately grabbed a photographer and hopped the next flight out and gotten some great portraits of me standing on the pile. How could I possibly not get that picture as a memento?"
I can't believe that post is over ten years old already!
I bought it when the Ames department store was having their final liquidation sale; the only friggin thing left in the entire store the day before closing was a pile of the E.T. game on an otherwise empty shelf, marked down to a dollar or so.
Also, it seems that past a certain length, the two score boxes will shift to being on top of one another instead of side by side, which moves the whole playfield's place in the window. Adds a bit of extra challenge!