Although it's hard to tell from the images presented with the article, the face generation looks like it could be similar to the technique used in Nishimoto et al., 2011, which relied on a comparable library of learned brain responses, though for movie trailers:
Their particular process is described in the YouTube caption:
The left clip is a segment of a Hollywood movie trailer that the subject viewed while in the magnet. The right clip shows the reconstruction of this segment from brain activity measured using fMRI. The procedure is as follows:
1) Record brain activity while the subject watches several hours of movie trailers.
2) Build dictionaries (i.e., regression models) that translate between the shapes, edges, and motion in the movies and the measured brain activity. A separate dictionary is constructed for each of several thousand points at which brain activity was measured.
(For experts: The real advance of this study was the construction of a movie-to-brain-activity encoding model that accurately predicts brain activity evoked by arbitrary novel movies.)
3) Record brain activity to a new set of movie trailers that will be used to test the quality of the dictionaries and reconstructions.
4) Build a random library of ~18,000,000 seconds (5,000 hours) of video downloaded at random from YouTube. (Note: these clips have no overlap with the movies that the subjects saw in the magnet.) Put each of these clips through the dictionaries to generate predictions of brain activity. Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average these clips together. This is the reconstruction.
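To make step 2 concrete, here's a toy sketch of what those "dictionaries" amount to in code. The sizes, features, and data are all random stand-ins I made up for illustration; the real study used motion-energy features and thousands of voxels, but the shape of the computation is a regularized regression from clip features to measured activity:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_feat, n_voxels = 400, 30, 100   # toy sizes, not the study's

# Features (edges/motion energy; random stand-ins here) of the training
# movies, and the brain activity they evoked at each measured point.
X = rng.normal(size=(n_train, n_feat))
W_true = rng.normal(size=(n_feat, n_voxels))
Y = X @ W_true + 0.3 * rng.normal(size=(n_train, n_voxels))

# One regularized linear "dictionary" per voxel; with shared features this
# collapses into a single ridge regression solved for all voxels at once.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

# The fitted model can now predict activity for clips nobody ever watched:
X_new = rng.normal(size=(5, n_feat))
Y_pred = X_new @ W        # predicted activity, shape (5, n_voxels)
```

The key property is that last step: once the regression is fit, you can run arbitrary novel clips through it and get predicted brain activity, which is what makes the library search in step 4 possible.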
I'll give the disclaimer that this paper isn't in my field, and I'm merely an observer. However, I'll do my best to explain, since it's a little unclear.
As I understand it, there were three sets of videos:
1) The several hours of "training" video that they used to learn how the test subject's brain responded to different stimuli. (The paper, which I've only skimmed, says 7,200 seconds, which is two hours.)
2) 18,000,000 individual seconds of YouTube video that the test subject has never seen.
3) The test video, aka the video on the left.
So, the first step was to have the subject watch the training video (1) while they recorded how their brain responded.
Then, using this data, they built a model predicting how the brain would respond to each of eighteen million separate one-second clips sampled randomly from YouTube (2). The subject never actually watched these clips; the responses were only predictions.
As an interesting test of this model, they showed the subject a new video contained in neither (1) nor (2): the one you see in the link above (3). They recorded the brain activity from this viewing, then compared each one-second window of brain data to the predicted data in their database from (2).
So, they took the first second of the brain data, recorded while the subject looked at Steve Martin in (3), and sorted the entire database from (2) by how similar each clip's predicted brain pattern was to the one actually generated by looking at Steve Martin.
They then took the top 100 of these 18M one-second clips and mixed them together right on top of each other to form the general shape of what the person was seeing. Because this exact image of Steve Martin was nowhere in their database, this was their way of approximating the image (as another example: maybe (2) had no elephant footage, but mix 100 videos of vaguely elephant-shaped things together and you can get close). They then repeated this for every one-second clip, which is why the figure jumps around a bit and morphs into different people from seconds 20 to 22. For each individual second, they searched eighteen million one-second clips, averaged the 100 most similar, and displayed the result.
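The per-second matching loop might look something like this in toy form. Again, everything here (sizes, random "predicted activity", fake 8x8 frames) is a made-up stand-in, not the study's actual data or code; the point is the rank-by-correlation-then-average structure:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_library, seconds = 200, 5000, 3   # toy sizes

# Predicted brain activity for every library clip (output of the learned
# model), plus random pixel data standing in for the clips themselves.
predicted = rng.normal(size=(n_library, n_voxels))
clip_frames = rng.random(size=(n_library, 8, 8))

# Observed activity: one pattern per second of the test video.
observed = rng.normal(size=(seconds, n_voxels))

# Z-score each library clip's predicted pattern once, up front.
P = (predicted - predicted.mean(axis=1, keepdims=True)) \
    / predicted.std(axis=1, keepdims=True)

reconstruction = []
for t in range(seconds):
    o = observed[t]
    o = (o - o.mean()) / o.std()
    scores = P @ o / n_voxels            # Pearson correlation per clip
    top100 = np.argsort(scores)[-100:]   # 100 best-matching clips
    # Average those clips' frames to get this second's reconstruction.
    # Each second is handled independently, which is why the output
    # jumps around between seconds.
    reconstruction.append(clip_frames[top100].mean(axis=0))
```

Since each second is reconstructed independently, there's nothing enforcing continuity from one second to the next, which matches the jumping you see in the video.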
Since each second's reconstruction is generated independently from just the test subject's brain data, the video is not exact, and the figures created don't necessarily resemble each other from second to second. However, the figures are in the correct area of the screen and definitely seem to have a human quality to them, which means their technique for matching against the videos in (2) is much better than random, since they can generate approximations of novel video by analyzing brain signal alone.
Sorry, that was longer than I expected. :)
Edit: Also, if you look at the paper, Figure 4 shows how they reconstructed some of the frames (including the one from 20-22 seconds), with screenshots of the clips from which each composite was generated.
I used to do something similar to this (though less sophisticated) every time a new Chrome update came out, changing a single byte in the binary so that I could restore the http:// at the front of URLs.
I considered making a website to publish the proper offset to change for each version, but I got complacent after a while.
I haven't done it in a year or two, so it took me a minute to figure out the basics again.
The short version is that I crawled through the Chromium source for a while until I found the flag that controls it.
Then, since FormatUrlType was a uint32 and I assumed the constants would be stored close together, I did a little trial-and-error searching through the binary in Hex Fiend until I found the value for kFormatUrlOmitAll. Then I would change this value from a 7 to a 5, which removed the kFormatUrlOmitHTTP flag (or sometimes to a 1, to see if I liked trailing slashes on bare hostnames).
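For anyone curious, the byte flip itself is trivial once you know the offset; here's a rough Python equivalent of what I did by hand in Hex Fiend. The offset and values below are made up for illustration — the real offset changed with every Chrome build, which was the whole annoyance:

```python
from pathlib import Path

def patch_flag_byte(path, offset, expected, replacement):
    """Replace the byte at `offset`, but only if it currently holds
    `expected` -- a sanity check against patching the wrong build."""
    data = bytearray(Path(path).read_bytes())
    if data[offset] != expected:
        raise ValueError("byte at offset doesn't match; wrong build or offset?")
    data[offset] = replacement
    Path(path).write_bytes(bytes(data))

# e.g. clear the kFormatUrlOmitHTTP bit (0x2) from a flags value of 7 -> 5
# (hypothetical offset, not a real Chrome one):
# patch_flag_byte("Chromium", 0x123456, 0x07, 0x05)
```

The `expected` check matters: after an auto-update the constant moves, and blindly writing a byte at a stale offset would corrupt some unrelated part of the binary.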
Of course, since Chrome autoupdates, I had to do this every few times I restarted the browser, until I just got too lazy. :) I can't seem to find the offset this time, though, so I very well might be missing a step!
Do we know what fraction of active users has over 1000 karma? As someone who currently has forty-two karma and comments only rarely, it's a bit scary to know my comments will need approval to be posted, although it will surely increase the substance-to-noise ratio, which has seemed to be declining some.
It's not so much that I care about the karma, as I'd post more if I did, but more that if someone asks a question that not many other users care about, but I happen to have unique insight, I'd hope that my message can get through to them. :)
This is especially troublesome to me with regards to posts that quickly drop off the first page. Will there be enough page views by users with karma > 1000 on posts like that to get any comments approved?
I can't speak for other longtime HN users, but I wrote my own news reader and regularly browse threads that have disappeared off the front page. Anybody with an RSS reader or other similar thingy would do the same.
It's also not that hard of a game to find, even with the destruction: AtariAge rates it a 1 out of 10 ("Common") in rarity, and I know I own at least two copies.
Also, it's interesting to note that HSW (Howard Scott Warshaw, who coded the whole game in 5.5 weeks) has said at least once that he doesn't believe the landfill incident actually occurred.
Quoth HSW: "I had many friends all over Atari; if the company was burying all these carts, someone would have told me. And the moment they did, I would have immediately grabbed a photographer, hopped the next flight out, and gotten some great portraits of me standing on the pile. How could I possibly not get that picture as a memento?"
I can't believe that post is over ten years old already!
I bought it when the Ames department store was having their final liquidation sale; the only friggin thing left in the entire store the day before closing was a pile of the E.T. game on an otherwise empty shelf, marked down to a dollar or so.
Also, it seems that at a certain window size, the two score boxes shift to being stacked on top of one another instead of side by side, which moves the whole playfield's place in the window. Adds a bit of extra challenge!
As soon as they made having an imo.im login mandatory, I had a sinking feeling it was the beginning of the end, and jumped ship (to the terrible compromise of AIM Express).
I had preferred Meebo for my third-party web messaging for a few years (handy in computer labs where you don't own the box!), but the Google acquisition took that away, so I'd switched to imo. Now they're both kaput.
Reasonable! I've just been through my fair share of primary email addresses over the years (be it from my ISP, university, or the webmail provider du jour), so the idea of an email address being forever unchanging was a bit incongruous with my experience. :)