Hacker News | chch's comments
chch 22 days ago | link | parent | on: 2048 As A Service

Schemaverse[1] could be considered something similar, I think, down to the "space-based strategy game", although it is controlled by raw SQL queries instead of an API.

[1] https://schemaverse.com/

-----

jordanlev 22 days ago | link

Wow, that's really cool -- thanks!

-----


Although it's hard to tell from the images presented with the article, the face generation looks like it could be similar to the techniques used in Nishimoto et al. (2011), which used a comparable library of learned brain responses, though for movie trailers:

http://www.youtube.com/watch?v=nsjDnYxJ0bo

Their particular process is described in the YouTube caption:

The left clip is a segment of a Hollywood movie trailer that the subject viewed while in the magnet. The right clip shows the reconstruction of this segment from brain activity measured using fMRI. The procedure is as follows:

[1] Record brain activity while the subject watches several hours of movie trailers.

[2] Build dictionaries (i.e., regression models) that translate between the shapes, edges and motion in the movies and measured brain activity. A separate dictionary is constructed for each of several thousand points at which brain activity was measured. (For experts: The real advance of this study was the construction of a movie-to-brain activity encoding model that accurately predicts brain activity evoked by arbitrary novel movies.)

[3] Record brain activity to a new set of movie trailers that will be used to test the quality of the dictionaries and reconstructions.

[4] Build a random library of ~18,000,000 seconds (5000 hours) of video downloaded at random from YouTube. (Note these videos have no overlap with the movies that subjects saw in the magnet). Put each of these clips through the dictionaries to generate predictions of brain activity. Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average these clips together. This is the reconstruction.

With the actual paper here:

http://www.cell.com/current-biology/retrieve/pii/S0960982211...

-----

anigbrowl 27 days ago | link

Forgot about this, thanks for re-posting.

-----

JackFr 27 days ago | link

Thanks for posting that. I count myself as enormously skeptical of TFA research, but that paper appears to be quite good. I may need to re-evaluate my biases.

On the other hand, this is presented in the press release as mind reading, but the reality is more like trying to design something similar to a cochlear implant.

-----

yeukhon 27 days ago | link

Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average these clips together. This is the reconstruction.

Mind explaining to me why the right clip is inconsistent with the left clip?

https://www.youtube.com/watch?v=nsjDnYxJ0bo

Starting at the 20-second mark, I feel like the right clip is out of touch with the left clip. Between the 20th and 22nd seconds I see at least three different individuals rendered in the reconstruction.

From the 26th second to the end of the clip I also see multiple individuals. The faces also look different from one another... When you say you find the closest match, is that an expected result?

-----

chch 27 days ago | link

I'll give the disclaimer that this paper isn't in my field, and I'm merely an observer. However, I'll do my best to explain, since it's a little unclear.

As I understand it, there were three sets of videos:

1) The several hours of "training" video, that they used to learn how the test subject's brain acted based on different stimuli. (The paper (which I've only skimmed) says 7,200 seconds, which is two hours)

2) 18,000,000 individual seconds of YouTube video that the test subject has never seen.

3) The test video, aka the video on the left.

So, the first step was to have the subject watch several hours of video (1), and watch how their brain responded.

Then, using this data, they built a model predicting how they thought the brain would respond to eighteen million separate one-second clips sampled randomly from YouTube (2). The subject never actually watched these clips; the responses were only predictions.

As an interesting test of this model, they decided to show the test subject a new set of videos not contained in (1) or (2): the video you see in the link above, (3). They read the brain activity from this viewing, then compared each one-second segment of brain data to the predicted data in their database from (2).

So, they took the first second of the brain data, derived from looking at Steve Martin in (3), then sorted the entire database from (2) by how similar the (predicted) brain patterns were to the one generated by looking at Steve Martin.

They then took the top 100 of these 18M one-second clips and mixed them together right on top of each other to get the general shape of what the person was seeing. Because this exact image of Steve Martin was nowhere in their database, this is their way of approximating the image (as another example, maybe (2) didn't have any elephant footage, but mix 100 videos of vaguely elephant-shaped things together and you can get close). They then did this for every second-long clip. This is why the figure jumps around a bit and transforms into different people from seconds 20 to 22: for each individual second, the method searches the eighteen million second-long clips, mixes together the top 100 most similar, and shows you that second-long result.

Since each second has its "predicted video" generated independently from just the test subject's brain data, the video is not exact, and the figures created don't necessarily resemble each other 100%. However, the figures are in the correct area of the screen and definitely seem to have a human quality to them, which means their technique for classifying the videos in (2) is much better than random, since they can generate approximations of novel video by analyzing brain signal alone.
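To make the matching-and-averaging step concrete, here's a toy NumPy sketch. All of the array sizes, the random data, and the variable names are made up for illustration; this is not the paper's code, just the shape of the idea: rank a library of *predicted* responses by correlation with the observed response, then average the clips behind the best matches.

```python
import numpy as np

rng = np.random.default_rng(0)

n_clips, n_voxels = 2_000, 50  # stand-ins for 18M clips / measured voxels
library_responses = rng.normal(size=(n_clips, n_voxels))  # predicted activity
library_clips = rng.random(size=(n_clips, 16, 16))        # tiny "video" frames
observed = rng.normal(size=n_voxels)                      # measured activity

# Pearson correlation as a dot product of mean-centered, normalized rows.
z = library_responses - library_responses.mean(axis=1, keepdims=True)
z /= np.linalg.norm(z, axis=1, keepdims=True)
o = observed - observed.mean()
o /= np.linalg.norm(o)
similarity = z @ o

# Take the 100 clips whose predicted activity best matches the observed
# activity, and average them; that average is the "reconstruction".
top100 = np.argsort(similarity)[-100:]
reconstruction = library_clips[top100].mean(axis=0)
```

Repeating that independently for each second of brain data is what produces the smeared, shifting composite in the video.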

Sorry, that was longer than I expected. :)

Edit: Also, if you see the paper, Figure 4 has a picture of how they reconstructed some of the frames (including the one from 20-22 seconds), by showing you screenshots whence the composite was generated.

-----

chch 27 days ago | link

Alternatively, instead of reading all those words I just said, you can watch [1], which is a video explanation of Figure 4 from the paper. :)

https://www.youtube.com/watch?v=KMA23JJ1M1o

-----


I used to do something similar to this (though less sophisticated) every time a new Chrome update came out, changing a single byte in the binary so that I could restore the http:// at the front of URLs.

I considered making a website to publish the proper offset to change for each version, but I got complacent after a while.

-----

wging 31 days ago | link

How did you figure out which byte to change?

-----

chch 31 days ago | link

I haven't done it in a year or two, so it took me a minute to figure out the basics again.

The short version is that I crawled through the source of Chromium for a while until I found the flag that controls it [0].

Then, since FormatUrlType was a uint32 and I assumed the constants would be stored close together, I did a little trial-and-error searching through the binary in Hex Fiend until I found the value for kFormatUrlOmitAll. Then I would change this value from a 7 to a 5, which would remove the kFormatUrlOmitHTTP flag (or sometimes to a 1, to see if I liked trailing slashes on bare hostnames).
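A sketch of the flag arithmetic, in Python for brevity. The individual bit values are my assumption based on Chromium's layout; the 7, 5, and 1 are the actual values from the patch described above:

```python
# Assumed bit layout for the kFormatUrl* flags (the names come from
# Chromium's source; the exact values here are an assumption).
kFormatUrlOmitUsernamePassword = 1 << 0            # 1
kFormatUrlOmitHTTP = 1 << 1                        # 2
kFormatUrlOmitTrailingSlashOnBareHostname = 1 << 2  # 4
kFormatUrlOmitAll = (kFormatUrlOmitUsernamePassword
                     | kFormatUrlOmitHTTP
                     | kFormatUrlOmitTrailingSlashOnBareHostname)  # 7

# Patching the stored constant from 7 to 5 clears only the HTTP bit,
# so http:// is displayed again:
print(kFormatUrlOmitAll & ~kFormatUrlOmitHTTP)  # 5

# Patching to 1 also clears the trailing-slash bit:
print(kFormatUrlOmitAll & ~kFormatUrlOmitHTTP
      & ~kFormatUrlOmitTrailingSlashOnBareHostname)  # 1
```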

Of course, since Chrome autoupdates, I had to do this every few times I restarted the browser, until I just got too lazy. :) I can't seem to find the offset this time, though, so I very well might be missing a step!

[0] https://code.google.com/p/chromium/codesearch#chromium/src/n...

[1] https://code.google.com/p/chromium/codesearch#chromium/src/n...

-----

coldpie 30 days ago | link

I'm so glad Firefox still allows this as an option. I guess it's irrational, but it drives me absolutely nuts to see bare URLs without protocols.

-----


http://faculty.msmary.edu/heinold/bitwise_and.pdf

seems to be a paper on this phenomenon, with

http://faculty.msmary.edu/heinold/maa_pres2009-11-14.pdf

being a similar work in presentation form.

I've never seen it before; I like it!

-----


Do we know what fraction of active users has over 1000 karma? As someone who currently has forty-two karma and comments only rarely, it's a bit scary to know my comments will face moderation before being posted, although it will surely increase the substance/message ratio, which seems to have been decreasing somewhat.

It's not so much that I care about the karma, as I'd post more if I did, but more that if someone asks a question that not many other users care about, but I happen to have unique insight, I'd hope that my message can get through to them. :)

-----

pg 33 days ago | link

I picked that number pretty arbitrarily, but it's just a variable and HN has a repl.

-----

k2enemy 33 days ago | link

It sounds like users with over 1000 karma also need to go through the pending stage. So in that sense you are still on equal footing with them. You just won't be able to vote on other pending posts.

-----

greg5green 33 days ago | link

This is especially troublesome to me with regards to posts that quickly drop off the first page. Will there be enough page views by users with karma > 1000 on posts like that to get any comments approved?

-----

thaumaturgy 33 days ago | link

I can't speak for other longtime HN users, but I wrote my own news reader and regularly browse threads that have disappeared off the front page. Anybody with an RSS reader or other similar thingy would do the same.

-----

gus_massa 33 days ago | link

I found a user with approximately 1000 karma and used http://hn-karma-tracker.herokuapp.com/ to get the number of active users.

There are ~6000 with more than 1000 karma and ~9000 with more than 500 karma.

-----


It's also not that hard of a game to find, even with the destruction: AtariAge [1] rates it a 1 out of 10 ("Common") in rarity, and I know I own at least two copies.

Also, it's interesting to note that HSW (Howard Scott Warshaw, who coded the whole game in 5.5 weeks) has said at least once that he doesn't believe the landfill incident actually occurred[2].

Quoth HSW: "I had many friends all over Atari, if the company was burying all these carts someone would have told me. And the moment they did, I would have immediately grabbed a photographer and hopped the next flight out and gotten some great portraits of me standing on the pile. How could I possibly not get that picture as a memento?"

I can't believe that post is over ten years old already!

[1] http://atariage.com/software_page.html?SoftwareLabelID=157

[2] http://atariage.com/forums/topic/7337-5-million-copies-of-et...

-----

rwhitman 33 days ago | link

I bought it when the Ames department store was having their final liquidation sale; the only friggin' thing left in the entire store the day before closing was a pile of the E.T. game on an otherwise empty shelf, marked down to a dollar or so.

-----


It is a bit odd when the numbers go into scientific notation. :)

http://i.imgur.com/Q2qHoSs.png

Also, it seems that once the score reaches a certain length, the two score boxes shift to being on top of one another instead of side by side[1], which moves the whole playfield within the window. Adds a bit of an extra challenge!

[1] http://i.imgur.com/BW6RV2e.png

-----


As soon as they made having an imo.im login mandatory, I had a sinking feeling it was the beginning of the end, and jumped ship (to the terrible compromise of AIM Express[0]). I had preferred Meebo for my third-party web messaging for a few years (handy in computer labs where you don't own the box!), but the Google acquisition took that away, so I'd switched to imo. Now they're both kaput.

A sad day for multiprotocol web messaging.

[0] http://www.aim.com/products/express/

-----


As someone who uses an email address that no longer exists to log into Facebook, I'm not sure what you're implying about the permanence of email addresses. May I ask for an elaboration?

-----

Pelerin 58 days ago | link

If I may jump in here, I believe lugg was suggesting that email addresses are much longer-lived than other types of username you might use to log into a service.

-----

chch 58 days ago | link

Reasonable! I've just been through my fair share of primary email addresses over the years (be it from my ISP, university, or the webmail provider du jour), so the idea of an email address being forever unchanging was a bit incongruent with my experiences. :)

-----

frou_dh 58 days ago | link

As usual, the solution is to add a layer of indirection: get your own domain name, and then you can swap out backend email services without changing your address.

-----

chch 124 days ago | link | parent | on: Regex Golf

Originally had 568, then saw this and improved. :)

580 pts:

    00(0$|3|6|9|12|15)|[^0]14|.53|^3[^38]|55|43|23|9.7

-----

3JPLW 124 days ago | link

Nice. A smidge better at 582:

    5[54]|2[437]|00($|[369]|1[25])|^8[17]|^3[29]|9.7

-----

jaytaylor 124 days ago | link

One more smidgen, behold 584:

    5[54]|2[437]|00($|[369]|1[25])|^[83][1729]|9.7

-----

grobie 124 days ago | link

586:

    ^[378][12479]|00($|[369]|1[25])|5[45]|2[347]

-----

nwellnhof 123 days ago | link

589:

    ^[378][12479]|00($|[369]|1[25])|55|2[347]
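For anyone wanting to compare candidates themselves, here's a small hypothetical harness. The match/avoid lists below are placeholders, not this puzzle's real lists, and the scoring rule (ten points per correct match, minus ten per false match, minus the pattern length) is an assumption modeled on common regex-golf sites:

```python
import re

# Placeholder lists, purely for illustration.
MATCH = ["0030", "155", "243", "907"]
AVOID = ["111", "888", "642"]

def score(pattern):
    """Assumed scoring: +10 per MATCH hit, -10 per AVOID hit, -1 per char."""
    good = sum(bool(re.search(pattern, s)) for s in MATCH)
    bad = sum(bool(re.search(pattern, s)) for s in AVOID)
    return 10 * (good - bad) - len(pattern)

for candidate in [r"55|2[347]", r"^[378][12479]|00($|[369]|1[25])|55|2[347]"]:
    print(candidate, score(candidate))
```

Against the real lists, shaving characters while keeping every match is what bumps the score from 580 up to 589.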

-----
