
See [1] for another, older version of this concept, which also supports another Daft Punk song, Technologic, although this new one is definitely novel for showing the keyboard layout and not being in Flash. I do like the Flash version's use of shift as a meta key to shift registers, though!

[1] http://www.najle.com/idaft/

Edit: Before anyone asks, no, I didn't make this one. I just remember seeing it a long time ago, so it's interesting that someone independently (?) made something very similar!


I like how my old blog, ilictronix, is still there in the links section :) I remember when I first emailed him asking if we could link swap - that was pretty big for our traffic. I haven't contributed in about 3 years, but a couple people still run it. Good times - it's where I learned web development.


I have no specialty in signed language linguistics, but I would say that perhaps Location/Handshape/Movement could be considered similar to phonemes, or at least similar to subconcepts like 'place of articulation'.

Similar to spoken languages, the individual sounds/shapes may or may not mean something on their own, but they tend to be regular enough to be able to be described in a writing system: http://en.wikipedia.org/wiki/Stokoe_notation (note that Stokoe notation is described as a 'phonemic script' here).


Perhaps this is just rephrasing ianawilson's reply, but it seems that if Crowsnest eventually supports fifty different cameras (or switches or thermostats or what have you), then (via the tumblr demo):

    import base64

    # `request` is the incoming HTTP request object (e.g. Flask's);
    # the payload shape comes from the tumblr demo.
    crowsnest_data = request.json
    image = base64.b64decode(crowsnest_data['files']['image']['data'])
would be an abstraction layer over the internal functions of all of the cameras.

That way, if I wanted to, say, build an app that does mood lighting depending on how many people are in a room, I could use Crowsnest as my middleman, and then my app would support fifty different cameras and fifty different switches (hypothetically), instead of the one of each that I happen to own and test on. That way, I could swap out devices or distribute my code to others, without having to worry so much about hardware integration. That sounds valuable from my personal perspective.

At least, I think that's how it works, from browsing the site and demos. Feel free to correct me, ianawilson. :)
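To make the mood-lighting idea concrete, here's a minimal sketch. Everything in it is hypothetical: the `client` object, its `latest_event`/`set_lights` methods, and the `person_count` field are invented stand-ins, since Crowsnest hasn't published this API.

```python
# Hypothetical sketch only: `client`, `latest_event`, `set_lights`, and the
# "person_count" field are invented stand-ins, not Crowsnest's real API.

def brightness_for(person_count):
    """Map a head count to a lamp brightness percentage (capped at 100)."""
    return min(100, 20 + 15 * person_count)

def run_mood_lighting(client, room_id):
    # The app only ever talks to the middleman, so it would work with any
    # of the (hypothetical) fifty cameras and fifty switches it integrates.
    count = client.latest_event(room_id).get("person_count", 0)
    client.set_lights(room_id, brightness=brightness_for(count))
```

The point is that swapping a camera or switch brand changes nothing above this layer of abstraction.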


That way, I could swap out devices or distribute my code to others, without having to worry so much about hardware integration. That sounds valuable from my personal perspective.

It is valuable, but has the downside of coupling to a closed-source third-party cloud.

Perhaps a better goal would be to encourage a public collection of CoAP- or OSC-based interfaces for these various end-point devices.

Crowsnest is encouraging people to write and submit such end-point code; better that it be workable with OSS infrastructure.


How would that work? Would there be a set of standard drivers that Crows Nest interacts with, that you need to have for your devices?

It would be nice if this turned out to be the "VLC Player" of home hardware, where you could fire it up and expect it to work with most devices.

That would take a ton of work on Crows Nest's part though.


The idea is that Crowsnest organizes what devices can do into capabilities, and anyone can build a plugin for anything using our device integration framework. A plugin maps between what Crowsnest knows as capabilities and the actual calls that need to be made to the device.

Of course, we can't build something for every device, so this framework will be open source, with the idea that anyone can use plugins already created for existing devices, and hopefully we can engage the community to contribute and maintain plugins as new devices are created. As soon as we release this, we're going to seed the community with a handful of integrations that we'll maintain. And if there is anything of particular interest to our users, we'd love to support that.
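Roughly, a plugin might look something like this sketch. The framework isn't released yet, so every class and method name below is a placeholder, not the real API:

```python
# Rough sketch with placeholder names -- the real framework isn't out yet.

class CapabilityPlugin:
    """Base class: maps abstract capabilities onto one device's own API."""
    capabilities = ()

    def invoke(self, capability, **kwargs):
        raise NotImplementedError

class ExampleCamPlugin(CapabilityPlugin):
    capabilities = ("snapshot", "motion_events")

    def __init__(self, device):
        self.device = device  # vendor SDK object, whatever shape it has

    def invoke(self, capability, **kwargs):
        # The mapping layer: one branch per capability this device supports.
        if capability == "snapshot":
            return self.device.take_picture(quality=kwargs.get("quality", "high"))
        if capability == "motion_events":
            return self.device.poll_motion()
        raise ValueError("unsupported capability: %s" % capability)
```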

We've also been toying with the idea of using some device discovery so that devices can tell Crowsnest what they are capable of without needing to build a formal plugin for it.


Yup, that's exactly it!


Schemaverse[1] could be considered something similar, I think, down to the "space-based strategy game", although it is controlled by raw SQL queries instead of an API.

[1] https://schemaverse.com/


Wow, that's really cool -- thanks!


Although it's hard to tell from the images presented with the article, the face generation looks like it could be similar to the techniques used in Nishimoto et al., 2011, which used a similar library of learned brain responses, though for movie trailers:


Their particular process is described in the YouTube caption:

The left clip is a segment of a Hollywood movie trailer that the subject viewed while in the magnet. The right clip shows the reconstruction of this segment from brain activity measured using fMRI. The procedure is as follows:

[1] Record brain activity while the subject watches several hours of movie trailers.

[2] Build dictionaries (i.e., regression models) that translate between the shapes, edges and motion in the movies and measured brain activity. A separate dictionary is constructed for each of several thousand points at which brain activity was measured. (For experts: The real advance of this study was the construction of a movie-to-brain activity encoding model that accurately predicts brain activity evoked by arbitrary novel movies.)

[3] Record brain activity to a new set of movie trailers that will be used to test the quality of the dictionaries and reconstructions.

[4] Build a random library of ~18,000,000 seconds (5000 hours) of video downloaded at random from YouTube. (Note these videos have no overlap with the movies that subjects saw in the magnet). Put each of these clips through the dictionaries to generate predictions of brain activity. Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average these clips together. This is the reconstruction.

With the actual paper here:



Forgot about this, thanks for re-posting.


Thanks for posting that. I count myself as enormously skeptical of TFA research, but that paper appears to be quite good. I may need to re-evaluate my biases.

On the other hand, this is presented in the press release as mind reading, but the reality is more like trying to design something similar to a cochlear implant.


Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average these clips together. This is the reconstruction.

Mind explaining to me why the right clip is inconsistent with the left clip?


Starting at the 20-second mark, I feel like the right clip is out of touch with the left clip. Between the 20th and 22nd seconds I see at least three individuals rendered in the reconstruction.

From the 26th second to the end of the clip I also see multiple individuals. The faces also look different from one another... When you say it finds the closest match, is that an expected result?


I'll give the disclaimer that this paper isn't in my field, and I'm merely an observer. However, I'll do my best to explain, since it's a little unclear.

Based on my perspective, there were three sets of videos:

1) The several hours of "training" video, that they used to learn how the test subject's brain acted based on different stimuli. (The paper, which I've only skimmed, says 7,200 seconds, which is two hours.)

2) 18,000,000 individual seconds of YouTube video that the test subject has never seen.

3) The test video, aka the video on the left.

So, the first step was to have the subject watch several hours of video (1), and watch how their brain responded.

Then, using this data, they built a model that predicted how they thought the brain would respond to each of eighteen million separate one-second clips sampled randomly from YouTube (2). The subject never actually watched these clips; the responses were only predictions.

As an interesting test of this model, they decided to show the test subject a new set of videos that was not contained in (1) or (2), the video you see in the link above, (3). They read the brain information from this viewing, then compared each one second clip of brain data to the predicted data in their database from (2).

So, they took the first one second of the brain data, derived from looking at Steve Martin in (3), then sorted the entire database from (2) by how similar the (predicted) brain patterns were to the one generated by looking at Steve Martin.

They then took the top 100 of these 18M one second clips and mixed them together right on top of each other to make the general shape of what the person was seeing. Because this exact image of Steve Martin was nowhere in their database, this is their way to make an approximation of the image (as another example, maybe (2) didn't have any elephant footage, but mix 100 videos of vaguely elephant shaped things together and you can get close). They then did this for every second long clip. This is why the figure jumps around a bit and transforms into different people from seconds 20 to 22. For each of these individual seconds, it is exploring eighteen million second-long video clips, mixing together the top 100 most similar, then showing you that second long clip.

Since each of these seconds has its "predicted video" predicted independently just from the test subject's brain data, the video is not exact, and the figures created don't necessarily 100% resemble each other. However, the figures are in the correct area of the screen, and definitely seem to have a human quality to them, which means that their technique for classifying the videos in (2) is much better than random, since they are able to generate approximations of novel video by only analyzing brain signal.
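In rough numpy terms, the per-second matching step could be sketched like this. It's a toy version only: tiny random arrays stand in for the real ~18,000,000 clips and thousands of voxels, and correlation is used as the similarity measure, which may differ from the paper's exact metric.

```python
import numpy as np

# Toy version of the "sort by similarity, average the top 100" step.
rng = np.random.default_rng(0)
n_clips, n_voxels = 1000, 50
top_k = 100

predicted = rng.standard_normal((n_clips, n_voxels))  # model's predicted fMRI per clip
observed = rng.standard_normal(n_voxels)              # measured fMRI for this one second
frames = rng.random((n_clips, 8, 8))                  # stand-in for each clip's pixels

# Correlate each clip's predicted activity with the observed activity.
pred_z = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
obs_z = (observed - observed.mean()) / observed.std()
similarity = pred_z @ obs_z / n_voxels

# Average the frames of the top_k best-matching clips: the "reconstruction".
best = np.argsort(similarity)[-top_k:]
reconstruction = frames[best].mean(axis=0)
```

Running this independently for every second is what makes the reconstructed figure morph between people from one second to the next.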

Sorry, that was longer than I expected. :)

Edit: Also, if you see the paper, Figure 4 has a picture of how they reconstructed some of the frames (including the one from 20-22 seconds), by showing you screenshots whence the composite was generated.


Alternatively, instead of reading all those words I just said, you can watch [1], which is a video explanation of Figure 4 from the paper. :)



I used to do something similar to this (though less sophisticated) every time a new Chrome update came out, changing a single byte in the binary so that I could restore the http:// at the front of URLs.

I considered making a website to publish the proper offset to change for each version, but I got complacent after a while.


How did you figure out which byte to change?


I haven't done it in a year or two, so it took me a minute to figure out the basics again.

The short version is that I crawled through the source of Chromium for a while until I found the flag that controls it [0].

Then, since FormatUrlType was a uint32, and I assumed the storage of constants would be close together, I did a little trial and error searching through the binary in Hex Fiend until I found the value for kFormatUrlOmitAll. Then I would change this value from a 7 to a 5, which would remove the kFormatUrlOmitHTTP flag (or sometimes to a 1, to see if I liked trailing slashes on bare hostnames).
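The bit arithmetic behind those numbers, assuming the flag values the description implies (I haven't re-checked current Chromium source, and the constant names here are abbreviated):

```python
# Assumed flag values -- abbreviated names, not verified against Chromium.
kFormatUrlOmitUsernamePassword = 1 << 0  # 1
kFormatUrlOmitHTTP             = 1 << 1  # 2
kFormatUrlOmitTrailingSlash    = 1 << 2  # 4 (on bare hostnames)

kFormatUrlOmitAll = (kFormatUrlOmitUsernamePassword
                     | kFormatUrlOmitHTTP
                     | kFormatUrlOmitTrailingSlash)  # 7

# Clearing the HTTP bit turns 7 into 5, so "http://" is shown again.
patched = kFormatUrlOmitAll & ~kFormatUrlOmitHTTP
# Writing 1 instead also clears the trailing-slash bit.
```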

Of course, since Chrome autoupdates, I had to do this every few times I restarted the browser, until I just got too lazy. :) I can't seem to find the offset this time, though, so I very well might be missing a step!

[0] https://code.google.com/p/chromium/codesearch#chromium/src/n...

[1] https://code.google.com/p/chromium/codesearch#chromium/src/n...


I'm so glad Firefox still allows this as an option. I guess it's irrational, but it drives me absolutely nuts to see bare URLs without protocols.



seems to be a paper on this phenomenon, with


being a similar work in presentation form.

I've never seen it before; I like it!


Do we know what fraction of active users has over 1000 karma? As someone with forty-two karma who only comments rarely, it's a bit scary to know my comments will face moderation before being posted, although it will surely increase the substance/message ratio, which seems to have been decreasing lately.

It's not so much that I care about the karma, as I'd post more if I did, but more that if someone asks a question that not many other users care about, but I happen to have unique insight, I'd hope that my message can get through to them. :)


It sounds like users with over 1000 karma also need to go through the pending stage. So in that sense you are still on equal footing with them. You just won't be able to vote on other pending posts.


I picked that number pretty arbitrarily, but it's just a variable and HN has a repl.


This is especially troublesome to me with regards to posts that quickly drop off the first page. Will there be enough page views by users with karma > 1000 on posts like that to get any comments approved?


I can't speak for other longtime HN users, but I wrote my own news reader and regularly browse threads that have disappeared off the front page. Anybody with an RSS reader or other similar thingy would do the same.


I found a user with approximately 1000 karma and used http://hn-karma-tracker.herokuapp.com/ to get the number of active users.

There are ~6000 with more than 1000 karma and ~9000 with more than 500 karma.


It's also not that hard of a game to find, even with the destruction: AtariAge [1] rates it a 1 out of 10 ("Common") in rarity, and I know I own at least two copies.

Also, it's interesting to note that HSW (Howard Scott Warshaw, who coded the whole game in 5.5 weeks) has said at least once that he doesn't believe the landfill incident actually occurred[2].

Quoth HSW: "I had many friends all over Atari, if the company was burying all these carts someone would have told me. And the moment they did, I would have immediately grabbed a photographer and hopped the next flight out and gotten some great portraits of me standing on the pile. How could I possibly not get that picture as a memento?"

I can't believe that post is over ten years old already!

[1] http://atariage.com/software_page.html?SoftwareLabelID=157

[2] http://atariage.com/forums/topic/7337-5-million-copies-of-et...


I bought it when the Ames department store was having their final liquidation sale; the only friggin' thing left in the entire store the day before closing was a pile of E.T. cartridges on an otherwise empty shelf, marked down to a dollar or so.


It is a bit odd when the numbers go into scientific notation. :)


Also, it seems that once the scores get long enough, the two score boxes will shift to being stacked on top of one another instead of side by side[1], which moves the whole playfield's place in the window. Adds a bit of extra challenge!

[1] http://i.imgur.com/BW6RV2e.png


