Hacker News
Show HN: Creepyface – A JavaScript library to make your face look at the pointer (creepyface.io)
81 points by 4lejandrito on Nov 25, 2019 | hide | past | favorite | 36 comments



I really dislike that you're hiding the data collection behind a link most people wouldn't ever click.

"Use them for research We want to train an AI that can generate a creepyface based on just 1 image. For this to happen we need as many images as possible so please tick this box if you find it interesting."

"Use them as samples in the front page The images will be stored and publicly served as reduced size thumbnails."

These are opt-out instead of opt-in. Hidden behind "By downloading your creepyface you accept these conditions."

Cute cat though.


I think you're right. I tried to be as transparent as possible, but it's true that making it opt-in would be fairest. I just didn't want to bother users.

Although to be honest, I don't think I'll ever use those images more than to show them as samples. I just thought they would be nice to have if I ever decide to dig into machine learning.

Edit: I will make it opt in.


The privacy-friendly default is opt-in. Opt-out is a trap.


That phrasing too. "Find interesting" is so incredibly not the same as "give you rights to my data and likeness".


I've had to simplify the site in order to avoid the HN hug of death.

This is how it would look if I were a competent SW engineer:

https://imgur.com/waVjMVe

Thanks!


I would strongly suggest serving this landing page via HTTP/2 - your 504 requests over HTTP/1.1 are currently taking 3.5 minutes for me to load 2 MB of data.

https://ibb.co/5BWrqbx


I would check why my code does 500 requests in the first place. That's insane unless you're building some kind of video game.


This is all I am doing to serve images:

https://gist.github.com/4lejandrito/6d4f903e0a692b87344e8600...

The resize function only resizes the first time and uses a cached version of the image after that.
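The resize-once-then-cache idea described above can be sketched roughly like this (the names `cachedResize` and the `resize` callback are illustrative, not the actual code in the gist):

```javascript
// Minimal sketch of a resize-once-then-cache helper. The first request
// for a given (src, width) pair pays the resize cost; later requests
// hit the in-memory cache.
const cache = new Map();

function cachedResize(src, width, resize) {
  const key = `${src}@${width}`;
  if (!cache.has(key)) {
    // Only resize on the first request for this image/size combination.
    cache.set(key, resize(src, width));
  }
  return cache.get(key);
}
```

A real server would cache the resized bytes (or a file path) and set caching headers so the browser doesn't re-request at all.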

I checked the server and the CPU is not too high so I think it must be HTTP as parent pointed out.


Yes, I checked and all I see is 22 requests. I don't know where OP got the 504 number. It also loads instantly for me.

Ignore my comments, then.


I misunderstood your original comment. The reason why there were 500 requests is because I was showing a background full of creepyfaces. It made the site look much nicer but it turns out it didn't scale very well.


:) Well, good thing you updated the site then.


500 requests are perfectly reasonable here. It loads multiple pics per example avatar. And there are many examples. The only problem here is HTTP/1.1.


From what I see in my viewport, no, it's not reasonable.

Using 500 requests for this because HTTP/2 exists is like saying it's OK for Electron apps to eat 1 GB of RAM because we have big machines today.

That's also why, in 2019, with quad cores, SSDs and more memory than ever, things don't feel faster than 10 years ago.


Turns out the other heads were removed so it's not doing 500 requests anymore. So where does that leave us? Should we keep arguing over the original situation?


Thanks for the suggestion!

For now I am just disabling the background samples so that only 1 creepyface is loaded per user. Should be ready in a few minutes.


How does one check whether they are using HTTP/2 vs. HTTP/1.1?


There are various guides for enabling it in web servers that support it. It requires HTTPS as a prerequisite.

For testing it in the browser: in Chrome at least, the developer tools Network tab details the protocol of each request/response. I believe Firefox has similar functionality.
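Beyond eyeballing the Network tab, the Resource Timing API exposes the negotiated protocol per response via `nextHopProtocol` ("http/1.1", "h2", ...). A small helper you could paste into the console (the function name is my own):

```javascript
// Count responses per negotiated protocol using Resource Timing
// entries. Run in the browser console on the page you want to check.
function protocolSummary(entries) {
  const counts = {};
  for (const e of entries) {
    const proto = e.nextHopProtocol || 'unknown';
    counts[proto] = (counts[proto] || 0) + 1;
  }
  return counts;
}

// In a browser:
// protocolSummary(performance.getEntriesByType('resource'));
```

If most entries report "h2" (or "h3"), the server is already multiplexing requests over one connection.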


What is this supposed to do? I looked at the pointer and I looked away from the pointer and couldn't tell the difference. Does it need a webcam to work?


A better example in realtime, https://www.ftm.nl/ move your cursor over the eye in the left top corner.


xeyes 2.0

For those who don't know xeyes. Same thing, little X window widget that looked at the mouse pointer.


Xeyes was inspired by the original NeWS "Big Brother" eye.ps script written in PostScript by Jeremy Huxtable, whose eyes were actually in round windows, of course, as eyeballs should be.

https://www.donhopkins.com/home/archive/news-tape/fun/eye/ey...

Keith Packard saw it at SIGGRAPH '88 and made the xeyes knock-off in a rectangular window. It was later extended (with many lines of complex boilerplate code, much more than the NeWS version) to finally make the window round, using the dreadfully byzantine X11 shaped window extension. But even now you can't move them around individually and put them anywhere you want without an annoying window frame.

https://github.com/xyproto/xeyes/blob/master/Eyes.c#L512

One fun thing you could do with naked individual NeWS eyes was vandalizing great works of art or photos of people. In 1990, Pat Lashley at Sun automated that process by writing "MonaEyes" for OpenWindows, which made it seem like Mona Lisa was really interested in whatever your cursor was pointing at.

https://donhopkins.com/home/archive/NeWS/monaeyes

I made a "recursive" version of eyeball windows for The NeWS Toolkit, integrated with the window manager, whose iris was a separate round canvas that glided around, with a menu that split the eyeball in two along the shortest axis, and which supported drag-and-drop, so you could pick up one eye and drop it inside of the iris of another eye, and nest them arbitrarily! (I still haven't seen an X11 window manager that supports that feature of dragging a window inside of another window!)

https://donhopkins.com/home/archive/NeWS/eyes.ps.txt

It's simply a subclass of ClassBaseWindow, so the white part of the eye is actually the window frame, and the iris is actually the client area, so drag-n-dropping eyes into other eyes is simply implemented by overriding the /MoveStopSub method to look under the mouse for other eyeball containers like irises, and reparent the eyeball window into the first one it finds.

    /SplitEye { % ctl => -
      pop
      gsave
        Parent setcanvas
        /bbox self send                   % x y w h
        2 copy ge {
          exch .5 mul exch                % x y w/2 h
          4 copy /reshape self send
          4 -1 roll                       % y w/2 h x 
          2 index add                     % y w/2 h x+w/2
          4 1 roll                        % x+w/2 y w/2 h
        } {
          .5 mul                          % x y w h/2
          4 copy /reshape self send
          dup 4 -1 roll                   % x w h/2 h/2 y
          add                             % x w h/2 y+h/2
          3 1 roll                        % x y+h/2 w h/2 
        } ifelse
        Parent setcanvas
        /makedemo ClassEyeBall send
        { /reshape self send /map self send
        } exch send
      grestore
    } def

    /MoveStopSub { % event => -
      /MoveStopSub super send
      TrackHasMoved? {
        gsave /framebufferof self send setcanvas
          currentcursorlocation
          /Mapped false def
          canvasesunderpoint
          { dup /EyeBallContainer! known
            Selected? { SelectedWindows 2 index known not and } if {
              /active? 1 index send {
                gsave setcanvas
                  /location self send
                  { /Parent currentcanvas def } stopped {
                    pop pop pop pop
                  } {
                    /move self send
                    exit
                  } ifelse
                grestore
              } { pop } ifelse
            } { pop } ifelse
          } forall
          /Mapped true def
        grestore
      } if
    } def


The page shows some faces, and the faces look at the mouse pointer.

For example, faces below the mouse pointer look up, faces above it look down.


It was responding too slowly for me to notice that.


That's what I thought it would be by the abstract, but it seems like it just swaps out one of nine or so images depending on where your cursor is, rather than actually encouraging you to direct your real face toward your cursor.


You are right. Creepyface just swaps the images based on the relative position of the cursor with respect to the image element.

It doesn't require a webcam although you can take your images using the site: https://creepyface.io/create
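The swapping mechanism can be sketched as a pure function: compute the angle from the image's center to the cursor and pick one of eight directional images (the function and image names here are illustrative, not Creepyface's actual API):

```javascript
// Sketch of the image-swap idea: map the cursor's position relative to
// the face's center onto one of 8 directional images, falling back to
// a "serious" default when the cursor is dead center. In screen
// coordinates y grows downward, so a cursor above the face (dy < 0)
// selects the "north" (looking up) image.
function pickImage(faceX, faceY, cursorX, cursorY, images) {
  const dx = cursorX - faceX;
  const dy = cursorY - faceY;
  if (dx === 0 && dy === 0) return images.serious;
  // atan2 returns -PI..PI with 0 pointing right; split into 8 sectors.
  const angle = Math.atan2(dy, dx);
  const sector = Math.round(angle / (Math.PI / 4)) & 7;
  const names = ['east', 'southeast', 'south', 'southwest',
                 'west', 'northwest', 'north', 'northeast'];
  return images[names[sector]];
}
```

Hooked up to a `mousemove` listener, this just sets the `src` of an `<img>`, which is why no webcam is needed.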


Since it made it to the front page, I am afraid the site is having problems serving images and you didn't get the chance to play around with the samples.

If you wait long enough it'll hopefully load.

Sorry for that!


Instant upvote for the maine coon.


She's called Nala, but I am afraid she is a Ragdoll, not a Maine Coon. We do have another one that is a Maine Coon, but it is too shy for pictures.

Thanks for the upvote though!


I would suggest you create a GIF from all the images and scrub it back and forth with cursor movement.

This would save a lot of data, since the photos share almost 95% of their content.


You can't control a GIF that way in the browser. You'd need a video tag.
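With a `<video>` element you can scrub playback by setting `currentTime` from the cursor's position. A sketch of the mapping (the `cursorToTime` name is mine):

```javascript
// Map a cursor x-coordinate over a video element to a playback time,
// clamping to the video's box so moving past the edges pins to the
// first or last frame.
function cursorToTime(cursorX, videoLeft, videoWidth, duration) {
  const x = Math.min(Math.max(cursorX - videoLeft, 0), videoWidth);
  return (x / videoWidth) * duration;
}

// In a browser:
// video.addEventListener('mousemove', (e) => {
//   const r = video.getBoundingClientRect();
//   video.currentTime = cursorToTime(e.clientX, r.left, r.width, video.duration);
// });
```

In practice seeking is only smooth if the video is encoded with frequent keyframes, which eats back some of the size savings.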


I'm curious about the tagline "Ideal for resumes or team sites". Why is this ideal for these use cases, and how would it be implemented, and what is the benefit?


Creepyface exists because I created it for my online CV. I've also seen similar patterns applied in some sites presenting the members of a team.

Do you have any ideas on how to improve the tagline?


I swear Metamask uses the same thing with fox lol.


Why is this on the front page of HN? This is, like, 15 minutes of work...


I thought the same when I started with this project.

It turns out making it an open source library with decent documentation, a landing page, tests, edge cases covered, etc. takes much longer than I expected.

I wish it had been just 15 minutes, but I am happy people liked it; that made it totally worth the effort.


Reminds me of how the heads in the The Sims pie menus look around to follow the cursor as you point at different menu items!

https://www.youtube.com/watch?v=-exdu4ETscs

>The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo: This is a demonstration of the pie menus, architectural editing tools, and Edith visual programming tools that I developed for The Sims with Will Wright at Maxis and Electronic Arts.

https://medium.com/@donhopkins/pie-menus-936fed383ff1#61bc

>The Sims pie menus show the currently selected Sim’s head in the menu center, looking straight at you if nothing’s selected, and turning to look at the selected item, like the Brady Bunch intro. That helps you identify which character the menu will affect, and focus on which item is selected.

>When I implemented this effect, I actually wanted to make the Sim’s heads nod or shake according to how much they wanted to perform the selected item (so she’d look delighted about “Tickle”, or shy about “Kiss”). But that would have been a burden to produce and program for every menu item, and we had to stop thinking of cool stuff to do and just ship the damned game.

Kinda like the "Brady Bunch" intro, with Alice in the middle!

https://www.youtube.com/watch?v=dZhhUdc_Vt0

It was easy to implement the heads looking around with the character animation system, but later I made a version of pie menus for Unity3D that used the webcam to project my live face into the center of the menu:

https://www.youtube.com/watch?v=sMN1LQ7qx9g

>They're very general purpose, and you can configure them for whatever you need, and then you can integrate them with your app, to give interesting feedback. This is a pie menu with eight items for SimCity, for selecting an editing tool. Now you pop a menu up and you see in the center right next to your cursor where you're looking, the menu title and the menu description. And then you see all the labels of the items. [...]

>I don't know if you've noticed, but there is this head in the middle of the menu, that's looking around at the menu items, kind of like in The Sims. Now this is to demonstrate the kind of things you can do by having notifiers that react to the tracking that modify objects in the world to show you what's going to happen before you selected it, or just, you know, just to give you some interesting feedback. Now I'm not actually moving my head around. It's just twisting this 3D object.



