I really dislike that you're hiding the data collection behind a link most people wouldn't ever click.
"Use them for research
We want to train an AI that can generate a creepyface based on just 1 image. For this to happen we need as many images as possible so please tick this box if you find it interesting."
"Use them as samples in the front page
The images will be stored and publicly served as reduced size thumbnails."
These are opt-out instead of opt-in. Hidden behind "By downloading your creepyface you accept these conditions."
I think you're right. I tried to be as transparent as possible, but it's true that making it opt-in would be fairer. I just didn't want to bother users.
Although to be honest, I don't think I'll ever use those images for anything more than showing them as samples. I just thought they would be nice to have if I ever decide to dig into machine learning.
I would strongly suggest serving this landing page via HTTP/2 - your 504 requests over HTTP/1.1 currently take 3.5 minutes for me to load 2MB of data.
I misunderstood your original comment. The reason there were 500 requests is that I was showing a background full of creepyfaces. It made the site look much nicer, but it turned out not to scale very well.
Turns out the other heads were removed, so it's not doing 500 requests anymore. So where does that leave us? Should we keep arguing over the original situation?
There are various guides for enabling it in web servers that support it. It requires HTTPS as a prerequisite.
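If the server happens to be nginx, for example, enabling it is typically a one-directive change on the TLS listener. A minimal sketch (the domain and certificate paths are hypothetical):

    server {
        # HTTP/2 is negotiated during the TLS handshake, hence the HTTPS prerequisite.
        listen 443 ssl http2;
        server_name example.com;
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        # ... the rest of the existing site config stays the same
    }

Apache (mod_http2), Caddy, and most CDNs have equivalent switches.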
For testing it in the browser: in Chrome, at least, the developer tools' Network tab shows the protocol of each request/response. I believe Firefox has similar functionality.
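You can also check from the command line, assuming a curl built with HTTP/2 support; it prints the protocol version the server actually negotiated:

    $ curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com
    2

A "2" means HTTP/2 was negotiated; "1.1" means the server fell back to HTTP/1.1.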
What is this supposed to do? I looked at the pointer and I looked away from the pointer and couldn't tell the difference. Does it need a webcam to work?
Xeyes was inspired by the original NeWS "Big Brother" eye.ps script written in PostScript by Jeremy Huxtable, whose eyes were actually in round windows of course, as eyeballs should be.
Keith Packard saw it at SIGGRAPH '88 and made the xeyes knock-off in a rectangular window. It was later extended (with many lines of complex boilerplate code, much more than the NeWS version) to finally make the window round, using the dreadfully byzantine X11 shaped window extension. But even now you can't move them around individually and put them anywhere you want without an annoying window frame.
One fun thing you could do with naked individual NeWS eyes was vandalizing great works of art or photos of people. In 1990, Pat Lashley at Sun automated that process by writing "MonaEyes" for OpenWindows, which made it seem like Mona Lisa was really interested in whatever your cursor was pointing at.
I made a "recursive" version of eyeball windows for The NeWS Toolkit, integrated with the window manager, whose iris was a separate round canvas that glided around, with a menu that split the eyeball in two along the shortest axis, and which supported drag-and-drop, so you could pick up one eye and drop it inside the iris of another eye, and nest them arbitrarily! (I still haven't seen an X11 window manager that supports that feature of dragging a window inside another window!)
It's simply a subclass of ClassBaseWindow, so the white part of the eye is actually the window frame, and the iris is actually the client area, so drag-n-dropping eyes into other eyes is simply implemented by overriding the /MoveStopSub method to look under the mouse for other eyeball containers like irises, and reparent the eyeball window into the first one it finds.
    /SplitEye { % ctl => -
        pop
        gsave
        Parent setcanvas
        /bbox self send % x y w h
        2 copy ge {
            exch .5 mul exch % x y w/2 h
            4 copy /reshape self send
            4 -1 roll % y w/2 h x
            2 index add % y w/2 h x+w/2
            4 1 roll % x+w/2 y w/2 h
        } {
            .5 mul % x y w h/2
            4 copy /reshape self send
            dup 4 -1 roll % x w h/2 h/2 y
            add % x w h/2 y+h/2
            3 1 roll % x y+h/2 w h/2
        } ifelse
        Parent setcanvas
        /makedemo ClassEyeBall send
        { /reshape self send /map self send
        } exch send
        grestore
    } def

    /MoveStopSub { % event => -
        /MoveStopSub super send
        TrackHasMoved? {
            gsave /framebufferof self send setcanvas
            currentcursorlocation
            /Mapped false def
            canvasesunderpoint
            { dup /EyeBallContainer! known
                Selected? { SelectedWindows 2 index known not and } if {
                    /active? 1 index send {
                        gsave setcanvas
                        /location self send
                        { /Parent currentcanvas def } stopped {
                            pop pop pop pop
                        } {
                            /move self send
                            exit
                        } ifelse
                        grestore
                    } { pop } ifelse
                } { pop } ifelse
            } forall
            /Mapped true def
            grestore
        } if
    } def
That's what I thought it would be from the abstract, but it seems like it just swaps out one of nine or so images depending on where your cursor is, rather than actually encouraging you to direct your real face toward your cursor.
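And indeed no webcam is needed for that. Here's a minimal sketch of that image-swapping technique in TypeScript (my own illustration with hypothetical file names, not creepyface's actual code):

    // Pick one of eight pre-rendered directional photos based on the angle
    // between the image center and the pointer; a ninth neutral image could
    // cover the idle case.
    const directions = ['e', 'ne', 'n', 'nw', 'w', 'sw', 's', 'se'];

    function pickImage(img: HTMLImageElement, px: number, py: number): string {
      const r = img.getBoundingClientRect();
      const cx = r.left + r.width / 2;
      const cy = r.top + r.height / 2;
      // Screen y grows downward, so flip it to get a conventional angle.
      const angle = Math.atan2(cy - py, px - cx);
      // Snap to the nearest of eight 45-degree sectors (0 = east, 2 = north, ...).
      const sector = ((Math.round(angle / (Math.PI / 4)) % 8) + 8) % 8;
      return `face-${directions[sector]}.jpg`; // hypothetical naming scheme
    }

    document.addEventListener('mousemove', (e) => {
      const img = document.querySelector<HTMLImageElement>('#face');
      if (img) img.src = pickImage(img, e.clientX, e.clientY);
    });

Preloading all nine images up front would avoid a visible flicker the first time each direction is shown.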
Since it made it to the front page, I'm afraid the site is having problems serving images and you didn't get the chance to play around with the samples.
I'm curious about the tagline "Ideal for resumes or team sites". Why is this ideal for those use cases, how would it be implemented, and what is the benefit?
I thought the same when I started with this project.
It turns out making it an open source library with decent documentation, a landing page, tests, edge cases covered, etc. takes much longer than I expected.
I wish it had been just 15 minutes, but I'm happy people liked it; that made the work totally worth the effort.
>The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo: This is a demonstration of the pie menus, architectural editing tools, and Edith visual programming tools that I developed for The Sims with Will Wright at Maxis and Electronic Arts.
>The Sims pie menus show the currently selected Sim’s head in the menu center, looking straight at you if nothing’s selected, and turning to look at the selected item, like the Brady Bunch intro. That helps you identify which character the menu will affect, and focus on which item is selected.
>When I implemented this effect, I actually wanted to make the Sims' heads nod or shake according to how much they wanted to perform the selected item (so she’d look delighted about “Tickle”, or shy about “Kiss”). But that would have been a burden to produce and program for every menu item, and we had to stop thinking of cool stuff to do and just ship the damned game.
Kinda like the "Brady Bunch" intro, with Alice in the middle!
It was easy to implement the heads looking around with the character animation system, but later I made a version of pie menus for Unity3D that used the webcam to project my live face into the center of the menu:
>They're very general purpose, and you can configure them for whatever you need, and then you can integrate them with your app, to give interesting feedback. This is a pie menu with eight items for SimCity, for selecting an editing tool. Now you pop a menu up and you see in the center right next to your cursor where you're looking, the menu title and the menu description. And then you see all the labels of the items. [...]
>I don't know if you've noticed, but there is this head in the middle of the menu, that's looking around at the menu items, kind of like in The Sims. Now this is to demonstrate the kind of things you can do by having notifiers that react to the tracking that modify objects in the world to show you what's going to happen before you selected it, or just, you know, just to give you some interesting feedback. Now I'm not actually moving my head around. It's just twisting this 3D object.
"Use them for research We want to train an AI that can generate a creepyface based on just 1 image. For this to happen we need as many images as possible so please tick this box if you find it interesting."
"Use them as samples in the front page The images will be stored and publicly served as reduced size thumbnails."
These are opt-out instead of opt-in. Hidden behind "By downloading your creepyface you accept these conditions."
Cute cat though.