Very cool, but very creepy, open source project (notcot.org)
456 points by monochromatic on Sept 21, 2011 | 75 comments

Made with:

1. OpenCV 2.0 - C++, C, Python interfaces; BSD license - http://opencv.willowgarage.com/wiki/

2. FaceTracker - C/C++ API; "research purposes only", to download email Jason.saragih@csiro.au - http://web.mac.com/jsaragih/FaceTracker/FaceTracker.html

3. Method Art Image Clone - real-time image-cloning library (depends on gts, glew, glib); MIT license - http://methodart.blogspot.com/2011/07/fast-image-cloning-lib...

4. openFrameworks - C++ toolkit; MIT license - https://github.com/openframeworks/openFrameworks

5. FaceOSC (ofxFaceTracker) - active appearance model addon for openFrameworks; "open source" - https://github.com/kylemcdonald/ofxFaceTracker

So it might be open-source, but depends on the non-open-source (and not even publicly available) project FaceTracker?

Kyle McDonald discusses this:

"I imagine FaceOSC being used to prototype ideas surrounding face-based interaction. I created it because Jason Saragih, the researcher behind FaceTracker, uses an open source non-commercial license for his code. He asks that anyone who wants to use the code email him directly, as a way to keep track of the usage. This is great, but I know that one of the fastest ways to get cool stuff happening is to make new tools and research accessible to a wide audience. So I asked him if it would be ok to make a standalone app for people to prototype their ideas — even if they don't have access to the code. Everyone already 'speaks' OSC so I thought this would be the easiest way to get the technology out there. Eventually, if people need to integrate it into a single application, they can contact Jason directly and use my ofxFaceTracker addon to get started: https://github.com/kylemcdonald/ofxFaceTracker

And if they need to go the commercial route, there’s FaceAPI http://www.seeingmachines.com/product/faceapi/ "

Source: http://electronica.fm/news/110714/music-your-face-artist-kyl...
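Since FaceOSC just streams its tracking data over OSC, any environment that can decode OSC packets can prototype against it. Below is a minimal, pure-stdlib Python sketch of the OSC wire format itself (null-padded address string, type-tag string, big-endian float32 arguments). The address "/pose/position" is used purely as an illustration of what a head-pose message might look like, not as a statement of FaceOSC's actual message names:

```python
import struct

def osc_pad(b):
    # OSC strings are null-terminated and padded out to a 4-byte boundary
    return b + b"\x00" * (4 - len(b) % 4)

def build_osc(address, floats):
    # Encode one OSC message: padded address, type tags (",fff"...), float32 args
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

def parse_osc(packet):
    # Decode the address, skip the padded type-tag string, unpack float32 args
    addr_end = packet.index(b"\x00")
    address = packet[:addr_end].decode()
    offset = (addr_end + 4) // 4 * 4            # jump past padded address
    tags_end = packet.index(b"\x00", offset)
    ntags = tags_end - offset - 1               # drop the leading comma
    offset = (tags_end + 4) // 4 * 4            # jump past padded type tags
    args = struct.unpack(">" + "f" * ntags, packet[offset:offset + 4 * ntags])
    return address, list(args)
```

A real client would read such packets from a UDP socket on whatever port FaceOSC is configured to send to; libraries like liblo or python-osc do this decoding for you, but the format is simple enough to handle by hand.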

"Open source" and "non-commercial license" are mutually exclusive. See the Open Source Definition, which is the official definition of the term "open source" by the people who invented it, http://www.opensource.org/docs/osd, point 6, "No Discrimination Against Fields of Endeavor," whose rationale is explained on http://www.opensource.org/osd.html: "The major intention of this clause is to prohibit license traps that prevent open source from being used commercially. We want commercial users to join our community, not feel excluded from it."

I really hate the hijacking of the term "open source" by the OSI. I think open source is a generic term that should mean what it says -- the source is open, i.e., accessible and modifiable. I don't think that the distribution limitations should go into it.

While I certainly appreciate all the open-source projects that allow commercial usage, I really think that a trend of projects using non-commercial or licensee-only licenses would do a lot of good, because as it stands I'd rather have the source code for every program I use even if I'm not allowed to share that code with someone who hasn't paid the original author.

I recognize that copyleft and all that is important and good, but I think we'd see much more source availability if people were less rabid about their demands for unlimited distribution. Unlimited distribution is rad but it greatly reduces the effective profitability of the product, and there's some software that just doesn't get made if it doesn't have some cash behind it.

> I really hate the hijacking of the term "open source" by the OSI.

The OSI was founded a couple of months after the invention of the term "open-source software", by the same people who invented the term. They aren't the ones who are "hijacking" it. You are. It's dishonest. Please stop doing it, and please stop attempting to rewrite history in order to justify your dishonest attempt to hijack the term. The people who read HN aren't so ignorant that you can fool them that way.

See here for usages of "open source" prior to 1998, when OSI was founded: http://groups.google.com/groups/search?as_q=&as_epq=open...

You can see on the first page results from '94, '95, and '96, all using "open source" to refer to code, i.e., a program whose source code was available to users. That's just the first page. The term had gained sufficient prevalence to be mentioned in a press release from 1996. There's only one hit on the first page (for me, at least) that refers to "open source" in another context ("unable to open source file").

That's just the first page, there are many more pages of hits.

That's not '"open source" code', that's 'open "source code"', and almost all the hits Google finds for "open source code" are duplicates of a single press release about Willows TWIN. The rest of the page (for me) is almost entirely about open-source intelligence, which has nothing at all to do with software or source code, but rather refers to reports that CIA and related agencies produce by reading public information like what's in the newspapers. That's what the Open Source Solutions 1994 conference was, for example.

You can save yourself the trouble of looking through the many more pages of hits. I've done it before.

The word "source" is short for "source code." They aren't exactly distinct things, so that is really splitting hairs. If we acknowledge that use of "open," then "open source" falls right out of it.

And there are a lot of the dupes you're talking about, but if you filter those out of your search, you'll see that isn't the only occurrence. Quickly sifting through, I found these:

From 1995 we have a guy asking for a "modeler with open source code."

1996 has Caldera advocating "an open-source code model" (it even has the hyphen in the right place!), as well as an essay about "Open Source Code and the Early Development of Unix." The former case is interesting because it explicitly defines open-source code as source code that is openly distributed, and clarifies that a small fee is required to sell derived products commercially. (So SCO actually claimed the term before OSI did!)

And 1997 has a discussion about whether "open source cryptography" is better than "proprietary … cryptography" (and in which "published" is used as a synonym for "open source").

So it appears that the term was much less common before OSI — but it was used to talk about source code you could read, long before they came along with their definition.

      The word "source" is short for "source code."
No, the word "source" is short for "source", as in ...

      any thing or place from which something comes, 
      arises, or is obtained;
Like how sheet music is a source for learning to play new songs.

Who's hijacking the term now?

You are, because "source" in "open source" refers to "source code", "open source code".

It's not code that originates from a source that's "open" .. what does that even mean?

It means the source code is open to the public.

> "an open-source code model" (it even has the hyphen in the right place!)

I think in that case both "open-source" and "code" are modifying "model", so it's still not the same thing.

> And 1997 has a discussion about whether "open source cryptography" is better than "proprietary … cryptography" (and in which "published" is used as a synonym for "open source").

That's from the spook community, which uses "open source" to mean "publicly known information", and they're talking about cryptosystems, not source code.

> The word "source" is short for "source code." They aren't exactly distinct things, so that is really splitting hairs.

Someone could have potentially used the word "source" as short for "source code" in the phrase "open source" before Chris Peterson did in that meeting, but I haven't found any cases of that happening. Not a single one.

The whole point is that the phrase "open source" now has a specific meaning not derivable from the meanings of its component words, much like "source code", "pro-choice", or "death row". If a Congressperson declares that they're "pro-choice" but votes to amend the Constitution to prohibit abortion, or someone claimed that they were fulfilling their contractual obligation to provide you with "source code" by giving you the SKUs they use to order office supplies, you'd be justifiably angry at the deception. If you find a quote from someone in 1958 using the term "pro-free-choice" to mean "opposed to racial segregation", it does not make any less dishonest a hypothetical use of the term "pro-choice" with that meaning today.

Are you really going to claim that no one has ever used the generic term "open source" prior to the folks in the OSI?

Yes, that is correct. No one, as far as I can tell, ever used the generic term "open source" (for software, not intelligence) prior to the folks in the OSI. There were a few cases you can find on Usenet of people saying "open source code", but that's not the same thing; "open" there modifies "code", rather than modifying "source".

You can read the history at http://www.opensource.org/history. I wasn't at the meeting, but I was part of the community for years before that, and I was part of the email exchange that followed that led up to the formation of the OSI. Although I'm no fan of ESR, I can attest that the OSI folks' claim to have invented the term is legitimate and correct.

You appear to be splitting hairs here. Most folks I know frequently say "source" as a shorthand for "source code". It's not a big leap to tack on "open" in front of that when you mean the source is available for inspection by those outside the person/company who developed it (i.e., not "closed").

If you want to claim that OSI popularized the phrase and imbued it with a very specific meaning, you'd probably have a point, but I don't see how they could lay claim to ownership of such a generic term. At the very least it seems out of bounds to be accusing someone of being "dishonest" about it and "rewriting history".

> It's not a big leap to tack on "open" in front of that when you mean the source is available for inspection by those outside the person/company who developed it (i.e., not "closed").

Yet digging through Usenet, web, and mailing-list archives has not, so far, found a single example of anyone making that leap prior to the 1998 meeting.

Well, you have explained away the examples that have been found by parsing the language carefully enough to dismiss them. Given your fervor on this particular subject, I don't think there is any evidence you would accept.

Here's another one: http://groups.google.com/group/comp.protocols.ibm/browse_thr...

Frankly, it really is just baffling to me that you are defending this so aggressively to the point of accusing others of outright dishonesty when discussing it. Do you really think that appropriate?

> Here's another one

That one is interesting. The most plausible interpretation is that it is actually a counterexample! He very likely means "source code" and not, say, "FTP site".

> Frankly, it really is just baffling to me that you are defending this so aggressively to the point of accusing others of outright dishonesty when discussing it. Do you really think that appropriate?

Yes. It's like "pro-choice" or like "organic farming": one group of people is using this term with a specific idiomatic meaning, and there are other people who want to associate themselves with that meaning in order to free-ride off the goodwill that comes with it, but without paying the price. An anti-abortion politician isn't "pro-choice" because they support school vouchers, an apricot sprayed with malathion isn't "organic" because it contains carbon compounds, and a free-for-noncommercial-use program isn't "open-source" because its source code isn't secret.

We're obviously going to just disagree on this one, but I do think it's unfortunate that you still feel the need to attribute an ulterior motive to people who don't agree with you ("free-ride off the goodwill"). You adhere to a very specific definition of the phrase, and there's nothing wrong with that. However, attacking others who do not is unnecessary as well as unhelpful for your cause.

> There were a few cases you can find on Usenet of people saying "open source code", but that's not the same thing; "open" there modifies "code", rather than modifying "source".

Ah, come on! You can't be serious.

"Source" in "Open Source" is short for "source code", and no other meaning of the word "source". The history page you link doesn't state it explicitly, but they do mention "source code" a few times, in addition to mentioning no other meaning of the word "source", so I think it's pretty reasonably to assume that these OSI people, like many programmers, when speaking about "source" were using it as a short for "source code".

Because really, what would a "source" (in the sense of "origin") that is "open" even mean? That things can flow from it? You are also aware of the term "closed source", right? Except that, if nothing originates from it, it's no longer a source. Yet if you interpret it as "closed source code", it still makes sense.

I don't really see how your link to opensource.org/history proves your point, either. In fact, it strengthens the other point as well. Sure, it shows they decided to name their new licensing scheme "Open Source" (where, from the context, "Source" seems to me to be short for "Source Code") -- even though people had (naturally) already been using the term "open" for source code that is publicly available.

So you can make the point that, possibly, they were the first to shorten "open source code" to "open source" and use it as a brand, while at the same time hijacking the meaning into something much stricter (namely, excluding open non-commercial licenses).

What's the correct term, in your opinion?

I've heard the term "shared source" used.

How about "The source code is available"?

Isn't this why we have the disambiguators gratis and libre? Or, if you prefer, “... as in beer” and “... as in speech.”

I don't think he cares; he was clearly just trolling.

If you write code that you give away, your project is technically open source. After all, you don't have the code that your CPU uses to translate x86 instructions into its internal opcodes, but you can still use an open-source operating system on top of it.

In this particular case, an interested party would only have to rewrite one library, instead of one library and one application on top of it. So it's a win, even if it's not GNU-certified Free Software.

I'm not claiming FaceOSC isn't open source just because it depends on proprietary software. I'm just trying to clarify the situation — open-source software that I can't practically run is a lot less exciting than what I originally thought this was.

The author of FaceTracker, Jason, is a great guy who you should totally email if you're interested in these ideas.

We could have just as easily used asmlib https://code.google.com/p/asmlib-opencv/ but FaceTracker is superior in terms of speed, accuracy, and efficiency.

Thanks! I imagine he's overwhelmed with email right now, but I in no way intended to attack Jason or FaceTracker.

For face replacement in video, the state-of-the-art is this upcoming SIGGRAPH Asia 2011 work:



Having worked on this problem before, I know how tough it is to escape the uncanny valley when doing replacement, and these guys have really done impressively well at it (albeit with a fair amount of manual preprocessing and in controlled situations).

I think the "creepy" factor of the images is probably due to the "uncanny valley"... Pixar fought this effect when they were first rendering humans

"The uncanny valley is a hypothesis in the field of robotics and 3D computer animation, which holds that when human replicas look and act almost, but not perfectly, like actual human beings, it causes a response of revulsion among human observers. The "valley" in question is a dip in a proposed graph of the positivity of human reaction as a function of a robot's human likeness."


Once they get over that barrier, though, it'll become much creepier for an entirely different reason.

The net will be awash with G.I.R.L. 2.0!

Interestingly, one of the hardest aspects of human rendering is hands. Hands flex, bend and the skin stretches in ways which are difficult to represent mathematically, which is why in the late 90s and early 2000s (when human rendering started to emerge), most rendered human beings always wore gloves or were hidden from view entirely.

This has nothing to do with the Uncanny Valley. This is creepy because masks add a deadening look to the face, and we sense that. It separates the person from the face.

Read up on the "Uncanny Valley" again. Your example is one of the cases that lead to the development of the hypothesis in the first place. (The concept dates back to 1906 with a German paper.)

How long before this becomes good enough to fool people on Skype? And how long before someone ends up writing software to detect this?

I think this is the first time in my life that I've felt like I was living in a scifi novel.

Real-time virtual puppeting has been done in movies/television and research for a while now and yes, it can easily fool people.

A professor friend of mine, Jeremy Bailenson at Stanford, actually uses the Kinect to track facial movements and uses 3D models of others to create puppets in real time. Even more interesting, he can morph your face with that of the person you're video conferencing with to create a feeling of commonality in them.

He actually wrote a book on it called Infinite Reality [1] which talks about all kinds of ways people will probably get manipulated in the future. He talks about things like mirroring movements (which he can do automatically in a video conference), looking into the eyes of every participant in a group video conference and other really interesting psychological hacks.

[1] http://www.amazon.com/Infinite-Reality-Avatars-Eternal-Revol...

It is amazing how DFW nails that in Infinite Jest. In the book, people have the technology to use video-phones, but they stop using them after realizing that video-phones eliminate the convenience of communicating without fully committing your attention to the conversation. A silly example: with a video phone, you must pay attention to the person on camera; you can't talk to someone while clipping your nails.

Also in the book, people started using more technology to cover for these problems, from software that would make you look "presentable" to a complete avatar that would simulate all the motions that one should do while on camera.

In the end, people just realize that no one is actually using the video part of the phone anymore, so they just stop using it and go back to audio-only.

So, as much as I think we will have the technology to fool someone as you mention, I don't think it will happen simply because no one will adopt that technology in the first place.

Kottke also comments on it: http://kottke.org/10/06/david-foster-wallace-on-iphone-4s-fa...

As a user of video phones/laptops, what ends up happening is that you clip your nails anyway and keep on talking. Just like if a close friend was doing that while you were in the same room. It quickly becomes not a big deal.

I don't mean to pick on you, but what is so exciting about having a cam on to see (or not see?) your friends doing random things?

I see the value in having the occasional video conference, especially when talking with family or friends that are not geographically close. But I still see it as an attention cost. I would be slightly annoyed if someone that I'm video-chatting with decided to treat it as "not a big deal".

It's exactly a distance thing. Close friends and family that want to see you. It's like going over to a friend's house and hanging out. You don't have to pay attention all the time, but it's nice to see facial expressions and such. You don't clip your nails for the entire conversation either, and you can stop and go within the conversation. Eventually video calls can take as much attention as an audio-only call, and you can look at the person when you want to.

Funny. Just yesterday I had to use my girlfriend's laptop, and (not being a Mac user) I didn't know that Skype on Mac OS X starts video calls by default.

I had a Skype call with a business associate, and it was quite bad when I realized that I was talking to him without wearing a shirt and that the camera had started by itself. Now I can only hope that I was fast enough to cancel the video before it actually started streaming.

Perhaps you are right in the sense that people will get used to the idea that "always-on" video is normal. However, I will not. I am as far as possible from being technophobic, it's just that I like the barrier of not being on display.

I find it helpful to ask not "can this fool people" but something more like "can this fool people at 320x200 with an X kbps stream?" Same for "realistic" computer graphics. I haven't seen anyone push computer graphics that can fool me at "HD resolution", but pushing something "photorealistic" at low-grade web cam resolutions is perfectly doable. I bet the same is true here.

> I haven't seen anyone push computer graphics that can fool me at "HD resolution", but pushing something "photorealistic" at low-grade web cam resolutions is perfectly doable. I bet the same is true here.

Sure. Today.

That's why I use this metric. Instead of a boolean "is/is not photorealistic", this gives us a way to measure progress.

I've seen some radiosity-based architectural renderings that have me fooled at about 800x600, I haven't seen anything in motion that has me at that resolution yet, and even the architectural stuff still breaks down at 1024+ (the models are still too clean).

Kyle just posted a new demo video - he's playing around with the idea of the "scramble suit" from A Scanner Darkly - http://vimeo.com/29391633

Very cool. Much better than the effect used in the movie.

Really awesome. That second video shows much better results. Now, with some fine-tuning, someone could create an application where people can try out different makeup or eyeglasses.

Hair styling is another big area where this would be really useful.

Here's where his broken link meant to go: http://methodart.blogspot.com/2011/07/fast-image-cloning-lib...

Wow, how uncannily timely. Just yesterday I read part of a near-future speculative-fiction novel, "Halting State" by Charles Stross, which includes this as an interesting detail, with an eye toward technical imperfection: one of the protagonists is bemused by someone's neck glitching up into their face during a video call while they use this sort of software.

Reminds me somewhat of the scramble suit from A Scanner Darkly!

I imagine this will do to movies what autotune did to music. In other words, you no longer will have to find someone who is both good at acting and attractive.

It could also let movie producers get rid of the high salaries paid for popular movie stars.

That's an interesting point, and one that I could easily imagine happening.

I suppose the only problem for the movie producer is: who holds the copyright on a person's face?

Imagine George Lucas & co. want to make 'Indiana Jones 5 - The Prequel' with this technology, using a likeness of a young Harrison Ford.

The studio owns the likeness of the character 'Indiana Jones', but surely Harrison Ford owns the likeness of his own face (IANAL, but curious if there's precedent for this sort of thing).

I think the more common use case would be getting the face copyright from someone beautiful and building it to stardom.

I think that if a major studio wanted to build stars that way, it wouldn't take them long, a few years at most, so there won't be a long transition period anyway.

Cool! The title should be edited to describe the link though.

And as for being an open source project, link/source please?

Here's at least some of the author's code. Not sure if it's the whole thing.


Edit: that code doesn't actually cover the substitution. Here's more info: http://vimeo.com/29279198

High-res source 3D face scans + a Kinect to track target head position and rotation better would hide the modifications quite well. Awesome project.

This could be used in video call centers in the future. Imagine you make a video call to your bank, and a blonde girl appears on the screen, while in reality you are talking to a dude in India. That would also require "voice substitution," though.

And once more, while we were discussing a new language to replace JavaScript, some folks wrote a kickass program in C++. :)

Why aren't the videos embedded simply from Vimeo? Why do I have to wait on "uploads.notempire.com"? Not to mention that it's not behaving very "empire"-like.

If it is open source, where is the code? I am only seeing links to libraries it uses.

The thing being claimed as Open Source is OpenFrameworks (https://github.com/openframeworks/openFrameworks/), which is MIT licensed.

However the source for FaceTracker looks like it's "please email us" - so the licensing for that isn't clear. The end result could be (legitimately) proprietary, as long as they supply the MIT license notice.

Really cool stuff. Would be nice to have this feature used together with augmented-reality games.

We could then use your preferred character outfit and face while playing a Wii-like game, so the game would present a video of you as the main character in any outfit, like playing Street Fighter as Mario :D

Really ingenious idea.

How do professional movie productions do this? For example, the Winklevoss brothers in The Social Network both had the face of one of the actors.

Armie Hammer's face was digitally transplanted on top of Josh Pence's body in shots where both brothers appeared in the same shot during post. Otherwise, they would change angles and re-film a scene with Hammer as both.

For a video that shows how they did this, see http://videos.nymag.com/video/Vulture-Exclusive-The-Winklevi...

This is simple "re-texturing", i.e., the image of the other face is projected onto your face (or a 3D reconstruction of it). To actually change, re-render, and replace the original geometry is much, much more complicated, and also involves re-creating all direct and indirect illumination, which in turn needs a full 3D reconstruction of the surroundings and light sources. That's how it's done in movies, and it involves motion capture and a huge amount of manual labor.

Edit: Speaking of replacing with a generated 3D face, the Winklevoss replacement could be done "simply" by filming both actors in the same position and then photoshopping one face over the other, frame by frame. Face tracking could be used to align the faces.
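The "re-texturing" leans on seamless (Poisson-style) image cloning, which is what the fast image-cloning library in the project's toolchain provides. As a toy illustration of the idea, and definitely not the project's actual code, here is a one-dimensional version: the pasted patch keeps the source's gradients while its values are forced to agree with the destination at the region boundary, via a Gauss-Seidel solve of f'' = src'':

```python
def poisson_blend_1d(src, dst, lo, hi, iters=500):
    # Toy 1-D "seamless cloning": paste src over dst on [lo, hi) so that the
    # result has src's second differences inside the region but matches dst's
    # values just outside it.
    f = list(dst)
    f[lo:hi] = src[lo:hi]                       # naive paste as starting guess
    for _ in range(iters):
        for i in range(lo, hi):
            left = f[i - 1] if i - 1 >= lo else dst[i - 1]
            right = f[i + 1] if i + 1 < hi else dst[i + 1]
            lap = 2.0 * src[i] - src[i - 1] - src[i + 1]   # source Laplacian
            f[i] = (left + right + lap) / 2.0   # Gauss-Seidel update
    return f

# A flat source patch pasted onto a ramp: the patch's level is thrown away,
# and only its (zero) gradients survive, so it relaxes onto the ramp.
dst = [float(i) for i in range(8)]
src = [100.0] * 8
out = poisson_blend_1d(src, dst, 2, 6)
```

In 2-D this becomes a sparse linear system over the masked face region (what Poisson image editing solves), which is why the real library cares so much about speed; the boundary-matching behavior is exactly what hides the seam between the substituted face and the original skin.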

This will be perfect for plastic-surgery and hairdressing applications. Also to try out a new smirk before actually growing it.

This would be much better than face pixelation on Google street view.

Online dating just got a whole lot riskier...


I approve of this!
