Github restricts public Faceswap repo to logged-in users (github.com)
299 points by jordwalke 29 days ago | 218 comments



> Like any technology, it can be used for good or it can be abused.

Important to note that it is also possible for a technology to have more potential for abuse than good.

Sure, you can come to a philosophical place where nothing is good or bad, or where the good is perfectly balanced by the bad, but if we're looking at increasing freedom, peace, and trust, it's hard to see how the upside of this tech is equivalent to its potential for abuse.

The best argument might be that eventually no one will trust video.


Basically, this is all possible using existing technologies; it's just labour-intensive.

As such, tools like this just prove the conclusion: No one should trust video.


It's funny how no major CS curriculum at a major university includes a legitimate ethics course. In the modern era, programmers are akin to lawyers or politicians; we wield immense, and often implicitly trusted, power over hundreds of millions of opinions. It's long past due that programmers be required to adhere to ethical codes, maybe even to the extent of bad-style licensing.


I recently graduated from a middle tier state school in CS/Math. The CS portion required an ethics course, I believe its inclusion had something to do with an ACM mandate. Out of over 100 enrolled juniors/seniors I don't believe attendance ever reached double digits outside of 2 tests.

Sure, there was a "requirement," but I wouldn't really consider it legitimate. It took concerted effort to "earn" any grade lower than an A. I wasn't personally acquainted with any other student who seriously considered the social and ethical implications of computers during or after taking a 400-level course titled "Social and Ethical Implications of Computers."

On the bright side, I heard through the grapevine that rigor, or at least workload for the course, has increased since I took it.

Although anecdotal and perhaps specific to my institution, every recollection of my university career makes me feel deeply thankful that I eventually ended up pursuing a double major in mathematics. The quality and dedication of the mathematics instructors wildly surpassed those of the CS department. Mathematics professors were there to share knowledge. CS professors were pressured by hiring statistics into "preparing us for the workforce" by essentially quizzing specific interview questions like "what's the difference between interfaces and subclasses in Java."


A counter-example: I also attended a middle-tier state university for CS, and our single-term required Ethics in Computing course was anything but easy. That said, it certainly didn't capture the attention of students.


My one hour exam final prompt was “Networking is useful, but explain the ethics involved in networking.” I answered with an essay about conflicts of interest in business while my friend wrote about file-sharing.


This matches my experience. In addition to taking the course as an undergrad, I was the graduate TA for three semesters. While it wasn't super deep on any one topic, the course was far from easy and was required to graduate.


Another data point: my university has a mandatory ethics course, and it also has reasonably low attendance, a light workload, and a pass earned with a short essay.


My Canadian software engineering degree included a substantial engineering ethics course shared with the rest of the engineering flavors (including mechanical and aerospace). At the time, I thought it was a waste, but the further I get into my career, the more valuable it seems in retrospect. I also can't call myself an engineer in Canada without joining a provincial engineering organization (like the PEO in Ontario).


I agree for engineering classes, but those completing a computer science (CS in a science or arts faculty) degree in Canada usually do not have any regulatory oversight of any kind and no ethics classes.

However, in the Quebec system (and presumably the equivalent in other provinces) we have 3 philosophy classes (humanities, world-view etc) in college/cégep (somewhat mandatory before university). I also had a mandatory ethics class since I completed a technical degree, which I also did not appreciate at the time, but it helped me develop more critical thinking later on.


The global nature of software development limits the utility of licensing.

You can't go overseas to get a bridge built across the river in your town. You can't get a foreign barrister to represent you in court in your country.

You can easily get a new social media website or IoT server built and hosted in any country you want.

The scenarios where the effects of bad ethical decisions most often make the news are not things that lend themselves well to local licensing regulations, unlike embedded avionics, medical devices, or internal banking systems. Problems do happen in those spaces, but they are rare.

None of this even touches on usability and the difficulty of restricting tool use to the "good guys". The jslint licence got a lot of stick for that.


The framework for regulation of these sorts of systems will almost certainly come down to “possession of data”.

You can build/host things anywhere, but something centered around “if you generate profits in the USA, and hold the data of Americans, these requirements exist for your system” makes sense. In particular, liability needs to rest somewhere, such that someone gives enough of a shit to do things right.

A civil engineering firm could outsource the design of a bridge anywhere, but in the end, somebody’s neck is on the line if it fails.


> something centered around “if you generate profits in the USA, and hold the data of Americans, these requirements exist for your system” makes sense

This is exactly the stance taken by GDPR. Most devs don't like it from what I've seen on HN. (Saying this without any judgment. I am personally very much in favor of what GDPR does.)


> “if you generate profits in the USA,

This sort of thing already gets dodged by international companies who "generate profits" in countries with lower taxes and "incur costs" in countries with higher taxes.

>" and hold the data of Americans"

GDPR? System requirements is one thing, requiring a licensed engineer from multiple jurisdictions is another. Bear in mind that because software changes constantly, this has to be a senior staff position, not a one-time sign-off like it could be for a bridge.


I really liked my university's required ethics course. It was taught by a CS professor with a law degree, and in addition to covering law, it included philosophy and the ethics of what you should do with technology, and when an engineer should refuse to build something. Participation was about on par with other courses of the term.

I assumed all universities had a similar program; if they don't, students are really missing out.


I had an ethics course as a part of my curriculum. It was an extremely interesting course, but completely inapplicable to my professional career.

The main problem is there is not a certifying organization that standardizes ethics (and includes licensing). A single developer refusing something on ethics is pretty meaningless. A company will just find someone else who will do it.


Does following an ethics course actually encourage ethical behaviour?


It may or may not encourage ethical behavior, but it certainly may make students aware of ethical concerns that they might not yet have considered.


I'm wondering how someone can even go about teaching ethics when they're purely cultural artifacts.

Isn't it just subtle propaganda? Good, bad, just, unjust - what's ethical in China, for example, or Saudi Arabia is not the same as what's ethical in the US or even Europe.

Nevermind the thought of relatively centralized institutions acting as arbiters of ethics and, by extension, core aspects of culture.


> I'm wondering how someone can even go about teaching ethics when they're purely cultural artifacts.

Cultural artifacts are quite teachable; that's generally how they are transmitted. Why would that be difficult?


>Why would that be difficult?

How do you decide which brand of ethics to teach? Especially if your class is represented by a range of nationalities?

Look at this thread and how oblivious everyone is to the variability of the definition of ethics. We take the subject as some kind of absolute, but really we're just viewing the rest of the world through privileged Western lenses.


> How do you decide which brand of ethics to teach?

There are two common options:

(1) a broad multi-system survey rather than a single system or narrow set, and

(2) teaching the system or systems most connected to the target legal system (for cases where ethics is taught largely to create a safety buffer around legality and to anticipate how the law will settle where it has not yet done so).

When you know why you want to teach ethics, it's fairly trivial to choose the approach.


> "How do you decide "

Do you earnestly believe that Harvard University will be paralyzed by the choice between a modern western ethics system in which women have all the rights of men, or (to use your example) a Saudi ethics system in which women are property?

Most people in the real world are not so ridiculous that they allow themselves to be paralyzed by knowledge of ethical relativism.


This isn't some trivial difficulty with choosing a narrative. It's the implied cultural supremacy in arrogantly believing that YOUR perception of what is moral is so correct that it should be taught in university.

The same reason that government shouldn't be legislating religion. I went to college to learn science, mathematics, and liberal arts, so that I could learn to make decisions for myself - not to be indoctrinated with someone else's idea of what is right and wrong.

Edit: imagine a Saudi-funded institution in the U.S. offering courses on ethics. Would you be OK with that? Why are you even sure that the particular ethics courses at Harvard agree with your own cultural norms? When you mix subjectivity with education, you get propaganda. Sure, some of it is unavoidable because ultimately there are only so many topics one has time to learn and all educators/authors are biased humans, but the topic of ethics does not even allow for the possibility of objective treatment. It does not belong in school.


It's a solved problem. Every major university teaches ethics. If your university CS program didn't have an ethics course, that is the exception rather than the rule, and your university did you a disservice.


What isn't a cultural artifact, outside of nature?


The problem is that people treat and teach ethics as though they are absolute. Further, ethics as a field of study is unique because it is supposed to directly influence behavior. What no one seems to realize is that ethics lessons are a form of social conditioning with administrators deciding on the content.

Here's a quick illustrative example: what do you think passed for ethical to the average citizen at the height of Nazi Germany? Or the Cold War era Soviet Union? To CCP members during the Great Leap Forward? Supporters of Duterte? Liberals vs. conservatives in the U.S.?

So which brand is your University picking and choosing to offer in class? The whole idea is dangerous - colleges should not be in the business of teaching ethics, because in order to do so they must decide on what is ethical.


> Good, bad, just, unjust - what's ethical in China, for example, or Saudi Arabia is not the same as what's ethical in the US or even Europe.

That's bullshit. Just because something is the norm in some places doesn't mean it's ethical. Just because some places have a norm of mistreating some people doesn't mean those people suddenly don't feel mistreated.


I attended a pretty well known university in California and we had a professional ethics course where we analyzed case studies and current events in tech using the IEEE/ACM code of ethics. A number of states in the US have professional licensing requirements to be considered a Software Engineer so maybe that is a step in the right direction.

I do agree with your premise, though, that programmers have a tendency not to think about ethics and should be held to some sort of code in the same way doctors and lawyers are supposed to be.


The University of Oslo has a course on philosophy and ethics that is obligatory for all to pass in order to receive a bachelor’s degree.

https://www.uio.no/studier/emner/hf/ifikk/EXPHIL03E/index.ht...


Examen philosophicum is common in Norwegian universities.

It is worth noting that this is common to all first degrees, and may or may not be tailored to your particular subject area, so it's often up to the learner to work out how to apply the theory to their own scenarios.


That is just another required course to pass though? I mean, unless the teacher is especially good (which they practically never are if any part of a non-lab course has compulsory attendance), you regurgitate the subject and forget everything the day after the exam, right?


You've got a good point, but do you really think an ethics class is going to stop Microsoft or Google? Do you really think an ethics class will make Facebook respect privacy?

I totally support the idea that all professionals need to be ethical and moral, and I definitely try to do so in my life, but I've given up hope that society at large is interested in this. I think individuals generally are, but any company, once it becomes powerful, seems to also become evil.


Do you know of any good resources for learning more about ethics (in the context of CS) online? I can't really imagine what an ethics course would look like; would it just be "don't build stuff that can be abused"? I'm genuinely asking because I don't know much about this. What would the end result look like?


An ethics course is mostly about familiarizing you with several frameworks that can be used to make decisions about ethics.

For instance, "don't build stuff that can be abused" is not a good framework for ethical decision-making. "Abuse" has no clear definition, and "can be" is an incredibly low bar to clear. By this standard, the hammer should never have been invented. It's incredibly easy to hurt someone with one! There are no safeguards whatsoever!


Do you seriously think that taking an ethics course in college would be the difference between a world where programming is used as a weapon and a world in which it's not?

Man if only we could make all our political science grads take an ethics course, then there'd be no more war!


In the current climate of low trust in the US, how would you come up with ethics that everybody can agree to?


Studying ethics is not about "this is right or wrong", it's about "these are ways to consider what is right or wrong". It's one level higher in terms of "metaness" than you think it is.

If an ethics course teaches students which decisions actually have an ethical component in them, that's already a huge win: Whenever they encounter such a decision, they will be more likely to notice and ponder the ethical implications contained therein. This does not necessarily prescribe a particular set of ethical rules that they have to adhere to.


Did you find the statistics for this? Mine certainly did when I studied there, and it was required.


So -- I am finding amusement in this -- but ethics is very labour intensive. I'm sure we all wish we could dream up new projects in a vacuum without real-world consequences (simply for the fun of it).


Mine certainly required one. What's your source for this strawman?

Edit: see also https://www.acm.org/code-of-ethics


I was taught "Law and Economics" in my Cambridge computer science course in the late 90s. There wasn't a lot of it and it was a bit erratic, but it was there.


I would be very interested in that. At the time I earned my degree I wouldn't have though, so I'm not sure that I would have got much from it.


Pretty certain my UK one did. I graduated nearly 15 years ago though and it was mostly focused on things like copyright law.


My red brick university CS course also had an ethics module. That was in the 90s and I can't remember a thing from it...


Ethics is the study of what you can get away with. University-level ethics courses don't make people a whit better than they already were.

So maybe it's best this way. Computer scientists will still do bad stuff, but at least they'll be embarrassed about it when caught, rather than come up with clever justifications.


That’s not how the ethics class I took in college worked. It reminded us to feel empowered to speak up rather than build something we didn’t think was right, and what the best means of doing so were.


Bar-style licensing* typo


Stanford requires such a course!


I graduated from UNSW in Australia. Ethics was a very popular course amongst the engineering and CS curriculum.

I first thought it was a lot of philosophical bullshit about good and bad, but the history part of it was an eye-opener.

Like the census data being abused by the Nazis to exterminate Jews. It always starts with good intentions.

Our job is to prevent the worst case. The worst case will eventually happen.


Going by SIGGRAPH demos, nobody should have been trusting video since at least 2008.


Exactly.

Which is why it is crucial everyone realizes how easy it really is to create these fakes, before the masses are duped in favor of the next war or genocide by these techniques.


>The best argument might be that eventually no one will trust video.

Obama talked about it this afternoon. He said "This is bad, blah blah oh no." Of course, you don't believe me because I made this up. That doesn't preclude you from believing written quotes, given the right chain of trust. It's been great to have formats like video that didn't require the chain of trust for a while, but if that time has passed, there's nothing we can do. It is hard, but in the context of text where quotes have been easy to fake for ages, we have dealt with it. It's good for everyone to be on the same page.


I think there's a very visceral part of seeing a human face do/say something that puts it in another league from text. Even though intellectually it may be known that both text and video are trivially forgeable, I think it will be a long time before people truly start to question video.


Even when people know the source is not to be trusted, it still influences their judgement in some way.

E.g., we all saw the close shots used by journalists to magnify a so-so event and make it newsworthy. Yet, when seeing one, many still consider it "news". We all know which politician lied last year. Yet, when he speaks again, many still listen. We all know which company abused consumers. Yet, when a new product is advertised, many still buy.


FWIW, video doesn't need to be forged or manipulated in order to provide an untrustworthy or inaccurate portrayal.

It's possible a video doesn't reveal the appropriate context. (e.g. what happened before the start of the video, and maybe what happened afterwards; or what's happening out of view).

That said, that isn't inherent to video. (And, sure, "swapping faces" doesn't lead to a more accurate portrayal).


Video, all told, isn't that old. We had a society before we could capture video. We will have a society after video is no longer trustworthy. The brief window of human history with trustable video is coming to an end. We will get used to it, and be fine.


The problem is that there will be a window where people trust video while it is no longer trustable, and it will be abused.


Yes, but this will happen, regardless of legislation. The only thing legislation is able to achieve in this space is restrict the technology to actors that are not malicious to begin with.


Define malicious, because the people in power will surely have another definition.


Yes, this is actually a part of my point. There's no point in legislating it because all it will do is prevent an arbitrary class of people from using it, leaving it only in the hands of people who aren't interested in complying with legislation to begin with.

In other words, it will happen so the best thing we can do is acknowledge this and prepare for it.


Face swap software is only one form of video editing software, and video editing software has been used to deceive people from the moment it was first created. You can easily use Adobe After Effects or Blender to deceive people.

Check out the "CaptainDisillusion" channel on YouTube; it's full of examples of people deceiving others using video editing software. To my knowledge, none of the examples he's talked about have used face swapping.


Meh. We live in a world where text and images are completely untrustworthy. Why is video such a big deal?


Because while use of Photoshop is widely known, many are still unaware that video can be edited like this and look so real.


There was a point when Photoshop wasn't well known, and people fell for all sorts of manipulated images all the time. Now, years later, it's common knowledge that photos shouldn't be trusted as evidence. These things are all cyclical.


Once "deep-fakes" become mainstream - how long would it take for the public to become aware of them? A week? A month? Any fake video of consequences will be immediately ripped apart by mainstream and social media. Why would 'deep-fakes' be any worse than what is being done with CG today?

I do believe that some people will fall for them, but the damage won't be any worse than when people fall for shopped images and phishing emails. It's just something we have to deal with.


The real issue is a large number of people suddenly distrusting real video evidence rather than those few who were already susceptible being led astray.


Again, why would video be blindly trusted today anyway? You can fake videos through creative cutting and CG.


So...these people have never seen a movie then?

I mean they can accept just about any kind of special effect in a movie as being possible, but editing videos to swap faces is a huge stretch?


Good point. I strongly suspect the worry about deep-fake videos is overstated. Yes, some people will fall for them, just as some people fall for photoshopped images and Nigerian email scams, but that's just par for the course - we'll have to deal with that. Also, it isn't like videos can't be faked today with out-of-context edits, CG, and audio manipulation (e.g. dubbing).


> Important to note that it is also possible for a technology to have more potential for abuse than good.

This is never the case independent of context.

You can imagine in a major famine that people may start killing each other over scraps of food. In that context a kitchen knife becomes more likely to be used as a murder weapon than to prepare the food that nobody actually has.

But nothing about the knife has changed, it's the context that has changed. And you don't solve the problem by banning cutlery and every other thing with a point or some heft, you solve it by resolving the famine.

You don't solve deepfakes by restricting information, you do it by adapting to their existence. Because they're not going away.


All technology can be weaponised. This is one of the principal reasons for the creation of new technology: to reduce the effectiveness of prior technology, weaponised.


This is apparently github's doing, and not the people behind faceswap: https://github.com/deepfakes/faceswap/issues/392

Github has an infamous history with imposing their feelings on projects they don't like.


> Github has an infamous history with imposing their feelings on projects they don't like.

Can you elaborate on this part? I don't remember seeing something like this before.


IIRC, they nuked 4chan's C Plus Equality repo [0]. The project later moved to Bitbucket, from which it was again removed. However, you can still find some copies of it on GitHub [1]

[0] - https://github.com/FeministSoftwareFoundation/C-plus-Equalit...

[1] - https://github.com/TheFeministSoftwareFoundation/C-plus-Equa...


Good riddance.

Guidelines refresher:

“You agree that you will not under any circumstances upload, post, host, or transmit any content that:

is unlawful or promotes unlawful activities; is or contains sexually obscene content; is libelous, defamatory, or fraudulent; is discriminatory or abusive toward any individual or group; ...”


Github got offended over the use of the word "retard" and nuked a whole repo.

https://www.techdirt.com/articles/20150802/20330431831/githu...

There's lots more examples of their employees getting triggered and offended by various things and then arbitrarily banning or censoring projects.


Interesting, in French "retard" is a valid word (it means delay) which could be used as a project's name.

Honestly, it's a lost battle to try to censor hurtful project names. At best you can moderate US-centric ones.


The same meaning exists in English as well, as a verb. Commonly found in aviation.

The alternative meaning of "mentally-disabled person" is derived from this meaning, as their brain is "slow" / "delayed". That repo was absolutely using it in this latter sense - "WebM for retards".


It has the same meaning in English as well, but is rarely used aside from a derogatory reference to mental development.


Funnily enough, it was introduced as a euphemism to "idiot" or "cretin", primarily in a medical context, because those other words were considered too unsavoury.

Now, I daresay "retard" is considered worse than "idiot".


One instance in which it's still used is for retarders, a form of brake often used on trucks and trains. Some forms, such as the jake brake, are very loud, so you'll occasionally see towns put up signs that say "no jake brakes" or "no brake retarders" (because 'jake brake' is a generalized trademark). As you might expect, these latter signs can turn into a source of amusement...


I think the term "fire retardant" is also fairly common in some circumstances as well, though I'll admit to not being terribly familiar with brake retarders.


Meant to delete this comment, saw why I was wrong. (Can’t delete it now that I’ve edited it, oops)


"> From a business standpoint, too, if I were Github I probably wouldn’t want things like “C++ Equality” to be associated with my company’s name."

There is a major logical flaw here. When you provide a service without discrimination except so much as required by law, you are in no way connected to the usage of your product. You facilitate the usage as a business - end of story. It's only when you begin to selectively censor or target projects for subjective reasons of your choosing, that you end up tying yourself to the content of consumers. Because of your own actions, you now implicitly advocate or support everything which you don't discriminate against.

Imagine, for instance, a pizza delivery company started to discriminate against who they delivered to. This would be perfectly legal, so long as the discrimination was not based on the handful of protected classes. And so they generally decided to stop delivering to people they considered subjectively bad. Well, now they have a huge problem - because anytime they delivered to somebody whom somebody else thought was bad, it'd be an implicit endorsement of them.

This is why entering into the discrimination game to begin with is a fool's errand, even if you think things such as censorship are desirable. Keep in mind we're still in the baby steps of the internet and 'access theory'. For thousands of years we thought it was a good idea for reading and writing to be reserved exclusively for the elite of society - clergy and a handful of aristocracy. YouTube, by contrast, did not even exist a mere 15 years ago. I imagine the future will look back on the times of today with some degree of bemusement. Frankly it's quite hard to not be bemused while living through this mess!


You don't have a god-given right to take advantage of a company's brand or audience to make your view more visible than it would be if you published it anywhere else.

Unlike, say, the phone network, it doesn't actually cost you anything more to throw up a git repository on any number of free-speech-supporting websites, including many that offer substantially the same features as GitHub. You can still reach the same people without dragging your own cable half way across the planet - you just don't get to steal someone else's reputation in order to make it easier to do so.

The other side of things is that many websites actually want (algorithmic) editorial control over what their users wind up seeing because that's more profitable for them, and if they want that, they're definitely in a position where choosing to promote content is a direct reflection on the company even by your standards. GitHub is, as far as I'm aware, not one of these companies though.


- McDonalds food is unhealthy, we shouldn't eat it.

- You don't have a god-given right to take advantage of a company's food to make you more healthy[...]

This is basically the argument you're making. I agree with the premise: We don't have a right to take advantage of a company or forcing them to host anything, but we can also evaluate their quality and their level of professionalism, and choose one company or the other based on that. This thread is just pointing out that GitHub is not trustworthy and they lack professionalism when it comes to deciding what can be accepted or not in their platform.


Sure - I'll choose GitHub based on this and other well-publicised "incidents". I don't see any evidence of a lack of professionalism - in fact, I see a company attempting to follow at least some definition of an ethical code, whether that is as a result of internal or external pressure.


If following an ethical code is banning projects that don't align with your political ideology, then I don't want to associate with companies that "follow an ethical code".


Not to excuse Github but now it's Microsoft. I've never used Microsoft properties that included a public presence like Github before, what is their history like in these cases?


Apparently this was done at least nine months ago based on the issue linked above, so it was a pre-Microsoft move.


Yes, this has been reported here on HN as early as February 2018, well before the acquisition by MS: https://news.ycombinator.com/item?id=16346242


This is troubling in the context of OpenAI deciding not to release their code and dataset for fear of it being put to bad use. It's a tricky topic but I get nervous at the idea of research being censored.


That was very strange for me as well, as the original stated goal of OpenAI was to decentralize power. It looks like instead of that they just want to be one more of the few powerful entities.


It's possible they would be willing to share with legitimate researchers who ask. Putting it out there for anyone to download is not the only way to do it.


You're interpreting it totally backwards.

If the thing OpenAI made isn't interesting enough as a discovery without its data (because it's all arbitrary anyway), but is very useful to spammers as a piece of code, OpenAI has truly achieved the 200% opposite of the goals they were after.

I mean, the Faceswap people have the same problem. They couldn't care less about porn. But that's what people used it for.


Are you surprised? It's too tempting.


I thought OpenAI's goal was to make sure that the first Strong General AI was also a SAFE AI, to the point that they've said that if it looks like they're not going to win that race they'll work for the leader instead? Under the theory that if it's a race at the end then corners will be cut that might doom us all, while them throwing their weight behind the leaders would allow enough margin over the #3 AI development program to go slow and get safety right.


[flagged]


70s? Was it ever different?


[flagged]



Sure, he left OpenAI's board a year ago and he's no longer chairman of Tesla. That doesn't especially change things in practice. Could his lawyers have suggested leaving the OpenAI board due to perceived conflict of interest and problems with his association with other public companies? And can people from Musk's companies then still share information with OpenAI? Absolutely.


I like your bubble...


OpenAI doing this was just to get attention. Any funded entity could trivially reproduce their work. There is no way this was done out of any serious, principled fear of bad actors getting their hands on it.


But if they publish the pretrained model then not just funded entities can reproduce their work, but essentially any person that can type `pip install tensorflow` or whatever. That's pretty big reach difference. Although, probably only a few months timewise.


We will get better protections against deepfakes etc. at a much slower rate if we limit their public visibility. We need better counter-tools.

Human ingenuity will not be contained like this. I'm almost certain that somewhere between 10-100 people who saw the OpenAI censored release saw it as a challenge for them to recreate it on their own.

This is fine. Maybe this makes things significantly more chaotic in the short term. But we have to take the long view on this. Ten years from now this tech will be seen as a joke compared to whatever they will have. It's time to start preparing for that.


Yeah, but `pip install tensorflow` is about as hard as reproducing the work from their paper. Anyone with a CS degree should be able to do it with a bit of effort. I agree that there is still a reach difference, but I think it's kind of negligible here.


I think you are vastly underestimating the difficulty of achieving those results.


I don't think so. They published a paper describing their methods. I've implemented techniques from papers like these before, it's not that hard. What they're doing doesn't seem especially complicated to me.


As a grad student, my trust that they actually did what they said they did is 0. If you don’t publish your source code without a damned good reason (i.e. your legal department says you are not allowed to), your publication is near worthless in non-theory CS since it is very likely to be unreproducible and probably has bugs which render the conclusions invalid.


Most start with good intentions. After realizing the power/advantage they have, whether it be from advanced technology, political office, or other position, they become jealous of it and find moral justification for clinging onto it despite conflict with their original intention.

There's probably a word for this sentiment that I'm not aware of.


especially for an organization calling itself "Open"something.


The concern around deep fakes is that they could be used to trick people, fair enough, but apparently tricking people doesn't require much, if any, believability. Throughout history up to today, people have been tricked by the most obvious untruths, with disastrous consequences on large scales.


The end result of this is that a bunch of trolls in Russia might be out of a job soon, replaced by a server farm running in the target country pumping out similar but not identical stories.

What this might usher in is the era of cryptographically signed news articles. Not just credibility but verifiability. Blocking


> What this might usher in is the era of cryptographically signed news articles.

Actually, how about cryptographically signing videos as they get written on the recording device?

Maybe there even are ways to sign data so that the integrity can get validated on shorter segments, so that clips can be cut. Write a signature every 5 seconds for the past 5 seconds?

Edit: This exists and the term for it is 'video authentication'.
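A minimal sketch of that idea, assuming a hypothetical device holding an Ed25519 key (the 5-second segmenting and signature chaining here are illustrative, not any existing standard):

```python
# Hypothetical per-segment signing on a recording device. Each signature
# covers the current segment plus a digest of the previous signature, so
# individual clips verify on their own while cuts and reordering against
# the original stream remain detectable.
# Requires the 'cryptography' package; all names are illustrative.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_segments(segments, private_key):
    prev = b"\x00" * 32  # sentinel digest for the first segment
    signed = []
    for raw in segments:
        message = prev + hashlib.sha256(raw).digest()
        sig = private_key.sign(message)
        signed.append((raw, sig))
        prev = hashlib.sha256(sig).digest()
    return signed

key = Ed25519PrivateKey.generate()  # in practice, burned into the device
clips = sign_segments([b"segment-0", b"segment-1"], key)
```

A verifier holding the device's public key could then check any contiguous run of segments, which is what would let clips be cut without re-signing the whole file.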


> Actually, how about cryptographically signing videos as they get written on the recording device?

That wouldn't prove much besides that the person sharing the video had access to the device's private key. I think the best you can do is timestamp the video by uploading a hash of it to a blockchain, but even then that only proves the video existed sometime before that instant.
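A sketch of the hashing half of that, assuming the digest is then published through some timestamping channel (an on-chain transaction, a service like OpenTimestamps, etc.), which is omitted here:

```python
# Compute the digest you'd anchor. Once it appears in a block, it proves
# the file existed before that block's timestamp -- and nothing more: not
# authorship, and not that the content itself is genuine.
import hashlib

def video_digest(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```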


> What this might usher in is the era of cryptographically signed news articles. Not just credibility but verifiability. Blocking.

Huh, I'd never even considered that you could do that.


For what it's worth, it's almost never the case that the lack of proof of the identity of a news article's author is what makes it fake news. More often, it's:

- a fact that has been distorted to be interpreted in a 180-degree way (Americans paying tariffs to the US government for buying Chinese goods = Trump saying "China is finally paying us!"), or

- a total untruth slipped in between valid concerns (like the fake Russian Black Lives Matter pages piggybacking off of civil rights abuses mentioned by the American Black Lives Matter campaigns), or just

- incitement of uncertainty in more or less solved problem domains (anti-vaxxers)

If you are interested in learning about more (failed attempts at) verified news platforms, though, try looking up Verrit and Pravduh.


Yes, but people quoted or referenced can provide their signature as proof to say they not only agree this is correct but that they also confirm it is not taken out of context or misconstrued.


I agree, you could probably attach some kind of social proof key-ring to news articles. That said, I feel like this would devolve into a "social currency for real currency" under-the-table paid sponsorship kind of deal rather quickly. We seem to have plenty of stealth ads nowadays, and it's especially disconcerting because iirc only 1 in 10 could discern them. I guess it could still be worth giving a shot in the hands of the right tinkerer.


The problem is that it doesn’t matter if someone is told something isn’t true, their beliefs aren’t changed.


I wonder if there's a point of absurdity beyond which not even the most ignorant people can continue to buy in.



That thread is 5 months old, so apparently the censoring is 5 months old.


Yes, Microsoft blocked a lot of features within days of the takeover. Try searching for code while logged out.


Code search has been restricted to logged-in users way before the MS takeover.

Proof: When searching "github code search login" on hn.algolia.com, it turns up this HN thread from September 2016, nearly 2 years before MS bought GitHub: https://news.ycombinator.com/item?id=12581068


This particular repo was blocked well before the MS buyout. Whatever decision was made, it was GitHub not MS who made it.


>Due to our concerns about malicious applications of the technology, we are not releasing the trained model.

> We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems.

Excerpt from the recent OpenAI blog post about the GPT-2 text models. The concern seems valid, since giving out the code, or probably a web app, would make it easy for anyone to create malicious content online.


`git clone https://github.com/deepfakes/faceswap.git` still works without login...


You can also see forks without login. https://github.com/0i0/faceswap


Technology is not inherently neutral, and it’s time our industry collectively grows up and stops treating it as such. I hope this is a harbinger of that.


What do you think about Tor? Bittorrent? Bitcoin? How about things like IDA Pro? Many pretty amazing technologies have pretty destructive popular if not primary uses.

I feel these tools are worth having on their own and it seems widely accepted at this point that the tools themselves aren't at fault for their user's actions, even if those actions are the most popular use of the tools.

Personally I'm much more concerned about the ethical actions of internet advertisers and social media giants - those who are making direct ethical decisions that impact their users privacy and access to information.


Tor, Bittorrent, Bitcoin and IDA Pro are all fascinating technology. Guns are also fascinating technology. The world has gun control, but not software control. I agree with your parent comment that we need to stop letting dangerous knowledge and (even worse) ready-to-use tools be spread openly.

As far as I know, at least Hex-Rays screens customers very carefully before selling IDA Pro.


The world very much DOES have software control. There are whole sets of countries that cannot use certain versions of SSL and other popular encryption tools because of software export controls. Yes, the governments of these places can still get access, but they can also buy weapons etc. on the black market.

https://en.wikipedia.org/wiki/Export_of_cryptography


How exactly are you going to stop the spread of knowledge? Has there ever been an example of that which worked?


Attempting to control such a thing would be even more frivolous than the War on Drugs, governments ability to control things is quite limited without taking extremely draconian measures. We don't need a War on Software.

Your stance against so called "dangerous knowledge" worries me - would you encourage banning books about cryptography and software development?


Your examples are technologies for decentralizing power. Those have very clear up and downsides (more freedom, but less enforcement).

These technologies are to the detriment of enforcement. Because enforcement is far from a universal good, these technologies are far from a universal bad. Contrast this with faceswapping, where the upside is far less clear. Same goes for e.g. Stuxnet. It is a beautiful piece of technology, but not really a force for good given that it is widely available.


What about torture devices or biological weapons or even something like Cambridge Analytica's analysis products?


Tools are amoral, they can be used for moral or immoral purposes. Lock picks can be used by a burglar to harm people, or by a locksmith to help people. With a hammer, you can build an orphanage or hit orphans.


Tools sometimes have a clear (im)moral focus. If I make a tool to make fingernail extraction super painful to facilitate torture, you're going to have a hard time spinning it as amoral. Now perhaps it doubles as a good drawing pin (aka thumbtack) extractor, but the tool has an immoral characteristic IMO because it was given it in the design and production stages.

A bio-weapon delivery vessel might be a great essential oil diffuser, but I'd argue that the tool still has an essential immoral quality by virtue of its specialised design.

Does that apply here? I don't think so; I don't think this tech was created for the purpose of fomenting unrest and committing fraud, but maybe I'll be corrected on that.


If you use or threaten to use such a torture device on somebody, that would be immoral. Your use of it immoral, not the device itself. If you create this torture device and put it on display as a novelty, then it's not being used for anything immoral. So yes, even your hypothetical torture device is amoral.

In fact, it's not so hypothetical. The vast majority of iron maidens ever created have only ever been used as novelties, not for torture.


AIUI the iron maiden was created solely as a novelty.

Creating the improved capability for torture is itself immoral IMO.

Worth noting here that a torture device works not only by physical application, but working on the mind as a possibility - just the presence of such a device, or awareness of such a device can create palpable fear. (Like brandishing a weapon, which is often illegal, is still effective, the weapon didn't have to be fired to be used.)


> Creating the improved capability for torture is itself immoral IMO.

I can agree with that. Creating is an act and acts can have morality, hence brandishing at somebody is immoral as well. The inanimate object you create is still amoral though. I might put you on trial for war crimes for advancing the state of the art of torture, then go on to use your torture device in a moral way, by putting it on display to serve as a warning to future generations.

To bring this discussion back to the topic of software, distribution of software is an act and can be judged as immoral or moral. Depending on the nature of your software, it may be immoral to distribute it indiscriminately to the general public. Some people will probably find that contentious, but I think the possibility is definitely there. On the other hand, depending again on the nature of your creation, it might be immoral to not give it to the general public. Imagine if Alexander Fleming was a misanthrope and took the secret of penicillin to the grave.


Putting it on display isn’t using it.


What good will come out of treating technology as a priori coloured as opposed to neutral? Which entity will get to decide what is neutral or not? How will it be prevented from abusing its position?

It seems like an extremely bad idea to me.


I would argue that e-mail, via phishing and other tricks (419 scams), has been used to swindle more people out of their money than any other technology.

Clearly it is a dangerous tool that must be restricted to select users.


Yes it is. The basic definition of technology is applied science. It's as neutral as it gets.

The morality and behavior come from the humans that use it.


What if the morality and behavior comes from the humans who apply it? Is applied applied science still as neutral as it gets?


"Applied" in this context means turning theory into practice

It doesn't refer to who's actually physically using the tech.


[flagged]


Would you please not get personal in arguments here?

https://news.ycombinator.com/newsguidelines.html


Happened to me too, after I published my book criticizing Russia: https://github.com/saniv/text/blob/master/one-life-in-russia...

> Hi Nikita, Thanks for writing in. Your repositories were set to require a login to view following multiple reports from users concerned about their contents. This was done as an alternative to hiding or disabling the content entirely. Thanks, GitHub Support

Russians support freedom of speech that much.


I'm of many minds about this

Censorship: bad.

That's life: The tech will get created by someone else, so censoring does almost nothing. Better to put it out there so we can try to make defenses. Maybe make a bunch of fakes with famous people's permission to spread the word that you can't trust video anymore.

Dangerous: To take an extreme example, imagine you figured out how to make some kind of E=mc^2 bomb, simply enough that anyone with the knowledge could build a device that could blow up a city for $100 and a few hours of time. Would it be OK to upload those instructions to the internet for any disgruntled teen to reproduce?

deepfakes are certainly not at that extreme but we can also clearly imagine the harm they could do as they progress.

There have been several examples recently of people seeming to react to arguably false perceptions. I'm actually thinking of ones from the last 2-3 days, but I'm sure there are plenty of others.


Hey! We love open source and AI, so let's bring AI to communities!

- Community creates a project that makes it impossible to track faces on social media and anywhere online.

yghmmm, no, not that kind of AI


I'm at a loss for what this will actually do in the long run. Seems like someone can just do an unofficial "pirate fork" or dump the code somewhere that doesn't track its users. That would be a virtual certainty if this ends up being an in-demand thing.


My guess is it's to prevent a few of the non-programmer journalists from viewing it and creating a drama storm.


Fair enough, I guess there's reasons, especially from a PR perspective. It just seems like a useless gesture to me.


What's more scary than fake sex tapes is the potential for easy plausible deniability for real criminality.


I honestly think it's incredibly important for tools like this to be publicly available for this exact reason.

If they're not and they're hoarded by tech companies or intelligence agencies then we'll just have a lopsided system where people aren't aware of how capable such technologies are, what their limitations are, how to analyze them to spot issues, etc.

Imagine if only nationstates knew about these sorts of technologies and used them for war or if only certain elites in tech had access to them and used them to implicate competitors in crimes? The technology is out there now - at this point, public knowledge is our best defense - people always question if a contentious image is photoshopped, we want that same level of questioning to happen for videos.

In terms of this being used as an excuse to get someone out of a criminal charge, it might make us take a better look at the chain of custody on video evidence, but I don't think it would invalidate it completely.


There are corrupt governments/police that would be able to use this tech to create fake evidence against people, so it's really important that everyone knows this is possible.


Maybe, to some extent, it will invalidate video proof at some point.

This might seem nice against ever-growing CCTV, but probably state security cameras will be deemed "trustworthy" while all media evidence gathered by private persons is dismissed...


Or false incrimination, false flag operations and so on. You can start a war with a video. That might change to some degree as trust in video decreases, but what trust will take its place?


We could already, and we probably did, fake video to start wars. The tech just makes it easy for everybody, but state actors have been manipulating pictures for a long time.


Looks like tribalism so far. Signalling group membership and other kinds of virtue signalling, probably overriding rationalism and evidence.


Or ruining somebody's life by making the person appear to do something bad, and letting the family stumble upon it.

The potential for manipulation is huge given how many people trust pictures. I know that I don't distrust most pictures I see.


The first consequence of this tech is that we'll have to stop trusting video evidence. What you describe is a use case that will only be valid for a short period of time while society adjusts.

What GP is describing is the long term consequence of not being able to trust video evidence. Now even if you film someone red handed, they can deny it.

Another dire consequence is that the entire archive of all videos ever filmed is now tainted by doubt. Any past politician's speech, any past horror caught on film, etc. can now be said to have been crafted recently.


We've faked photos for decades now. People still trust them.


It's still largely possible to spot most fake images produced by non-specialists (I mean like newspaper or IG images).


Stalin used doctoring 70 years ago with pitiful tech, and it worked. Today even basic Photoshop skills can do much better, and it doesn't matter if it's called out by the right people; it will have done damage. Some people still confuse The Onion articles with real ones, after all.


Or framing people.


Since MS took over GitHub, code search also requires being logged in.


Are you sure about that? I seem to recall them doing this for quite a while. Or maybe that was advanced code searches?


Code search has been restricted at least since September 2016 according to a quick search on HN: https://news.ycombinator.com/item?id=19185835


Quite sure you are right; unless my memory is failing, sitewide GitHub search of codebases required a login. Searching for repo names without logging in was OK though.

Possibly it was only enforced for very generic search queries returning thousands of results, but it has been around a long time; the GitHub acquisition was only in October 2018.


Sitewide code search has been used to find AWS keys and other secrets people have accidentally uploaded. The login requirement is so they can set meaningful rate limits.
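As a toy illustration of why identity helps (not GitHub's actual mechanism): a limiter keyed to an account is far harder to evade than one keyed to an IP address, which can be rotated cheaply.

```python
# Toy token-bucket rate limiter keyed by account name; numbers and names
# are illustrative only.
import time

class TokenBucket:
    def __init__(self, rate_per_sec=2.0, burst=10):
        self.rate, self.burst = rate_per_sec, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets = {}  # username -> TokenBucket; an IP key would be trivially rotated

def allow_code_search(username):
    return buckets.setdefault(username, TokenBucket()).allow()
```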


You could use site:github.com in a google search. Choose your poison.


Alternative:

!gh or !git anywhere in a ddg search will restrict it to github.


I can’t imagine the blowback about censorship would be worth this.

Censoring will just draw more attention and traffic. What’s really unsettling is that GitHub is playing politics with its users, without even informing them or communicating with them. You would think they would have the courtesy to tell the owner.


I don't think they're playing politics, I think they're playing "don't get your arse kicked by the government". Very interesting that someone wants to know the names and emails of everyone accessing OSS deep fakery - I wonder what else there is...


It's interesting that git cloning the repo works even without logging in.


Are your ssh keys being used? Or did you use the https:// endpoint?


The https endpoint works without authentication


Another curiosity is that it still shows up in the search results for anonymous users: https://github.com/search?q=faceswap

Hard to guess at the intention.


Yet another curiosity is that while the repo itself is only accessible to logged-in users, its almost 3,000 forks are not. This is a half-baked GitHub feature.

It's pretty stupid to make it available only to logged-in users, as all it's doing is annoying people. Hope this "feature" stays half-baked; I don't want GitHub to become authwalled like LinkedIn has become.


Heh. They might have achieved the opposite of what they wanted to do. I did not know about this repo, and now I have a local clone, just in case they "censor" it for good.


Streisand effect. I also cloned the repo, just in case - wouldn't have otherwise.


Throughout human history, people have published information in order to discredit or defame their opponents. Sometimes this has worked. But in the majority of cases, what determined the result was the climate around said evidence, not the evidence itself.

I can show verifiable, witnessed audio recordings of a guy saying he likes to grab women by the pussy, but that won't stop that guy from becoming President. Powerful tools don't run societies, people do.


Just to make sure, is the relevant behavior here that going to the URL shows its generic login form when not logged in? Definitely can't be unintentional.


I see why they might have done it, but I suspect we've already seen what's coming next. Limiting access to code used by very notorious and controversial groups. Limiting access to code written by very notorious and outrageous people. Limiting access to code produced by somebody who is sympathetic to very notorious and outrageous people. Limiting access to code produced by somebody who once said something controversial on Twitter and the outrage machine decided to dig it up today. Etc. etc. The slope exists, and we already have seen how slippery it is. It always starts with almost 100% clearly bad cases. It rarely just ends there.

P.S. and yes, before the obligatory "it's a private business" comments come in, I know I can build my own Internet and avoid all this. Thanks for reminding.


My understanding of the deepfakes timeline:

1) One day somebody posts a handful of really obviously faked janky looking porn videos. We all have a good laugh, briefly imagine the possibilities, and then move on

2) Like 3 weeks later, every social media platform explicitly bans this dumb toy that wasn't even any good

3) a year or so passes

4) Now governments are passing dramatic legal bans on these things, and there's all kinds of shady things happening. Like, this is the first instance of this kind of public restriction I have _ever_ seen on github.

So: which major news events were completely fabricated?


SPA = Single Page Application

Notice how that says "Application", not website. It amazes me how people want to make their WordPress site into an SPA simply because someone told them to do so or it was the next "hip" thing to do.

SPAs have their place... migrating a desktop application to the web and making it an SPA makes perfect sense to me.


Wrong parent.


This has been the case since its release a year ago. Very interesting; are they collecting data on who visits it?


I wonder if some AI technologies will need a non proliferation pact similar to nuclear weapons. Similar to how centrifuge technology was a guarded secret, perhaps we should consider something for the most damaging of AI patterns.

While I agree technology isn't inherently good or evil, this feels more harmful than helpful.


Asking for a login in incognito for me.


> We will take a zero tolerance approach to anyone using this software for any unethical purposes and will actively discourage any such uses.

Why not change the license to enforce the use restrictions?


It looks like it was 5 months ago, not long after Microsoft bought GitHub. They are already doing enterprisey things that will make people like GitLab better.


This behavior was around since before the acquisition [3].

[3] https://github.com/deepfakes/faceswap/issues/392


How can management at Microsoft/GitHub be so clueless?

When I think about people I know who have been long-time users of GitHub and how this kind of censorship resonates with them... Oh my.

These early adopters could migrate away very quickly.


Soon, "Sign in with your Microsoft account?"


Not a surprise. GitHub has complied with censorship requests from non-US government agencies around the world since forever.


Comedian Kyle Dunnigan does the best stuff around with face swap: https://www.instagram.com/kyledunnigan1/ He does parody videos of Trump, the Kardashians, and a lot of other celebrities. Really funny stuff. Take a look if you haven't seen it! Everyone is focusing on the negative, but this guy is making something positive, I believe.


Can someone explain the reasoning for requiring login for this particular project?


HN changed the title of this from the original question: “GitHub censors deepfake source code for non logged in users? (Try incognito)”

I have no opinion about whether or not that is a better title, but I thought it should be known that it was modified from its original.


Good you mention it, because I did not understand the intended meaning of the current title ("Faceswap Github repo is public but requires a logged in user"). I scoured the README to see how they managed to restrict a standalone, offline piece of software. Turns out, it's Github who's restricting access.

While censorship may not be an appropriate word, this is weird. Why would Github do something like that, except to force people accessing the repo to leave a trail leading to their PII?


That's a fair point. I've modified it again to make it clearer that it's Github that did the restriction.


A moderator changed it because of the tendentious use of the word 'censors'. Initially we changed it to the title of this issue that addresses the topic: https://github.com/deepfakes/faceswap/issues/392, but it turns out that wording was a bit unclear, so we modified it to be clearer.


I like the latest title better than the one it was first changed to, thanks.


Big deal. Why should we care?

Anyone can fork and mirror it where they want, and make it accessible to anonymous users. Sure, that would "inconvenience" some users, but so what? Github doesn't exist to please every single person out there.

Create your own mirror, and let us know the URL. Don't just whine and try to manufacture outrage if you aren't willing to contribute the resources required to host the code yourself.

I fully support Github's right to use their property (github.com) as they please, because I want the same right for myself.


Github is free to do whatever they want, and everyone else is free to complain about it. Freedom to do something does not mean you're free of the consequences of doing so.


“I may not agree with what you have to say, but I will defend to the death your right to create fake nudes of celebrities.”

— definitely Voltaire, for sure. /s


As a nudist I’ve often thought this DeepFakery might allow people to take a sigh of relief, take off all their clothes if they so please, and dismiss any photographic or video ‘evidence’ of the event as a fake. And wouldn’t that be liberating? I’m reminded of indistinguishable-from-real-but-fake-data called “bogons” from Neal Stephenson’s Cryptonomicon.


I wonder if this might actually have some benefits for victims of revenge or leaked porn, including celebrities in the long run - "oh, that's photoshopped" seems an easy way to put something to bed.


Precisely — but how dumb is it that we actually experience a societal need to put what evidently is pretty widespread ordinary behaviour “to bed” in a plausibly deniable manner?


How soon .docx for Readme?

Works with clone though. Wonder how many more such repos exist?

Do they have a transparency report which includes such action?


Ah yes, Microsoft and their love of and commitment to open source!


As someone else said, this behaviour has been around since before the acquisition. Completely unwarranted cynicism.


I would not be surprised if it turns out Microsoft is behind this.


Well, Microsoft does own GitHub. Who else would be behind it?


I think the implication was that MS directed someone in the GitHub org to do this, rather than someone in the org making that decision independently.

As a Microsoft employee, it would be even more enormously disappointing if this were a top down rather than internal org decision.



