Important to note that it is also possible for a technology to have more potential for abuse than good.
Sure, you can come to a philosophical place where nothing is good or bad, or the good is perfectly balanced by the bad, but if we're looking at increasing freedom, peace and trust, it's hard to see how the upside of this tech is equivalent to its potential for abuse.
The best argument might be that eventually no one will trust video.
As such, tools like this just prove the conclusion: No one should trust video.
Sure, there was a "requirement," but I wouldn't really consider it legitimate. It took concerted effort to "earn" any grade lower than an A. I wasn't personally acquainted with any other student who seriously considered the social and ethical implications of computers during or after taking a 400 level course titled "Social and Ethical Implications of Computers."
On the bright side, I heard through the grapevine that rigor, or at least workload for the course, has increased since I took it.
Although anecdotal and perhaps specific to my institution, every recollection of my university career makes me feel deeply thankful that I ended up eventually pursuing a double major in mathematics. The quality and dedication of the instructors wildly surpassed that of the CS department. Mathematics professors were there to share knowledge. CS professors were pressured by hiring statistics into "preparing us for the workforce" by essentially quizzing specific interview questions like "what's the difference between interfaces and subclasses in Java."
However, in the Quebec system (and presumably the equivalent in other provinces) we have 3 philosophy classes (humanities, world-view etc) in college/cégep (somewhat mandatory before university). I also had a mandatory ethics class since I completed a technical degree, which I also did not appreciate at the time, but it helped me develop more critical thinking later on.
You can't go overseas to get a bridge built across the river in your town. You can't get a foreign barrister to represent you in court in your country.
You can easily get a new social media website or IoT server built and hosted in any country you want.
The scenarios where the effects of bad ethical decisions most often make the news are not things that lend themselves well to local licensing regulations, the way embedded avionics, medical devices, or internal banking systems do. Problems do happen in those spaces, but they are rare.
None of this even touches on usability and the difficulty of restricting tool use to the "good guys". The jslint licence got a lot of stick for that.
You can build/host things anywhere, but something centered around “if you generate profits in the USA, and hold the data of Americans, these requirements exist for your system” makes sense. In particular, liability needs to rest somewhere, such that someone gives enough of a shit to do things right.
A civil engineering firm could outsource the design of a bridge anywhere, but in the end, somebody’s neck is on the line if it fails.
This is exactly the stance taken by GDPR. Most devs don't like it from what I've seen on HN. (Saying this without any judgment. I am personally very much in favor of what GDPR does.)
This sort of thing already gets dodged by international companies who "generate profits" in countries with lower taxes and "incur costs" in countries with higher taxes.
>" and hold the data of Americans"
GDPR? System requirements is one thing, requiring a licensed engineer from multiple jurisdictions is another. Bear in mind that because software changes constantly, this has to be a senior staff position, not a one-time sign-off like it could be for a bridge.
I assumed all universities had a similar program; if they don't, students are really missing out.
The main problem is there is not a certifying organization that standardizes ethics (and includes licensing). A single developer refusing something on ethics is pretty meaningless. A company will just find someone else who will do it.
Isn't it just subtle propaganda? Good, bad, just, unjust - what's ethical in China, for example, or Saudi Arabia is not the same as what's ethical in the US or even Europe.
Nevermind the thought of relatively centralized institutions acting as arbiters of ethics and, by extension, core aspects of culture.
Cultural artifacts are quite teachable; that's generally how they are transmitted. Why would that be difficult?
How do you decide which brand of ethics to teach? Especially if your class is represented by a range of nationalities?
Look at this thread and how oblivious everyone is to the variability of the definition of ethics. We take the subject as some kind of absolute, but really we're just viewing the rest of the world through privileged western lenses.
There are two common options:
(1) broad multi-system survey rather than a single system or narrow set, and
(2) teach the system or systems most connected to the target legal system (for cases where ethics is taught largely to create a safety buffer around legality and to anticipate where the law has not yet settled).
When you know why you want to teach ethics, it's fairly trivial to choose the approach.
Do you earnestly believe that Harvard University will be paralyzed by the choice between a modern western ethics system in which women have all the rights of men, or (to use your example) a Saudi ethics system in which women are property?
Most people in the real world are not so ridiculous that they allow themselves to be paralyzed by knowledge of ethical relativism.
The same reason that government shouldn't be legislating religion. I went to college to learn science, mathematics, liberal arts, so that I could learn to make decisions for myself, not to be indoctrinated with someone else's idea of what is right and wrong.
Edit: imagine a Saudi funded institution in the U.S. offering courses on ethics. Would you be ok with that? Why are you even sure that the particular ethics courses at Harvard agree with your own cultural norms? When you mix subjectivity with education, you get propaganda. Sure, some of it is unavoidable because ultimately there are only so many topics one has time to learn and all educators/authors are biased humans, but the topic of ethics does not even allow for the possibility of objective treatment. It does not belong in school.
Here's a quick illustrative example: what do you think passed for ethical to the average citizen at the height of Nazi Germany? Or Cold War era Soviet Union? To CCP members during the Great Leap Forward? Supporters of Duterte? Liberals vs Conservatives in the U.S.?
So which brand is your University picking and choosing to offer in class? The whole idea is dangerous - colleges should not be in the business of teaching ethics, because in order to do so they must decide on what is ethical.
That's bullshit. Just because some places have different norms doesn't mean those norms are ethical. Just because some places have a norm of mistreating some people doesn't mean those people suddenly don't feel mistreated.
I do agree with your premise, though, that programmers have a tendency to not think about ethics and should be held to some sort of code in the same way doctors and lawyers are supposed to be.
It is worth noting that this is common to all first degrees, and may or may not be tailored to your particular subject area, so it's often up to the learner to work out how to apply the theory to their own scenarios.
I totally support the idea that all professionals need to be ethical and moral, and I definitely try to do so in my life, but I've given up hope that society at large is interested in this. I think individuals generally are, but any company, once it becomes powerful, seems to also become evil.
For instance, "don't build stuff that can be abused" is not a good framework for ethical decision-making. "Abuse" has no clear definition, and "can be" is an incredibly high standard. By this standard, the hammer should never have been invented. It's incredibly easy to hurt someone with one! There are no safeguards whatsoever!
Man if only we could make all our political science grads take an ethics course, then there'd be no more war!
If an ethics course teaches students which decisions actually have an ethical component in them, that's already a huge win: Whenever they encounter such a decision, they will be more likely to notice and ponder the ethical implications contained therein. This does not necessarily prescribe a particular set of ethical rules that they have to adhere to.
Edit: see also https://www.acm.org/code-of-ethics
So maybe it's best this way. Computer scientists will still do bad stuff, but at least they'll be embarrassed about it when caught, rather than come up with clever justifications.
I first thought it was a lot of philosophical bullshit about good and bad, but the history part of things was an eye opener.
Like the census data being abused by the Nazis to exterminate Jews. It always starts out with good intentions.
Our job is to prevent the worst case. The worst case eventually will happen.
Which is why it is crucial everyone realizes how easy it really is to create these fakes, before the masses are duped in favor of the next war or genocide by these techniques.
Obama talked about it this afternoon. He said "This is bad, blah blah oh no." Of course, you don't believe me, because I made this up. That doesn't preclude you from believing written quotes, given the right chain of trust. It's been great to have formats like video that didn't require a chain of trust for a while, but if that time has passed, there's nothing we can do. It's hard, but in the context of text, where quotes have been easy to fake for ages, we have dealt with it. It's good for everyone to be on the same page.
E.g., we've all seen the close shots used by journalists to magnify a so-so event and make it newsworthy. Yet when they see one, many people still consider it "news". We all know which politician lied last year. Yet when he speaks again, many still listen. We all know which company abused consumers. Yet when a new product is advertised, many still buy.
It's possible a video doesn't reveal the appropriate context. (e.g. what happened before the start of the video, and maybe what happened afterwards; or what's happening out of view).
That said, that isn't inherent to video. (And, sure, "swapping faces" doesn't lead to a more accurate portrayal).
In other words, it will happen so the best thing we can do is acknowledge this and prepare for it.
Check out the "CaptainDisillusion" channel on YouTube; it's full of examples of people deceiving others using video editing software. To my knowledge none of the examples he's talked about have used face swapping.
I do believe that some people will fall for them, but the damage won't be any worse than when people fall for shopped images and phishing emails. It's just something we have to deal with.
I mean they can accept just about any kind of special effect in a movie as being possible, but editing videos to swap faces is a huge stretch?
This is never the case independent of context.
You can imagine in a major famine that people may start killing each other over scraps of food. In that context a kitchen knife becomes more likely to be used as a murder weapon than to prepare the food that nobody actually has.
But nothing about the knife has changed, it's the context that has changed. And you don't solve the problem by banning cutlery and every other thing with a point or some heft, you solve it by resolving the famine.
You don't solve deepfakes by restricting information, you do it by adapting to their existence. Because they're not going away.
Github has an infamous history of imposing their feelings on projects they don't like.
Can you elaborate on this part?
I don't remember seeing something like this before.
 - https://github.com/FeministSoftwareFoundation/C-plus-Equalit...
 - https://github.com/TheFeministSoftwareFoundation/C-plus-Equa...
“You agree that you will not under any circumstances upload, post, host, or transmit any content that:
is unlawful or promotes unlawful activities;
is or contains sexually obscene content;
is libelous, defamatory, or fraudulent;
is discriminatory or abusive toward any individual or group;
There's lots more examples of their employees getting triggered and offended by various things and then arbitrarily banning or censoring projects.
Honestly it's a lost battle to try to censor hurtful project names. At best you can moderate US-centric ones.
The alternative meaning of "mentally-disabled person" is derived from this meaning, as their brain is "slow" / "delayed". That repo was absolutely using it in this latter sense - "WebM for retards".
Now, I daresay "retard" is considered worse than "idiot".
There is a major logical flaw here. When you provide a service without discrimination except so much as required by law, you are in no way connected to the usage of your product. You facilitate the usage as a business - end of story. It's only when you begin to selectively censor or target projects for subjective reasons of your choosing, that you end up tying yourself to the content of consumers. Because of your own actions, you now implicitly advocate or support everything which you don't discriminate against.
Imagine for instance a pizza delivery company started to discriminate against who they delivered to. This would be perfectly legal, so long as the discrimination was not based on the handful of protected classes. And so they generally decided to stop delivering to people they considered subjectively bad. Well now they have a huge problem - because anytime they delivered to somebody, who somebody else thought was bad, it'd be an implicit endorsement of them.
This is why entering into the discrimination game to begin with is a fool's errand, even if you think things such as censorship are desirable. Keep in mind we're still in the baby steps of the internet and 'access theory'. For thousands of years we thought it was a good idea for reading and writing to be reserved exclusively for the elite of society - clergy and a handful of aristocracy. YouTube, by contrast, did not even exist a mere 15 years ago. I imagine the future will look back on the times of today with some degree of bemusement. Frankly it's quite hard to not be bemused while living through this mess!
Unlike, say, the phone network, it doesn't actually cost you anything more to throw up a git repository on any number of free-speech-supporting websites, including many that offer substantially the same features as GitHub. You can still reach the same people without dragging your own cable half way across the planet - you just don't get to steal someone else's reputation in order to make it easier to do so.
The other side of things is that many websites actually want (algorithmic) editorial control over what their users wind up seeing because that's more profitable for them, and if they want that, they're definitely in a position where choosing to promote content is a direct reflection on the company even by your standards. GitHub is, as far as I'm aware, not one of these companies though.
- You don't have a god-given right to take advantage of a company's food to make you more healthy[...]
This is basically the argument you're making. I agree with the premise: We don't have a right to take advantage of a company or forcing them to host anything, but we can also evaluate their quality and their level of professionalism, and choose one company or the other based on that. This thread is just pointing out that GitHub is not trustworthy and they lack professionalism when it comes to deciding what can be accepted or not in their platform.
If the thing OpenAI made isn't interesting enough as a discovery without its data (because it's all arbitrary anyway), but is very useful to spammers as a piece of code, OpenAI has truly achieved the exact opposite of the goals they were aiming for.
I mean the Faceswap people have the same problem. They couldn't give a shit about porn. But that's what people used it for.
Human ingenuity will not be contained like this. I'm almost certain that somewhere between 10-100 people who saw the OpenAI censored release saw it as a challenge for them to recreate it on their own.
This is fine. Maybe this makes things significantly more chaotic in the short term. But we have to take the long view on this. Ten years from now this tech will be seen as a joke compared to whatever they will have. It's time to start preparing for that.
There's probably a word for this sentiment that I'm not aware of.
What this might usher in is the era of cryptographically signed news articles. Not just credibility but verifiability.
Actually, how about cryptographically signing videos as they get written on the recording device?
Maybe there even are ways to sign data so that the integrity can get validated on shorter segments, so that clips can be cut. Write a signature every 5 seconds for the past 5 seconds?
Edit: This exists and the term for it is 'video authentication'.
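The per-segment signing idea above can be sketched with a chained MAC: each 5-second chunk's tag also covers the previous tag, so dropped or reordered segments are detectable while a clip cut at a segment boundary can still be verified from its first intact tag onward. This is a toy illustration only; `DEVICE_KEY` is a made-up placeholder, and a real camera would use an asymmetric signature with a key held in secure hardware so that verifiers never need the secret.

```python
import hashlib
import hmac

# Hypothetical device secret; a real recorder would keep an asymmetric
# signing key in secure hardware instead of a shared symmetric key.
DEVICE_KEY = b"example-device-secret"

def sign_segments(segments):
    """Return one MAC per segment, chained to the previous segment's tag.

    `segments` is an iterable of byte strings, one per fixed-length
    (e.g. 5-second) chunk of encoded video.
    """
    tags = []
    prev_tag = b""
    for segment in segments:
        mac = hmac.new(DEVICE_KEY, prev_tag + segment, hashlib.sha256)
        prev_tag = mac.digest()
        tags.append(prev_tag)
    return tags

def verify_segment(segment, prev_tag, tag):
    """Check one segment against its tag and the preceding tag."""
    expected = hmac.new(DEVICE_KEY, prev_tag + segment, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

A verifier holding `tags[i-1]` can then check segment `i` in isolation, which is what makes cutting clips at segment boundaries workable.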
That wouldn't prove much besides that the person sharing the video had access to the device's private key. I think the best you can do is timestamp the video by uploading a hash of it to a blockchain, but even then that only proves the video existed sometime before that instant.
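The timestamping half of that is straightforward: you hash the finished file and publish the digest somewhere append-only. A minimal sketch (the `video_fingerprint` helper is hypothetical, and actually anchoring the digest to a blockchain or timestamping service is left out):

```python
import hashlib

def video_fingerprint(path, chunk_size=1 << 20):
    """SHA-256 digest of a video file, read in chunks to bound memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Publishing that hex digest proves the exact bytes existed before the publication time, but nothing about who recorded them or whether they were edited first, which is the limitation being pointed out here.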
Huh, I'd never even considered that you could do that.
- a fact that has been distorted to be interpreted in a 180 degree way (Americans paying tariffs to the US Gov for buying Chinese goods = Trump saying "China is finally paying us!"), or
- a total untruth slipped in between valid concerns (like the fake Russian Black Lives Matter pages piggybacking off of civil rights abuses mentioned by the American Black Lives Matters campaigns), or just
- incitement of uncertainty in more or less solved problem domains (anti-vaxxers)
If you are interested in learning about more (failed attempts at) verified news platforms, though, try looking up verrit, and pravduh
Proof: When searching "github code search login" on hn.algolia.com, it turns up this HN thread from September 2016, nearly 2 years before MS bought GitHub: https://news.ycombinator.com/item?id=12581068
> We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems.
Excerpt from the recent OpenAI blogpost about the GPT-2 text models. It seems valid, since releasing the code (or, worse, a web app) would let anyone easily create malicious content online.
I feel these tools are worth having on their own, and it seems widely accepted at this point that the tools themselves aren't at fault for their users' actions, even if those actions are the most popular use of the tools.
Personally I'm much more concerned about the ethical actions of internet advertisers and social media giants - those who are making direct ethical decisions that impact their users' privacy and access to information.
As far as I know, at least Hex-Rays screens customers very carefully before selling IDA Pro.
Your stance against so called "dangerous knowledge" worries me - would you encourage banning books about cryptography and software development?
These technologies are to the detriment of enforcement. Because enforcement is far from a universal good, these technologies are far from a universal bad.
Contrast this with faceswapping, where the upside is far less clear. Same goes for e.g. Stuxnet. It is a beautiful piece of technology, but not really a force for good given that it is widely available.
A bio-weapon delivery vessel might be a great essential oil diffuser, but I'd argue that the tool still has an essential immoral quality by virtue of its specialised design.
Does that apply here? I don't think so, I don't think this tech was created for the purpose of fomenting unrest and committing frauds, but maybe I'll be corrected on that.
In fact, it's not so hypothetical. The vast majority of iron maidens ever created have only ever been used as novelties, not for torture.
Creating the improved capability for torture is itself immoral IMO.
Worth noting here that a torture device works not only by physical application, but working on the mind as a possibility - just the presence of such a device, or awareness of such a device can create palpable fear. (Like brandishing a weapon, which is often illegal, is still effective, the weapon didn't have to be fired to be used.)
I can agree with that. Creating is an act and acts can have morality, hence brandishing at somebody is immoral as well. The inanimate object you create is still amoral though. I might put you on trial for war crimes for advancing the state of the art of torture, then go on to use your torture device in a moral way, by putting it on display to serve as a warning to future generations.
To bring this discussion back to the topic of software, distribution of software is an act and can be judged as immoral or moral. Depending on the nature of your software, it may be immoral to distribute it indiscriminately to the general public. Some people will probably find that contentious, but I think the possibility is definitely there. On the other hand, depending again on the nature of your creation, it might be immoral to not give it to the general public. Imagine if Alexander Fleming was a misanthrope and took the secret of penicillin to the grave.
It seems like an extremely bad idea to me.
Clearly it is a dangerous tool that must be restricted to select users.
The morality and behavior come from the humans that use it.
It doesn't refer to who's actually physically using the tech.
That's Life: The tech will get created by someone else, so censoring does almost nothing. Better to put it out there so we can try to build defenses. Maybe make a bunch of fakes with famous people's permission to spread the word that you can't trust video anymore.
Dangerous: To take an extreme example, imagine you figured out how to make some kind of E=mc^2 bomb, such that anyone with the knowledge could build a device that could blow up a city for $100 and a few hours of time. Would it be ok to upload those instructions to the internet for any disgruntled teen to reproduce?
Deepfakes are certainly not at that extreme, but we can also clearly imagine the harm they could do as they progress.
There have been several examples recently of people seeming to react to arguably false perceptions. I'm actually thinking of ones in the last 2-3 days but I'm sure there are plenty of others.
- Community creates a project that makes it impossible to track faces in social media and anywhere online.
Hmmm, no, not that kind of AI.
If they're not and they're hoarded by tech companies or intelligence agencies then we'll just have a lopsided system where people aren't aware of how capable such technologies are, what their limitations are, how to analyze them to spot issues, etc.
Imagine if only nationstates knew about these sorts of technologies and used them for war or if only certain elites in tech had access to them and used them to implicate competitors in crimes? The technology is out there now - at this point, public knowledge is our best defense - people always question if a contentious image is photoshopped, we want that same level of questioning to happen for videos.
In terms of this being used as an excuse to get someone out a criminal charge, it might make us take a better look at the chain of custody on video evidence but I don't think it would invalidate it completely.
This might seem nice against ever growing CCTV, but probably state security cameras will be "trustworthy" and all media evidence gathered by private persons will be dismissed...
The potential for manipulation is huge given how many people trust pictures. I know that I don't distrust most pictures I see.
What GP is describing is the long term consequence of not being able to trust video evidence. Now even if you film someone red handed, they can deny it.
Another dire consequence is that the entire archive of all videos filmed since the beginning are now tainted by doubt. Any past politician speech, any past horror caught on film, etc. can now be said to have been crafted recently.
Possibly it was only enforced for very generic search queries returning thousands of results, but it has been around a long time; the GitHub acquisition was only in October 2018.
!gh or !git anywhere in a ddg search will restrict it to github.
Censoring will just draw more attention and traffic. What’s really unsettling is that GitHub is playing politics with its users, without even informing them or communicating with them. You would think they would have the courtesy to tell the owner.
Hard to guess at the intention.
It's pretty stupid to make it only available to logged in users as all it's doing is annoying people. Hope this "feature" stays half-baked, don't want GitHub to become authwalled like LinkedIn has become.
I can show verifiable, witnessed audio recordings of a guy saying he likes to grab women by the pussy, but that won't stop that guy from becoming President. Powerful tools don't run societies, people do.
P.S. and yes, before the obligatory "it's a private business" comments come in, I know I can build my own Internet and avoid all this. Thanks for reminding.
1) One day somebody posts a handful of really obviously faked janky looking porn videos. We all have a good laugh, briefly imagine the possibilities, and then move on
2) Like 3 weeks later, every social media platform explicitly bans this dumb toy that wasn't even any good
3) a year or so passes
4) Now governments are passing dramatic legal bans on these things, and there's all kinds of shady things happening. Like, this is the first instance of this kind of public restriction I have _ever_ seen on github.
So: which major news events were completely fabricated?
Notice how that says "Application", not website. It amazes me how people want to make their WordPress site into an SPA simply because someone told them to do so or it was the next "hip" thing to do.
SPAs have their place: migrating a desktop application to the web and making it an SPA makes perfect sense to me.
While I agree technology isn't inherently good or evil, this feels more harmful than helpful.
Why not change the license to enforce the use restrictions?
When I think about people I know who have been long-time users of GitHub and how this kind of censorship resonates with them... Oh my.
These early adopters could migrate away very quickly.
I have no opinion about whether or not that is a better title, but I thought it should be known that it was modified from its original.
While censorship may not be an appropriate word, this is weird. Why would Github do something like that, except to force people accessing the repo to leave a trail leading to their PII?
Anyone can fork and mirror it where they want, and make it accessible to anonymous users. Sure, that would "inconvenience" some users, but so what? Github doesn't exist to please every single person out there.
Create your own mirror, and let us know the URL. Don't just whine and try to manufacture outrage if you aren't willing to contribute the resources required to host the code yourself.
I fully support Github's right to use their property (github.com) as they please, because I want the same right for myself.
— definitely Voltaire, for sure. /s
Works with clone though. Wonder how many more such repos exist?
Do they have a transparency report which includes such action?
As a Microsoft employee, it would be even more enormously disappointing if this were a top-down rather than an internal org decision.