
Removing these content areas to mitigate misuse is a good thing and worth the trade-off.

Companies like OpenAI have a responsibility to society. Imagine the prompt “A photorealistic Joe Biden killing a priest”. If you asked an artist to do the same, they might say no. Adding guardrails to a machine that can’t make ethical decisions is a good thing.




This just means that sufficiently wealthy and powerful people will have advanced image-faking technology, and their fakes will be seen as more credible because creating fakes like that "isn't possible" for mere mortals.


In my view, the problem with that argument is that large actors, such as governments or large corporations, can train their own models without such restrictions. The knowledge to train them is public. So rather than preventing bad outcomes, these restrictions just confine the capability to an oligopoly.

Personally, I fear more what corporations or some governments can do with such models than what a random person can do generating Biden images. And without restriction, at least academics could better study these models (including their risks) and we could be better prepared to deal with them.


I think the issue here is the implied assumption that OpenAI thinks their guardrails will prevent harm from this research _in general_, when in reality it's only OpenAI's direct involvement that's prevented.

Eventually somebody will use the research to train the model to do whatever they want it to do.


Sure, but does opening that level of manipulation up to everyone really benefit anyone either? You can't really fight disinformation with more disinformation; that just seems like the seeds of societal breakdown at that point.

Besides that, these models are massive. For quite a while, the only people even capable of making them will be those with significant means. That will be mostly governments and corporations anyway.


Oh, no, the society! A picture of Joe Biden killing a priest!

Society didn't collapse after Photoshop. "Responsibility to society" is such a catch-all excuse.


You missed half of my note. An artist can say "no". A machine cannot. If you lower the barrier and allow anything, then you are responsible for the outcome. OpenAI rightfully took a responsible angle.


Yes, but who cares who's responsible? Are you telling me you're going to find the guy who photoshopped the picture and jail him? Legally that's possible; realistically it's a fiction.

They did this to stop bad PR, because some people are convinced that an AI making pictures is somehow dangerous to society. It is not. We have deepfakes already. We've had Photoshop for ages. There is no danger, and even if there were, the cat's out of the bag already.

Reasonable people nowadays already know to distrust photographic evidence that is not corroborated. The ones who don't would believe it without the photo regardless.


In general, under US law it wouldn't be legally possible to jail a guy for Photoshopping a fake picture of President Biden killing a priest. Unless the picture also included some kind of obscenity (in the Miller-test sense) or a direct threat of violence, it would be classified as protected speech.


There are, and will be, a million ways to create a photorealistic picture of Joe Biden killing a priest using modern tools, and absolutely nothing will happen if someone does.

We've been through this many times: with books, with movies, with video games, with the Internet. If it *can* be used for porn / violence etc., it will be, but it won't be the main use case and it won't cause some societal upheaval. Kids aren't running around pulling cops out of cars GTA-style, the Internet is not ALL PORN, there is deepfake porn but nobody really cares, and so on. There are so many ways to feed those dark urges that censorship does nothing except prevent normal use cases that overlap with the words "violence" or "sex" or "politics" or whatever the boogeyman du jour is.


No. Russian society is pretty much collapsing right now under the weight of lies. Currently they are using "it's a fake" to deny their war crimes.

Cheap and plentiful is substantively different from "possible". See, for example, OxyContin.


You know what else is being used to deny war crimes? Censorship. Do you know how that's officially described? "Safety".


Russia has... a history of denying the obvious. I come from an ex-communist satellite state, so I would know. The majority of the people know what's happening. There's a rather new joke from COVID: the Russians do not take Moderna because Putin says not to trust it, and they do not take Sputnik because Putin says to trust it.

Do not be deluded that our own governments are not manufacturing the narrative too. The US has committed just as many war crimes as Russia. Of course, people feel differently about blowing up hospitals in Afghanistan than in Ukraine. What the Afghan people think about that is not given much consideration.


Society is turning to utter dogshit and tearing itself apart merely through social media. The US almost had a coup because of organized hatred and lies spread through social media. The far right's rise is heavily linked to lies spread through social media, throughout the world.

This AI has the potential to fully automate what used to be hours of Photoshop work, leading to an even worse state of things. So, yes, "responsibility to society" is absolutely a thing.


> The US almost had a coup because of organized hatred and lies spread through social media.

But notice how none of these deepfaking technologies were actually necessary for that.

People believe what they want to believe, regardless of the quality of the evidence provided.

The scaremongering idea of deepfakes and what they could do has been weaponized in this information war far more than the actual technology has.

I think this technology should develop unrestricted so society can learn what can and can't be done, and build an understanding of what other factors should be taken into account when assessing the veracity of images and recordings (like multiple angles, quality of the recording, sync with sound, neural fake-detection algorithms) for the cases when it actually matters what words someone said and what actions they were recorded doing. Which matters less and less these days: nobody cared what Trump was doing and saying, nobody cares about Biden's mishaps, and nobody cares what comes out of Putin's mouth or how he chooses his green-screen backgrounds.


Are you of the opinion that we should let everyone get automatic rifles because, after all, pistols exist? Because that is the exact same line of thought.

> People believe what they want to believe. Regardless of quality of provided evidence.

That is a terrible oversimplification of the mechanics of propaganda. The entire reason for the movements that are popping up is actors flooding people with so much info that they question absolutely everything, including the truth. This is state-sponsored destabilisation on a massive scale, and it's the result of just shitty news sites and text posts on Twitter. People already don't double-check any of that. There will not be an "understanding of assessing veracity"; there is already none for things that are easy to check. You could post that the US elite actively rapes children in a pizza place and people will actually fucking believe you.

So, no. Having this technology available for _literally any purpose_ would be terribly destructive for society. You can find violence and Joe Biden hentai without needing to generate it automatically through an AI.


I'm sorry; I believe I wasn't direct enough, which led you to produce a metaphor I have no idea how to interpret.

Let me state my opinion more directly.

I'm for developing as much deepfake technology in the open as possible, so that people can internalize that every video they see, every message, every speech should initially be treated as fabricated garbage unrelated to anything that actually happened in reality, because that's exactly what it is until additional data shows up: geolocation, footage from different angles, and such.

Even if most people manage to internalize just the first part and assume everything is always fake news, that is still great, because it counters propaganda to an immense degree.

The power of propaganda doesn't come from flooding people with a chaos of fakery. It comes from constructing a consistent message by whatever means necessary and hammering it into the minds of your audience for months and years, while simultaneously isolating them from any material, real or fake, that contradicts your vision. Look no further than brainwashed Russian citizens and Russian propaganda, which has successfully influenced hundreds of millions for decades without even a shred of deepfake technology.

The problem of the modern world is not that no one believes the actual truth, because it doesn't really matter what most people believe; only the rich influence policy decisions. The problem is that people still believe there is some truth, which makes them super easy to sway into believing that what you are saying is true, and to weaponize with nothing more than a charismatic voice and a consistent message crafted to touch the spots in people that have remained the same at least since World War II, and most likely since time immemorial.

And the "elite" who actually runs this world, will pursue tools of getting the accurate information and telling facts from fiction no matter the technology.


The South Park creators have jumped on this occasion:

"Sassy Justice with Fred Sassy" (reporting on Deep Fakes) :

https://youtube.com/watch?v=9WfZuNceFDM



