
> Did the 80% accuracy test results take 10 seconds of compute? 10 minutes? 10 hours? 10 days? It's impossible to say with the data they've given us.

The gist of the answer is hiding in plain sight: it took so long, on an exponential cost function, that they couldn't afford to explore any further.

The better their max demonstrated accuracy, the more impressive this report is. So why stop where they did? Why omit actual clock times, or some cost proxy for them, from the report? Obviously, it's because continuing was impractical and because those times/costs were already so large that they'd unfavorably affect how people respond to this report.
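To make that concrete, here's a toy sketch of what an exponential cost curve implies (every number below is hypothetical; none of this is from the report):

    # Toy model: each extra point of accuracy multiplies compute cost
    # by a constant factor. All numbers here are made-up assumptions.
    base_cost_usd = 10.0      # assumed cost of reaching 50% accuracy
    factor_per_point = 1.3    # assumed cost multiplier per accuracy point

    def cost(accuracy_pct):
        """Estimated compute cost (USD) under the exponential assumption."""
        return base_cost_usd * factor_per_point ** (accuracy_pct - 50)

    for a in (60, 70, 80, 90):
        print(f"{a}% accuracy -> ~${cost(a):,.0f}")
    # e.g. 60% -> ~$138, 70% -> ~$1,900, 80% -> ~$26,200, 90% -> ~$361,000

Under assumptions like these, each additional ten points costs more than everything spent so far, which is exactly the regime where you stop and publish what you have.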



See also: them still sitting on Sora seven months after announcing it. They've never given any indication whatsoever of how much compute it uses, so it may be impossible to release in its current state without charging an exorbitant amount of money per generation. We do know from people who have used it that it takes between 10 and 20 minutes to render a shot, but how much hardware is being tied up during that time is a mystery.
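For a sense of the pricing problem, a back-of-envelope sketch (the render time is from those user reports; the GPU count and hourly rate are pure guesses):

    # Back-of-envelope: cost to serve one Sora generation.
    # Only render_minutes comes from user reports; the rest are
    # hypothetical placeholders, since the real hardware is unknown.
    render_minutes = 15         # reported: 10-20 minutes per shot
    gpus_tied_up = 8            # guess: actual GPU count undisclosed
    gpu_hourly_rate_usd = 4.0   # guess: cloud rate for a high-end GPU

    cost_per_shot = (render_minutes / 60) * gpus_tied_up * gpu_hourly_rate_usd
    print(f"~${cost_per_shot:.2f} per shot")  # ~$8.00 under these guesses

Even under these guesses, that's orders of magnitude more per request than text generation, and the real number could be far higher.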


Could well be.

It's also entirely possible they are simply sincere about their fear it may be used to influence the upcoming US election.

Plenty of people (me included) are sincerely concerned about the way even mere still image generators can drown out the truth with a flood of good-enough-at-first-glance fiction.


If they were sincere about that concern, they wouldn't build it at all; if it's ever made available to the public, it will eventually be available during an election. It's not like the 2024 US presidential election is the end of history.


The risk is not “interfering with the US elections”, but “being on the front page of everything as the only AI company interfering with US elections”. This would destroy their peacocking around AGI/alignment while raising billions from pension funds.

OpenAI is in a very precarious position. Maybe they could survive that hit in four years, but it would be fatal today. No unforced errors.


I think the hope is that by the next presidential election no one will trust video anymore anyway, so the new normal won't be as chaotic as if it dropped in the middle of an already contentious election.

As for not building it at all, it's an obvious next step in generative AI models; if they don't make it, someone else will anyway.


Wouldn't it be nice if we came full circle and went to listen to our politicians live, because anything else would be pointless?


I'd give it about 20 years before humanoid robots can be indistinguishable from real humans without an x-ray or similar. Covering them in vat-grown cultures of real human skin etc. is already possible, but the robots themselves aren't good enough to fool anyone.


Unfortunately that would mean two things: firstly, only swing states would get to hear what politicians are actually saying, and secondly, to reach everyone the primary process would have to start even earlier so the candidates would have a chance to give enough speeches before early voting.


Even if Kamala wins (praise be to god that she does), those people aren't just going to go away until social media does. Social media is the cause of a lot of the conspiracy theory mania.

So yeah, better to never release the model...even though Elon would in a second if he had it.


Doesn't strike me as the kind of principle OpenAI is willing to slow themselves down for, to be honest.


But this cat got out of the bag years ago, didn't it? Trump himself is using AI-generated images in his campaign. I'd go even further: the more fake images appear, the faster society as a whole will learn to distrust anything by default.


Personally I'm not a fan of accelerationism


Nothing works without trust; none of us is an island.

Everyone has a different opinion on what threshold of capability is important, and what to do about it.


Why did they release this model then?


Their public statements say that the only way to safely learn how to deal with the things AI can do is to show what it can do and get feedback from society:

"""We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize “one shot to get it right” scenarios.""" - https://openai.com/index/planning-for-agi-and-beyond/

I don't know if they're actually correct, but it at least passes the sniff test for plausibility.


Also, the Sora videos were proven to be modified ads. We still need to see how it performs first.


> Also, the Sora videos were proven to be modified ads

Can't find anything about that, you got a link?



Oh, so not the actual demo videos OpenAI shared on their website and twitter.


We still need to see those demos in action though. That's the big IF everyone is thinking about.


Sure, but "Also, the Sora videos were proven to be modified ads" is demonstrably false, both for the demos OpenAI shared and the artist-made ones.


https://www.youtube.com/watch?v=9oryIMNVtto

Isn't this balloon video shared by OpenAI? How is this not counted? For the others I don't have evidence, but this balloon video case is enough to cast doubt.


But there are lots of models available now that render much faster and are better quality than Sora.



