Hacker News

I love that almost all the responses to your question are, "No! Bad idea!"

It's a great idea. We want more than an open world. We want an open story.

Open-story games will be the next genre to dominate the gaming industry, once someone figures them out.




From 2018 - https://www.erasmatazz.com/library/interactive-storytelling/...

"There’s no question in my mind that such software could generate reasonably good murder mysteries, action thrillers, or gothic romances. After all, even the authors of such works will tell you that they are formulaic. If there’s a formula in there, a deep learning AI system will figure it out.

Therein lies the fatal flaw: the output will be formulaic. Most important, the output won’t have any artistic content at all. You will NEVER see anything like literature coming out of deep learning AI. You’ll see plenty of potboilers pouring forth, but you can’t make art without an artist.

This stuff will be hailed as the next great revolution in entertainment. We’ll see lots of prizes awarded, fulsome reviews, thick layers of praise heaped on, and nobody will see any need to work on the real thing. That will stop us dead in our tracks for a few decades."


There are really only, like, seven basic plots: man v man, man v nature, man v self, man v society, man v fate/god, man v technology... so we should probably just stop writing stories anyway


If there's an AI that can reliably come up with interesting and true new things to say about the human condition, I'm throwing in the towel.

Until then, I'll stick with human art.


It would not surprise me if most people could not tell whether a story about the human condition is human- or AI-generated. Excluding actual visual artists, who have specific context from the craft, most people already can't tell AI art from human art when put to a blind test.


As far as I know, AI art can't really follow instructions, so it's actually very, very easy to tell the difference if you aren't biasing the test by allowing vague instructions that permit random results to count as acceptable.

"Here's a photo of me and my wife, draw me and my wife as a cowboy in the style of a Dilbert cartoon shooting a gun in the air" can't be done by AI as far as I know, which is why artist are still employed throughout the world.


Last time I checked, GenAI wasn't able to handle multiple people, but giving Midjourney a picture of yourself and asking it to "draw me as a cowboy in the style of a Dilbert cartoon shooting a gun in the air" is totally a thing it will do. Without a picture of you to test on, we can't debate how much the image looks like you, but here's one of Jackie Chan: https://imgur.com/a/6cBrHWd


Are you saying you can upload a picture to Midjourney that it will use as a reference?

Jackie Chan is not a good example because he's a famous person the model may have been trained on. I used myself as an example because I would be novel to the AI; it would not be able to rely on its training to draw me, as I am not famous.


Yes. Here is a video tutorial where a cat is used as a reference image:

https://youtu.be/9dOECM76l_c?t=45


When AI can make a movie as good as Bottoms, Lady Bird, etc. I'll accept that we're beat.

For now though, it's very good at making things similar to what's already been made.


IMO this will be the differentiating feature for the next generation of video game consoles (or the one after that, if we’re due for an imminent PS6/Xbox2 refresh). They can afford to design their own custom TPU-style chip in partnership with AMD/Nvidia and put enough memory on it to run the smaller models. Games will ship with their own fine-tuned models for their game world, possibly multiple models to handle conversation and world-building, inflating download sizes even more.

I think fully conversational games (voice to voice) with dynamic story lines are only a decade or two away, pending a minor breakthrough in model distillation techniques or consumer inference hardware. Unlike self-driving cars or AGI, the technology seems to be there; it’s just so new that no one has tried it. It’ll be really interesting to see how game designers and writers wrangle this technology without compromising fun. They’ll probably have to run a full agentic pipeline with artificial play testers 24/7 just to figure out the new “bugspace”.
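To make the architecture above concrete, here is a minimal, purely hypothetical sketch of what "a shipped model driving dynamic dialogue" could look like: an NPC dialogue call routed through a local model, with a separate world-state layer so the story the model tells stays consistent with the rest of the game. The model is a stub function here (a real game would run a distilled LLM on-device); all names and the "facts" schema are invented for illustration.

```python
# Hypothetical sketch, not any real engine's API. A stubbed "local model"
# produces NPC dialogue, and a world-state layer records what the dialogue
# implied so other systems (quests, pathing) see the same story.

def local_model(prompt: str) -> str:
    # Stand-in for an on-device distilled model's text completion.
    if "guard" in prompt:
        return "Halt! The east gate is closed tonight."
    return "..."

class WorldState:
    def __init__(self):
        self.facts = {"east_gate_open": True}

    def apply(self, npc_line: str):
        # Crude consistency layer: dialogue can mutate world facts,
        # so later dialogue and game logic can't contradict it.
        if "gate is closed" in npc_line:
            self.facts["east_gate_open"] = False

def npc_reply(npc_role: str, player_line: str, world: WorldState) -> str:
    # Condition the model on role and current world facts, then fold
    # whatever it said back into the world state.
    prompt = f"[{npc_role}] world={world.facts} player says: {player_line}"
    line = local_model(prompt)
    world.apply(line)
    return line

world = WorldState()
line = npc_reply("guard", "Can I leave the city?", world)
print(line)                            # NPC dialogue from the (stubbed) model
print(world.facts["east_gate_open"])   # story state updated by the dialogue
```

The hard part the comment gestures at is exactly this feedback loop: once dialogue can change world state, the "bugspace" becomes every inconsistent fact combination the model can talk the game into, which is why automated play testers would be needed.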

Can’t wait to see what Nintendo does, but that’s probably going to take a decade.



