Hacker News new | past | comments | ask | show | jobs | submit login

I'm constantly surprised at the attempts to downplay the technology. The basic tech is there; people are going to figure out control systems to constrain the outputs to given problems, and ways of doing LoRAs and fine-tuning to make models that sound the way you want. I've seen a lot of people ask ChatGPT questions, then make the judgment that it's too wooden or uncreative to write scripts or play characters or whatever use case they want to write off. That's just because ChatGPT is trained to be that way. I've seen enough of the community efforts with open-source models to know you'll be able to make systems that sound like and do whatever you want, in any style, with any cadence or tone. It's literally just a matter of making them a little faster and less resource-intensive, and experimenting a lot, and that is happening rapidly. And it's just going to take a few more breakthroughs with logic systems, or ways of extending memory, to really jump far ahead again.



> It's literally just a matter of making them a little faster and less resource intensive

More like a few orders of magnitude faster and less resource-intensive. The elephant in the room with this technology is that it's currently being offloaded to an expensive cloud instance where it can have an entire high-end GPU devoted to it. For it to be the kind of thing that games can adopt as a matter of course, without having to factor in a significant ongoing cost and a point of failure when the servers go down or the money to pay the bills runs out, it needs to be fast enough to run locally on the player's machine while that machine is also busy handling everything else in the game. Realistically you can afford to use maybe 5-10% of the GPU's time and memory, and you can't expect everyone to have an RTX 4090, so it needs to scale down gracefully.


I think hardware will adapt to serve the need. It might not look like dedicated AI accelerator cards the way 3D accelerator cards did in the '90s, but I think compute will be carved out just to run on-device LLMs. 7B-param models are already really fast on the CPU and memory of a pretty normal system; you could probably come up with a technique to do good character dialog with a 7B model.
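As a rough sanity check on "really fast on CPU" (all numbers here are illustrative assumptions, not benchmarks: 4-bit quantized weights, an arbitrary 10 GB/s of spare memory bandwidth):

```python
# Back-of-envelope sizing for running a 7B-parameter model on-device.
# Assumptions (illustrative only): 4-bit quantization, ~10 GB/s of
# effective memory bandwidth left over for the model while the game runs.

params = 7e9                  # 7B parameters
bytes_per_param = 0.5         # 4-bit quantized weights
weights_gb = params * bytes_per_param / 1e9
print(f"weights: ~{weights_gb:.1f} GB")      # ~3.5 GB resident

# Token generation is roughly memory-bandwidth bound: each generated
# token streams (almost) all of the weights through memory once.
bandwidth_gbps = 10           # assumed leftover bandwidth, GB/s
tokens_per_sec = bandwidth_gbps / weights_gb
print(f"~{tokens_per_sec:.0f} tokens/sec")
```

A few tokens per second is marginal for free-form chat but plenty for short NPC barks, which is part of why dialog looks like a plausible first use case.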


I write fiction from time to time for fun, and I've really enjoyed interacting with ChatGPT as a co-author, either having it write the next few events or playing one of the characters. It's absolutely capable of role-playing different characters (I made it into a sentient toaster once), and frankly is often better than most NPC writing currently in games.

There are a few great game writers out there (IMO Chris Avellone), but most of the time it's just filler. Fiction is arguably one of GPT's strong suits, especially if the game engine can actually turn accidental hallucinations into in-game realities (like, if an NPC mentions something not already in the game, generate a proxy and turn it into a quest?)

Really excited about the possibilities for emergent creativity here :)


> Like if a NPC mentions something not already in game, generate a proxy and turn it into a quest?

This does not sound trivial


Ha. Afaict, most AAA open-world games are built on pretty simple engines, and the great majority of quests fit in a relatively small range of formats: go to town x, find NPC y, go to landmark z, fight some monsters, get the macguffin, bring it back to y.

The need is then to have a database of everything (done!), build a generator for quest steps, and then place the quest requirements back in the game. This sort of thing has been possible since, oh, I dunno, 1997: Daggerfall.

The whole trick is keeping it from getting too same-y/boring. Current games handle that by leaning on human writing and character modeling for even the minor quests, though the quality standards on the side quests tend to be low, and they get boring anyway. (Which is why I've still never finished Witcher 3...) Adding AI writing into the mix for side quests is a no-brainer.
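The template pipeline described above can be sketched in a few lines; every name here is an invented placeholder for what a real game would pull from its world database:

```python
import random

# Toy version of the template-driven quest generator described above.
# TOWNS/NPCS/LANDMARKS/ITEMS stand in for the game's "database of everything".
TOWNS = ["Riverhold", "Ashvale"]
NPCS = ["Mira the Smith", "Old Tomas"]
LANDMARKS = ["the Sunken Crypt", "Wolf's Pass"]
ITEMS = ["a stolen ledger", "an heirloom ring"]

def generate_quest(rng: random.Random) -> list[str]:
    """Fill the classic fetch-quest skeleton with entries from the world DB."""
    giver = rng.choice(NPCS)          # same NPC gives and receives the quest
    return [
        f"Travel to {rng.choice(TOWNS)}",
        f"Speak with {giver}",
        f"Search {rng.choice(LANDMARKS)}",
        f"Recover {rng.choice(ITEMS)}",
        f"Return to {giver}",
    ]

quest = generate_quest(random.Random(0))
```

An LLM would slot in on top of this skeleton, writing the dialogue and flavor text for each step rather than inventing the structure itself.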


> The whole trick is keeping (side quest design and writing) from getting too same-y/boring

Do you believe an AI trained on human writing can accomplish this key task better than human writers, who themselves _at best_ have a pretty low success rate?


I think the big advantage an LLM has is that it can respond dynamically to whatever the player decides to do, rather than offering a few pre-determined choices and even then generally funnelling back towards one central path.


A one-pass attempt probably wouldn't be that impressive, but I wonder about having games run split-second "play tests" of spontaneously generated content, using some kind of adversarial network to estimate the player's reaction. Then questlines could be iteratively fine-tuned while in progress, more like what a human DM does.
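A toy sketch of that generate-score-refine loop, with both the content generator and the reaction-predicting critic replaced by hypothetical stubs (in practice they'd be an LLM and a learned critic network):

```python
# Generator proposes quest variants; a critic scores predicted player
# engagement; keep iterating until a variant clears a threshold.
def propose_variant(seed: int) -> dict:
    """Stand-in for an LLM proposing a quest variant."""
    return {"twist": seed % 4, "length": 3 + seed % 3}

def predicted_engagement(variant: dict, history: list[dict]) -> float:
    """Stand-in critic: reward novelty vs. recent quests, penalize odd lengths."""
    novelty = 1.0 if variant["twist"] not in {v["twist"] for v in history} else 0.2
    return novelty * (1.0 - abs(variant["length"] - 4) * 0.1)

def refine_questline(history: list[dict], threshold: float = 0.8,
                     budget: int = 16) -> dict:
    """Run fast 'play tests' until one variant scores above threshold."""
    best, best_score = None, -1.0
    for seed in range(budget):
        v = propose_variant(seed)
        s = predicted_engagement(v, history)
        if s > best_score:
            best, best_score = v, s
        if best_score >= threshold:
            break
    return best

chosen = refine_questline(history=[{"twist": 0, "length": 4}])
```

The `budget` cap is what keeps this "split-second": you take the best variant found within the compute you can spare, not the global optimum.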


It’s funny you mention Witcher 3 as an example of boring side quests, when it would be my example of one case where they are actually good; it’s the only open-world game I’ve finished since Morrowind.


They were fine for the first third to half of the game, but eventually the thin quest engine starts showing pretty badly...


Is it worth playing? I've picked it up several times and tried to play it for a couple hours, but the skills and spells systems seem so limited and console-like, kinda like how simple Skyrim is compared to Morrowind.

Does it get more involved and customizable later on?


It absolutely isn't! Yet :) Give it two years.


The AI "well" had been so thouroughly poisoned over the last 10 years, that the average person hears AI now and equates it to the usefulness of Siri or Alexa (i.e. functionally zero). It's going to take time for society to ingest the step function change that has occured in the last year. We can actually talk to our computers now, and obviously that's going to change everything. But it may be a more subtle shift, one where the new generation just takes that for granted from here on out and we move forward with a new normal, the same as the web was for us.


Nah, most people hadn’t thought much about AI as anything other than science fiction before ChatGPT.

Non-technical people are reading the “stochastic parrot” articles, then trying a few superficial chat conversations and drawing their conclusions from that.


What average person thinks of Siri and Alexa as AI? I’ve honestly never seen that. People started talking about AI when ChatGPT entered the game.


>What average person thinks of Siri and Alexa as AI?

They've both been marketed as such. Like it or not, these are the best that big tech could provide before the LLM era (I'd include Bixby in there as an also-ran). For most people outside the tech world, they would be the only interaction with anything resembling AI.


The only place I see AI downplayed is on HN. Everywhere else in my life the "Average person" gives one of two responses: (1) They're barely even aware of the recent AI hype, or (2) It's practically magic in how amazingly good it is.


>Hate is the consequence of fear. We fear something before we hate it; a child who fears noises becomes a man who hates noise. — Cyril Connolly


The problem is accessibility. We have everything else.

If LoRAs could be easily trained in Vulkan/DirectX, and if VRAM pools were big enough for the smart models (basically, if stuff just worked), you'd already see much more widespread local LLM use in apps.

The constraint technology is largely there: take a look at Outlines. So is the ability to efficiently "switch" personalities with LoRAX. So are long-context base models (Yi/InternLM/Mixtral). Even the super-fast Vulkan inference runtime is there (MLC-LLM). And yes, these are all early, finicky, unintegrated implementations, which is part of the problem.
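For a sense of how that constraint technology works under the hood (a toy sketch of the general logit-masking idea, not Outlines' actual API), the trick is to zero out every token that would take the output outside the allowed set before sampling:

```python
import math

# Minimal illustration of constrained ("guided") generation: mask the
# model's logits so only tokens in an allowed set can ever be picked.
# The "model" is a stub returning uniform logits over a toy vocabulary.
VOCAB = ["yes", "no", "maybe", "asdf"]
ALLOWED = {"yes", "no"}           # e.g. a schema says this field is boolean

def model_logits(prompt: str) -> list[float]:
    return [0.0] * len(VOCAB)     # stub: a real model returns real logits

def constrained_sample(prompt: str) -> str:
    logits = model_logits(prompt)
    masked = [
        l if tok in ALLOWED else -math.inf   # disallowed tokens get -inf
        for tok, l in zip(VOCAB, logits)
    ]
    # Greedy pick for determinism; softmax sampling works the same way.
    best = max(range(len(VOCAB)), key=lambda i: masked[i])
    return VOCAB[best]

answer = constrained_sample("Is the door locked?")
```

Real libraries generalize this from a fixed set to regexes and grammars by tracking which tokens can legally extend the output at each step.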


It's like this in every field. People get presented with the same information, and one group says

“ah, this is fundamentally flawed! I’m going to ignore it for that reason and keep the same opinion for the next 10 years!”

another group says “ah this is flawed! I’m going to help improve it incrementally!”


And another group, presented with the same info shouts from the rooftops "ah this is a change on par with the printing press!" and proceeds to push snake oil until the next trend arrives.

This idea is the same. It reeks of the same senselessness as is present in other discussions of "realism" in games, eg Star Citizen, where it sounds cool and futuristic at first, but doesn't actually do much for the core gameplay besides taking resources away from it. As an example we have Elite Dangerous, which committed suicide by listening to "realism" arguments and poured years of development into adding a bland first person shooter mode instead of improving the core gameplay of flying spaceships (where, in terms of flight model, they are still unmatched).

Ideas like being able to naturally talk to NPCs sound neat and revolutionary at first, but are just a gimmick, since outside of streamers fooling around for the sake of entertaining viewers, who actually cares about holding proper conversations with NPCs instead of just moving on to the next bit of actual gameplay sprinkled throughout the bland "AAA open world shooter with parkour mechanics" of the month?


> who actually cares about holding proper conversations with NPCs instead of just moving on to the next bit of actual gameplay

I think there's a lot of potential with LLMs that have two-way interaction with the game world. For example if it can generate `ATTACK <character-name>` API calls, the player can incite a feud and convince it to betray some other character, to the player's advantage. For immersive sims and RPGs, unscripted manipulation of NPCs could be a big part of the "actual gameplay".
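A minimal sketch of that two-way hook; the `ATTACK <character-name>` convention, the verb list, and the engine call are all hypothetical:

```python
import re

# Scan an NPC's generated reply for embedded tool-call style commands
# and dispatch them to the game engine, returning only the spoken text.
COMMAND_RE = re.compile(r"\b(ATTACK|FOLLOW|GIVE)\s+<([^>]+)>")

def dispatch(npc_reply: str, engine_log: list[str]) -> str:
    """Execute embedded commands; return the dialogue with commands stripped."""
    for verb, target in COMMAND_RE.findall(npc_reply):
        engine_log.append(f"{verb.lower()}:{target}")  # stand-in for an engine call
    return COMMAND_RE.sub("", npc_reply).strip()

log: list[str] = []
spoken = dispatch("You dare insult me? ATTACK <Captain Aldric>", log)
```

The interesting design question is the reverse direction: what game state gets serialized back into the NPC's prompt so its next reply stays consistent with what the engine actually did.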


Rewarding hyper-realistic manipulative behavior sounds like an exceptionally unethical way to apply LLMs... this isn't like shooting or driving in games where it has basically zero correspondence to how those things work irl. Same with current dialogue tree systems.


> Rewarding hyper-realistic manipulative behavior sounds like an exceptionally unethical way to apply LLMs

Social deduction games where you lie and manipulate real players are already fairly popular. I don't think manipulating an NPC guard will be worse.

> this isn't like shooting or driving in games where it has basically zero correspondence to how those things work irl

Gun or steering wheel + pedal controllers aren't that uncommon.


Games where you manipulate other humans are still done in an "arcade" style (e.g. Among Us); they don't tend to be focused on realistic manipulation. In an RPG, on the other hand, manipulating someone presumably means finding out what they're into through other means and then abusing that, which I think gets closer to crossing a line.

>Gun or steering wheel + pedal controllers aren't that uncommon

Neither gun shaped controllers nor typical wheel+pedal setups really mimic the real world equivalents sufficiently to transfer over. Else they would obviously be the much cheaper approach to learning/testing either of those skills than spending tons of real ammo or driving a real car.


> Games where you manipulate other humans are still done in an "arcade" style (eg Among Us). They don't tend to be focused on realistic manipulation

I don't think the scenario has to be any less fanciful with LLM NPCs than with social deduction or tabletop role-playing games. It's realistic in that you have to fool the (LLM/human playing the) character, but it's still within a fictional setting.

> Else they would obviously be the much cheaper approach to learning/testing either of those skills than spending tons of real ammo or driving a real car.

Training simulators are absolutely used. Less so for cars, because real cars are accessible enough that it's not worth sacrificing 100% realism, but definitely by the military.

Or, consider games like paintball/airsoft.


>Social deduction games where you lie and manipulate real players are already fairly popular. I don't think manipulating an NPC guard will be worse.

One word: poker



