> Every building is grand. Every character is loyal and helpful. Every character is competent, respected, and wise.
> If you ask very explicitly for flaws you can get some. It’s very hard to get a truly despicable character.
This reminds me of an interesting tidbit from recently canceled Dilbert author Scott Adams's racist rant of a podcast.
He said something along the lines of: it's absolutely irrelevant what kind of AI we can craft, because nobody will want it. The state and industry wouldn't allow most of what such an AI would say or do. Extrapolating from this: crafting and training the AI is the "easy part"; the even more complex task is to restrict the output in the end: make it dumb, simple, tailored and censored to the imagined end user, and profitable for the creator.
This is pretty profound, and it seems necessary only because humans can inherently be hateful, vile, and racist, but we can't come to terms with that. In crafting an AI that is less humanlike, we actually inject a lot of biases based on our own understanding of what an ideal human should be like, which for most humans is themselves.
I think we will see a ton of "model-forges" that will offer tons of standard models to use and then also offer bespoke model training for extra $$. This is all commodities.
First baby steps towards the house AI's of Neuromancer :)
>make it dumb, simple, tailored and censored to the imagined end user,
> and profitable for the creator.
Those two goals are mutually exclusive. The first is what you need to fly under the radar of salacious media and paranoid investors. The second involves delivering unique value to your customers, who already have their fill of dumb, simple, and censored tools and toys.
In my writing, I find world-building to be a very enjoyable activity and would not delegate it to a bot just because of that.
I would also like to say that employing ChatGPT or similar tools for this task would make all worlds feel the same, but the sad truth is that most amateur and hobbyist writers' world-building feels kind of the same anyway. :(
Yeah, that's not wrong. In theory GPT can (and probably already does) write these padded-out articles, listicles, and recipes for content farms, but it's probably indistinguishable from the hordes of low-cost paid-per-word content writers hired through Fiverr and the like.
I feel bad for being dismissive of their work, but... a lot of it is just low quality and predictable filler.
I mean, reading this article, it seems like the author still had to put in a lot of work to get something decent. Kinda makes GPT look like less of an all-powerful content generator: it still needs a human to use it correctly.
It would be interesting to see things like LLMs/voice models used in online games for NPCs. When interesting things are going on they have unique lines, but after the 100th time you've passed them and they ask you to come to cloud city you're ready to magic missile them in the face. You're not looking for anything inspiring, you're just trying to get away from the repetition.
I don't think we'll witness this in the foreseeable future.
The GPU is already maxed out in gaming, and the CPU isn't idle enough for these calculations.
We'd likely need consoles to pave the way with AI cores on their SOCs, then we might get new pcie AI cards and that's probably the first time the feature could be leveraged properly.
Doing everything in the cloud would be prohibitively expensive and likely too slow, as 15 seconds is just too long for a player to stand around waiting for an answer.
I agree, world building is enjoyable (and if it wasn't enjoyable it wouldn't be worth doing). I found doing it with this tool to also be enjoyable. Similarly I also really enjoy using Midjourney, even though I usually use it with no purpose in mind and no plan for where to use the results. They are both explorations, the AI pushes that exploration up just a bit to a more meta level, and can step in when I'm at a loss for what to do next.
I'm not a fan of using ChatGPT for everything, but one interesting use case here would be to use it as inspiration. Have it generate a bunch of stuff and pick and choose what you want.
Though the best way to do interesting worldbuilding IMO is look up a few cultures less represented in fiction (or at least parts of them that are less represented) and study their history to get ideas. I've been reading up on ancient Greece lately and getting a ton of ideas.
I played a bit with little stories based on video game lore (Morrowind).
I think I enjoyed the back and forth, the ability of me giving the rough sketches of the story and scene and then the bot filling in and adding some details. Then I could use those details to further the story.
It's still not there, but mostly because ChatGPT is too terse and biased towards everything being OK. Nonetheless, it is a lot of fun to have a story-telling copilot.
But when it gets on a roll producing normal/boring things, it will fill out the list with normal/boring things.
With these kinds of repetitive tasks, I find that the chatbot interface is not the best and that using the API is much better.
The chatbot approach constantly feeds the history of the "conversation" back to the LLM. However, if you ask for a fresh completion every time, with a number of few-shot examples of the desired output, you can get much more reliable results.
This applies to the data formatting examples as well. If enough examples are provided in the prompt, it tends to lock into the pattern very well.
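As a rough sketch with the pre-1.0 openai Python client (the model name and few-shot examples here are just illustrative):

  import openai

  # Each request sends only the fixed few-shot examples plus the new item;
  # no chat history accumulates between calls.
  FEW_SHOT = (
      "Input: blacksmith\n"
      'Output: {"name": "Marta Greel", "trait": "short-tempered"}\n\n'
      "Input: innkeeper\n"
      'Output: {"name": "Tobbin Farl", "trait": "gossipy"}\n\n'
      "Input: "
  )

  def generate(job):
      resp = openai.Completion.create(
          model="text-davinci-003",  # placeholder; any completions model works
          prompt=FEW_SHOT + job + "\nOutput:",
          max_tokens=100,
          temperature=0.9,
          stop=["\nInput:"],  # stop before the model invents its own next example
      )
      return resp["choices"][0]["text"].strip()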
I don't use the API programmatically, but I often move from ChatGPT to the OpenAI Playground when I'm not getting the results I want. It's just as convenient as chat, but it lets you play around with the Temperature and other factors that influence the output.
That’s a great point, and thanks for reminding me! I should use that more often, as the UI is pretty convenient for rapid iteration.
As for why I’m specifically using the API: I tend to eval(completion), sometimes on generated functions, but sometimes just to have a function like toNum() in scope to turn “1,356” into 1356!
Which could be another method for handling an errant {size: “3x5”} instead of {w:3,h:5}… the hallucinations become somewhat predictable and hence parsable in ways that reduce errors drastically!
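To illustrate the parsing side, here's a sketch of the same idea in Python without the eval (the field names are just made up):

  import json, re

  def to_num(s):
      # "1,356" -> 1356
      return int(re.sub(r"[^\d]", "", s)) if isinstance(s, str) else s

  def normalize(record):
      # Repair the predictable {"size": "3x5"} hallucination into {"w": 3, "h": 5}.
      if "size" in record:
          w, h = str(record.pop("size")).split("x")
          record["w"], record["h"] = to_num(w), to_num(h)
      return record

  completion = '{"name": "Mill", "size": "3x5", "price": "1,356"}'
  obj = normalize(json.loads(completion))
  obj["price"] = to_num(obj["price"])
  # obj is now {"name": "Mill", "w": 3, "h": 5, "price": 1356}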
I skimmed the article and really enjoyed it. I admire the author's effort and will come back to read it in full soon.
I have spent a lot less effort on a similar endeavor before deeming GPT inappropriate for this purpose, with takeaways similar to the ones noted in "What doesn't work".
I think the reason behind these shortcomings is the nature of GPT.
It's designed to predict the most likely words to come after the prompt, and as such it's very prone to falling into clichés, stereotypes, and tropes.
I think the next iteration of the algorithm is going to produce content that feels less generated, but it will still maintain a hack-writer feel.
I would wager the author could get better results with a system where the substance of the content is generated through another process, possibly something like Dwarf Fortress's world and lore generation, with a GPT layer that puts that content into more creative prose.
I have come to a similar conclusion. I still get better results generating D&D NPCs with my usual random charts than with GPT, although I have a suspicion that's because OpenAI has heavily clamped down on GPT responses so it will not give wild, inappropriate responses that would be bad PR.
For example, it generated exactly one NPC that could be considered rude, with a special note about how it's sorry about that, but that it's important some NPCs are rude for a more dynamic world (which I find amusing).
The business-to-user AI alignment issue is an interesting problem to watch unfold.
As you state, if you're creating a general help AI for things like searching, you don't want a user asking "How do I solve all my problems?" and the AI responding "Tie a rope around a tree, the other end around your neck, and jump". That's how you get the NYT headlines of 'AI kills users'.
But that particular safety approach fails to create interesting 'human' stories. Sometimes the bad guy comes in and performs a genocide, setting off the hero's story arc. Sometimes the good guy is a terrorist (Luke Skywalker: No, I'm a freedom fighter).
The problem I see here is that the power to create interesting and believable fiction is the same power to talk people into suicide. It's the same power to talk a mob into rioting. It's the same power that could drive a nation to war. As the power to run these models goes from something that requires a datacenter down to something that runs in the palm of your hand, it's something that society and the world will have to deal with. Any of you who run sites that allow posting won't like the idea of a million virtual Nazis that never sleep (or maybe you do like the idea?). Authoritarians in power may love the idea of countless voices spreading their message.
I think we’re focusing too hard on whether or not AI can produce controversial, in the sense of villainous, stories. I find ChatGPT also has issues producing stories from a minority standpoint. It cannot generate interesting stories about being gay, an immigrant, a refugee, incarcerated, poor, or disabled; the list goes on and on. It only repeats the most stereotypical portrayals of minorities, and I fear it perpetuates bias as a result. Even if I ask it to tell me about someone with, say, an invisible disability, even that is depicted in a way completely stereotyped and not at all reflective of actual experiences.
> Any of you who run sites that allow posting won't like the idea of a million virtual Nazis that never sleep (or maybe you do like the idea?)
If I ran a social site, I would totally love a virtual world filled with virtual nazis, in which to quietly confine the real ones, so that they can't pollute healthy environments with their nonsense.
Unfortunately, that's not how they work. Much like the ones in Germany in the '30s, they love to perform a lebensraum on your website to spread their toxicity rather than stick to their own.
"Here is a list of 10 things that have nothing in common:" ?
Ask GPT this and it comes up with a pretty good list (a pineapple, the color blue, the taste of coffee...). Ask yourself this same question and you can come up with a pretty good list... but it's also surprisingly hard.
There are ways we can increase our creativity and tangential thinking. Brainstorming lists, exploring opposites, free association, etc. A lot of these also work with GPT. Which is itself a strange thought... is GPT mimicking us or is there an attribute of cognition which we share?
Anyway, while it takes work and exploration to figure this out, I've yet to hit a wall, so I am pretty optimistic about GPT generally. But it does involve a mixture of prompt and algorithm and human intervention.
Also I should add: ask for what you want. You want weird characters? Say "give me weird characters." Be careful about your phrasing, you may be implicitly asking for hack results. Sometimes when you ask for something you'll get a too-literal response, like "more fraught with internal tension" and it will say "Anna is fraught with internal tension." But you can usually change the phrasing to fix that, like "show examples of how the character is fraught with internal tension."
Did you play around with the temperature and the Top P settings? Adjusting those can vary the output quite a bit. I also think "prompt engineering" is a real art and not just a fancy name. If you can land on the right set of keywords and instructions it seems like you can explore uncommon areas of the model. I haven't tried fine tuning the model yet but from looking at other projects people are working on that use fine tuning it seems like you can get the model to act in a certain way more consistently by doing so. It just seems like a pain to generate all of the example prompts and responses.
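For anyone who wants to try, the knobs are right there on the completions endpoint (pre-1.0 openai client; the model and values are just starting points):

  import openai

  prompt = "Describe an unusual shop in a fantasy city:"

  # Same prompt, three different sampling settings; raising temperature
  # (or trimming top_p) changes how far the model strays from the most
  # likely tokens.
  for temperature, top_p in [(0.3, 1.0), (0.9, 1.0), (1.1, 0.95)]:
      resp = openai.Completion.create(
          model="text-davinci-003",  # placeholder model
          prompt=prompt,
          max_tokens=80,
          temperature=temperature,
          top_p=top_p,
      )
      print(temperature, top_p, "->", resp["choices"][0]["text"].strip())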
The hard part of world building is not contradicting yourself too much. Right now, "remember all the little details and their implications" seems like it would be challenging for language models.
I've had success using a multistage inference approach. Establish a set of rules, then have the LLM ask itself whether its current suggested world building complies with all the rules. Then have it tweak its own story.
Finally, after all that, present the output to the user. It means inference takes 4x as long, because you are doing multiple rounds of inference before actually presenting output, but the final output you get is way more stable/consistent/accurate.
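Roughly, the loop looks like this (a simplified sketch; complete() stands in for whatever single-shot LLM call you're using):

  RULES = [
      "All magic comes from the sun.",
      "The city has no standing army.",
  ]

  def multistage(request, complete, max_rounds=3):
      rules = "\n".join(RULES)
      draft = complete(f"Worldbuilding request: {request}\nFollow these rules:\n{rules}")
      for _ in range(max_rounds):
          verdict = complete(
              f"Rules:\n{rules}\n\nText:\n{draft}\n\n"
              "Does the text violate any rule? Answer YES or NO, then explain."
          )
          if verdict.strip().upper().startswith("NO"):
              break  # compliant; stop early
          draft = complete(
              f"Rewrite the text so it follows every rule.\n"
              f"Problems found: {verdict}\n\nText:\n{draft}"
          )
      return draft  # only now presented to the user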
I hadn't thought about targeting a constructed user persona; that's a neat idea. But otherwise yes, it's a recursive model that runs inference a bunch of times to improve the quality of its output.
The first level of consistency is: consistent with the world around us. People have two legs and two arms. You eat off plates. GPT is good at this. But if you want to build Ant World it'll be hard.
The second level of consistency is: consistent with a stereotype. So "standard fantasy world" is going to be consistent with fantasy tropes. (Mostly, except with a strong tourist industry and the occasional café; the modern world leaks in.)
The third level of consistency is: consistent with what you explicitly state about the world. That's the city backstory. If you want magic to come from the sun and you can catch magic in little sun basins and then use that to power your dishwasher, then you can state that... but it may have a hard time remembering that, and a harder time extrapolating all the effects. But GPT won't entirely forget it, and may even come up with some interesting ideas.
The real challenge, though, is consistency with the author's as-yet-unfulfilled desires. That's where an interactive tool that allows co-creation with the AI might offer something, but it WILL be a struggle, because the author must develop, articulate, refine, and reform their ideas, and the AI may help or resist, but often it will simply make clear the gaps that exist between desire and vision.
a) you live in a world full of magic (the mysterious, unpredictable kind) or
b) you have a small child at home
But there is a challenge in building fiction: even things that can reasonably happen will often be rejected by the reader if they are strange and unexplained. The saying "truth is stranger than fiction" actually tells you more about fiction than about truth.
Perhaps the reason you're being snarky is because you didn't get your caffeine fix. So glad I never developed that dependency...
Here's a contradiction: we expect "free markets" to outperform nationalized or otherwise government-run services; now square that with East Palestine and the fact that, on average, we have more than one derailment a day.
Another contradiction: we exhort the value of "hard work" even as we build AI to do all the work for us, if the hardcore singularitarians have it their way.
People working with definitions of words that the experts in the field don't share isn't the sort of contradiction being discussed. Things like the location of towns, how physics works, the history of a place: these are things that are expected to stay consistent, and when that expectation is broken, it needs to be handled with care. History being uncertain is one thing, but the author directly stating that X happened before Y and later in the story stating that Y happened before X creates a problem. If the author is purposefully going for an unreliable narrator, or implying that history literally changed, then that can work (if the author has the skill to pull it off), but having it change because the author forgot to stay consistent is quite immersion-breaking and creates the issue of an unreliable author, which is much worse than an unreliable narrator. Can a language model keep straight things like "Kingdom M thinks X happened before Y" vs. "X happened before Y"? Even humans commonly make mistakes with things like this.
These aren't the sort of contradictions you care about in a literary or role-playing fantasy world. Players typically cut you a lot of slack when it comes to economics and ecology.
But they do care about what you've said. If you told your players that all magic comes from the sun, those creatures from the deepest caverns better not have any magic. You can make up excuses as you go along, but they will think you a worse GM for it.
If they draw an inference that was valid from what you'd said, ("Hagrid said every single wizard who had gone bad was in Slytherin. At the time he said it, everyone thought Sirius Black had gone very bad indeed. Ergo, Sirius Black must have been in Slytherin! See, they can be good!"), that's annoying in a book, but in a role playing game it's downright frustrating.
In that case, Brandon Sanderson's Cosmere should be required reading for worldbuilders. Rich, authentic, and consistent worlds, on top of being an utter joy to read.
I mean, George Lucas transformed his magic into counting magical mitochondria while people looked away, and that universe is still as successful as ever.
Popular fiction doesn't give a shit about contradicting itself.
The root of this thread is a comment about "not contradicting yourself too much" yet there are a couple of comments within the thread that seem to imply that because we can find some examples of contradiction in fiction then uncontrolled/unlimited contradiction should be fine. This "I can think of a counterexample - let's make it a rule" seems to appear more frequently now but maybe that's just me.
>in fiction then uncontrolled/unlimited contradiction should be fine
I'm not saying it's fine in general. I'm saying it doesn't matter for pop fiction
Popular fiction gets retconned all the time. I'm fundamentally disagreeing with "not contradicting yourself too much" mattering at all for fiction that's successful with the masses
If you ever feel tempted to think that quality writing matters for popular success, just remember that lewd Twilight fanfic has sold more than a hundred million copies
On the other hand, the thread is a comment about the challenge of not contradicting yourself too much, not a claim that doing so is going to substantially impact the work.
Pretty much every creative work has contradictions and discontinuities that people gloss over, even (or especially) stark ones in critical success. The people who really get riled up about it are a very particular kind of person and not one I even meet outside of internet forums.
The Star Wars mitochondria revelation was unpopular, but it didn't contradict anything. The original movies never addressed whether you could detect force users with a blood test.
Eh, I think it's not as black-and-white. The Elder Scrolls lore is inconsistent, but that is chalked up to unreliable narrators. Used in the right amount, I think slight contradiction in worldbuilding is good, and it also reduces the burden on the worldbuilder to keep every tiny detail factually consistent.
Shameless plug here for those (like the author) who are using GPT in your projects and finding the costs to be too high.
We're building an open source wrapper around ChatGPT that lets you use it programmatically as an API, from Python, or in a CLI. Best of all, it's completely free! So it is great for testing the waters on a hobby project.
p.s. Just to answer all the comments below: OpenAI currently does not provide an API to ChatGPT. Our goal is not to abuse ChatGPT, but to provide tools for hobbyists to leverage it (by creating a power shell around it, making workflows around it, even using templates...) until an official API is available. As soon as an official API is available, we will integrate it into our CLI. In any case, ChatGPT has hard query limits that would not allow you to use our project for any heavy lifting!
This is hooking into the ChatGPT UI API, which is something OpenAI does not like to see and could result in the account being banned. Of course, since it's using Playwright to simulate a browser, it might be hard to detect, but still, I'd be cautious.
I don't even get why people wouldn't use the normal GPT API (OpenAI completions) instead. Isn't it exactly the same, except that ChatGPT is primed to be more conversational (which in many cases, like the one in the OP, is just noise anyway)?
There are some differences. For example ChatGPT underwent an extra finetuning step using reinforcement learning and it supports 8k tokens instead of 4k.
$6 for about 100 pages of world-building honestly sounds not that bad. And that's before implementing any cost-saving measures.
Queries with lots of context (like in this example) can quickly rack up the costs, but I've never felt that they are prohibitive for personal projects. The problems only start when you want to publish something other people can use, and at that point abusing ChatGPT sounds like a bad idea too.
Very similar to Google's behavior with Google Scholar (and pretty much everything else). They built themselves up according to one set of rules, but then we can't do the same thing to them. Funny, that.
> they used common crawl which respects individual site policies such as robots.txt
Nice, so what? I seriously doubt that authors of the content crawled by Common Crawl agreed to terms saying their content would be used by OpenAI.
Moreover, CC seems to be opt-out according to their FAQ:
> You configure your robots.txt file which uses the Robots Exclusion Protocol to block the crawler. Our bot’s Exclusion User-Agent string is: CCBot. Add these lines to your robots.txt file and our crawler will stop crawling your website
Again, I doubt that many people are putting CC-specific rules into their robots.txt. Moreover, I'm not naive: in our reality, where "move fast and break things" is THE motto for any big corporation, I doubt any rules like that are respected at all.
I don't think that's true. For example, someone copying the web (but still respecting robots.txt) and claiming it's their own would be problematic surely?
This is only one small step away from blatantly copying.
In general, automated systems and humans are considered different by the law. For example: looking out the window and noticing that your neighbor is going to the shop is ok; building an automated system that tracks people everywhere: not ok.
I was playing around with content rewriting using the GPT models and ended up with this: https://persona.ink
Once you’ve got characters in your world, the GPT models are pretty good at taking any arbitrary statement and rewriting it as if that character had said it. You can add extra guidance too, like having that character summarize it.
I don’t think it’s quite “one-shot” - it’s often an iterative process of copying out the bits you like and meshing them with your own writing. But I’ve found it’s pretty good to get the creative juices flowing.
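Under the hood it doesn't need to be fancy; the core is a prompt template along these lines (a simplified sketch, not the exact prompt persona.ink uses):

  PERSONA_PROMPT = (
      "Character: {name}. {description}\n\n"
      "Rewrite the following statement exactly as {name} would say it, "
      "keeping the meaning but using their voice and mannerisms:\n\n"
      '"{statement}"'
  )

  def in_voice(complete, name, description, statement):
      # complete() is whatever GPT call you're using
      return complete(PERSONA_PROMPT.format(
          name=name, description=description, statement=statement))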
You see, this is why not to worldbuild. And I say that as a writer who's spent years doing just that.
What a spectacularly long and thorough list of items none of which want anything. The closest it got was some sort of evil wizard boss who wanted power. At several places in the description it filled in a bit more about how he wanted power, and control, and was seemingly a bad guy… and wanted power!
Why?
World building with GPT is clearly A Thing. Looks to me like it's nearly as good as real-human world building. Maybe better. It's quicker, more granular, and has infinite patience for every little item in a massive city with billions of fully realized inhabitants, if 'realized' means 'what do they have in their pockets, what adjectives fit them, etc'.
What do any of them want? How does their story play out?
It doesn't. You built a world instead of a story. Congrats. It's a classic trap to fall into.
I see this as a more filled-out map. Making a map is fun, but it's not a story, it's a setting. Making a setting can itself be fun and I genuinely enjoy making imaginary maps. I sketch out some lines, and then almost by accident there are bays and mountain ranges and outer reaches and so on. Soon it's populated by different species and has history... and the lines I started out with aren't the source of all of that, but they got the process going.
In a more story-oriented approach you find the story and the main characters, and then, if you are smart, you'll build the world around them. But secretly! If you make it obvious then it will feel manufactured, as if fate is guiding the protagonist rather than the protagonist's free will. Which is true: the author is fate and the protagonist has no free will.
Another option is to find the story amid the world. In an open world game you hope the player finds their own stories.
From a more technical point of view, I don't think you want the normal building/character generation to produce the main antagonists, the villain, or the hero. GPT can't hold itself back, and soon every other building will have some character like this. It's like if there's a big prison break from Arkham and the villains don't even work together (which would render many characters as one), but instead Gotham is filled with an apocalyptic danger on every street corner. If this tool were to have villain creation, it should probably be a top-level thing: you make N villains and place them; they don't emerge organically in the fabric of the city.
All that said, perhaps a quick fix would be to change the current prompt from:
  {
    name: "FirstName LastName",
    type: $building.jobTypes|first|repr, // Or $building.jobTypes|rest|repr
    description: "[a description of the person, their profession or role, their personality, their history]",
    arrives: "8am",
    leaves: "6pm",
  }
To something like:
  {
    name: "FirstName LastName",
    ...
    goal: "[a driving desire]",
    flaw: "[a character flaw or weakness]",
  }
This is another place where defining the schema as part of the city design could be helpful, as it might let you highlight what interests you about the people in your world.
But inside of that consistent world, you can now have many stories. Pick out one of those detailed NPCs and explore their life, and if you've done the world right, this can be an epic story.
I think in a practical application (i.e. a game), it might be feasible to "lazy load" the world. If I remember correctly, Watch Dogs 2 does something like this.
Need a new character? Just create it on the fly.
You'd have to make sure to create all the necessary background structures, such as neighborhoods, family ties, etc., as well. Or at least make sure you don't introduce (glaring) inconsistencies.
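A sketch of what the caching side might look like, with generate_character() standing in for whatever LLM or procedural call produces the NPC:

  import functools, hashlib

  @functools.lru_cache(maxsize=None)
  def npc(town, house_number):
      # Nothing exists until the player looks; once generated, the cache keeps
      # the world stable. A content-based seed makes regeneration deterministic
      # even if the cache is dropped between sessions.
      seed = int(hashlib.md5(f"{town}:{house_number}".encode()).hexdigest(), 16)
      return generate_character(seed)  # hypothetical generator call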
I got the feeling while reading the article that you could lean more heavily on procedural generation to reduce costs. Using GPT-3 for name generation is overkill. There are plenty of lists out there to use, and if you want 100% unique names you can create a probabilistic name generator that uses letter or phoneme frequencies to spit out convincing-sounding names. Generating the layout of buildings and their locations on a map or within a city is also a fairly well-solved procedural generation problem with lots of different methods. I'm even tempted to say that things like the professions and relationships between characters would be better off being procedural, letting GPT fill in the dramatic details or fantastical job titles and descriptions.
Basically, all the boring and well established stuff should be procedural and then use GPT to make it exciting by writing narratives and plots as well as adding the setting.
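A letter-frequency generator really is only a few lines; here's a minimal order-1 Markov sketch (the seed names are made up):

  import random
  from collections import defaultdict

  def build_chain(names):
      # Order-1 Markov chain over letters, with ^ and $ as start/end markers.
      chain = defaultdict(list)
      for name in names:
          name = "^" + name.lower() + "$"
          for a, b in zip(name, name[1:]):
              chain[a].append(b)
      return chain

  def make_name(chain, max_len=10):
      out, ch = [], random.choice(chain["^"])
      while ch != "$" and len(out) < max_len:
          out.append(ch)
          ch = random.choice(chain[ch])
      return "".join(out).capitalize()

  chain = build_chain(["aldric", "brenna", "caspian", "doria", "elowen"])
  print(make_name(chain))  # e.g. "Bria" or "Eldowen"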
I have been thinking about how I would go about creating a game world that feels populated and scales. My idea is very similar to this: have a world simulated mostly with statistics, i.e. such-and-such town has a population of such-and-such, and only generate the characters and environments the player character directly interacts with.
That's how Dwarf Fortress works. It has a system of "sites", which are locations with population statistics broken down by race and profession; these interact with each other via wars, trading caravans, and the like. Separately, there is a smaller but still large population of historical figures whose lives are described in much more detail, and who are picked to do things like lead armies and caravans, or who compete in races/competitions, etc. Finally, the people you actually interact with have their skills/personality inferred from their history (they were once a tavern keeper? They must have good consoler skill, etc. They probably aren't an introvert, either).
I would love to see this potential being used in games to procedurally generate unique experiences for every player. I really can't wait for a proper AAA implementation.
But why would you want that, though? If you make everything procedural and unique, then players have absolutely no shared experiences to bond over. Think of a game like Elden Ring, and imagine how diminished it would be if you knew that nothing you encountered was ever encountered by anyone else, and would never be encountered by anyone else. Instead of a shared cultural experience, now you have an isolated one. That would be a strictly worse game.
There are games with procedural events and worlds: Dwarf Fortress, RimWorld, NetHack. The players bond by sharing their personal stories in a familiar setting. There are common elements, like fundamental game mechanics and scripted events, so it's not isolated. The communities of those games look engaged, and I wouldn't call them strictly worse.
Not everything. Imagine a character that has a certain agenda/motivations/personality, but is able to 'naturally' react to the player's actions in a non-scripted way. NPCs in future AAA games are going to be insane.
I'd like to wait for companies to strike a balance between isolation and culture. They have so many tools at their disposal: the setting, the mechanics, the environment, the story, the quests. I would wait for a game where I still feel like I'm discovering things even after multiple playthroughs. Maybe sometimes I prefer having a unique experience that I know nobody else will have for that specific game, and everyone else gets their own too. We could still share our unique experiences. It's not really that bad. In fact, it's fun and beautiful.
Just pregenerate a large corpus of characters that is shared with everyone. Especially if you want human intervention in the creation (which is the right choice for now). Then you have potentially hundreds of characters. And you can invest in things like secrets... who wants to add lots of secrets to their NPCs now, knowing that maybe no one would ever discover them? But if they are cheap, then sure. Now any player has a real opportunity to discover something truly new in the shared world.
Yeah honestly this is what I'm really looking forward to. Daggerfall/Skyrim with auto-generated content that is in some way related to individual activities or personality traits of each NPC.
I wonder how many engineering hours will be spent on tuning the GTA style strip club NPCs to be the right level of interactive while staying within the ESRB rating.
For something like that, you're not there to talk to the character. Easier to pick a few lines of basic dialog ("nice to see you, daddy" or "looking for some fun") and then let the, uh, visuals do the talking.
But talking to the mayor or the king or something is a different story. Imagine if, playing Dragon Age or whatever, you get to the king and ask rude questions and he politely tells you to piss off. Or goes on random tangents about how lazy some of his courtiers are, etc.
Am I the only one who finds these kinds of 'discoveries' about ChatGPT incredibly boring? Think what you could have made yourself in the time it took to create good 'prompts'. There's no skill involved here in my opinion.
Imagine when ChatGPT lets us cross-reference all our chats so far. I have hundreds in my history and it would be amazing for it to recall them all when I am querying it accordingly.
How are you all exporting your chats in the meantime?
Awesome. It'd be pretty cool to generate maps in a certain style from descriptions of areas/towns/etc. like this. A big problem would be the lack of consistency in the internal model though, which is a really persistent problem with ChatGPT and others like it.