I would say writing and game design are the parts of a game where humans enjoy recognizing human artistry, and automating either degrades the art form.
i.e., these are the _fun_ parts, not the boring glue code. these are the parts that humans are, and should be, reluctant to give up.
so, to that end, what is your motivation for automating these systems?
You are exactly right, I don't want to create just an LLM experience. I want to create a system where the artist/game-developer creates a world and the main threads of a story (think of it like a prophecy) that constrain the player to a world/setting and a set of abilities/actions, but other than that the player should be able to say and do whatever they want.
> If you keep vibe-adding features, and somehow keep getting customers to pay for this thing, what happens once the codebase becomes so complex that an LLM cannot fit it inside its “brain”?
you realize that a codebase at that point is well, well beyond what a human can "fit" in their brain too? you start making shorthands and assumptions about your systems once they get too large.
This is true, but it ignores the fact that claude constantly pushes the code toward more complexity.
Any given problem has a spectrum of solutions, ranging from simple and straightforward, to the most cursed rube goldberg machine you've ever seen. Claude biases toward the latter.
When working on larger codebases, especially poorly factored ones (like the ones claude tends to build unsupervised), its default mode of operation is to build a cursed rube goldberg machine. It doesn't take long before it starts visibly floundering when you ask it to make changes to the software.
Complexity management is something human software engineers do constantly. Pushing back against complexity and technical debt is the primary concern for a developer working on a brownfield project. Everything you do has to take this into account.
One of the main weaknesses with current AI is that they don't know how to modularize unless you explicitly say so in the prompt. Or they will modularize but "forget" they already included a feature in file B, so they redundantly retype it in file A, causing features to break further down the line.
Modularizing code is important, and a lot of devs learn this the hard way. I had 2k-line files at the beginning of my career (this was before AI), and I now usually keep files between 100 and 500 lines (though not just because of AI).
While I rarely use AI on my code, if I want to paste my program into a local LLM that only has an 8–32k-token context window (depending on the model), I need to keep each file small to leave room for my prompt and everything else.
Even as a human, it's much easier to edit code when it's modular. I used to like having everything in one file, but not anymore: with a modular codebase you can import a function into 2 different files, so changing it in one place changes it everywhere.
TLDR: Modularizing your code makes it easier for both you (as a human) and an AI assistant to review your codebase, and reduces the risk of redundant development, which AI frequently does unknowingly.
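The "change it in one place" point can be sketched in a few lines of Python (the module and function names here are made up for illustration; in a real project the callers would live in separate files and `import` the shared function):

```python
# pricing.py (hypothetical shared module): the single source of truth.
def apply_discount(price: float, pct: float) -> float:
    """Return price after a percentage discount, rounded to cents."""
    return round(price * (1 - pct / 100), 2)

# checkout.py and invoices.py would both do
#   from pricing import apply_discount
# so a fix to the rounding logic lands in both features at once,
# instead of the AI (or you) retyping the function in each file.
def checkout_total(prices):
    return sum(apply_discount(p, 10) for p in prices)

def invoice_line(price):
    return f"discounted: {apply_discount(price, 10)}"

print(checkout_total([100.0, 50.0]))  # 135.0
print(invoice_line(100.0))            # discounted: 90.0
```

The redundant-copy failure mode is exactly the opposite: two slightly diverged copies of `apply_discount`, where fixing a bug in one silently leaves the other broken.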
I think definitionally "vibe coding" means you feel out of control, in fact I would say Karpathy is deliberately trying to bring these feelings out in people.
If you are using an AI assistant with your feet on the ground, like a coding buddy that you pair with, you're not "vibe coding".