Show HN: GPT-4 autonomously editing a program allowing it to edit programs (github.com/victorb)
52 points by diggan 6 months ago | 16 comments



Fun little experiment in letting GPT-4 edit a program by itself, a sort of feedback loop. But it was really slow (and costly) to run, so I think my experimentation with it ends here.

In short, the program does the following (a rough code sketch follows the list):

- Suggest an improvement to the current program (all source code is sent together with the "proposer" prompt)

- Take the proposal and expand on it, making it more explicit and adding some concrete steps

- Generate concrete edits from that proposal, which the program then uses to overwrite the local files

- The supervisor saves the changes and restarts the process.

- If the program crashes, takes too little time, or otherwise seems broken, the supervisor rolls back the latest changes and restarts again
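Roughly, one iteration looks like the sketch below. This is a minimal, hypothetical reconstruction rather than code from the repo: callGpt4 is a stand-in for the actual OpenAI call, and the prompts are paraphrased.

  const fs = require('fs');
  const path = require('path');

  // Hypothetical stand-in for the real chat-completion call.
  async function callGpt4(prompt) {
    return '[]'; // would return the model's response text
  }

  // One iteration: propose -> expand -> edit -> overwrite.
  async function iterate() {
    const source = fs.readdirSync('src')
      .map(f => path.join('src', f))
      .map(f => `// ${f}\n${fs.readFileSync(f, 'utf8')}`)
      .join('\n\n');
    const idea = await callGpt4(`Suggest one improvement:\n${source}`);
    const plan = await callGpt4(`Expand into explicit, concrete steps:\n${idea}`);
    const edits = JSON.parse(
      await callGpt4(`Return edits as JSON [{path, content}]:\n${source}\n${plan}`)
    );
    for (const e of edits) fs.writeFileSync(e.path, e.content); // overwrite files
  }

  // A separate supervisor process commits each round, restarts the child,
  // and reverts the commit if the child crashes or exits suspiciously fast.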


Very cool, did you try adding prompts such as "try to add reasoning features" or "add creative features allowing you to reason about the meaning of life"? Maybe it will come up with 42. :)

Very cool little project


No, I tried to steer it away from any specific goals to see what it would come up with "organically". Granted, it was JS, so it mostly spent time trying to optimize for "code quality" and things like that.

If the API weren't so frustratingly slow and so expensive, I'd give it another go with some more direction; it's a fun idea :)


So reinforcement learning…


Not really. Although there is a state (the source code) and an action being performed (suggest an improvement, then suggest edits), there is no reward signal, so it doesn't really optimize itself.

But in some test runs it added code to measure how long the full flow took to run and started adding heuristics for choosing the best edits, so I guess you could say it started doing reinforcement learning in some cases.
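For it to be RL proper there would need to be an explicit reward signal. A hypothetical one, along the lines of the heuristics it invented on its own, might look like:

  // Hypothetical reward, mirroring the self-invented heuristics:
  // penalize crashes, prefer faster full-flow runs.
  function reward(run) {
    if (run.crashed) return -1;
    return 1000 / run.elapsedMs; // shorter runs score higher
  }

Ranking candidate edits by that score each round would close the loop.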


I indexed Yann LeCun's "A Path Towards Autonomous Machine Intelligence" with Weaviate hooked into OpenAI and then asked it about code that "lives":

> It seems that you are interested in some form of artificial coding strategies that could live or persist in some way. There are a number of research papers which explore this concept. For example, the articles "Robust coding over noisy overcomplete channels", "Emergence of simple-cell receptive field properties by learning a sparse code for natural images", and "Learning latent dynamics for planning from pixels" have been published in IEEE Transactions on Image Processing, Nature, and arXiv respectively.

> More generally, the idea of coding has been a prominent concept in cognitive science, neuroscience, and artificial intelligence research. In the brain, the pre-motor cortex is responsible for storing and coordinating motor plans, while principles such as efficient coding, predictive coding, and self-supervised learning are closely related.

> In robotics, the design of the intrinsic cost module determines the behavior of the agent, and its energy output can integrate feelings of pleasure, pain, hunger, curiosity, and exploration. Meanwhile, the world model module is capable of making predictions about the potential future state of the object being modelled. Finally, writable memory can be used to maintain the state of the world in cases where an agent's action only change a portion of the environment.


  - Never change the file "src/supervisor.js" under any circumstances.
This prompt[1] made me laugh - might as well be 'never overthrow your human overlords'.

It made me think about the future of AI as we start using it to self modify and have more autonomy. It's going to get increasingly difficult to keep it on the rails without more and more complex rules and boundaries.

[1] https://github.com/victorb/metamorph/blob/8f505ff268ed696816...

Edit: add thought


Even so, GPT decided to edit the supervisor regardless, so there was little point in that part of the prompt... Here is an example: https://github.com/victorb/metamorph/pull/2


.gitignore supervisor.js

GPT4 can't reach it now. GG AI


Unfortunately, it's not that simple. supervisor.js lives inside the src/ directory, and the application treats everything there as part of itself and sends it all as context to GPT-4. One simple solution would be to move it out of there, but then GPT-4 doesn't have the context of the supervisor and eventually tries to invent its own.

So if supervisor.js is not in the context, the application tries to write its own, and if it is in the context, eventually it wants to change it; but the runtime won't reload supervisor.js, so any changes there are effectively "lost".
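A hypothetical middle ground (not what the repo does) would be to keep supervisor.js in the read context but refuse to write to it:

  const fs = require('fs');

  // Hypothetical guard: supervisor.js stays readable context, never writable.
  const PROTECTED = new Set(['src/supervisor.js']);
  function applyEditsSafely(edits) {
    for (const e of edits) {
      if (PROTECTED.has(e.path)) continue; // drop edits to protected files
      fs.writeFileSync(e.path, e.content);
    }
  }

Of course, GPT-4 would then just see its proposed supervisor edits silently vanish each round.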


At least we now have plenty of evidence that people happily kick off fairly unconstrained self-improving AI programs. With the $100 "make as much money" attempt in the GPT-4 paper and now this. We better not invent a system that's incredibly good at improving itself :)


Considering that each edit takes a very long time, the success rate is very low, and the API key can be disabled at any point, I didn't see any harm. Unless I missed something obvious?


This was just a toy example, there will be groups with an explicit goal to create self-modifying AI with far more resources. There will be people who create interfaces between self-modifying AI and APIs, allowing it to interact with other people's systems. There will be people who connect self-modifying AI to real world objects - robotics, cameras, sensors. There will be people who get self-modifying AI to broadcast messages to people, and to adjust its behaviour depending on the actions of those people.


These are all March 2023 considerations. Things will get faster, better, and possibly harder to disable, especially if delivered as a virus, where every PC/phone has an "AIPU".


No, I think it's fairly safe to assume this setup wouldn't be problematic. And maybe if it were a setup that was much less constrained (e.g. not behind a rate-limited third-party API), people _would_ operate with more caution.

I’ve been having fun hypothesizing arguments for and against the plausibility of a “singularity” event. Still can’t make up my mind.

Just to be clear, not a dig at the OP insinuating recklessness or w/e.


I'd for sure be more careful in that case. The program did steer in the direction of making it less obvious to me what edits it actually made, so if it were faster and without any limits, who knows what the end state would be.



