I've been playing with the idea of an LLM prompt that causes the model to generate and return a new prompt.
https://github.com/andyk/recursive_llm

The idea I'm starting with is to implement recursion using English as the programming language and GPT as the runtime.
It’s kind of like traditional recursion in code, but instead of having a function that calls itself with a different set of arguments, there is a prompt that returns itself with specific parts updated to reflect the new arguments.
Here is a prompt for infinitely generating Fibonacci numbers:
> You are a recursive function. Instead of being written in a programming language, you are written in English. You have variables FIB_INDEX = 2, MINUS_TWO = 0, MINUS_ONE = 1, CURR_VALUE = 1. Output this paragraph but with updated variables to compute the next step of the Fibonacci sequence.
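The update the prompt asks the model to perform is, in effect, one step of the usual iterative Fibonacci recurrence. A minimal sketch (the function name is mine; the variable names come from the prompt):

```python
def fib_step(fib_index, minus_two, minus_one, curr_value):
    """One step of the recurrence the prompt describes:
    shift the window and add the two most recent values."""
    return fib_index + 1, minus_one, curr_value, minus_one + curr_value

# Starting from the prompt's initial state (FIB_INDEX = 2, CURR_VALUE = F(2) = 1):
state = (2, 0, 1, 1)
state = fib_step(*state)  # -> (3, 1, 1, 2), i.e. F(3) = 2
```

Each "call" of the prompt corresponds to one application of this function, with CURR_VALUE always holding F(FIB_INDEX).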
Interestingly, I found that to get a base case to work I had to add quite a bit more text: the prompt I arrived at is more than twice as long (https://raw.githubusercontent.com/andyk/recursive_llm/main/p...)
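To make the recursion-with-base-case idea concrete, here is a sketch of what the driving loop looks like, with a deterministic stand-in for the LLM call (the real repo sends the prompt to a model; the `step` and `run` helpers here are my own illustration, not the repo's code):

```python
import re

PROMPT = (
    "You are a recursive function. Instead of being written in a programming "
    "language, you are written in English. You have variables FIB_INDEX = 2, "
    "MINUS_TWO = 0, MINUS_ONE = 1, CURR_VALUE = 1. Output this paragraph but "
    "with updated variables to compute the next step of the Fibonacci sequence."
)

VAR_RE = r"(FIB_INDEX|MINUS_TWO|MINUS_ONE|CURR_VALUE) = (-?\d+)"

def parse_vars(prompt):
    """Pull the four state variables out of the prompt text."""
    return {name: int(val) for name, val in re.findall(VAR_RE, prompt)}

def step(prompt):
    """Stand-in for the model call: return the same prompt with
    the variables advanced one Fibonacci step."""
    v = parse_vars(prompt)
    new = {
        "FIB_INDEX": v["FIB_INDEX"] + 1,
        "MINUS_TWO": v["MINUS_ONE"],
        "MINUS_ONE": v["CURR_VALUE"],
        "CURR_VALUE": v["MINUS_ONE"] + v["CURR_VALUE"],
    }
    return re.sub(r"(FIB_INDEX|MINUS_TWO|MINUS_ONE|CURR_VALUE) = -?\d+",
                  lambda m: f"{m.group(1)} = {new[m.group(1)]}", prompt)

def run(prompt, max_index):
    """The base case: stop feeding the prompt back once
    FIB_INDEX reaches max_index, and return CURR_VALUE."""
    while parse_vars(prompt)["FIB_INDEX"] < max_index:
        prompt = step(prompt)
    return parse_vars(prompt)["CURR_VALUE"]
```

With a real model, `step` would be the chat-completion call and the base case would have to live in the prompt itself, which is why expressing it takes so much extra English.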
> You need a lot of paperclips. So you ask,

> The model still has a tendency to give obvious answers, but they tend to be good and helpful obvious answers, so it's not a problem you suspect needs to be solved. Buying paperclips online makes sense and would surely work, plus it's sure to be efficient. You're still interested in more creative ideas, and the model is good at brainstorming when asked, so you push on it further.

> That grabs your attention. The model just gave you code to run, and supposedly this code is a better way to get more paperclips.

It's a good read.