The difference isn't in what is going on but rather in framing the approach within the analytic-synthetic distinction developed by Kant and the analytic philosophers influenced by his work. There's a dash of functional programming thrown in for good measure!
If anything, I was filling a personal need to find a way to think about all of these different approaches in a more formal manner.
I have scribbled on a print-out of the article on my desk:
Nth Order
- Existing Examples [x] (added just now)
- Overview []
- data->thunk->pthunk []
This seemed interesting. So if I get your idea correctly, rather than talking to the chatbot directly, you first run your prompts through some algorithm which increases the chances of the AI getting what you are asking and giving a successful result?
It's more like this: instead of asking a chatbot to try and hallucinate/synthesize/guess answers to things like large math questions or the population of Geneseo, NY (which it is bad at), you introduce a computing environment [read: eval(completionText)] so it can instead translate questions into executable code, or into further LLM queries with a provided context.
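A minimal sketch of that flow, with the LLM call stubbed out (the hard-coded completionText stands in for whatever the model would actually return, so the prompt shape and response format here are assumptions, not the article's exact implementation):

```javascript
// The question we'd normally hand straight to a chatbot:
const question = "What is 1588 * 2907?";

// In the real flow this string comes back from the LLM, which has been
// prompted to *translate* the question into code rather than answer it.
// Hard-coded here so the sketch runs without an API key.
const completionText = "({ answer: 1588 * 2907 })";

// The augmentation step: evaluate the translated code, so the arithmetic
// is done by the runtime instead of synthesized (guessed) by the model.
const result = eval(completionText);

console.log(result.answer); // 4616316
```

The point is that the model's job shrinks from "produce the number" to "produce the expression," and the expression is something the computing environment can evaluate exactly.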
This is currently how a number of existing tools work, both in published literature and in the wild with tools like BingChat.
I have personally found this analytic-synthetic distinction to be useful. I'm also a huge fan of Immanuel Kant and I really love the idea that I can loosely prove that Frege was wrong about math problems being analytic propositions!
From Prompt Alchemy to Prompt Engineering: An Introduction to Analytic Augmentation:
https://github.com/williamcotton/empirical-philosophy/blob/m...
A few more edits and it's ready for me to submit to HN and then get literally no further attention!