
I've been developing a methodology around prompt engineering that I have found very useful:

From Prompt Alchemy to Prompt Engineering: An Introduction to Analytic Augmentation:

https://github.com/williamcotton/empirical-philosophy/blob/m...

A few more edits and it's ready for me to submit to HN and then get literally no further attention!




How is this different from e.g. a python agent like https://github.com/hwchase17/langchain-hub/blob/master/promp... ?


Using the terminology that I'm working with, this is an example of a second-order analytic augmentation!

Here's another approach of second-order analytic augmentation, PAL: https://reasonwithpal.com

Third-order, Toolformer: https://arxiv.org/abs/2302.04761

Third-order, Bing: https://www.williamcotton.com/articles/bing-third-order

Third-order, LangChain Agents: https://langchain.readthedocs.io/en/latest/modules/agents/ge...

The difference isn't in what is going on but rather in framing the approach within the analytic-synthetic distinction developed by Kant and the analytic philosophers who were influenced by his work. There's a dash of functional programming thrown in for good measure!

If anything, it filled a personal need to find a way to think about all of these different approaches in a more formal manner.

I have scribbled on a print-out of the article on my desk:

  Nth Order

  - Existing Examples [x] (added just now)
  - Overview []
  - data->thunk->pthunk []


This seemed interesting. So if I understand your idea correctly, rather than talking to the chatbot directly, you first run your prompts through some algorithm that increases the chances of the AI understanding what you are asking and giving a successful result?


It's more that, instead of asking a chatbot to hallucinate/synthesize/guess answers to things like large math questions or the population of Geneseo, NY (which it is bad at), you introduce a computing environment [read: eval(completionText)] so it can instead translate questions into executable code, or into further LLM queries with a provided context.
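
Here's a rough sketch of what I mean, in TypeScript (not the actual code from the article; `complete` is a hypothetical wrapper around whatever completion API you're using):

  // Second-order: ask the model for code instead of an answer, then eval it.
  type Complete = (prompt: string) => Promise<string>;

  async function analyticAugmentation(
    complete: Complete,
    question: string
  ): Promise<unknown> {
    const prompt = [
      "Translate the question into a single JavaScript expression that",
      "computes the answer. Respond with only the expression.",
      "",
      `Question: ${question}`,
    ].join("\n");

    const completionText = await complete(prompt);

    // Evaluate the model's code rather than trusting a guessed answer.
    // (A real system would sandbox this, of course.)
    return eval(completionText);
  }

So "What is 1234 * 5678?" ideally comes back as something like "1234 * 5678", and eval gives you 7006652 deterministically instead of a hallucinated number.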

This is currently how a number of existing tools work, both in published literature and in the wild with tools like BingChat.
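
For the third-order cases (Toolformer, Bing, the LangChain agents linked above), the generated code can itself make further LLM queries against a provided context. Again a hedged sketch along the same lines, reusing the Complete type from above; `query` is a name I'm making up here, not anything from those projects:

  // Third-order: the evaluated code gets a helper for follow-up LLM queries
  // that are grounded in a provided context.
  async function thirdOrderAugmentation(
    complete: Complete,
    question: string,
    context: string
  ): Promise<unknown> {
    const prompt = [
      "Translate the question into the body of an async JavaScript function.",
      "You may call `await query(subQuestion)` for follow-up questions about",
      "the provided context. Respond with only the function body.",
      "",
      `Question: ${question}`,
    ].join("\n");

    const completionText = await complete(prompt);

    const query = (subQuestion: string) =>
      complete(`Context:\n${context}\n\nQuestion: ${subQuestion}\nAnswer:`);

    // Wrap the generated body in an async function that can call `query`.
    const run = new Function(
      "query",
      `return (async () => { ${completionText} })();`
    ) as (q: typeof query) => Promise<unknown>;

    return run(query);
  }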

I have personally found this analytic-synthetic distinction to be useful. I'm also a huge fan of Immanuel Kant and I really love the idea that I can loosely prove that Frege was wrong about math problems being analytic propositions!



