I really want to understand that second part! I think that's the part I haven't been able to get my head around yet.



Here's the key idea.

You give DSPy (1) your free-form code with declarative calls to LMs, (2) a few inputs [labels optional], and (3) some validation metric [e.g., sanity checks].
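
To make that concrete, the three ingredients can be as small as the following sketch. (Simplified, and using string-form signatures; the class, example, and metric here are just illustrative, not lifted from the docs.)

  import dspy

  # (1) Free-form code with a declarative LM call.
  class SimpleQA(dspy.Module):
      def __init__(self):
          super().__init__()
          # "question -> answer" is the declarative signature of this call.
          self.generate_answer = dspy.ChainOfThought("question -> answer")

      def forward(self, question):
          return self.generate_answer(question=question)

  # (2) A few inputs (labels optional; included here so the metric has something to check).
  trainset = [
      dspy.Example(question="Who wrote Hamlet?",
                   answer="William Shakespeare").with_inputs("question"),
  ]

  # (3) Some validation metric, e.g. a sanity check on the prediction.
  def validate_answer(example, pred, trace=None):
      return example.answer.lower() in pred.answer.lower()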

It simulates your code on the inputs. When there's an LM call, it makes one or more simple zero-shot calls that respect your declarative signature. Think of this as a more general form of "function calling", if you will. It's just trying out things to see what passes your validation logic, but it's a highly constrained search process.

The constraints enforced by the signature (per LM call) and by the validation metric allow the compiler [with some metaprogramming tricks] to gather "good" and "bad" examples of execution for every step in which your code calls an LM, even if you have no labels for those intermediate steps, because you're just exploring different pipelines. (Who has time to label each step?)
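
Conceptually, the bootstrapping is nothing fancier than this toy loop (not the real compiler code; `program` here is assumed to return its prediction plus a trace of every LM call it made):

  # A toy illustration of the bootstrapping idea, not DSPy internals.
  def bootstrap_demos(program, trainset, metric):
      demos = []  # "good" per-step examples; in reality these are grouped per LM call site
      for example in trainset:
          pred, trace = program(example)   # run your code with simple zero-shot calls
          if metric(example, pred):        # did the whole run pass your validation logic?
              demos.extend(trace)          # keep each step's inputs/outputs as demonstrations
          # runs that fail the metric are simply discarded (for now)
      return demos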

For now, we throw away the bad examples. The good examples become potential demonstrations. The compiler can then run an optimization process to find the best combination of these automatically bootstrapped demonstrations to include in the prompts: maybe the best on average, maybe (in principle) the predicted best for a specific input. There's no magic here; it's just optimizing your metric.
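
In the library itself, that whole loop plus the demonstration search sits behind a single compile call, roughly like this (continuing the sketch above; exact optimizer names and arguments may drift as we iterate):

  from dspy.teleprompt import BootstrapFewShot

  # Assumes an LM has been configured, e.g. via dspy.settings.configure(lm=...).
  teleprompter = BootstrapFewShot(metric=validate_answer, max_bootstrapped_demos=4)
  compiled_qa = teleprompter.compile(SimpleQA(), trainset=trainset)

  # Same program, but each LM call now carries bootstrapped demonstrations
  # that were selected because they help maximize your metric.
  compiled_qa(question="Who wrote Hamlet?")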

The same bootstrapping logic lends itself (with more internal metaprogramming tricks, which you don't need to worry about) to finetuning models for your LM calls, instead of prompting.

In practice, this works really well because even tiny LMs can do powerful things when they see a few well-selected examples.


Hi Omar - thanks for engaging here. I have a similar question to simonw's: it _feels_ like there is something useful here, but I haven't managed to grok it yet, even after sitting through the tutorial notebooks.

Specifically, to your description above, I'd love to see concrete retrieval examples where you need more-complex pipelines. Zero-shot QA (1-step), few-shot QA (2-step), and retrieval + few-shot QA (3-step) all make sense, but when the README starts talking about demonstrations, I can't really follow when that's actually needed. It also starts feeling too magical when you introduce "smaller LMs", since I don't know what those are.
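
For concreteness, the kind of more-complex pipeline I have in mind is the multi-hop retrieval one, which (reconstructing it from memory, so the names may be off) looks roughly like:

  import dspy

  # My rough reconstruction of the multi-hop example; the exact names may be off.
  class MultiHopQA(dspy.Module):
      def __init__(self, num_hops=2):
          super().__init__()
          self.generate_query = dspy.ChainOfThought("context, question -> search_query")
          self.retrieve = dspy.Retrieve(k=3)
          self.generate_answer = dspy.ChainOfThought("context, question -> answer")
          self.num_hops = num_hops

      def forward(self, question):
          context = []
          for _ in range(self.num_hops):
              query = self.generate_query(context=context, question=question).search_query
              context += self.retrieve(query).passages
          return self.generate_answer(context=context, question=question)

If the bootstrapped demonstrations mostly matter for the intermediate steps in something like that (the search_query hops, which nobody has labels for), an end-to-end walkthrough of exactly that case would help a lot.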


I'm trying to wrap my head around this project too, since it does seem interesting. Similar to what OP wrote, the sense I got from poking around (and from reading the bit in the README that basically says exactly this) is that there are two distinct pieces here: first, a nice, clean library for working directly with LLMs that refreshingly lacks the assumptions and brittle abstractions found in many current LLM frameworks; and second, everything related to automatic optimization of prompts.

The second half is the part I'm trying to better understand. More specifically, I understand that it uses a process to generate and select examples that are then added to the prompt, but I'm unclear whether it also does any prompt transformations other than these example-related improvements. To put it another way: if one were to reframe the second half as a library for automatic n-shot example generation and optimization, made possible by the various cool things this project has implemented (like the spec language/syntax), is there anything lost or not covered by that framing? (A made-up sketch of what I have in mind follows below.)
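
Concretely, my (possibly wrong) mental model of what compilation produces for a single LM call is just an instruction plus the bootstrapped demonstrations, something like this entirely made-up example:

  # Entirely made-up illustration, not actual DSPy output: the "compiled"
  # artifact for one LM call, i.e. an instruction plus bootstrapped
  # demonstrations, with the runtime input filled in at the end.
  compiled_prompt = (
      "Given a question, produce a short factual answer.\n\n"
      "Question: Who wrote Hamlet?\n"
      "Answer: William Shakespeare\n\n"
      "Question: What is the capital of France?\n"
      "Answer: Paris\n\n"
      "Question: {question}\n"
      "Answer:"
  )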

As more of an aside, I gave the paper a quick skim and plan on circling back when I have more time - are the ideas in the paper an accurate and complete representation of how the current state of the project works under the hood, and of the general types of optimizations being performed?

As another related aside, I vaguely remember coming across this a month or two ago and coming away with a different impression/understanding of it at the time - has the framing of or documentation for the project changed substantially recently, or perhaps the scope of the project itself? I seem to recall focusing mostly on the LM and RM steps and reading up a bit on retrieval model options afterwards. Of course, I could very well be mixing up projects or have just focused on the wrong things the first time around.


Thanks! Lots to discuss from your excellent response, but I'll address the easy part first: DSPy is v2 of DSP (demonstrate-search-predict).

The DSPy paper hasn't been released yet. DSPy is a completely different thing from DSP. It's a superset. (We actually implemented DSPy _using_ DSPv1. Talk about bootstrapping!)

Reading the DSPv1 paper is still useful to understand the history of these ideas, but it's not a complete picture. DSPy is meant to be much cleaner and more automatic.


Ah, gotcha! Do you have a rough idea of when the DSPy paper will be released? I'll keep an eye out.


Writing right now. This month :-)



