
Gwern points out that "prompt engineering" is very important in GPT-3 (https://www.gwern.net/GPT-3#prompts-as-programming), and it's entirely possible this will be even more pronounced in GPT-4.

If that's the case, there's an obvious moat (perhaps not an incredibly deep one) in being better at prompt engineering than your competitors, dedicating R&D effort to discovering new prompt engineering tricks/principles, etc.

I could see this as being kind of like an advanced form of SEO.
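
To make "prompt engineering" concrete: the simplest version is just reformatting the task and prepending a few worked examples. A toy illustration of my own (the reviews and wording are made up):

    # Zero-shot: just ask.
    zero_shot = 'Is "The part arrived late again." positive or negative?'

    # Engineered prompt: fix an output format and show a few worked examples first.
    few_shot = (
        "Classify the sentiment of each review as Positive or Negative.\n\n"
        'Review: "The battery lasts all day."\nSentiment: Positive\n\n'
        'Review: "It stopped working after a week."\nSentiment: Negative\n\n'
        'Review: "The part arrived late again."\nSentiment:'
    )

Getting the format, the examples, and the wording right is where the claimed moat would live.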




I agree prompts make a huge difference in how well GPT-3 performs. The relevant questions are:

1) how easy is it to replicate best practices? If it's simple reformatting and everyone knows how to do it, the playing field gets leveled quickly.

2) what does the improvement curve look like - how far can you push performance through better prompts before you get diminishing marginal returns?

It's too early to tell and we'll need more samples to figure this out.


I don't think it's going to be about a single prompt; reverse engineering multiple prompts interacting with each other is hard. There are a lot of cool things to be done with the following (a rough sketch comes after the list):

(a) creating a pipeline of prompts that combines the outputs of earlier prompts into new prompts in a predefined manner,

and (b) designing prompts to generate other prompts
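
Here's a minimal sketch of what (a) and (b) could look like. The call_gpt3 helper and the prompt templates are placeholders of mine, standing in for whatever completion API and wording you'd actually use:

    def call_gpt3(prompt: str) -> str:
        # Placeholder: wrap your actual completion call here.
        raise NotImplementedError

    def summarize(article: str) -> str:
        # Prompt 1: condense the raw input.
        return call_gpt3(
            f"Summarize the following article in three sentences:\n\n{article}\n\nSummary:"
        )

    def extract_claims(summary: str) -> str:
        # Prompt 2: built from the output of prompt 1.
        return call_gpt3(
            f"List the factual claims made in this summary as bullet points:\n\n{summary}\n\nClaims:"
        )

    def make_prompt_for(task: str) -> str:
        # (b): a prompt whose output is itself a prompt for a later call.
        return call_gpt3(
            f"Write an effective GPT-3 prompt for the following task:\n\n{task}\n\nPrompt:"
        )

    def pipeline(article: str) -> str:
        # (a): outputs of earlier prompts feed later prompts in a predefined order.
        return extract_claims(summarize(article))

The interesting R&D is in which intermediate prompts to chain and how to word them, which is hard to reverse engineer from the final output alone.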


With the right kind of online learning, and possibly with some of the weights frozen, GPT-3 could gain effectively unlimited memory instead of the fixed 2,048-token context window.
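
As a rough illustration of the idea (not something the GPT-3 API exposes), here is what freezing most of the weights and updating only the last block looks like with GPT-2 via Hugging Face; the layer choice and learning rate are arbitrary:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    model = GPT2LMHeadModel.from_pretrained("gpt2")
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

    # Freeze everything, then unfreeze only the last transformer block.
    for p in model.parameters():
        p.requires_grad = False
    for p in model.transformer.h[-1].parameters():
        p.requires_grad = True

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-5
    )

    def online_step(text: str) -> None:
        # One online update: standard LM loss on the new text; gradients only
        # reach the unfrozen block, so most of the pretrained weights stay intact.
        ids = tokenizer(text, return_tensors="pt").input_ids
        loss = model(input_ids=ids, labels=ids).loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()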


True, but unless there is a clear leader in your market, lots of good-enough products built on GPT-3 will appear, and to compete with them you will need a product that is at least 5-10x better; 2x won't suffice. So it will probably come down to who has the bigger marketing budget.



