Sorry for the late reply and thanks for taking the trouble to try this.

My intuition from a quick scan of your second experiment above is that, in a sense, you reused the scaffold of the original prompts and dressed it up in a different context. GPT-3 and other large language models are very good at discerning patterns in text and completing them with new tokens, but this still doesn't tell us much about their ability to understand the text.
