Hacker News

In general, no they can't:

https://gwern.net/gpt-3#bpes

https://paperswithcode.com/paper/most-language-models-can-be...

The apparent improvement in that capability is due to the vocabulary of modern LLMs increasing. It's still only putting lipstick on a pig.
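To illustrate the BPE argument: a subword vocabulary maps frequent character sequences to single opaque ids, so the model consumes ids rather than letters, and syllable or character counts are not directly visible to it. This is a toy sketch, not any real LLM's tokenizer; the vocabulary and the greedy longest-match segmentation are simplified stand-ins for actual BPE merges.

```python
# Toy illustration only: a tiny made-up subword vocabulary, standing in
# for a BPE vocabulary. Frequent chunks become single opaque ids.
toy_vocab = {"hai": 0, "ku": 1, "s": 2}

def toy_tokenize(word, vocab):
    """Greedy longest-match segmentation (a simplification of BPE)."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            if word[i:j] in vocab:
                tokens.append(vocab[word[i:j]])
                i = j
                break
        else:
            tokens.append(-1)  # unknown-byte fallback
            i += 1
    return tokens

# Six letters and two-plus syllables collapse into three opaque ids;
# the "model" sees [0, 1, 2], not h-a-i-k-u-s.
print(toy_tokenize("haikus", toy_vocab))  # → [0, 1, 2]
```

Growing the vocabulary (as modern tokenizers do) changes which chunks get their own ids, but the model still never observes characters directly, which is the "lipstick on a pig" point.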

I don't see how results from two years ago have any bearing on whether the models we have now can generate haikus (which, in my experience, they absolutely can).

And if your "lipstick on a pig" argument is that even when they generate haikus, they aren't really writing haikus, then I'll link to this other gwern post about how they'll never really be able to solve the Rubik's Cube: https://gwern.net/rubiks-cube
