
I'd argue the GPT-3 results were heavily cherry-picked by the few people who had access, at least if the old versions of 3.5 and turbo are anything to go by. The hype would've died instantly if people had actually been able to try the model themselves and seen how inconsistent the output was.

If you want to try out GPT-2 to refresh your memory, here [0] is an online demo. It's bad; I'd say worse than classical graph/tree-based autocomplete (a rough sketch of what I mean by that is below). I'm fairly sure SwiftKey produces more coherent sentences.

[0] https://transformer.huggingface.co/doc/gpt2-large
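By "classical tree-based autocomplete" I mean something like a trie over the vocabulary that returns the most frequent words extending the typed prefix. A minimal sketch, as a generic illustration only; this is not SwiftKey's actual algorithm, and all names here are made up:

    class TrieNode:
        def __init__(self):
            self.children = {}   # char -> TrieNode
            self.freq = 0        # times a word ending here was seen

    class Autocomplete:
        # Minimal prefix-tree autocomplete: suggest the most frequent
        # words extending a given prefix.
        def __init__(self):
            self.root = TrieNode()

        def insert(self, word, count=1):
            node = self.root
            for ch in word:
                node = node.children.setdefault(ch, TrieNode())
            node.freq += count

        def suggest(self, prefix, k=3):
            node = self.root
            for ch in prefix:            # walk down to the prefix node
                if ch not in node.children:
                    return []
                node = node.children[ch]
            results = []
            def collect(n, suffix):      # gather every completion below it
                if n.freq:
                    results.append((n.freq, prefix + suffix))
                for ch, child in n.children.items():
                    collect(child, suffix + ch)
            collect(node, "")
            results.sort(reverse=True)   # most frequent first
            return [w for _, w in results[:k]]

    ac = Autocomplete()
    for w, c in [("there", 50), ("their", 40), ("them", 30), ("theory", 5)]:
        ac.insert(w, c)
    print(ac.suggest("the"))  # -> ['there', 'their', 'them']

Real keyboard engines layer n-gram context and user history on top of something like this, but even the bare baseline never emits ungrammatical garbage, which is the point of the comparison.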

OpenAI, when they gave the press access to GPT, said the raw output must not be published, for AI safety reasons. So naturally people self-selected the best outputs to share.
