Hacker News

I don't know if anyone remembers Mindpixel. It was a crowd-sourced project to collect common-sense data that could be used to drive an AI. But for each prompt, the only response stored was a 1 or a 0, indicating that the input made sense to a human or not. It never amounted to much, but it seems like GPT is missing the core idea implemented in that 20-year-old project: that some inputs are just not meaningful.
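The Mindpixel scheme described above can be sketched in a few lines: each crowd-sourced item is a statement paired with a 1/0 vote on whether it makes sense to a human, and the corpus aggregates those votes. This is a hypothetical reconstruction; the class and method names below are invented for illustration, not part of any surviving Mindpixel tooling.

```python
from collections import defaultdict
from typing import Optional


class MindpixelStore:
    """Toy store of binary sense/nonsense votes, one list per statement."""

    def __init__(self) -> None:
        self.votes = defaultdict(list)  # statement -> list of 0/1 votes

    def record(self, statement: str, makes_sense: int) -> None:
        """Record one participant's judgment: 1 = meaningful, 0 = not."""
        self.votes[statement].append(makes_sense)

    def coherence(self, statement: str) -> Optional[float]:
        """Fraction of voters who judged the statement meaningful,
        or None if the statement was never rated."""
        v = self.votes.get(statement)
        return sum(v) / len(v) if v else None


store = MindpixelStore()
store.record("Water is wet.", 1)
store.record("Water is wet.", 1)
store.record("Colorless green ideas sleep furiously.", 0)

print(store.coherence("Water is wet."))  # → 1.0
print(store.coherence("A sentence nobody rated"))  # → None
```

The point of the 1/0 response is exactly the one made above: the corpus encodes which inputs are meaningful at all, a signal a generative model trained only to continue text never sees.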



Ah, but what does having emitted a meaningless signal communicate about the internal state of the participant? Might the occurrence and nature of the signal itself carry the message? Maybe GPT-3 is tapped into a deeper level than we are.




