
The word "knowledge" as used in TA shouldn't be taken in the strict philosophical sense (i.e., a justified true belief).

Language models might indeed help us find potential sources of validation for various hypotheses, but deciding whether a hypothesis is true is another matter entirely.

This is an extremely important point.

These are very large, very complex, fuzzy language maps, best consulted like a crystal ball: a source of autocompletion suggestions, not definitive answers.
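
As a sketch of what "suggestions, not definitive answers" looks like in code (assuming the Hugging Face transformers library and gpt2, both arbitrary choices on my part, not anything from the article): sample several candidate completions and leave the judgment of which, if any, is true to a human reader.

    # Minimal sketch: treat the model as a suggestion engine by sampling
    # several candidate completions instead of trusting a single output.
    # "gpt2" is an arbitrary illustrative choice.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of Australia is"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Sample five completions; nothing here decides which one is correct.
    outputs = model.generate(
        **inputs,
        max_new_tokens=8,
        do_sample=True,
        top_k=50,
        num_return_sequences=5,
        pad_token_id=tokenizer.eos_token_id,
    )
    for i, seq in enumerate(outputs):
        print(f"suggestion {i}: {tokenizer.decode(seq, skip_special_tokens=True)}")

The point of the loop is that the model only proposes; the reader disposes.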

I’m unsurprised but also very disappointed by people’s desire to give these fuzzy answers the final word. Ceding final authority on truth to these machines just because they’re giant and impressive is incredibly misguided and a major step backwards.

These machines are literal embodiments of groupthink.

They can only add value if people understand their limitations. If people get carried away with them and start assuming they’re authoritative, we’re in for serious trouble.
