
If you're using the same model for all of the analyses, would it not give reasonably consistent results?



Who cares if the results are "consistent", whatever that means? The question is whether the results are true and correct. If they are erroneous, consistency is useless.


If they are consistently wrong within a given margin of error, then they are still very useful: a stable bias largely cancels out when you compare categories or decades against each other, so the trends remain meaningful. I agree that we'd need to define our terms here to have a meaningful debate.


I mean, they're not useful at all, let alone "very useful". Can you explain what use this article has if its classification of movies into the various categories (e.g. future, present, past; world better, same, worse) is consistently wrong? To me it seems clear we can throw it all away, as it is then just a bunch of meaningless words.


I think my concern is less about consistency, and more that I don't know how the LLM is deciding what is, for example, a "dystopian setting." Sure, it could be true that the percent of sci-fi movies since the 1950s that have a dystopian setting has increased from 11% to 46%. An alternate explanation is that the LLM is trained on more modern language, and so is better at detecting modern dystopian storylines. And with this method, we cannot know.
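
For what it's worth, "consistent" is measurable here: re-run the same classifier on the same item several times and look at the agreement rate. A minimal sketch in Python, assuming a hypothetical classify(synopsis) function wrapping the LLM call (nothing here is from the article; the stub just simulates a noisy classifier):

    from collections import Counter
    import random

    def consistency_report(classify, items, runs=5):
        # Re-classify each item `runs` times and record the modal label
        # plus the fraction of runs that agreed with it.
        report = {}
        for item in items:
            labels = [classify(item) for _ in range(runs)]
            label, count = Counter(labels).most_common(1)[0]
            report[item] = (label, count / runs)
        return report

    # Stub simulating a noisy LLM classifier; swap in a real API call.
    def noisy_classify(synopsis):
        return "dystopian" if random.random() < 0.8 else "not dystopian"

    print(consistency_report(noisy_classify, ["Blade Runner", "Her"]))

Note this only measures run-to-run noise. It says nothing about the systematic bias the parent describes (the model being better at detecting dystopias written in modern language), which high agreement would not rule out.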



