i'm curious, how did you arrive at the "40-50%" estimate of possible human performance?
the task of "predicting the next word" can be understood as either "correctly choosing the next word in the hidden context", or "predicting the likelihood of each possible word".
the quiz evaluates against the former, but humans are still far from being able to express a numeric probability for each possibility.
i only consciously arrive at a vague feeling of confidence, rather than being able to weigh the prediction of each word with fractional precision.
one might say that LLMs have above-human introspective ability in that regard.
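the distinction between the two framings can be sketched in a few lines. this is a toy example with made-up numbers, not a real model's output: framing 1 scores only whether the single top pick matches the hidden word, while framing 2 scores the whole distribution via log-loss, which is what language models are actually trained on.

```python
import math

# hypothetical next-token distribution over a tiny vocabulary,
# purely to illustrate the two framings discussed above
probs = {"cat": 0.55, "dog": 0.30, "car": 0.10, "cloud": 0.05}

hidden_next_word = "dog"  # the word that actually follows in the hidden context

# framing 1: "correctly choosing the next word" -- top-1 accuracy
top_choice = max(probs, key=probs.get)
correct = top_choice == hidden_next_word  # False here: "cat" was the top pick

# framing 2: "predicting the likelihood of each possible word" -- log-loss
# (cross-entropy) rewards putting high probability on the true word even
# when it isn't the single top pick
log_loss_bits = -math.log2(probs[hidden_next_word])
```

under framing 1 this prediction is simply wrong; under framing 2 it still gets substantial credit for assigning 30% to the true word, which is the kind of fractional weighing the comment above says humans can't consciously report.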
> "Researchers claim it is the first time an LLM has made a novel scientific discovery"
The LLM didn't make the discovery on its own in that work; it played a role in one step of unknown importance. The other steps involved a lot of manual coding and a lot of CPU time to brute-force the solution.
If you have strong opinions, you're dividing people. If you're Bezos, you've got a lot of influence. If you then dump those opinions out of opportunism, you're ignoring any morality.
I agree. There are hundreds of comments on this post, which is not at all unusual for these kinds of societal topics, yet very few actually clarify the basics:
The goals, desires, motivations, assumptions, etc., of the various parties and groups involved.
Without this, the vast majority of the effort seems to be spent going in circles: some are persuaded from position A to B, some from B to C, and some from C back to A.
i'm reminded of the short story "unwirer" by charlie stross and cory doctorow, which imagines a counterfactual universe in which the internet was captured by corporate interests much earlier
> but this would require people to be able to manage their own private keys
a few generations ago, social networking would have seemed infeasible because it would require widespread literacy (along with many other reasons, of course). widespread private key management doesn't seem that infeasible to me.