Hacker News

No, not better than the top 3.

> But for sure models can solve more math limits than the average person, who probably can't solve a single one.

Some people are domain experts. The pretrained GPTs are certainly not (nor are they trained to be).

Some people are polymaths but not domain experts. This is still impressive, and it is where the GPTs fall.

The final conclusion I have is this: These models demonstrate above average understanding in a plethora of widely disparate fields. I can discuss mathematics, computation, programming languages, etc with them and they come across as knowledgeable and insightful to me, and this is my field. Then, I can discuss with them things I know nothing about, such as foreign languages, literature, plant diseases, recipes, vacation destinations, etc, and they're still good at that. If I met a person with as much knowledge and ability to engage as the model, I would think that person to be of very high intelligence.

It doesn't bother me that it's not the best at anything. It's good enough at most things. Yes, its results are not always perfect. Its code doesn't work on the first try, and it sometimes gets confused. But many polymaths stumble at a certain level too, and we don't call them stupid for it.

My old physics professor was very smart in physics and also a great pianist. He probably couldn't play as well as Chopin. Does that make him an idiot? Of course not. He was still above average at piano too! And that makes him more of a genius than if he were just a great scientist.






Agreed, there are uses for LLMs.

My point was about the Singularity, what it means, and why LLMs are not there.

So you missed my point? Was I not clear enough about what I was talking about?



