Hacker News | empiko's comments

I'm with you. I hope that the other posts will also pop up here.

> So it is just a case of older people pulling the ladder up behind themselves.

Is it though? I have a feeling that previous generations were simply happy with less. Now we are so connected, and everybody wants what social media tells them is the standard: a huge house in the most prominent city in their country, N exotic vacations every year, a meaningful job, etc. But this would have been a pretty tall order even 20 or 40 years ago.


I too feel this is a huge part of it, coupled with the fact that the "basics" of the last generation (a home you own, a stable job that doesn't overwork you on evenings and weekends, affordable options to have a family) are also being priced out of many people's lives. You feel like you're not matching what your parents and cultural artifacts tell you you should be achieving at your age, while at the same time you're flooded with influencers on ski trips to Japan or snorkelling in Jamaica every other weekend. It's a perfect recipe for feeling bad about your life, no matter how well off you are compared to yesterday's median statistic.

> I have a feeling that previous generations were simply happy with less.

They could also afford to buy houses on minimum wage salaries.

I have a much higher salary than my parents did, and in theory I can get more things from further afar, but I still live in a much more precarious situation.


If this were the case, we would see R&D costs dropping for OpenAI. Not sure that is happening.

They do still train models from scratch, and they are still making larger models.

You would expect that to use a lot more resources.

It seems that the GPT-5.x models are all likely extensions of the GPT-5 base model, with similar numbers of parameters.

The money spent on extending base models would be dwarfed by the scale increase of the next major number.


I don't think the math is correct here. The 25% of positions that have been seen more than once represent more than 25% of the occurrences. Even if all of them were seen only twice, they would already account for 40% of occurrences.
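The arithmetic can be checked with a quick sketch; the 100-position corpus below is just an illustrative assumption, not a figure from the thread:

```python
# Hypothetical corpus of 100 unique positions, 25% of which are repeated.
unique_positions = 100
repeated_fraction = 0.25                       # 25% of positions seen more than once
repeats = int(unique_positions * repeated_fraction)   # 25 repeated positions
singles = unique_positions - repeats                  # 75 positions seen once

# Lower bound: assume every repeated position appears exactly twice.
total_occurrences = singles * 1 + repeats * 2         # 75 + 50 = 125
repeated_occurrences = repeats * 2                    # 50

share = repeated_occurrences / total_occurrences
print(f"{share:.0%}")  # 40%
```

Any repeated position seen more than twice only pushes the share above 40%, so the 40% figure is a floor, not an estimate.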



Just a side note, but for most of history, art was participatory. People participated in rituals, songs, dances, etc. It's only a recent idea that art is about consumption of what others have created.


I think it's a question of how polished the process is. Imagine a fully functional new app popping up on your phone after a single prompt. Then imagine you can edit this app while you are using it, just by saying what you want it to do.

There might be a lot of people interested in all kinds of small apps for their particular hobbies, interests, jobs, etc.


Nobody in ML or AI is verifying all your references. Reviewers will point out if you miss a closely related work, but that's it. This is especially true with the recent (last two decades?) inflation in citation counts. You regularly see papers with 50+ references backing all kinds of claims and tangentially related work. The citation culture is really uninspiring.


I do not disagree with the post, but I am surprised that a post explaining very basic dataset construction is so high up here. But I guess most people just read the headline?


Yep, it is a thing in a handful of Western European countries.

