The survey (https://career-research.mynavi.jp/reserch/20241003_86953/) which is the basis for the 1 in 5 claim seems sus; I'd bet it's not true. (I don't read Japanese, but had Claude read it for me; if someone who does read Japanese could confirm, that would be interesting.)
They say 16.6% of people who changed jobs last year used these services, but only 23.2% of companies report having any employees use them. If 16.6% is correct, the company-side number should be much higher, since companies typically have multiple resignations per year; a quick back-of-envelope sketch below illustrates this.
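To make that concrete, here's a minimal sketch (Python), assuming each resignation independently goes through an agency with the survey's worker-side probability of 16.6%; the per-company resignation counts n are my own hypothetical inputs, not from the survey:

    # If each resignation independently uses an agency with probability p,
    # a company with n resignations per year sees at least one such case
    # with probability 1 - (1 - p)**n.
    p = 0.166  # worker-side figure reported by the survey
    for n in (1, 2, 3, 5, 10):  # hypothetical resignations per company/year
        share = 1 - (1 - p) ** n
        print(f"n={n}: {share:.1%} of companies see >=1 agency resignation")
    # n=1 -> 16.6%, n=2 -> 30.4%, n=3 -> 42.0%, n=5 -> 59.7%, n=10 -> 83.7%

Even at two resignations a year the expected company-side share is over 30%, well above the reported 23.2%, so the two figures are hard to reconcile unless agency use clusters heavily in a handful of companies.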
The methodology behind the first number is described only as an "internet survey," with no detail on how respondents were found. There are a lot of ways to do that which would oversample people who use these services.
I think this year's figures may well be significantly higher.
Resignation agency services were widely covered in the Japanese media in the first half of this year, which made the public aware of them. The trend is also visible on Google Trends[1].
In fact, the head of 退職代行モームリ (Taishoku-daikō Mōmuri), the largest resignation agency, said in an interview that the number of users during this year's onboarding season was ten times last year's[2].
Same, except I mostly got cats, some video game stuff, and book quotes. There was another thread here the other day where someone complained that the first thing they saw was furry drawings. It seems the first things people see on Bluesky vary quite a bit.
My point is that it shouldn't be that vitriolic as soon as someone joins. When I joined Twitter, I got genuinely interesting content, but of course that was well before Musk and today's "the algorithm." Sad to see Bluesky basically replicating what Twitter does now instead of what Twitter was in the beginning.
As I mentioned above, I have 0 following and 0 followers, so Bluesky surfaces Discover-tab content on the Following tab, even though the correct behavior would be to show nothing. Since Bluesky does this deliberately, I don't see it as being any better than Twitter, engagement-chasing-wise. After all, it is VC backed and will need to make money somehow.
> even though the correct behavior would be to show nothing
I think I see the problem. There's room for disagreement about what the "correct" behavior is in this edge case of a user making unusual decisions about how to use the app. I can see it either way, and there are probably pros and cons, but "show them whatever bullshit" is not obviously more incorrect than "show them nothing at all."
If you're refusing to use even the most basic tools to shape your feed or give it any clues about what to show you, you're kind of in UI UB (undefined behavior) territory.
I understand your point but I find it interesting that others cannot.
IMHO, online bickerers don't care for understanding. Understanding requires responding, not reacting. If we take a step back to contemplate your words, we lose the stimulation in the exchange.
I've thought about that for myself: missing the other person's point and debating imaginary outrages isn't actually fun or even winning. It's just stimulation for an internet addiction.
My guess is they just trained gpt-3.5-turbo-instruct on a lot of chess, much more than appears in e.g. CommonCrawl, in order to boost it on that task, and then didn't do this for other models.
People are alleging that OpenAI is calling out to a chess engine, but they don't seem to be considering this less scandalous possibility.
Of course, to the extent people are touting chess performance as evidence of general reasoning capability, OpenAI taking costly actions to boost chess performance specifically, without being transparent about it, is still frustrating and, imo, dishonest.
> Luminaries in the field such as Demis Hassabis and Yann LeCun believe that AI can turbocharge scientific progress and lead to a golden age of discovery. Could they be right?... Such claims are worth examining, and may provide a useful counterbalance to fears about large-scale unemployment and killer robots.
But will we maintain control of the above-human-ability, autonomous AI systems these companies are racing to build? This is the AI control problem.
If not, then "AI can automate science" isn't much of a counterpoint or reason to be optimistic -- science may be automated, but not under any human's control and not for any human's benefit. In fact, if we're in this situation, the ability of AI systems to automate science is worse news than otherwise, in the same way that the invention of science by humans was bad (or at best, very mixed) news for the animals of Earth.
I'm pretty sure Yann LeCun at one point didn't really care if AIs replaced humans, but I think people got through to him that what happens after AIs take over would almost certainly look really boring to a hypothetical intelligent human observer, even assuming the AI system survives into the long term. In the set of possible AIs that kill all humans, I'd suggest that almost all of them are not properly aligned to their own long-term survival either.