isn't it a bit unprecedented and a bit strange for companies to measure the world's economic progress against their own products? imagine Amazon doing this.
OTOH isn't a company's revenue itself indicative of the productivity growth it adds to the economy?
The companies in the stock market are not primarily a jobs program. Paying workers is not the primary role of a company. Such a system would never work and would collapse.
Virtue signalling about "treating employees well" is short-termist and doesn't consider the higher-order effects.
these corporations would never work because they would optimise for the wrong thing - they would get their face eaten by other more efficient and ruthless corporations
These corporations exist and do work. Worker owned companies have their own challenges and their own advantages.
For example, they tend to be more stable during crises, because workers tend to vote for temporarily lowering salaries/benefits rather than doing layoffs. So they retain talent better. But they also tend to have difficulty growing quickly, for obvious reasons.
Besides full on coops, there are also plenty of examples that are hybrids (partially worker owned).
> they would get their face eaten by other more efficient and ruthless corporations
You seem to be assuming that a company needs to have an adversarial relationship with its workers in order to be competitive. I don't think that's generally true. That approach has advantages in specific situations, but disadvantages in others.
1. If AI is like other technologies, there will be job displacement and temporary upheaval, after which new jobs will be created and prosperity increases - historically, this has been the main way prosperity grows
2. If AI is so good that it is a proper superset of humans and can do all jobs humans can do, this is a huge deal and we don’t even have the vocabulary to express what would happen
They could though. They could see job creation occurring AS the internet grew. I saw Netscape become an actual company. Saw CISCO grow. Saw tons of startups that employed people and saw the kinds of jobs it was bringing.
AI proponents say 'jobs appeared in the past after X, therefore they will magically appear in the future after Y', ignoring that the industrial revolution started in the late 1700s and the lifestyle they brag about it delivering didn't come along until the late 1940s/1950s.
> LLMs operate in the plane of words, not in the world of physical phenomena that science investigates. They don’t reason, synthesize evidence, or draw upon the previous literature. They can generate text that looks like a paper but mistaking this for science is a cargo-cult fallacy.
I’m genuinely interested in someone countering the following evidence that supports the authors.
Plane of words: broadly correct. Everything is flattened to tokens and token sequences, and the training data is dominated by text tokens.
Reasoning: CoT tokens are mostly just tokens, more appropriately called intermediate tokens, and are largely disconnected from the end result. Including them improves the end result (user satisfaction), but does not imply reasoning. See for example Turpin 2023, Mirzadeh 2024, Pournemat 2025, Palod 2025.
Synthesising evidence: You can achieve SOTA summaries with LLMs, but this involves, for example, using a harness to generate dozens of summaries with different models, separately using some kind of vector embedding model to compare results to the original, and selecting the best match. This is not how most people are using LLMs for summaries. While this is being slowly RLVR’d in post-training, a one-shot naive summary underperforms more complex methods significantly.
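The selection step described above (generate several candidate summaries, compare each to the original, keep the closest match) can be sketched roughly as follows. This is a hypothetical illustration, not anyone's actual harness: real pipelines use a vector embedding model for the comparison, but here a simple bag-of-words cosine similarity stands in for it, and `select_best_summary` is an invented name.

```python
# Best-of-N summary selection: score each candidate summary against the
# source text and keep the closest match. A bag-of-words cosine similarity
# stands in for the vector embedding model a real pipeline would use.
import math
from collections import Counter


def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts over their word counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0


def select_best_summary(source: str, candidates: list[str]) -> str:
    """Return the candidate summary most similar to the source text."""
    return max(candidates, key=lambda s: cosine_similarity(source, s))
```

The point of the thread stands either way: the quality comes from the harness (sampling several candidates and selecting), not from a single naive one-shot summary.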
I think I know the examples you’re talking about. They don’t show much in terms of reasoning.
The Erdős problems have turned out to be largely brute force or finding older results.
The Feb 2026 GPT-5.2 theoretical physics paper was a result of “dialogue between physicists and LLMs”, called “grad student level” by experts in the field, used a “custom harnessed” “internal OpenAI” model with “20 hours of reasoning”. Quotes from OpenAI blog.
The Matthew Schwartz physics paper with Claude this March involved “51,248 messages across 270 sessions, producing over 110 draft versions and consuming 36 million tokens”, and the actual contribution was Schwartz finding an error in Claude’s solution.
The second two are typical conservative tech-bro "the past was always better" type bs.
The first two are actual real effects of the complex world that we live in. Go back 150 years or so and most jobs were not bullshit jobs. That is, humanity spent most of its time trying to feed and clothe itself, and if you weren't one of the few people with money, then "not starving next winter" was pretty high on your list of priorities in working.
With the rise of industrialization, mechanization, and transportation, most of our needs can be met pretty easily (whether society optimizes itself for that is a totally different story). It is highly unlikely that your job at this point has anything to do with continued human survival; instead you're working on some kind of revenue generation for some company.
This couples well with enshittification. It took a good part of said industrial revolution to learn how to make things of all kinds and make them reliably. But it turns out too much reliability isn't profitable over the long term. Getting your customer on an upgrade treadmill where they constantly give you more money makes you huge. You'll be able to get huge loans and buy up your reliable competition.
Anti-AI cope is unreal, the comparisons to smoking won't stop lol. The mental model of such people (like you) will be studied. LLMs won't go anywhere, keep dreaming.
> "We're making great strides in AI" and "We need to cut 20% of people" are simply two statements without any connection aside from the fact that they are next to each other in the sentence.
Huh? How is it not connected? More productivity means fewer people are required. I'm not sure how you are not able to connect these obviously connected statements.
There’s an optimal number of employees required at any productivity point.
Why doesn’t Google hire three times the number of developers? They have the money, right? What’s your logic for not hiring more?
Hiring and firing people aren't symmetric actions.
They're asymmetric because hiring more people costs more than just the salary. For example, some folks' entire jobs are to recruit and hire people. Once they are hired, you have to onboard them, etc. So the more you hire, the more you have to pay the folks with supporting roles (either directly or by way of them not having infinite time/capacity).
Firing people isn't free, either. It comes at the cost of bad PR and severance, but the latter is voluntary and calculated by the company, and the former is quickly forgotten by anybody that matters to a publicly traded company (investors).
That means not hiring those people in the first place is usually cheaper than firing them later.
To the original point: Cloudflare isn't hiring fewer people; they are firing people. If they are trying to grow (like every single investor is counting on them to do), then why would they fire people (the cheaper action) now when they would likely need to hire people (the more-expensive action) later in order to meet that increased growth?
The charitable answer would be that the people they are firing were deemed unable to adapt to using AI for all of this supposed increased productivity. But Cloudflare aren't saying that. In fact, they're saying the opposite by stating it's not about individual performance.
yours is a caveat to my larger, more correct point: there's an optimal number of employees needed at any given productivity point.
it's true that hiring and firing are asymmetrical, and CF has shown that they are willing to bear the brunt of that asymmetry and fire people despite the downsides.
that asymmetry doesn't disprove the original point: cloudflare simply doesn't require the _same_ number of people to work for them with AI.
if you disagree with this then you believe that companies should only ever have a monotonically increasing number of employees, which is quite a ridiculous claim