
isn't it a bit unprecedented and strange for companies to measure the world's economic progress through their own products? imagine Amazon doing this.

OTOH isn't a company's revenue itself indicative of the productivity growth it adds to the economy?


The companies in the stock market are not primarily a jobs program. Paying workers is not a company's primary role. Such a system would never work and would collapse.

Virtue signalling about "treating employees well" is short-termist and doesn't consider the higher-order effects.


these corporations would never work because they would optimise for the wrong thing - they would get their face eaten by other more efficient and ruthless corporations

These corporations exist and do work. Worker owned companies have their own challenges and their own advantages.

For example, they tend to be more stable during a crisis, because workers tend to vote for temporarily lowering salaries/benefits rather than doing layoffs. So they retain talent better. But they also tend to have difficulty growing quickly, for obvious reasons.

Besides full-on coops, there are also plenty of examples of hybrids (partially worker owned).

> they would get their face eaten by other more efficient and ruthless corporations

You're possibly assuming that a company needs to have an adversarial relationship with its workers in order to be competitive. I don't think that's generally true. This approach has advantages in specific situations, but disadvantages in others.


I'm literally saying the opposite: that no adversarial relationship needs to exist.

That's exactly why you don't need worker-owned companies.


What's new here? It looks good - accessing connectors using Claude - but I'm not sure whether there's something fundamentally novel.


Looks useful, so these are new plugins. But what are plugins vs skills vs connectors?

A plugin is just a bundle of MCPs, skills and templated prompts.

A skill cannot provide MCPs and can't provide custom templated prompts; each skill is its own slash command.

With a plugin you can define any number of custom slash commands, and you can define MCPs as well as skills. So it bundles all of these things together.

By installing a plugin, you are basically installing a bunch of MCPs, skills and custom slash command prompts.
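As a rough sketch of that bundling (the exact file and directory names here are assumptions for illustration, not the documented plugin schema), a plugin might look something like:

```text
my-plugin/
├── .claude-plugin/
│   └── plugin.json      # plugin metadata (name, description, version)
├── commands/            # templated prompts; each file becomes a slash command
│   ├── review.md
│   └── deploy.md
├── skills/
│   └── changelog/
│       └── SKILL.md     # a bundled skill
└── .mcp.json            # MCP server definitions the plugin ships
```

So installing the one plugin drops all three kinds of pieces in at once.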


1. If AI is like other technologies, there will be job displacement and temporary upheaval, after which new jobs will be created and prosperity increases - this is by far the only good way to increase prosperity

2. If AI is so good that it is a proper superset of humans and can do all jobs humans can do, this is a huge deal and we don’t even have the vocabulary to express what would happen

I don’t foresee a third option.


the fact that no one can actually describe these "new jobs" makes #2 appear increasingly probable.

You couldn't, in 1989, have predicted an SDE 6 at Google working on ad tech, using Google Spanner and optimising for SEO.

That's why you can't say now what jobs will be created.


They could, though. They could see job creation occurring as the internet grew. I saw Netscape become an actual company. Saw Cisco grow. Saw tons of startups that employed people, and saw the kinds of jobs it was bringing.

AI proponents say 'jobs appeared in the past after X, therefore they will magically appear in the future after Y', ignoring that the industrial revolution started in the late 1700s and the lifestyle they brag about it delivering didn't come along until the late 1940s/1950s.


This website is not worth your time.

> LLMs operate in the plane of words, not in the world of physical phenomena that science investigates. They don’t reason, synthesize evidence, or draw upon the previous literature. They can generate text that looks like a paper but mistaking this for science is a cargo-cult fallacy.

This is clearly wrong


I’m genuinely interested in someone countering the following evidence that supports the authors.

Plane of words: broadly correct. Everything is flattened to tokens and token sequences, and the training data is dominated by text tokens.

Reasoning: CoT tokens are mostly just tokens, more appropriately called intermediate tokens, and are largely disconnected from the end result. Including them improves the end result (user satisfaction), but does not imply reasoning. See for example Turpin 2023, Mirzadeh 2024, Pournemat 2025, Palod 2025.

Synthesising evidence: You can achieve SOTA summaries with LLMs, but this involves, for example, using a harness to generate dozens of summaries with different models, separately using some kind of vector-embedding model to compare results to the original, and selecting the best match. This is not how most people use LLMs for summaries. While this is slowly being RLVR'd into post-training, a one-shot naive summary significantly underperforms more complex methods.
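The select-best-match step described above can be sketched roughly like this. Note that `embed` here is a toy bag-of-words stand-in for a real embedding model, and in a real harness the candidate summaries would come from multiple LLMs rather than being hard-coded:

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in embedding: a bag-of-words count vector.
    # A real harness would call a trained embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_summary(original, candidates):
    # Pick the candidate whose embedding is closest to the original text's.
    ref = embed(original)
    return max(candidates, key=lambda c: cosine(embed(c), ref))

original = "the cat sat on the mat and watched the birds outside"
candidates = [
    "a dog ran in the park",            # unrelated
    "the cat sat on the mat",           # close paraphrase
    "stock prices rose sharply today",  # unrelated
]
print(best_summary(original, candidates))  # -> the cat sat on the mat
```

The point is only that selection happens outside the LLM: generation and evaluation are separate steps in the harness.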


What? Reasoning models are inventing proofs for unsolved open problems in mathematics. That is my benchmark for reasoning.

Is symbol manipulation reasoning? If so, machines have always been capable of reasoning, we just instructed them with a language other than English.

I think I know the examples you’re talking about. They don’t show much in terms of reasoning.

The Erdős problems have turned out to be largely brute force or finding older results.

The Feb 2026 GPT-5.2 theoretical physics paper was a result of “dialogue between physicists and LLMs”, called “grad student level” by experts in the field, used a “custom harnessed” “internal OpenAI” model with “20 hours of reasoning”. Quotes from OpenAI blog.

The Matthew Schwartz physics paper with Claude this March involved “51,248 messages across 270 sessions, producing over 110 draft versions and consuming 36 million tokens”, and the actual contribution was Schwartz finding an error in Claude’s solution.


It is clearly correct, but they sometimes work anyway. Which is something one needs to accept even if you're not a fan of LLMs.

Been noticing this new phenotype of tech bro who writes with an air of superiority, subtly belittling all those beneath him. Also ardently believes in

- bullshit jobs

- enshittification

- kubernetes being a psyop

- tech landscape was best exactly during his career peak and has gone down since


The second two are typical conservative tech-bro "the past was always better" type bs.

The first two are actual real effects of the complex world that we live in. Go back 150 years or so and most jobs were not bullshit jobs. That is, humanity spent most of its time trying to feed and clothe itself, and if you weren't one of the few people with money then "not starving next winter" was pretty high on your list of priorities in working.

With the rise of industrialization, mechanization, and transportation, most of our needs can be met pretty easily (whether society optimizes itself for that is a totally different story). It is highly unlikely that your job at this point has anything to do with continued human survival; instead you're working on some kind of revenue generation for some company.

This couples well with enshittification. It took a good part of said industrial revolution to learn how to make things of all kinds, and make them reliably. But it turns out too much reliability isn't profitable over the long term. Getting your customers on an upgrade treadmill where they constantly give you more money makes you huge. You'll be able to get huge loans and buy up your reliable competition.


What are your opinions on the first two?

Yes, give or take a couple of those points.

Some additional ones:

- believes SWE is now fundamentally different because of AI

- repeatedly belittles people who aren't on the AI hype train

- believes slop bans and distaste of AI art is gatekeeping


Domain specific models will never be a thing. You don't get generalised intelligence with that.

https://simianwords.bearblog.dev/why-domain-specific-llms-wo...


Anti-AI cope is unreal; the comparisons to smoking won't stop lol. The mental model of such people (like you) will be studied. LLMs won't go anywhere, keep dreaming.

Studied by whom? Your virtual AI concubine who has you under her thumb? I thought human thinking is obsolete, as can be seen by your comments.

Sure. Let's see whether AI sticks or not. Till then, whine about it on the internet. Maybe a few people will care.

> "We're making great strides in AI" and "We need to cut 20% of people" are simply two statements without any connection aside from the fact that they are next to each other in the sentence.

Huh? How are they not connected? More productivity means fewer people are required. I'm not sure how you're unable to connect these obviously connected statements.


> More productivity means fewer people are required.

Required for what? If your goal is growth, and AI really is improving the productivity of every employee that uses it, then why would you fire anyone?


There's an optimal number of employees required at any productivity point. Why doesn't Google hire 3 times the number of developers? They have the money, right? What's your logic for not hiring more?

Because firing isn't simply the inverse of hiring.

Hiring 1 developer instead of 3 is not the same cost as firing 2 developers.


why is it not? if google could make more money by hiring 3x the developers, why don't they do it? just explain that

Hiring and firing people aren't symmetric actions.

They're asymmetric because hiring more people costs more than just the salary. For example, some folks' entire jobs are to recruit and hire people. Once they are hired, you have to onboard them, etc. So the more you hire, the more you have to pay the folks with supporting roles (either directly or by way of them not having infinite time/capacity).

Firing people isn't free, either. It comes at the cost of bad PR and severance, but the latter is voluntary and calculated by the company, and the former is quickly forgotten by anybody that matters to a publicly traded company (investors).

That means not hiring those two people in the first place is usually cheaper than firing them later.

To the original point: Cloudflare isn't hiring fewer people; they are firing people. If they are trying to grow (like every single investor is counting on them to do), then why would they fire people (the cheaper action) now when they would likely need to hire people (the more-expensive action) later in order to meet that increased growth?

The charitable answer would be that the people they are firing were deemed unable to adapt to using AI for all of this supposed increased productivity. But Cloudflare aren't saying that. In fact, they're saying the opposite by stating it's not about individual performance.


yours is a caveat against my larger, more correct point: there's an optimal number of employees needed at any given productivity point.

it's true that hiring and firing are asymmetrical, and CF has shown that they are willing to bear the brunt of the asymmetry and fire people despite the downsides.

the fact that the asymmetry exists doesn't disprove the original point: cloudflare simply doesn't require the _same_ number of people to work for them with AI.

if you disagree with this then you believe that companies should only ever have a monotonically increasing number of employees, which is quite a ridiculous claim
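the claim above can be put as a back-of-the-envelope sketch (all numbers hypothetical): if the amount of work to be done is roughly fixed, the headcount needed falls as per-person productivity rises:

```python
import math

# Back-of-the-envelope model; all numbers are made up for illustration.
# If total work is roughly fixed, required headcount scales inversely
# with per-person productivity.
def required_headcount(total_work: float, productivity_per_dev: float) -> int:
    return math.ceil(total_work / productivity_per_dev)

print(required_headcount(1200, 10))  # -> 120 devs before the productivity gain
print(required_headcount(1200, 15))  # -> 80 devs after a 1.5x gain
```

in practice demand also grows, which pulls the other way - but that's exactly the disagreement in this thread.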

