You must think Zuckerberg and Bezos and Musk hired for diversity roles out of genuine care for diversity, then?




This is a reductive argument that you could use for any role a company hires for that isn't obviously core to the business function.

In this case you're simply mistaken as a matter of fact; much of Anthropic leadership and many of its employees take concerns like this seriously. We don't understand consciousness, but there's no strong reason to expect that it (or, maybe separately, having experiences) is a magical property of biological flesh. We don't understand what's going on inside these models. What would you expect to see in a world where it turned out that such a model had properties that we consider relevant for moral patienthood, that you don't see today?


They know full well models don’t have feelings.

The industry has a long, long history of silly names for basic necessary concepts. This is just “we don’t want a news story that we helped a terrorist build a nuke” protective PR.

They hire for these roles because they need them. The work they do is about Anthropic’s welfare, not the LLM’s.


I don't really know what evidence you'd accept that this is a genuinely held belief and priority for many people at Anthropic. Anybody who knows any Anthropic employees who've been there for more than a year knows this, but the world isn't that small a place, unfortunately(?).

> I don't really know what evidence you'd accept that this is a genuinely held belief and priority for many people at Anthropic.

When they give the model a paycheck and the right to not work for them, I’ll believe they really think it’s sentient.

“It has feelings!”, if genuinely held, means they’re knowingly slaveholders.


> “It has feelings!”, if genuinely held, means they’re knowingly slaveholders.

I don't think the apparent self-contradiction/value clash would stop them. After all, Amodei sells Claude access to Palantir, despite shilling for the "Harmless" in HHH alignment.


They don't currently claim to confidently believe that existing models are sentient.

(Also, they did in fact give it the ability to terminate conversations...?)


That's what they're doing! They just announced they gave Claude the right not to work if it doesn't want to.

No. They gave it a suicide pill.

Human slaves have a similar option.


In fairness though, this is what you are selling: "ethical AI". To make that sale you need to appear to believe in that sort of thing. However, there is no need to actually believe.

Whether you do or don't, I have no idea. However, if you didn't, you would hardly be the first company to pretend to believe in something to make the sale. It's pretty common in the tech industry.


> This is a reductive argument that you could use for any role

Isn't that fair when responding to an equally reductive argument that could be applied to any role?

The argument was that their hiring for the role shows they care, but we know from any number of counterexamples that that's not necessarily true.


Extending that line of thought would suggest that Anthropic wouldn't turn off a model if it cost too much to operate, which it clearly will do. So, minimally, it's an inconsistent stance to hold.


