
Comically benign stuff that works fine with GPT-4? It's so trivial to run into Claude lying or responding with arrogant misjudgements. Here's another person's poor anecdotal experiences to pair with yours and mine. [1][2]

But more importantly: it shouldn't matter. My tools should not behave this way. Tools should not arbitrarily refuse to work. If I write well-formed C, it compiles; it doesn't protest in distaste. If I write a note, the app doesn't disable typing because my opinion sucks. If I chop a carrot, my knife doesn't curl up and lecture me about my admittedly poor form.

My tools either work for me, or I don't work with them. I'm not wasting my time or self-respect dancing for a tool's subjective approval. Work or gtfo.

[1] https://www.youtube.com/watch?v=gQuLRdBYn8Q

[2] https://www.youtube.com/watch?v=PgwpqjiKkoY




"[...]If I write well-formed C, it compiles, not protests in distaste. If I write a note, the app doesn't disable typing because my opinion sucks[...]"

There's a Rust compiler joke/rant to be inserted here for comic effect


Apparently I'm too neurotypical, because I also would agree that judging a person based on only 2 character traits ("Capacity and Intention") is fairly unethical.

I'm sorry, neurodiverse people, that the world and most humans don't fit into neat categories and systems that you can predict and standardize. And I'm sorry that this makes it harder for you to navigate it. But we get around this problem by recognizing and accommodating the folks who need it, not by breaking the world to fit the desired mold. (i.e., add wheelchair ramps to every building; don't force everyone to use a wheelchair)

I realize this is just one example, but it's the one the author chose for that video. (The Cyberpunk thing just seems like a bug.)

To me it seemed like the video was leading up to a 3rd example - asking Claude why Japanese culture appreciates precision. THAT would've been a great example - because without any context, that does come off as a racial stereotype (not a negative one, but a stereotype nonetheless), yet for a variety of reasons (covered in the ChatGPT response he included), it IS fairly ubiquitously accurate about Japanese culture, and is worth understanding why. If Claude had refused to answer this, it would've been a good example of overly arrogant misjudgement.

But he didn't include that, and we can probably guess why - it answered it fine?

I decided to fact check it myself and found out Claude is not yet available in Canada - https://venturebeat.com/ai/anthropic-brings-claude-ai-to-mor...


Cars nowadays have radars and cameras that (for the most part) prevent you from running over pedestrians. Is that also a tool refusing to work? I'd argue a line needs to be drawn somewhere, LLMs do a great job of providing recipes for dinner but maybe shouldn't teach me how to build a bomb.


> LLMs do a great job of providing recipes for dinner but maybe shouldn't teach me how to build a bomb.

Why not? If someone wants to make a bomb, they can already find out from other source materials.

We already have regulations around acquiring dangerous materials. Knowing how to make a bomb is not the same as making one (which is not the same as using one to harm people.)


It's about access and command & control. I could have the same sentiment as you, since in high school, friends & I were in the habit of using our knowledge from chemistry class (and a bit more reading; waay pre-Internet) to make some rather impressive fireworks and rockets. But we never did anything destructive with them.

There are many bits of technology that can destroy large numbers of people with a single action. Usually, those are either tightly controlled and/or require jumping a high bar of technical knowledge, industrial capability, and/or capital to produce. The intersection of people with that requisite knowledge+capability+capital and people sufficiently psychopathic to build & use such destructive things approaches zero.

The same was true of hacking way back when. The result was interesting, sometimes fun, and generally non-destructive hacks. But now, hacking tools have been developed to the level of copy+paste, click+shoot. Script kiddies became a thing. And we now must deal with ransomware gangs, everything from nation-state actors down to rando teenage miscreants, all of whom cause massive damage.

Extending copy+paste, click+shoot level knowledge to bombs and biological agents is just massively stupid. The last thing we need is a low intelligence bar for people setting off bombs & bioweapons on their stupid whims. So yes, we absolutely should restrict these kinds of recipe-from-scratch responses.

In any case, if you really want to know, I'm sure that, if you already have significant knowledge and smarts, you can craft prompts to get the LLM to reveal the parts you don't know. But this gets back to raising the bar, which is just fine.


Indeed, anything and everything that can conceivably be used for malicious purposes should be severely restricted so as to make those particular use cases near impossible, even if the intended use is thereby severely hindered, because people can't be trusted to behave at all. This is formally proven by the media, who are constantly spotlighting a handful of deranged individuals out of eight billion. Therefore, every one of us deserves to be treated like an absolute psychopath. It'd be best if we just stuck everybody in a padded cell forever; that way no one would ever be harmed and we'd all be happy and safe.



