Because it's a lot easier to rationalize not doing your job when it is (or at least seems to be) getting done OK. There's a lot of rationalizing of decisions when you've spent your life telling people you're the expert who's there to help. It's also a lot easier for management to hand out a LOT more work, since it all looks like it's being done right.

It's like the lawyer who used ChatGPT to file legal briefs: he'd have felt bad, and looked obviously incompetent, if he hadn't filed anything at all, but he felt no guilt (and no need to check anything) once it "looked right".

I don't mind if lawyerGPT is used to help with my one-off case; there's time enough to get that right for human eyes.

However, since I'm the final set of eyes judging my care, I'll be asking to see a doctor who has never been assisted by AI and is, in fact, not a mind-atrophied cyborg. Who knows when this opinion will morph into harmspreading and wrongthink that gets you escorted to the schizo wing instead. AI might yet be paraded as safe and effective.
