
> There's no point for you to use ChatGPT to do what you already know how to do.

If it were more intelligent, of course there would be: it would catch mistakes I wouldn't have thought of, it would produce the work more quickly, and so on. As it stands, it's literally worse than if I'd assigned a junior engineer to do some of the legwork.

> ChatGPT says: In short, the feedback likely stems from the implicit expectation of S3 API standards, and the discrepancy between that and the multipart form approach used in the code.

> In summary, the expectation of S3 compatibility was a bias, and he should have recognized that the implementation was based on our explicitly discussed requirements, not the implicit ones he might have expected.

Now who's rationalizing? I was pretty clear in asking it to implement S3.

> Now who's rationalizing? I was pretty clear in asking it to implement S3.

In general, I don't deny that humans fall into common pitfalls, such as not reading the question. As I pointed out, this is a common human failing, a 'hallucination' if you will. Nevertheless, my failure to convey that requirement to ChatGPT should count against me, not ChatGPT -- I'm a humble human who recognizes my failings. And again, this furthers my point that people hallucinate regularly; we just have a social way of getting around it -- what we're doing right now... discussion!


My reply was purely about ChatGPT's response, which I characterized as a rationalization. It was clearly following the S3 template, since it copied many parts of the API, but then failed to call out where it was deviating and why it made those decisions.
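For anyone following along, here's a rough sketch of the distinction I mean (the endpoint, bucket, and key names are made up, not from the actual code): an S3-style API takes the object as the raw body of a PUT to a bucket/key path, while a multipart form approach POSTs it as a form field.

    # Rough sketch only; endpoint, bucket, and key names are hypothetical.
    import requests

    # S3-style upload: the object bytes are the raw body of a PUT to
    # a path that names the bucket and key.
    def s3_style_upload(data: bytes) -> requests.Response:
        return requests.put(
            "https://storage.example.com/my-bucket/reports/2024.csv",
            data=data,
            headers={"Content-Type": "application/octet-stream"},
        )

    # Multipart form upload: the object rides inside a
    # multipart/form-data POST, the way a browser form would send it.
    def multipart_form_upload(data: bytes) -> requests.Response:
        return requests.post(
            "https://storage.example.com/upload",
            files={"file": ("reports/2024.csv", data)},
            data={"bucket": "my-bucket"},
        )

A client written against the S3 convention won't know what to do with the second shape, which is why partially copying the S3 API without flagging the deviation caused the confusion.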


