
Great improvements and all, but they are still no closer (as of 4o regular) to having a system that can be responsible for work. In math problems it forgets which variable represents what; in coding questions it invents library functions.

I was watching a YouTube interview with a "trading floor insider". They said they were really being paid for holding risk. The bank has a position in a market, and it's their ass on the line if it tanks.

ChatGPT (as far as I can tell) is no closer to being accountable or responsible for anything it produces. If they don't solve that (and the problem is probably inherent to the architecture), they are, in some sense, polishing a turd.




> They said they were really being paid for holding risk.

I think that's a really interesting insight that has application to using 'AI' in jobs across the board.


This is underdiscussed. I don't think people understand just how worthless AI is in a ton of fields until it can be held liable and sent to prison.

There are a lot of moral conundrums that are just not going to work out with this. It seems like an attempt to offload liability, and pretty much everybody has caught on to that being its main selling point, which is probably the main thing that will keep it from ever being accepted for anything important.


> ChatGPT (as far as I can tell) is no closer to being accountable or responsible for anything it produces.

What does that even mean? How do you imagine that working? You want OpenAI to take on liability just for kicks?


If an LLM can't be left to do the mowing by itself, and a human has to closely monitor it and intervene at every step, then it's just a super fast predictive keyboard, no?


But what if the human only has to intervene once every 100 hours, that’s a huge productivity boost.


The point is you don't know which of those 100 hours that is, so you still need to monitor the full 100-hour time span.

Can still be a boost. But definitely not the same magnitude.


And one might still wonder whether we need a general language model to mow the grass, or just a simpler solution to the problem of driving a mower over a fixed property boundary automatically. Something you could probably solve with WWII-era technology, honestly.
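To that point about simplicity: a "stay inside the boundary, turn when you reach it" controller needs nothing language-model-shaped. Here is a minimal sketch, assuming a position fix and a surveyed boundary polygon; the names and the drive commands are hypothetical, not tied to any real mower's API.

    from dataclasses import dataclass

    @dataclass
    class Point:
        x: float
        y: float

    def inside(boundary: list[Point], p: Point) -> bool:
        """Ray-casting point-in-polygon test: count edge crossings to the right of p."""
        hits = False
        n = len(boundary)
        for i in range(n):
            a, b = boundary[i], boundary[(i + 1) % n]
            if (a.y > p.y) != (b.y > p.y):
                x_cross = a.x + (p.y - a.y) * (b.x - a.x) / (b.y - a.y)
                if p.x < x_cross:
                    hits = not hits
        return hits

    def step(position: Point, boundary: list[Point]) -> str:
        """Drive straight while inside the boundary; turn once it is crossed."""
        return "forward" if inside(boundary, position) else "turn"

That is the whole decision loop; everything hard about the product (sensing, safety cutoffs, obstacle handling) is engineering, not language modeling.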


Obviously not. I want legislation which imposes liability on OpenAI and similar companies if they actively market their products for use in safety-critical fields and their product doesn’t perform as advertised.

If a system is providing incorrect medical diagnoses, or denying services to protected classes due to biases in the training data, someone should be held accountable.


Personal responsibility, not legal liability. In the way a child can be responsible for a pet.

ChatGPT was trained on benchmarks and user opinions - "throwing **** at the wall to see what sticks".

Responsibility means penalties for making mistakes, and, more importantly, having an awareness of those penalties (that informs its decision-making).


They would want to, if they thought they could, because doing so would unblock a ton of valuable use cases. A tax preparation or financial advisor AI would do huge numbers for any company able to promise that its advice can be trusted.



