> I honestly can't imagine this. If the AI says "However, a downside of approach B is that it takes O(n^2) time instead of the optimal O(nlog(n))", what do you think the odds are that it literally made up both of those facts? Because I'd be surprised if they were any lower than 30%. It's an extremely confident bullshitter, and you're going to use it to talk about engineering tradeoffs!?
Being confidently incorrect is not unique to AIs; plenty of humans do it too. Being able to spot the bullshit is a core part of the job. If you can't spot the bullshit from an AI, I wouldn't trust you to spot the bullshit from a coworker.
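And for a claim like the one in the quote, you don't have to take anyone's word for it: complexity claims are cheap to spot-check empirically. A minimal sketch, using a made-up duplicate-detection task (both function names and the task are illustrative, not from either post) that genuinely has an O(n^2) and an O(n log n) solution:

```python
import random
import timeit

def has_duplicate_quadratic(xs):
    # O(n^2): compare every pair of elements.
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicate_sorted(xs):
    # O(n log n): sort, then scan adjacent elements.
    s = sorted(xs)
    return any(a == b for a, b in zip(s, s[1:]))

# Sanity check: both approaches must agree on the answer.
data = random.sample(range(10**6), 2000)  # distinct values, so no duplicates
assert has_duplicate_quadratic(data) == has_duplicate_sorted(data) == False

# Time both; if the quadratic version's runtime grows ~4x when n doubles
# and the sorted version's roughly 2x, the claimed complexities hold up.
t_quad = timeit.timeit(lambda: has_duplicate_quadratic(data), number=3)
t_sort = timeit.timeit(lambda: has_duplicate_sorted(data), number=3)
print(f"quadratic: {t_quad:.4f}s  sorted: {t_sort:.4f}s")
```

Two minutes of this kind of measurement settles the question regardless of whether the confident claim came from a model or a coworker.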