If you had a piece of software that sometimes produced totally wrong output, we would consider that a bug.
Yet it seems like with AI, all the investors/founders/PMs don't really care and just ship a broken product anyway.
I feel like I'm going crazy watching all this AI stuff ship in products that give straight-up wrong outputs.
It's like a big collective delusion where we just ignore it, or hand-wave that it'll magically get fixed eventually.
Once I started seeing these behaviors in our robots, I noticed them more and more every time I dug deeply into a proposed ML system: autonomous vehicles, robotic assistants, chatbots, and LLMs.
As I've had time to reflect on our challenges, I think that neural networks tend to overfit very quickly, and deep neural networks are overfitted to an incomparable degree. That condition makes them sensitive to hidden attractors: regions where, once the system drifts near them, it breaks down catastrophically.
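A toy, low-dimensional analogy (a hypothetical sketch of my own, not drawn from any of the systems above): give a polynomial exactly enough capacity to pass through a handful of training points, then look at what it does half a step off that grid.

```python
import numpy as np

# 16 training points with tiny, alternating labels, fit exactly by a
# degree-15 polynomial (just enough capacity to hit every point).
x_train = np.linspace(-1.0, 1.0, 16)
y_train = 0.05 * (-1.0) ** np.arange(16)

coeffs = np.polyfit(x_train, y_train, deg=15)

# Training error is effectively zero: the curve passes through every point.
print(np.max(np.abs(np.polyval(coeffs, x_train) - y_train)))

# Probe a point "near" the data: halfway between the last two samples.
x_probe = 0.5 * (x_train[-2] + x_train[-1])
print(np.polyval(coeffs, x_probe))   # roughly 19: hundreds of times any label
```

The fit is exact everywhere it was measured and wildly wrong in between. A deep network, on this view, is the same story with vastly more capacity and vastly more dimensions in which such regions can hide.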
How do we define "near"? That would have to be determined using some topological method. But these systems are so complicated that we can't analyze their networks' topology or even brute-force probe their activations. Further, the larger, deeper, and more highly connected the network, the more challenging these hidden attractors are to find.
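To put a number on why brute-force probing is a non-starter (back-of-the-envelope arithmetic with a deliberately tiny, hypothetical input size, not a figure from any real system):

```python
# Even a 28x28 binary image, far smaller than anything a real perception
# system or LLM consumes, has an astronomically large input space.
n_pixels = 28 * 28
n_inputs = 2 ** n_pixels                     # every distinct binary image
print(f"2**{n_pixels} ≈ 10**{len(str(n_inputs)) - 1} possible inputs")
# -> 2**784 ≈ 10**236 possible inputs; for scale, the observable universe
#    holds roughly 10**80 atoms. Real inputs are continuous and far larger.
```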
I was bothered by this topic a decade ago, and nothing I have seen since has alleviated my concern. We are building larger, deeper, and more connected networks on the premise that we'll eventually reach a state so unimaginably overfitted that it becomes stable again. I am unnerved by that idea, and by the amount of money flowing in that direction with reckless abandon.