I get that we expect more from the tools we use. But honestly... I'm so fed up with AI at this point.
Every new model I've used, I thought it was amazing for a few days. Until... it stopped being useful, started hallucinating, and gave me non-working, terrible code. I feel like I always start by giving it simple/rote problems, and when I'm stuck on something difficult I really wish I could turn to an AI and have it help me...
The latest in this fad is o1-preview. At first, I thought: wow. This thing is amazing. It could "come up" with answers for me, and for the first time in many months I felt like I was going to use AI tools again.
Until...
Yesterday I was struggling with some event-handler hell (lots of event synchronization on a very complex UI) in a web app I'm working on. One thing I do appreciate about these AI tools is that describing my issue in detail helps me understand my own problem better. But I was stuck, so I gave it as much detail as possible.
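Just to give a flavor of the kind of tangle I mean, here's a stripped-down, hypothetical example (not my actual code): two independent events that both have to fire before an action is allowed, where either one firing again invalidates the other.

```typescript
// Hypothetical, stripped-down sketch -- not my actual code.
// Submit is only allowed once BOTH the form validates AND a
// background draft sync has completed, and either event firing
// again can invalidate the other. Multiply this by a dozen
// widgets and you get the hell-mode I'm talking about.
const form = document.querySelector<HTMLFormElement>("form")!;
const submitButton = document.querySelector<HTMLButtonElement>("#submit")!;

let formValidated = false;
let draftSynced = false;

function maybeEnableSubmit(): void {
  submitButton.disabled = !(formValidated && draftSynced);
}

form.addEventListener("input", () => {
  formValidated = form.checkValidity();
  draftSynced = false; // any edit invalidates the last background sync
  maybeEnableSubmit();
});

// Dispatched elsewhere when a background save of the draft completes.
document.addEventListener("draft-synced", () => {
  draftSynced = true;
  maybeEnableSubmit();
});
```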
I ran out of credits trying to see if the AI could help me (I hate JS, sorry), but it kept giving me, in order:
1. Code that didn't work at all.
2. Code that didn't work at all.
3. Completely new code that didn't work at all.
4. The previous code that didn't work at all.
5. Completely new code that didn't work at all.
6. The same code that didn't work at all, this time with phantom variables that were never used...
7. Completely new code that didn't work at all, again with phantom variables.
I gave up at this point and decided: f this, I'll do it myself.
I woke up the next morning (today), had my coffee and exercised a bit, then took out a couple of sheets of paper, diagrammed my problem out, and solved it myself. It took me about 40 minutes to get a completely working solution.
Why does every AI model inevitably lead to this? Am I expecting too much?
What I have found it great for, and what I'm actually using it for in my own product, is generating database queries. It seems genuinely good at this.
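For example, this is roughly the shape of what I get back (a made-up schema and query, not my real product code, shown with the node-postgres client just to make it runnable): I describe the tables and ask for "each customer's total spend over the last 30 days, highest first", and it reliably produces working SQL like this.

```typescript
// Made-up schema and query (not my real product code),
// wrapped in the node-postgres client for illustration.
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the PG* env vars

// Prompt: "each customer's total spend over the last 30 days, highest first"
async function topSpenders(limit: number) {
  const { rows } = await pool.query(
    `SELECT c.id, c.name, SUM(o.total) AS spend_30d
       FROM customers c
       JOIN orders o ON o.customer_id = c.id
      WHERE o.created_at >= now() - interval '30 days'
      GROUP BY c.id, c.name
      ORDER BY spend_30d DESC
      LIMIT $1`,
    [limit]
  );
  return rows;
}
```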
But for solving actual, complicated problems? I think that is something humans will always have to do.
When people hype up that the AI solved something for them, I wonder: were they, like me, being lazy with something complex, or were they lazy and didn't even try on something simple?
For this reason, I think the current AI trend will have a bigger impact on creative tasks than on critical technical ones. It's already great for quickly generating art, at least for the concepting phase, and other creative assets that can afford to be generic. Solving technical problems, on the other hand, requires reasoning beyond what can be extracted from training data.
We'll need a new paradigm of AI to have a chance at creating models that properly reason. Even without detailed knowledge of the brain, we can safely speculate that its reasoning and language areas are extremely efficient compared to cutting-edge LLMs, which suggests there are algorithms more sophisticated and efficient than simple artificial neural connections that just sum weighted inputs with a bias.
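To be concrete about that last point, the basic unit in today's networks really is just a weighted sum plus a bias pushed through a nonlinearity. Something like this (a single ReLU unit, sketched in TypeScript):

```typescript
// The basic artificial "neuron": a weighted sum of inputs,
// plus a bias, passed through a nonlinearity (here, ReLU).
function neuron(inputs: number[], weights: number[], bias: number): number {
  const weightedSum = inputs.reduce((acc, x, i) => acc + x * weights[i], bias);
  return Math.max(0, weightedSum); // ReLU: clamp negatives to zero
}

// neuron([1.0, -2.0], [0.5, 0.25], 0.1) -> 0.1
// (0.5 - 0.5 + 0.1 = 0.1, positive, so ReLU passes it through)
```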