I'm not angry, I'm just extremely skeptical. If a programming language varied from version to version the way LLMs do, to the extent that the same input could have radically different consequences, no one would use it. Even if the "compiled code" of the LLM's output is proven to work, you will still need to make changes in the "source code" of your higher-level natural language. Again, it's one thing to divorce memory management from logic; it's another to divorce logic from your desire for a working program. Without selecting, or at least understanding, the logic structures you need and want, pretty much anything could be introduced into your code.
The point of coding, and what developers are paid for, is taking a vision of a final product that receives input and returns output, and making it perfectly consistent with the express desire of whoever is paying to build that system, under all use cases. Asking what should happen if a hundred different edge cases arise, before they do, is 99% of the job. Development is a job well suited to students of logic, poorly suited to memorizers and mathematicians, and obscenely ill suited to LLMs and to those who attempt to follow the supposed reasoning that arises from gradient descent through a language's structure. Even in the best-case scenario, edge-case analysis will never be possible for AIs built like LLMs, because they demonstrate a lack of abstract thought.
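To make that concrete, here's a minimal sketch in Python (function and names hypothetical, not from any real codebase) of what "asking the edge-case questions first" looks like: every branch below is the answer to a question that had to be put to the client before any logic was written.

```python
def apply_discount(price_cents: int, percent: int) -> int:
    """Apply a percentage discount to a price in cents.

    Each branch answers an edge-case question asked of the client
    *before* coding, not discovered in production.
    """
    # What if the price is negative? (Refund line items? Reject them?)
    if price_cents < 0:
        raise ValueError("negative prices are not discountable")
    # What if the discount is out of range? (Is 0% a no-op or an error?)
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100 percent")
    # How do we round fractional cents? (Flooring favors the seller;
    # the client, not the coder, has to make that call.)
    return price_cents - (price_cents * percent) // 100
```

The code itself is trivial; the work was in the questions behind the comments, which is exactly the part an LLM never asks.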
I'm not hostile to LLMs so much as toward the implication that they do anything remotely similar to what we do as developers. But you're welcome to live in a fantasy world where they "make apps". I suppose it's always obnoxious to hear someone tout a quick way to get rich or to cook a turkey in 25 minutes, no knowledge required. Just be aware that your internet fame and fortune will be no reflection of whether your method actually works. Those of us in the industry are already acutely aware that it doesn't, and that some folks are just leading children down a lazy pied piper's path rather than teaching them how to think. That's where the assumption comes from that anyone promoting what you're promoting is selling snake oil.
This is the disconnect: nowhere do I say to use them to make apps. In fact, I am strongly opposed to their use for automation; they create Rube Goldberg machines. But they are great advisors: not coders, but critics of code and sounding boards for strategy, to consult while one writes one's own code to perform the logic one constructed in one's own head. It is possible and helpful to include LLMs within the decision-support roles that software provides for users, but not the decision roles: include LLMs as information resources for the people making decisions, but never as the agents of decision.
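A rough sketch of that boundary, assuming a stubbed-out `llm_summarize` stand-in (there is no real API call here; every name is hypothetical): the model only produces information, and the one branch that feeds back into the system is gated on an explicit human choice.

```python
def llm_summarize(ticket_text: str) -> str:
    # Stand-in for a real model call; hypothetical, not a real API.
    return "Looks like a database timeout; ops may be the right queue."

def triage_ticket(ticket_text: str) -> str:
    """LLM as advisor, human as decider."""
    advice = llm_summarize(ticket_text)          # decision *support*
    print(f"Model's read on this ticket:\n{advice}\n")
    queue = input("Assign to [backend/frontend/ops]: ")  # the decision
    return queue  # only the human's answer flows into the system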
But all of that is an aside from the essential nature of using them, which is that far too many people use them to think for them, in place of their own thinking. And that is a subtle hazard of LLMs: using them to think for you damages your own ability to think critically. That's why understanding them is so important, so one does not anthropomorphize them into trusting them, which is a dangerous behavior. They are idiot savants, and they get that much trust: nearly none.
I also do not believe that LLMs are even remotely capable of anything close to what software engineers do. That's why I am a strong advocate of not using them to write code. Use them to help one understand, but know that the "understanding" they can offer is of limited scope. That's their weakness: they can't encompass scope. Detailed nuance they get, but give them two detailed nuances in a single phenomenon and they focus on only one and drop the surrounding environment. They are idiots drawn to shiny complexity, with savant-like abilities. They are closer to a demonic toy for programmers than anything else we have.