That's an illogical counterargument. The absence of published research output does not imply the absence of intelligent brain patterns. What if someone was intelligent but just wasn't interested in astronomy?
Yes, but that was just meant as a blatant example. The question still stands: if you feed an LLM a certain kind of data, is it possible for it to stray from that data completely, the way we sometimes do, in cases big and small, when we figure out how to do something a bit better by not following convention?
The same thing applies to undecidable problems. Yes, the halting problem is undecidable in the general case. But most real-life programs are expected to always halt, or to repeatedly run a procedure that always halts (event loops, servers).
A trivial way to make sure a program halts is to put an upper bound on how long it may run. A more sophisticated way is to use a theorem prover. Turing completeness is usually not required for real-life programs.
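A minimal sketch of the upper-bound idea, with hypothetical names (`run_with_fuel`, `FuelExhausted` are illustrative, not a real API): instead of proving a loop terminates, give it a step budget and fail loudly when the budget runs out.

```python
class FuelExhausted(Exception):
    pass

def run_with_fuel(step, state, fuel):
    """Repeatedly apply `step` until it returns None, or the fuel runs out."""
    for _ in range(fuel):
        state = step(state)
        if state is None:
            return True  # halted normally, within budget
    raise FuelExhausted("hit the upper bound without halting")

# Collatz iteration: believed to halt for every input, but unproven,
# so a fuel bound is the pragmatic guarantee.
def collatz_step(n):
    if n == 1:
        return None
    return n // 2 if n % 2 == 0 else 3 * n + 1

run_with_fuel(collatz_step, 27, fuel=1000)  # halts well within budget
```

The same trick shows up in practice as wall-clock timeouts, watchdog timers, and gas limits: you trade "this provably halts" for "this halts or gets killed", which is usually good enough.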
You'll also typically see people chip away at Turing completeness. There are programs that are correct in theory but aren't allowed by the rules of the environment, which cannot tell your correct solution apart from a slightly different and wholly nefarious one.
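A toy sketch of that kind of environment, in the spirit of the eBPF verifier (the checker here is made up for illustration): it accepts only loops with a literal constant bound. A `while` loop that in fact halts is rejected anyway, because the checker cannot distinguish it from one that doesn't.

```python
import ast

def verify(source):
    """Accept code only if every loop is `for ... in range(<constant>)`."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.While):
            return False  # unbounded construct: rejected outright
        if isinstance(node, ast.For):
            it = node.iter
            ok = (isinstance(it, ast.Call)
                  and isinstance(it.func, ast.Name)
                  and it.func.id == "range"
                  and all(isinstance(a, ast.Constant) for a in it.args))
            if not ok:
                return False
    return True

accepted = verify("def f():\n    for i in range(10):\n        pass\n")
rejected = verify("def g():\n    n = 10\n    while n > 0:\n        n -= 1\n")
```

Here `accepted` is True and `rejected` is False, even though both functions halt: the price of a decidable check is turning away some correct programs.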
We don't always need to solve the general problem, and in cases where the general problem is undecidable, you're foolish to even try. Your boss needs to be sat down and told that in this house we obey the Laws of Thermodynamics, Lisa.