I studied computer science, and during our data structures and algorithms course I must have "discovered" a dozen or more algorithms that I'd then find covered, or discredited, a chapter or two later in our course material.
Occasionally someone benefits from never having been told why a specific idea "can't" work, but much more often they just end up wasting their time on it. And some of them go on to believe they have valid, significant results.
E.g., there are regularly people who are sure they have solutions to problems that are reducible to the halting problem, but who don't have the theoretical background to recognize that reduction, or to see why it means their "solution" doesn't work.
I think you might have gotten this the wrong way around: all decidable problems are trivially reducible to the halting problem (just decide them, then map "yes" instances to a halting Turing machine and "no" instances to a non-halting one). It's the other direction that matters: if you can reduce the halting problem to a problem, then that problem is undecidable.
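To make the "trivial" direction concrete, here's a minimal sketch in Python. The names (`reduce_to_halting`, `decide`) are illustrative, and Python functions stand in for Turing machines: given a decider for any decidable problem, we map each instance to a program that halts iff the answer is "yes".

```python
def reduce_to_halting(decide):
    """Given a total decision procedure `decide`, return a mapping from
    problem instances to programs that halt iff the instance is a 'yes'."""
    def mapping(instance):
        if decide(instance):
            def program():       # 'yes' instance: halts immediately
                return None
        else:
            def program():       # 'no' instance: loops forever
                while True:
                    pass
        return program
    return mapping

# Example: "is n even?" is decidable, so it reduces to the halting problem.
to_program = reduce_to_halting(lambda n: n % 2 == 0)
# to_program(4) halts when called; to_program(3) would loop forever.
```

The reduction does all its real work inside `decide`, which is exactly why it tells you nothing interesting: it only goes from the easy problem to the hard one.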
I would be curious, though, which "natural" undecidable problems people regularly claim to solve.
The most common cases are simply obfuscated variations, where it is not obvious that a sub-part of the process is Turing-complete, and where that sub-part determines whether or not a specific state is reached.
Quite a lot of software has subsystems that turn out to be Turing-complete, whether through the explicit inclusion of scripting support or through more convoluted means.
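As a hypothetical illustration of how this sneaks in: the "rule engine" below looks like a harmless config interpreter, but its rules can express an unbounded counter machine (counter machines with two or more counters are Turing-complete), so asking "does execution ever reach rule X?" is, in general, the halting problem. All names and the rule format are made up for the sketch; the step budget is how a real system would have to cope.

```python
def run(rules, max_steps=10_000):
    """Interpret rules of the form:
         ('inc', counter, target)               -> counter += 1, jump to target
         ('jzdec', counter, target, else_tgt)   -> if counter == 0 jump to target,
                                                    else counter -= 1, jump to else_tgt
         ('halt',)                              -> stop
    Returns the set of rule indices reached, or None if the budget runs out
    (in general we cannot decide whether it would ever halt)."""
    counters = {}
    pc, reached = 0, set()
    for _ in range(max_steps):
        reached.add(pc)
        op = rules[pc]
        if op[0] == "halt":
            return reached
        if op[0] == "inc":
            _, c, target = op
            counters[c] = counters.get(c, 0) + 1
            pc = target
        else:  # 'jzdec'
            _, c, target, else_tgt = op
            if counters.get(c, 0) == 0:
                pc = target
            else:
                counters[c] -= 1
                pc = else_tgt
    return None

# Rules that set counter 'a' to 2, move it into 'b', then halt:
program = [
    ("inc", "a", 1),        # 0: a = 1
    ("inc", "a", 2),        # 1: a = 2
    ("jzdec", "a", 4, 3),   # 2: while a > 0:
    ("inc", "b", 2),        # 3:     b += 1
    ("halt",),              # 4: done
]
```

Nothing about the rule syntax advertises unbounded loops, which is exactly the obfuscation: anyone shipping this as a "simple" config format has shipped a Turing-complete language.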