The problem is not the value 0.05 but researchers cheating with their data. If researchers were honest, then by the definition of a statistical test, about 5% of papers testing a true null hypothesis would still report a false positive. If the actual number is more like 30%, it means that some 25% of researchers were not honest: they selected, sanitized their data, or engaged in some other form of data dredging/p-hacking. Switching the p-value requirement to 0.005 will only mean that these dishonest researchers have to spend a bit more time fishing for data that matches their preconceived claim. Statistics will always remain only statistics; it depends heavily on whether the people involved have enough discipline not to cheat. With the current structure of incentives in science, I suspect many will still be tempted (perhaps even subconsciously) to p-hack.
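A minimal sketch of why a stricter threshold mostly raises the *cost* of fishing rather than preventing it. This is an illustrative simulation, not anyone's real workflow: I assume a simple two-sided z-test on pure-noise data (true null, known sigma = 1) and a "researcher" who just reruns the experiment until one run comes out significant:

```python
import math
import random

random.seed(0)

def z_test_p(sample):
    """Two-sided p-value for H0: mean == 0, assuming known sigma == 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

def experiments_until_significant(alpha, n=30, cap=100_000):
    """Rerun a pure-noise 'experiment' until one comes out significant."""
    for attempt in range(1, cap + 1):
        if z_test_p([random.gauss(0, 1) for _ in range(n)]) < alpha:
            return attempt
    return cap

trials = 200
avg_tries = {}
for alpha in (0.05, 0.005):
    avg_tries[alpha] = sum(
        experiments_until_significant(alpha) for _ in range(trials)
    ) / trials
    print(f"alpha = {alpha}: ~{avg_tries[alpha]:.0f} tries to 'find' an effect")
```

Under the null the p-value is uniform, so a determined fisher needs on average about 1/alpha attempts: roughly 20 at 0.05 versus roughly 200 at 0.005. The false result still arrives; it just takes longer.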
I've seen researchers try different statistical tests until they get a "significant" result. I was a bit shocked; it is essentially unobjective and dishonest. Another practice was the removal of "obvious outliers", which is equally unobjective: those points are part of your data's distribution whether you like it or not. Add more replicates or design a better experiment, but don't manipulate the data or reach for an inappropriate test just because the right one gives you a p-value you don't like.
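A quick sketch of how much the outlier-removal trick alone can inflate the false-positive rate. Everything here is assumed for illustration: pure-noise data (the null is true), a simple two-sided z-test with known sigma = 1, and a hypothetical analyst who also looks at a "cleaned" dataset with the two points that most hurt the trend dropped, then keeps whichever p-value looks better:

```python
import math
import random

random.seed(1)

def z_test_p(sample):
    """Two-sided p-value for H0: mean == 0, assuming known sigma == 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

def drop_inconvenient(sample, k=2):
    """Drop the k points that most oppose the trend -- the
    'obvious outliers' that happen to hurt the result."""
    s = sorted(sample)
    return s[k:] if sum(sample) > 0 else s[:-k]

alpha, trials, n = 0.05, 5000, 30
honest = hacked = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]  # pure noise: H0 is true
    honest += z_test_p(sample) < alpha
    # the analyst peeks at the 'cleaned' data too and keeps the better p-value
    hacked += min(z_test_p(sample), z_test_p(drop_inconvenient(sample))) < alpha

honest_rate, hacked_rate = honest / trials, hacked / trials
print(f"honest false-positive rate: {honest_rate:.3f}")  # close to alpha
print(f"hacked false-positive rate: {hacked_rate:.3f}")  # well above alpha
```

Dropping just two "inconvenient" points out of thirty multiplies the false-positive rate several-fold in this toy setup; the data were noise either way, but the cleaned version "finds" an effect far more often than alpha promises.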
While this topic should already be well understood by most HN readers, I upvoted this because it's a pretty clear layperson's explanation that deserves a wider audience.
"Not publishable at all" may be too harsh a requirement, but I'd approve of a less strict version:
"not replicated/not replicable" should give a paper a reputation as bad as, or worse than, "not peer reviewed".