
> The missing text problem from NLU is an example of a problem that is thought to be impossible for Turing machines/algorithms but is trivial in most cases for humans.

Not thought to be impossible by people who believe in scientific materialism, and current mainstream ideas in theoretical physics (like the Bekenstein bound and such), and who have thought carefully about the issue.

The laws of physics are believed to be computable, and the information content in a bounded region of space, finite. Therefore, it is believed that, in principle, a Turing machine could run an accurate physical simulation of a person, and could therefore do any cognitive task (as far as input/output correspondence goes) that a human can.

If you’d like to explicitly reject scientific materialism though, I’d have no complaints about you doing so.



Physics models are models.

All models are wrong, some are useful.

I am not claiming that useful models need to be computable; in fact, the problem with the missing text problem (MTP) is that it introduces cycles into something that needs to be recursively enumerable to be decidable.

"The trophy wouldn't fit in the suitcase because it was too [large,small]" is a nice toy case to consider how NLP can deal with that easily but NLP would have issues.

It all relates to VC dimensionality and decidability in the end.
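For readers unfamiliar with VC dimension, here is a small self-contained check of the textbook example (my addition, assuming the standard definition): the class of closed intervals [a, b] on the line has VC dimension 2, since it shatters some 2-point set but cannot realize the (in, out, in) labeling on any 3 points.

```python
from itertools import product

def labelings_realized(points):
    """All labelings of `points` achievable by some interval [a, b].
    Endpoints drawn from the points themselves, plus values just
    outside their range, suffice to enumerate every labeling."""
    cands = sorted(points)
    cuts = [cands[0] - 1] + cands + [cands[-1] + 1]
    realized = set()
    for a, b in product(cuts, repeat=2):
        realized.add(tuple(a <= x <= b for x in points))
    return realized

# Intervals shatter 2 points: all 4 labelings are realized...
assert len(labelings_realized([0.0, 1.0])) == 4
# ...but no 3 points: (in, out, in) is unachievable, so VC dim = 2.
assert (True, False, True) not in labelings_realized([0.0, 1.0, 2.0])
```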

But the math is hard to demonstrate without actually using math.


If you would like to reject scientific materialism (and the empiricism and the dispassionate utilitarianism that go with it), go ahead.

Just don't try to frame your conclusions as objective truth, or known scientific results like the GP.


> Just don't try to frame your conclusions as objective truth, or known scientific results like the GP.

To clarify, this objection is to @nyrikki's (incorrect) claim that "NLU is an example of a problem that is thought to be impossible for Turing machines/algorithms" and not to anything that @drdeca said, right?

I think you are agreeing emphatically with @drdeca but it's possible to read this comment as an objection to @drdeca (NLU may not be as trivial as claimed - ha).


Yes, I completely agree with drdeca.


Right, or..

Ok, uh, I don’t think one has to really reject empiricism to reject scientific materialism?

Or, err, by “empiricism” do you mean like, “support for doing experiments, and keeping track of the results and what models work good to explain them, etc.”, or do you mean stuff like “rejecting anything that doesn’t have good scientific evidence behind it”? One can do the former without doing the latter.

When I express a belief I have that doesn’t fit with scientific materialism, I make sure to mark it as such, so that people can take that into account. I don’t anticipate any clear externally-verifiable refutation of scientific materialism within my lifetime, and so I don’t anticipate predictions that follow from it to be refuted anytime soon. And I definitely wouldn’t present those beliefs of mine as being the scientific consensus.

I suppose one might accuse me of having a “belief in belief”, seeing as I don’t expect these supposed “beliefs” of mine to be predictively useful any time soon.

But I think it is right that there are goals/values that I place higher than pure predictive accuracy. And beliefs about purpose, and meaning, and what is good, etc. fall into that.

(You mentioned utilitarianism. I’m not a utilitarian, but I do think it is often a very good heuristic, and in many contexts it would be good for it to be used more.)


> Or, err, by “empiricism” do you mean like, “support for doing experiments, and keeping track of the results and what models work good to explain them, etc.”, or do you mean stuff like “rejecting anything that doesn’t have good scientific evidence behind it”? One can do the former without doing the latter.

Honestly, I folded "keep your models as simple as you can" into it. But any way you cut empiricism, what you actually end up looking at is utilitarianism, so that's the one where drawing the line correctly matters (hmm... well, if you keep a utilitarian point of view). Anyway, utilitarianism tends to align with the version of empiricism biased toward computable models.

And, of course, none of those deal with purpose questions.

Anyway, your comment there is great. What I disagree with is conceding ground to something like the comment above yours, because it's a misleading text that implies something very different from what it says.



