Hacker News

I don't understand how a result concerning infinities can apply to a finite number of data points represented by finite (bounded - probably by 2^32) integers processed in finite time.





I think you're alluding to an interesting question: in what cases does this "undecidability of learnability" result apply?

The paper seems to talk about binary classifiers `h : X --> {0, 1}`, where each classifier `h` belongs to some family `F ⊆ {h : X --> {0, 1}}`, and where observed samples are drawn from some unknown distribution `P` over `X`.

So it probably depends on the choice of `X` and `F`, but I don't have enough of a handle on the theory to say more.
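To make the setup concrete, here's a toy sketch of the objects the paper talks about (all names and the threshold family are my own illustration, not the paper's): a family `F` of binary classifiers, samples drawn from a distribution over `X`, and empirical risk minimisation over `F`.

```python
import random

# Toy instance: X = {0, ..., 9}, and F is the family of threshold
# classifiers h_t(x) = 1 if x >= t, else 0. (Illustrative only; the
# paper's result concerns far richer families and domains.)
def make_threshold(t):
    return lambda x: 1 if x >= t else 0

F = {t: make_threshold(t) for t in range(11)}

# Samples drawn from a distribution P over X (here uniform and known,
# in general unknown), labelled by a hidden "true" classifier.
random.seed(0)
true_h = make_threshold(6)
sample = [(x, true_h(x)) for x in (random.randrange(10) for _ in range(50))]

def empirical_error(h, sample):
    return sum(h(x) != y for x, y in sample) / len(sample)

# Empirical risk minimisation: pick the classifier in F with the
# lowest error on the observed sample.
best_t = min(F, key=lambda t: empirical_error(F[t], sample))
print("chosen threshold:", best_t)
```

The learnability question is then about what guarantees such a procedure has over all distributions `P`, not about any single finite sample.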


The reals are infinite, and float/double are meant to represent the reals.

So, while I think your question is interesting and good, I don’t think integer size is a strong counterexample in support of a different conclusion.


Float/doubles are finite. They only approximately model the reals, and that's key: the fact that an approximation works for a problem at all means the problem must be reasonably smooth, which makes everything better behaved and rules out the worst pathologies.

Float/double can store a subset of the rationals, plus some other values which aren't found in the reals.
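For illustration (my own sketch, not from the thread): every finite double is exactly a rational with a power-of-two denominator, which is easy to check in Python with `fractions.Fraction`.

```python
from fractions import Fraction

# 0.1 has no power-of-two-denominator representation, so the stored
# double is the nearest representable rational, not 1/10.
print(Fraction(0.1) == Fraction(1, 10))   # False
print(Fraction(0.25) == Fraction(1, 4))   # True: 1/4 is exactly representable
print(Fraction(0.1))                      # the rational actually stored
```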

And there are only finitely many of those other values (NaNs, infinities, signed zeros). I don't know why you're downvoted; what you said is absolutely correct.
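Those extra values, and the finiteness of the type itself, can be checked directly (my own sketch, not from the thread):

```python
import math
import struct

nan = float("nan")
inf = float("inf")

print(math.isnan(nan))   # True; NaN is not equal even to itself
print(nan != nan)        # True
print(inf > 1.7e308)     # True; beyond the largest finite double
print(-0.0 == 0.0)       # True: the signed zeros compare equal...
print(struct.pack(">d", -0.0) != struct.pack(">d", 0.0))  # True: ...but have distinct bit patterns

# The type itself is finite: 64 bits give at most 2**64 bit patterns,
# hence at most 2**64 distinct values, however they are interpreted.
```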

It's absurd for GP to say integers aren't infinite (apparently meaning fixed-width integer types) and then turn around and say float/double are infinite (meaning single- and double-precision floating-point types, which are just as fixed-width).


Can you elaborate on this? Specifically, why isn’t integer size a good counterexample?


