Hacker News

That's true, but this sort of problem would normally not arise with a standard AD library or subsystem, whatever language you're using. Such systems don't approximate functions and then differentiate the approximations.



Well, all functions implemented in finite-precision arithmetic are approximations. Errors as small as 1/2 ulp, like those of the trig functions, are the exception rather than the rule. And machine learning is keen on low-precision floating-point types, so the errors are relatively large.
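To make the gap between "small value error" and "small derivative error" concrete, here is a minimal Python sketch (the function `approx_sin` and the scale `EPS` are illustrative, not from any real library): a function can agree with sin to within 1e-6 everywhere while its exact derivative is off by nearly 1.

```python
import math

EPS = 1e-6  # hypothetical error scale for the approximation

def approx_sin(x):
    # Agrees with sin(x) to within EPS for every x...
    return math.sin(x) + EPS * math.sin(x / EPS)

def approx_sin_deriv(x):
    # ...but its exact derivative differs from cos(x) by up to 1,
    # because the tiny high-frequency term has an O(1) slope.
    return math.cos(x) + math.cos(x / EPS)

x = 1.0
value_err = abs(approx_sin(x) - math.sin(x))        # at most EPS
deriv_err = abs(approx_sin_deriv(x) - math.cos(x))  # can be near 1
```

So "the approximation is accurate" does not imply "the derivative of the approximation is accurate", which is exactly why differentiating through an approximation can go wrong.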


I think the example is meant to be indicative of larger functions that users define in the system. That is, showing it with sin/cos was simply an example. Right?


Correct. sin and cos were merely examples.

Every real AD system has a primitive for sin and cos, but they do not have every single operation you might ever implement (else what would be the point?), and it is unlikely they will have every operation that involves an approximation. (Though there is a good chance they might have every named scalar operation involving an approximation that you do; it depends how fancy you are.)

I mean, I maintain a library of hundreds of custom primitives. (One of the uses of this post was so I can point to it in response to people asking "why do you need all those primitives if AD can just decompose everything into + and *?".)
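As a sketch of why a primitive rule beats decomposition, here is a toy forward-mode AD in Python (the `Dual` class and the function names are made up for illustration, not any real AD library's API): pushing dual numbers through a truncated Taylor polynomial for sin yields the polynomial's derivative, while a registered primitive propagates the true rule d/dx sin = cos.

```python
import math

class Dual:
    """Minimal forward-mode AD value: primal x plus tangent dx."""
    def __init__(self, x, dx=0.0):
        self.x, self.dx = x, dx
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.x + o.x, self.dx + o.dx)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.x * o.x, self.x * o.dx + self.dx * o.x)
    __rmul__ = __mul__

def taylor_sin(d):
    # A 3-term Taylor approximation of sin, built only from + and *.
    # AD decomposes it and returns the *polynomial's* derivative.
    return d + (-1.0 / 6.0) * d * d * d + (1.0 / 120.0) * d * d * d * d * d

def sin_primitive(d):
    # A custom primitive: evaluate the value however you like, but
    # propagate the analytically correct derivative rule, cos.
    return Dual(math.sin(d.x), math.cos(d.x) * d.dx)

x = Dual(2.0, 1.0)                     # seed tangent 1.0 to get d/dx
via_primitive = sin_primitive(x).dx    # exactly cos(2.0)
via_decomposition = taylor_sin(x).dx   # derivative of the polynomial
```

Here `via_decomposition` is 1 - x²/2 + x⁴/24 at x = 2, which is noticeably off from cos(2), while the primitive's tangent is exact: hence custom primitives for anything internally implemented as an approximation.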



