
> Maybe because it came from Microsoft, NIH syndrome.

No, it is because you still need to get the size calculation correct, so it doesn't actually have any benefit over the strn... family other than being different.

Also, a memcpy that can fail at runtime seems to only complicate things. If anything, it should fail at compile time.
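
For context, a minimal sketch of the Annex K memcpy_s the thread is about, assuming an implementation that actually ships the optional bounds-checking interfaces (e.g. MSVC's UCRT; glibc does not). The point being made above still holds: the caller has to supply the destination size themselves, so a wrong size expression defeats the check.

  /* Minimal Annex K sketch -- assumes the C library defines __STDC_LIB_EXT1__
   * and provides memcpy_s plus the constraint-handler functions. */
  #define __STDC_WANT_LIB_EXT1__ 1
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
      char dst[8];
      const char src[16] = "0123456789ABCDE";

      /* Without this, a runtime-constraint violation invokes the default
       * handler, which may simply abort the program. */
      set_constraint_handler_s(ignore_handler_s);

      /* The caller still has to get "sizeof dst" right; pass the wrong
       * expression (say sizeof src) and the check is defeated. */
      errno_t err = memcpy_s(dst, sizeof dst, src, sizeof src);
      if (err != 0)
          printf("memcpy_s failed at run time: %d\n", (int)err);
      return 0;
  }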





If the optimizer cannot see the sizes, it has to defer the error to run time. If it sees them (as with _FORTIFY_SOURCE=3), it already fails at compile time. The difference to _FORTIFY_SOURCE is that it guarantees to fail, whereas with _FORTIFY_SOURCE you never know.
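
A hedged sketch of the _FORTIFY_SOURCE side of that comparison, assuming GCC or Clang with optimization enabled (e.g. gcc -O2 -D_FORTIFY_SOURCE=3): a statically visible overflow is typically diagnosed at compile time, while a size that only becomes known at run time is caught by the fortified __memcpy_chk, which aborts.

  #include <stdlib.h>
  #include <string.h>

  void static_case(void)
  {
      char dst[8];
      char src[16] = {0};
      /* Sizes known at compile time: with _FORTIFY_SOURCE and -O2 the
       * compiler warns here, because 16 bytes don't fit into 8. */
      memcpy(dst, src, sizeof src);
  }

  void dynamic_case(size_t n)
  {
      char *dst = malloc(8);
      char src[64] = {0};
      if (!dst) return;
      /* With _FORTIFY_SOURCE=3 the malloc'ed size is tracked via
       * __builtin_dynamic_object_size; if n > 8 at run time, the fortified
       * call aborts instead of silently overflowing. */
      memcpy(dst, src, n);
      free(dst);
  }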

In C I am responsible for telling the compiler where my arrays end. How is it supposed to know how many arrays there are in an allocation? Why should the compiler trust one expression about the size, but not the other? If I wanted to limit memcpy by the size of the destination, I could write memcpy(dest, src, MIN(dest_size, ...)) instead, but I don't want that most of the time.
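
A minimal sketch of that clamped-copy alternative, with a hypothetical MIN helper macro: the copy is capped at the destination's size, which silently truncates instead of reporting the bug in the size calculation, which is why it is usually not what you want.

  #include <string.h>

  #define MIN(a, b) ((a) < (b) ? (a) : (b))

  void copy_clamped(char *dst, size_t dst_size, const char *src, size_t n)
  {
      /* Truncates the copy rather than failing when n > dst_size. */
      memcpy(dst, src, MIN(dst_size, n));
  }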

The compiler knows about the sizes either via statically allocated sizes (_FORTIFY_SOURCE=2, __builtin_object_size) or via malloc'ed sizes (_FORTIFY_SOURCE=3, __builtin_dynamic_object_size). See e.g. https://developers.redhat.com/articles/2022/09/17/gccs-new-f...
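
A hedged sketch of those two builtins, assuming a recent GCC or Clang at -O2: the static variant sees fixed-size objects, while the dynamic variant can also see through malloc.

  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      char stack_buf[32];
      char *heap_buf = malloc(48);
      if (!heap_buf) return 1;

      /* Static allocation: size is known without running the code. */
      printf("static:  %zu\n", __builtin_object_size(stack_buf, 0));

      /* Heap allocation: only the dynamic variant tracks the malloc'ed
       * size (this is what _FORTIFY_SOURCE=3 builds on). */
      printf("dynamic: %zu\n", __builtin_dynamic_object_size(heap_buf, 0));

      free(heap_buf);
      return 0;
  }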

Since users often get memory bounds wrong, the compiler checks them as well. Clang even allows user-defined warnings.
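
A sketch of what "user-defined warnings" can look like in Clang, using its diagnose_if attribute (a Clang extension); the wrapper name checked_copy is made up for illustration.

  #include <string.h>

  /* Clang emits the custom diagnostic whenever it can prove the condition
   * true at the call site (e.g. with constant arguments). */
  void checked_copy(char *dst, size_t dst_size, const char *src, size_t n)
      __attribute__((diagnose_if(n > dst_size,
                                 "copy is larger than the destination",
                                 "warning")));

  void checked_copy(char *dst, size_t dst_size, const char *src, size_t n)
  {
      memcpy(dst, src, n);
  }

  void demo(void)
  {
      char buf[8];
      /* Warning fires here: 16 > sizeof buf is a compile-time constant. */
      checked_copy(buf, sizeof buf, "0123456789ABCDEF", 16);
  }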

We all know that C programmers think they know better and hate bounds checks; that's why there are still so many out-of-bounds errors.



