
Yes! I've been thinking the same thing for years. The hardening stuff has gone too far. I'm so happy to hear someone who worked on Project Zero speak up about it. The worst reforms in my opinion have been the ones that obfuscate memory and introduce nondeterminism, like aslr. It makes C programs impossible to debug. It's everywhere now. One basically needs to toss out half of libc and use MAP_FIXED for everything to get back normal behavior.

On the other hand, stuff like ASAN is great. I don't hear many people talking about it, sadly enough, outside Chrome team. Operating systems should enable that systemically. Sadly they never will. Because it actually finds real bugs, which take effort to fix. It's much lower effort to push non-breaking changes that make code slower and development less pleasant so you can claim victory at raising the iq bar for hypothetical bogeymen.




As someone who's been on the developing side of memory corruption exploits, I can say ASLR is effective in several scenarios.

It raises the bar considerably when exploiting a remote system. Without ASLR, DEP is worthless, since reliable tools exist to produce ROP chains. I think I remember seeing a fully fledged ROP compiler somewhere that can take high-level C code and 'compile' it into a chain of gadget addresses on the stack for a given target binary.

ASLR for executable memory alone is not enough. In a local EoP (elevation-of-privilege) situation it's very likely an attacker can find where specific modules are loaded in memory. On Windows this isn't even a secret, since system modules are loaded at the same base address across processes for performance reasons.

Here you really also need heap layout randomization, the latest version of which shipped in Windows 10 and hasn't been successfully attacked as far as I know. That mitigation has prevented me from building a stable exploit primitive in the past, and reduced the risk rating of a buffer overflow I found from trivially exploitable to not exploitable.

These mitigations are worth it, IMO.


Security for Microsoft's ambitions to turn PCs into iPhones, maybe. Security for us? There's no compelling empirical evidence that any of those designs benefit developers or users. If Turing completeness hadn't been discovered until 2020, folks like you would probably call it a security vulnerability.


ASAN is great for development, but a less cynical reason for not enabling it at an OS level is that it would add quite a bit of overhead: "Typical slowdown introduced by AddressSanitizer is 2x." (clang docs)

There is also some memory overhead:

* AddressSanitizer uses more real memory than a native run. Exact overhead depends on the allocations sizes. The smaller the allocations you make the bigger the overhead is.

* AddressSanitizer uses more stack memory. We have seen up to 3x increase.

* On 64-bit platforms AddressSanitizer maps (but not reserves) 16+ Terabytes of virtual address space. This means that tools like ulimit may not work as usually expected. Static linking of executables is not supported.

It's not as bad as something like valgrind, but it's certainly not something you'd want enabled on every single process.


2x is very reasonable compared to alternatives like full emulation or Electric Fence; it's production worthy. I use -fsanitize=address with static linking all the time. You just have to roll your own runtime: all it entails is a little glue to veneer mmap() and malloc() so they call memset() to poison the shadow memory at (ADDR>>3)+0x7fff8000 appropriately.


> The worst reforms in my opinion have been the ones that obfuscate memory and introduce nondeterminism, like aslr. It makes C programs impossible to debug.

Please could you explain how ASLR makes your programs impossible to debug?

I'm surprised to hear this, as I work on such executables in my debugger every day, with no problems whatsoever caused by them having ASLR enabled.

> On the other hand, stuff like ASAN is great.

It is, but it's not intended as an exploit mitigation. You typically wouldn't apply it to your binaries running in production, due to the performance hit. Its purpose is to help detect memory safety bugs during development and testing.


> The worst reforms in my opinion have been the ones that obfuscate memory and introduce nondeterminism, like aslr. It makes C programs impossible to debug.

This is why personality(2)'s ADDR_NO_RANDOMIZE exists, and both GDB and LLDB use it. And if you are attaching to an executable that was already launched, both debuggers are more than competent enough to read /proc/pid/maps and rebase debug symbols to match.

> On the other hand, stuff like ASAN is great. I don't hear many people talking about it, sadly enough, outside Chrome team.

Every major browser engine tests with AddressSanitizer, as do the Linux kernel and many other projects that care about security; it catches real bugs! But it could always see wider adoption, of course.

> Operating systems should enable that systemically.

But it's not really a security feature, nor is it something that an operating system implements (it's a dynamic library and an instrumented binary). However, memory tagging may bring something similar at the hardware level, and we may actually see real usage of this in the coming years rather than the perpetual soon™.


Serious question: how do you debug C programs? ASLR is inconvenient sometimes, but I've never felt it makes debugging anywhere near impossible.


ASLR is a legitimate mitigation for real attacks and a very effective one that has no impact on debuggability. ASAN is not suitable for any production code due to its overhead.



