Hacker News
Show HN: Stensal SDK: Retrofitting C/C++ code with quasi-memory-safety (stensal.com)
18 points by cubidudu 3 days ago | 18 comments





Looks interesting. I have some questions:

(1) Where can I find a minimal code example where Stensal SDK finds a bug that could not be found with Valgrind, sanitizers (e.g. ASan, UBSan), or MSVC debug mode?

(2) From the Get Started guide it looks like Stensal SDK is a custom compiler. If this is correct: which versions of C++ does it support? Does it fully support C++17? Is it based on another compiler (GCC/Clang)?

(3) As your pricing seems to be on a per-user basis, does the CI server count as a single user?

(4) What is the observed slowdown in real-world applications? Or on benchmarks like these: https://github.com/google/sanitizers/wiki/AddressSanitizerPe...


(1) It is not designed to find more memory errors than the tools you mention. It is designed to catch, with very high probability, the runtime memory errors that raise security concerns, more specifically remote code execution and information leakage (as in Heartbleed). That probability is determined statically from source-code patterns rather than by the permutation of malicious inputs. In theory, you can intentionally write code that causes the runtime checking to miss an error.

The minimal code example: try iterating over the CLI arguments of this code snippet (https://stensal.com/a/demo_reporting). Valgrind and ASan will not report the overrun when the input index is larger than a certain value.

(2) It currently supports C++14; support for C++17 will be added in a future release. It is based on the great LLVM/Clang compiler framework.

(3) A CI server needs a group license, which is discounted. If every individual developer already has a license, there is no need for an extra one.

(4) The slowdown can be tuned based on user requirements.


I think (1) is the wrong question to ask, because none of these tools is suitable for use in a production deployment, so they will only find the bugs that your test suite can trigger, and a test suite is necessarily incomplete.

A better question is, is this new tool intended to be used in production deployments, and what, if any, new attack surface does it introduce?


You have rightly pointed out the intended use case; let me add more constraints to it. It's intended for production deployments that are security critical, where constantly applying security patches is not an optimal solution, and where the slowdown is bearable.

It does introduce a denial-of-service attack surface. QMS executables can operate in two modes: warning mode and crash mode. Both will reduce the level of service, but neither will allow a memory access violation to happen.

Edit: added more constraints on the suitable production deployments.


Could you compare your approach to SoftBound (https://www.cs.rutgers.edu/~santosh.nagarakatte/softbound/) ?

I want to avoid comparing against tool X without fully understanding its constraints, because different approaches make different design trade-offs and each has its own strengths. That said, academia often publishes survey papers on the state of the art in memory-error detection. A recent paper, "SoK: Sanitizing for Security" (https://arxiv.org/abs/1806.04355), categorizes most of the known dynamic memory-error-finding tools very well. The authors analyze the different detection mechanisms and collect empirical data to the best of their ability. It's highly recommended for anyone who wants to understand the differences among these detection tools.

IMHO, if you want to increase your detection coverage, you may need multiple tools to fulfill your code-quality requirements. If you cannot use multiple tools, pick the one that is readily available.


Using memory and address sanitization goes very far, without the need for special libraries.

gcc/g++ and clang/clang++ both have full support at this point.


I'm the founder and built the SDK.

Agreed, ASan and Valgrind cover most memory errors and are readily available on almost all platforms.

The SDK is designed for a different purpose -- to catch all runtime memory issues with very high probability. The generated executables can be deployed to end users who don't want to upgrade just to pick up fixes for memory errors. In many cases memory errors are never exploited. With the pre-built QMS executables, you know when you are being exploited, and the exploit is stopped by the runtime checking.

Edit: Even though I have been reading HN for quite a long time, this is my first post; I only registered recently. I couldn't figure out how to post a Show HN and had to ask a friend to post it for me.

Added more details.


I couldn't figure out much from the site as to how this works. Can you go into some detail?

I guess you're asking how the dynamic type system works. It basically gives each valid pointer a type, which includes the size and liveness information of the memory the pointer points to. When a pointer is dereferenced, the runtime checking verifies that the dereference is valid. Null (and uninitialized) pointers do not have a type.

It uses a completely different mechanism from the one used by ASAN and Valgrind.

Edit: deleted a wrong answer.


Interestingly, SillyMUD and its descendant Phoenix at one point had similar memory checks that occurred at runtime, as well as shared memory with reference counting and a paging virtual memory system.

Unfortunately, even with that, and even after it had been run under Purify (a commercial predecessor to Valgrind), I was still able to find some severe memory issues by compiling with address and memory sanitization about 25 years later.


I didn't know about SillyMUD. But I did read a paper about an interpretation-based memory-checking tool many years ago; unfortunately, I've forgotten its name. I believe that tool assigned each pointer a type, so the idea is not new.

ASan is great because of its compatibility with existing libraries and, as you have pointed out, it is available in both gcc and clang. Valgrind is in the same league. Both are fantastic tools, even though they might miss some errors.


OK, thanks. Now, how does the runtime determine the type and size of an object for which the application has a pointer? The type, presumably, from static typing, but then you have to be careful with type punning. The size, presumably, from the allocation. This doesn't sound that different from ASan and Valgrind. What is different?

Yes, you have stated it correctly. The size and liveness information come from allocations (either stack or heap). ASan and Valgrind do not associate this information with pointers; instead, they add redzones (extra padding with special bit patterns) before and after each legitimate memory block, and report an error only if a pointer happens to land in one of the redzones. Redzones, however, always have limited size: they cannot cover the full range of out-of-bounds pointers.

It's dynamic typing.


Are pointers in this environment bigger than usual? Or do they point at a descriptor that then points at (or contains) the real object?

How does type punning work? Does your compiler scribble the types a pointer has been cast to in the object descriptor?


No, pointers always have the native pointer size. I think the word "type" in my phrase "dynamic type system" confused you. Here the type has a very narrow meaning, with only two values: it is either POINTER or NON_POINTER. A POINTER is like a tuple of size and liveness; it does not track static types like integer, float, or struct. Hope this clarifies it. Perhaps I should call it a "dynamic pointer tracking system". Does that sound clearer?

That helps.

In my mind I was picturing a double-wide pointer that (assuming 48-bit address spaces) carries 20 bits of type information, a pointer to an allocation descriptor, a pointer to / into the object, and 16-bit CRC of the allocation descriptor. The allocation descriptor itself would have a base address and size, a generation number, and maybe some bits for something else (what? perhaps another CRC?).

However, this would change the ABI...

And yes, I thought you were encoding the C type of the object pointed to in the pointer, which is why I was thinking wide pointers.

Thanks for the clarification!


I changed it to "dynamic pointer typing". Thanks for your questions.



