1) Runtime overhead for some form of GC (D, Lisp, etc)
2) Rephrasing a program to satisfy a memory constraint checker (Rust)
3) Disciplined memory usage (e.g. the NASA C coding guidelines)
We don't have enough experience with option 2 to know whether it will create new classes of bugs. We also don't understand the knock-on effects of managing memory differently: will functionally identical programs require more or fewer resources, more or fewer programmer-hours, and so on?
Rust may very well be the future, but we don't know for sure yet.
One thing we do know: options 1 and 3 have been available for years, but not widely utilized. What lessons can we learn from this fact to apply to Rust?
What classes of security bugs could possibly arise from Rust's ownership discipline?
Not all security bugs are related to memory. Many stem from improperly written algorithms (most crypto attacks) or improperly designed requirements (TLSv1).
Even Heartbleed was primarily a logic bug (trusting an attacker-supplied length field) rather than an outright memory-ownership bug.
Does Rust automatically zero out newly allocated memory? Honest question, I don't know the answer.
Oh, also: if you're implying that Rust's ownership discipline can create security bugs where there were none before, I consider that a real stretch. I'd need to see an actual bug, or at least a bug concept, that Rust's borrowing/ownership rules create before accepting that claim.
Nobody is saying that Rust eliminates all security bugs. Just a huge number of the most common ones.
> Does Rust automatically zero out newly allocated memory? Honest question, I don't know the answer.
This is a problem that exists equally in all languages.
Perhaps less so in languages with a better type system, but that doesn't affect Rust since there aren't any _systems_ languages with a better type system.