
> Heck, if you ask some people, Rust is less secure than a GC’ed language for web apps if you use any crates that have unsafe code - which includes Actix, the most popular web framework, because unsafe code allows things like dereferencing raw pointers.

The presence of _unsafe_ is okay; it just means that this is a part of the code that the compiler cannot verify itself. Usually the unsafe part (verified by a human) is wrapped in an API that is safe to use.
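
For example, here is a minimal sketch of that pattern (a made-up helper, not code from Actix or any particular crate): the unsafe block relies on an invariant that the surrounding safe code has already established, and callers only ever see the safe signature.

    /// Returns the last element of a slice, or None if it is empty.
    /// The caller-facing API is entirely safe; the unsafe block relies on
    /// an invariant (`!xs.is_empty()`) that the safe code checks first.
    fn last<T>(xs: &[T]) -> Option<&T> {
        if xs.is_empty() {
            return None;
        }
        // SAFETY: the index is in bounds because we just checked that the
        // slice is non-empty, so `xs.len() - 1` is a valid index.
        Some(unsafe { xs.get_unchecked(xs.len() - 1) })
    }

    fn main() {
        assert_eq!(last(&[1, 2, 3]), Some(&3));
        assert_eq!(last::<i32>(&[]), None);
    }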

PS: You can still have memory leaks in GC-d languages by reference cycles.



In general I found it weird to have a whole conversation about "security" without mentioning that Rust is statically typed, unlike Node/Python, and has a great deal more null-safety.

Now, I am not sure if memory bugs are "worse for security" than type bugs or null reference bugs (in the wild or in theory). Certainly many of the more notorious major exploits in recent years come down to memory errors. But of course, the major infrastructural software affected by bugs like Heartbleed wouldn't be written in a dynamic interpreted language anyway. More generally, a language being safe against C-style memory bugs does not by itself mean a (reasonably well done) implementation in it is actually safer than a (reasonably well done) implementation in C. That claim might hold for C#/Java/Haskell, but not for JavaScript/bash/Python/etc.

It just seemed odd to not mention typing discipline at all. Presumably the author has experience with frustrating type errors in Python or "object of type None does not have method xxx" - these can be really bad for security in a Flask app unless you have careful exception handling! And, unlike Rust, Python offers very little to actually help you with the exception handling.
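
To make the contrast concrete, here is a minimal sketch (a hypothetical config lookup, not anything from the article): in Rust the "might be absent" case is a compile-time obligation rather than a runtime surprise.

    use std::collections::HashMap;

    // A lookup that may fail returns Option; the compiler will not let the
    // caller use the value without handling the None case.
    fn lookup_port(config: &HashMap<String, u16>) -> Option<u16> {
        config.get("port").copied()
    }

    fn main() {
        let config = HashMap::new();
        // Forgetting the None arm here is a compile error, not a runtime
        // "object of type None has no method" failure.
        match lookup_port(&config) {
            Some(port) => println!("listening on {}", port),
            None => println!("port not configured, using default 8080"),
        }
    }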


Both TypeScript and Mypy have first-class nullable types...


> You can still have memory leaks in GC-d languages by reference cycles.

    public class T {
      Object ref;

      @Override
      protected void finalize() {
        // Runs when the GC reclaims this object.
        System.out.println("finalize");
      }

      static void leak() {
        // Build a reference cycle: a -> b and b -> a.
        T a = new T(), b = new T();
        a.ref = b;
        b.ref = a;
      }

      public static void main(String[] as) throws Throwable {
        leak();
        // The cycle is unreachable once leak() returns; a tracing GC treats
        // both objects as garbage, so in practice "finalize" prints for both.
        System.gc();
        Thread.sleep(10000);
      }
    }


A full GC will count the a and b objects as garbage once leak returns.


Yes, it will. That's the point: the snippet counters the statement.


> The presence of _unsafe_ is okay

I don’t think so. It explodes the attack surface. In safe languages like C# or Java there’s no unsafe anywhere, not even in standard libraries. These runtimes are safe all the way down. The only attack surface is the VM itself, but these are very small (only a handful of instructions) and are tested and audited really well.

> You can still have memory leaks in GC-d languages by reference cycles.

No, you can’t. All modern garbage collectors collect reference cycles just fine. They don’t just count references; they actually traverse these graphs.


C# actually has unsafe too. And it's pretty common in both of these languages for there to be C libraries involved somewhere in the stack.


> C# actually has unsafe too.

An optional feature. Very useful for embedded and similar, but for general-purpose stuff like the web you never need anything unsafe.

> it's pretty common in both of these languages for there to be C libraries involved somewhere in the stack.

Negative. The complete stack is open source. You can browse the source code of the standard library at https://source.dot.net/ You're only going to find unsafe/dllimport in the IO parts of that library, where they integrate with file systems and such. All their core components are 100% managed code.


Filesystems, networking, ... Basically anything a program needs in order to interact with things outside its address space requires C interop and thus unsafe code. I sure hope your web server is listening on a socket, because it's not going to be much of a web server otherwise.

Then there's crypto, for which the corelib defers to the OS (CNG / openssl) because doing crypto in managed code is hard.

But forget all that; even pure C# code isn't safe, because it occasionally needs to hook into the CLR for unsafe things. E.g. https://source.dot.net/#System.Private.CoreLib/Dictionary.cs... - unsafe code called from Dictionary.Remove


All modern OSes are written in C and/or C++, so unsafe code is inevitable there. People have tried to fix that a few times (hardware Java machines, Singularity/Midori, etc.), but it wasn't good enough.

> unsafe code called from Dictionary.Remove

For creating, and checking for, an empty reference.

Rust's standard library uses unsafe way more. That design has consequences, e.g. check this https://medium.com/@shnatsel/how-rusts-standard-library-was-...

It's somewhat harder to screw up when you can rely on VM guarantees, and don't need that level of trust in the libraries you consume.


You misunderstand. I have nothing against the standard libraries of a language using unsafe code. However, you are the one who said:

>but for general-purpose stuff like the web you never need anything unsafe.

... and:

>> it's pretty common in both of these languages for there to be C libraries involved somewhere in the stack.

>Negative. [...] All their core components are 100% managed code.

... so I posted a correction.


> In safe languages like C# or Java there’s no unsafe anywhere, not even in standard libraries

My (limited) understanding of the unsafe keyword in Rust is that it's just indicating that the compiler cannot guarantee that the block is safe, not that it's necessarily dangerous. By this standard, every single line of any C program is unsafe. Depending on the guarantees of other languages, this is true to varying degrees. I'm not familiar enough with Java and C# to comment on them specifically, but I'm not sure they try to provide the same guarantees that Rust does.

I think Python, Ruby, JS, etc. would all be considered unsafe. The unsafe keyword in Rust seems more like the equivalent of saying "my static analyzer couldn't guarantee the safety of this bit; it may or may not be totally fine".

That said, I do agree that unsafe blocks can be red flags. At least they can help guide you in where to start looking for potential issues.


That is not true of C#; it has an unsafe keyword for using raw pointers. Also, the Unsafe class in Java does a bunch of unsafe stuff as well, not to mention that Java can call out to C code.

A garbage collector won't help you avoid a very common kind of memory leak: an unbounded cache.
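
A minimal sketch of that failure mode (the cache type is made up; the same shape of leak exists in any GC'd language, since the entries stay reachable from the map forever):

    use std::collections::HashMap;

    // A "cache" with no eviction policy. Every entry stays reachable from
    // the map, so neither a garbage collector nor Rust's ownership rules
    // will ever free them: the program just grows without bound.
    struct ResponseCache {
        entries: HashMap<String, Vec<u8>>,
    }

    impl ResponseCache {
        fn new() -> Self {
            Self { entries: HashMap::new() }
        }

        fn put(&mut self, key: String, body: Vec<u8>) {
            // No size limit, no TTL, no LRU eviction: the leak is by design.
            self.entries.insert(key, body);
        }
    }

    fn main() {
        let mut cache = ResponseCache::new();
        for i in 0..100_000 {
            cache.put(format!("/page/{}", i), vec![0u8; 1024]);
        }
        println!("cached {} responses; none are freed while `cache` lives", cache.entries.len());
    }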


And using it taints the produced binaries.

You can configure the compiler, runtimes or web servers to refuse to load tainted unsafe binaries.


> it has an unsafe keyword for using raw pointers

It has, but you don't need it to implement data structures. Standard library developers don't need it either; they are all implemented in safe code.


But it is almost never used (recently the unsafe package even got deleted). And Java has basically everything else written in Java, so it is not as often used as Rust’s unsafe.

Java will have a safe foreign memory API soonish.


I think C# has something similar, and Java has all kinds of extensions which do dynamic code generation and the like. This is not exactly the same attack surface, but one which is similarly bad (depending on the usage of Rust unsafe and Java features, it can be anything from less bad to way worse).

EDIT: I didn't list Java calling C/C++ because I think(?) it's not very common in Java libraries (though it is in the JVM).


> which do dynamic code generation

C# has a lot of that, but whatever code you are going to generate is CIL code. Runs within the same VM with strong safety guarantees.

> but one which is similarly bad

Not anywhere close. Rust has unsafe all over its crates, both the standard library and third-party ones. That's not just theoretical stuff, it has quite a history; see e.g. https://medium.com/@shnatsel/how-rusts-standard-library-was-...


> Runs within the same VM with strong safety guarantees.

You still generate code at runtime, and there have been multiple Java vulnerabilities where features like "runtime code loading/generation" and "reflection" led to RCE.

And sure, what you get to RCE is bytecode, but so what: it's still code which can do anything your application can do, i.e. the same as you have with "unsafe" attack vectors. And while you could try to use the VM as a security sandbox, that requires additional work, at least for Java it is known to not work well, and if you go that route you could also spend the additional work to sandbox binaries...

And then C#/Java libraries still do bind to C/C++ code, their VMs are still implemented in C/C++, and neither VM is meant to be a sandbox for running untrusted code (both had some features for this; in both cases, but especially Java, it didn't work out well, and by now both have dropped/deprecated them).

> https://medium.com/@shnatsel/how-rusts-standard-library-was-...

Sure, there has been a single bad security vulnerability in the standard library on stable. But so what. There have been more than one or two in fundamental parts of the JVM, and probably for C#, too (IDK).

In the end, neither Rust's unsafe nor Java's/C#'s VMs/GC are meant as security protection mechanisms. They are tools to make it easier to write correct code. And more correct code also means fewer security vulnerabilities.

If you rely on any of them for security you already have lost.

Which doesn't mean you can't design VMs for safety purposes; e.g. most JavaScript browser VMs are designed to safely isolate JS, and even then there were cases of VM escapes. And even then you might want to add at least one additional layer of protection and generally run all externally reachable services on properly locked-down systems.


> They are tools to make it easier to write correct code.

I agree with that. Still, being able to implement data structures without relying on unsafe features helps a lot with the correctness.

It's possible to write code in safe Rust, but it's hard to implement data structures in it, at least if one wants performance. Rust's borrow checker is simply too limiting for them. All the basic data structures like linked lists, trees, graphs, and LRU caches use tons of unsafe internally.
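
A minimal sketch of the tradeoff (illustrative structs, not how std::collections::LinkedList is actually implemented): the safe formulation pays Rc/RefCell/Weak bookkeeping on every access, while the raw-pointer version is lean but pushes the aliasing and freeing obligations onto the programmer via unsafe.

    use std::cell::RefCell;
    use std::ptr;
    use std::rc::{Rc, Weak};

    // Safe doubly linked node: the back-pointer must be Weak to avoid a
    // reference cycle, and every access pays Rc + RefCell bookkeeping.
    #[allow(dead_code)] // sketch: not all fields are read
    struct SafeNode {
        value: i32,
        next: Option<Rc<RefCell<SafeNode>>>,
        prev: Option<Weak<RefCell<SafeNode>>>,
    }

    // Raw-pointer node: no runtime overhead, but all the aliasing and
    // lifetime reasoning is now on the human.
    #[allow(dead_code)] // sketch: not all fields are read
    struct RawNode {
        value: i32,
        next: *mut RawNode,
        prev: *mut RawNode,
    }

    fn main() {
        // Two heap nodes owned through raw pointers.
        let a = Box::into_raw(Box::new(RawNode {
            value: 1,
            next: ptr::null_mut(),
            prev: ptr::null_mut(),
        }));
        let b = Box::into_raw(Box::new(RawNode {
            value: 2,
            next: ptr::null_mut(),
            prev: ptr::null_mut(),
        }));

        // SAFETY: a and b are valid, distinct allocations we exclusively own.
        unsafe {
            (*a).next = b;
            (*b).prev = a;
            println!("b.prev.value = {}", (*(*b).prev).value);
            // Manual cleanup: turn the raw pointers back into Boxes to free them.
            drop(Box::from_raw(b));
            drop(Box::from_raw(a));
        }

        // The safe variant compiles too; it just carries the Rc/RefCell cost.
        let head = Rc::new(RefCell::new(SafeNode { value: 1, next: None, prev: None }));
        let tail = Rc::new(RefCell::new(SafeNode {
            value: 2,
            next: None,
            prev: Some(Rc::downgrade(&head)),
        }));
        head.borrow_mut().next = Some(tail);
    }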

That’s not the case with Java or C#. These garbage-collected VMs are memory safe all the way down. It’s usually possible to implement arbitrarily complex data structures entirely in safe managed code with minimal performance penalty.

Even their standard libraries are made that way now. AFAIK that wasn't always the case: older versions of the .NET Framework did more in C++ for performance reasons, but they improved the performance of the JIT over time and gradually reworked the standard library to almost 100% managed code, probably for portability reasons. The vast majority of third-party libraries in their respective ecosystems don't use unsafe either; e.g. many ASP.NET web hosts are configured to forbid unsafe libraries from being loaded.


> The presence of _unsafe_ is okay,

It is OK in the same way C/C++ code is OK. IMO there is absolutely no point in writing Rust if you are going to use unsafe code.



