
Heartbleed in Rust
http://www.tedunangst.com/flak/post/heartbleed-in-rust
======
MichaelGG
As the offending commenter, I apologize. Particularly to the Rust team for
generating this negative publicity, and to the person I replied to, for
asserting a lie.

I misunderstood Heartbleed, exactly as Ted summarizes. I've no excuse other
than commenting when I shouldn't have. I'm happy, though, to have my idiocy
corrected, as it means I'll comment better in the future.

The rest of the original thread does point out that I _did_ examine every
security advisory published by Microsoft over a one- or two-year span, and that,
from the descriptions, Rust would have prevented basically every serious (code
exec) one. (Notable exceptions being failures in the sandboxed code loading,
similar to the various Java in browser bugs.)

~~~
legulere
Edit: What I said earlier is actually wrong. The problem is that an overly
large uninitialized buffer could be allocated, and thus memory from previous
allocations could be read. This isn't possible in safe Rust because you can't
read uninitialized data.
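
A minimal sketch of why (names hypothetical): in safe Rust, the obvious way to allocate a reply buffer is zero-filled, so even an attacker-chosen length hands back zeros rather than stale heap contents.

```rust
// Hypothetical sketch: even if `len` comes straight from the attacker,
// a freshly allocated buffer in safe Rust is zero-initialized, so the
// reply contains zeros instead of stale heap data.
fn allocate_reply(len: usize) -> Vec<u8> {
    vec![0u8; len] // zero-filled; safe Rust has no path to uninitialized bytes
}

fn main() {
    let reply = allocate_reply(65536); // attacker asked for 64k
    assert!(reply.iter().all(|&b| b == 0)); // nothing leaks
}
```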

Of course reusing buffers can be dangerous and lead to information leakage,
but it's not what happened with heartbleed, and the possibilities to exploit
are smaller.

Old text: Actually heartbleed is a buffer over-read vulnerability that would
have been prevented by rust's out of bounds checking. Of course you could
allocate one huge buffer that contains sensitive data and is also used as an
output buffer but this seems terribly unlikely to me.

~~~
akerl_
Heartbleed occurred because the size of the buffer was based on the size
provided by the malicious packet, the buffer was not zeroed, and then the
user-provided data was written to the buffer. If user-provided-data size was
less than what you said it was, the rest of the buffer contained whatever it
had previously contained.
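
That mechanism fits in a few lines. This is a hypothetical miniature, not the blog's code: a reused, un-cleared buffer plus a request-supplied length leaks whatever the buffer held before, while Rust's bounds checks only stop reads past the buffer's end.

```rust
// Hypothetical sketch of the mechanism: `buffer` is reused across requests
// and never cleared, and the reply length comes from the request itself.
fn handle(buffer: &mut [u8], request: &[u8]) -> Vec<u8> {
    // First byte of the request claims the payload length.
    let claimed_len = request[0] as usize;
    // Copy only the real payload into the (un-zeroed) buffer...
    let payload = &request[1..];
    buffer[..payload.len()].copy_from_slice(payload);
    // ...but echo back `claimed_len` bytes. Stale contents ride along.
    // Rust's bounds check only panics if claimed_len > buffer.len();
    // leaking old data *within* the buffer is perfectly "safe".
    buffer[..claimed_len].to_vec()
}

fn main() {
    let mut buffer = *b"i have many secrets."; // left over from a prior use
    // This request claims 20 bytes but only carries 4.
    let reply = handle(&mut buffer, &[20, b'p', b'i', b'n', b'g']);
    assert_eq!(&reply[..4], &b"ping"[..]);
    assert_eq!(&reply[4..], &b"ve many secrets."[..]); // the "bleed"
}
```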

~~~
MichaelGG
And since people were able to recover SSL keys, doesn't this mean that this
buffer was used for... everything? Having a non-zeroing allocator for an
entire library seems rather ambitious. It's significantly worse than just
having a buffer pool for, say, incoming packets or something.

~~~
akerl_
Oh yes, using buffers without zeroing them is a terrible idea, and sharing
those buffers among different types of things is a terrible idea.

I was specifically commenting on the fact that what the parent comment
described as "terribly unlikely" is in fact what happened.

------
nikomatsakis
I don't know that anyone claimed that a bug similar or analogous to heartbleed
couldn't be reproduced in Rust. If they did, that was certainly an
overstatement. I think more concretely people claimed that unreachable code
yields a warning in Rust, which is absolutely true, but certainly not
equivalent to saying something like a heartbleed bug would not happen.

In general, Rust is fairly aggressive about linting for "small" details like
unused variables, unreachable code, names that don't conform to expected
conventions, unnecessary `mut` annotations, and so forth. I've found that
these lints are surprisingly effective at catching bugs.

In particular, the lints about unused variables and unreachable code regularly
catch bugs for me. These are invariably simple oversights ("just plain forgot
to write the code I meant to write which would have used that variable"), but
they would have caused devious problems that would have been quite a pain to
track down.

I've also found that detailed use of types is similarly a great way to ensure
that bugs like heartbleed are less common. Basically making sure that your
types match as precisely as possible the shape of your data -- with no extra
cases or weird hacks -- will help steer your code in the right direction. This
is a technique you can apply in any language, but good, lightweight support
for algebraic data types really makes it easier to do.
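
As a hypothetical sketch of that technique applied to a heartbeat message: if parsing produces a type that owns its payload, there is no separate length field left to disagree with the data.

```rust
// Sketch: making the type match the data's shape removes the bug's habitat.
// A parsed heartbeat owns its payload; there's no independent length to trust.
struct Heartbeat {
    payload: Vec<u8>,
}

fn parse(raw: &[u8]) -> Option<Heartbeat> {
    // Wire format (made up for this sketch): first byte = length, rest = payload.
    let len = *raw.first()? as usize;
    let payload = raw.get(1..1 + len)?.to_vec(); // None if the length claim is a lie
    Some(Heartbeat { payload })
}

fn echo(hb: &Heartbeat) -> &[u8] {
    &hb.payload // can only ever return what was actually received
}

fn main() {
    // Honest request: 4-byte payload, length byte says 4.
    let ok = parse(&[4, b'p', b'i', b'n', b'g']).unwrap();
    assert_eq!(echo(&ok), &b"ping"[..]);
    // Heartbleed-style request: claims 20 bytes, carries 4 -- rejected at parse.
    assert!(parse(&[20, b'p', b'i', b'n', b'g']).is_none());
}
```

The validation happens once, at the boundary; everything downstream of `parse` simply cannot express the mismatch.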

~~~
nikomatsakis
I hadn't actually followed the link in the original post. I see that the
claims there were slightly different than what I was thinking of. Nonetheless,
I stand by what I wrote above.

In particular, while I of course agree with the author that one can write
buggy code in any language, I also have found that following Rust's idioms
leads to code that is less buggy. This is not unique to Rust: I've also had
similar experiences in Scala and Ocaml. What Rust brings to the table is that
it supports zero-cost-abstractions, doesn't require a virtual machine, and
guarantees data-race-freedom (a rather useful property).

------
simias
I mostly agree with the premise: logic errors are always going to be there, at
least until the compiler is an AI strong enough to catch them for us (and by
then we probably won't need coders anyway...). There's no silver bullet; bad
coders are always going to produce bad code. And I also don't like it when
people claim that bug X or vulnerability Y wouldn't have happened if they had
been using technology Z; they're just begging for this type of post.

That being said I'm a bit more skeptical of this part: "code no true C
programmer would write : heartbleed :: code no true rust programmer would
write :: (exercise for the reader)"

If I look at the examples in the article, the C version doesn't look that
terrible or contrived to me. I wonder what the author means by "Survey says
no true C programmer would ever write a program like that, either." That looks
like a lot of C code I've read; there's nothing particularly weird about it.

On the other hand the rust version looks very foreign to me (and I've been
writing quite a lot of rust lately). You basically have to go out of your way
to create the same issue.

I guess my point is that while it's true that as long as there are coders
there will be bugs and security vulnerabilities, that doesn't mean we shouldn't
try to make things better. And in my opinion, Rust makes it much more difficult
to shoot yourself in the foot than plain C.

~~~
unfamiliar
>I wonder what the author means by "Survey says no true C programmer

I think he is being sarcastic. I.e. the idea that "no true C programmer" would
write code like that is nonsense, since we have all seen C code like that.
Therefore the idea that "no true Rust programmer" would write the Rust snippet
is not a valid defence, because bad programmers gonna program.

~~~
qznc
Yes. It's an instance of the No True Scotsman Fallacy.

[https://en.wikipedia.org/wiki/No_true_Scotsman](https://en.wikipedia.org/wiki/No_true_Scotsman)

------
Torgo
Here is what I noticed about this, sorry if it is considered too off-topic:

There was an argument about something specific and technical; it was refuted
without singling out a specific person by name, without using humiliation or
insults, and using code to do so ("show me the code!"); and there was a polite
acknowledgement and resolution.

This is an example of an interaction in a community that I think anyone would
want to be a part of. Thank you.

~~~
steveklabnik
I'm glad you appreciate it. :) We're far from perfect, but we're trying to
build a great community, not just great software.

~~~
twic
Did Torgo mean the Rust community or the HN community?

~~~
Torgo
I was thinking HN-affiliated, but it applies to the Rust community as well, I
think. I have seen very positive things on their IRC. I'd also like to say
that I am loving everything Ted Unangst-related; the OpenBSD community has an
(often undeserved, imo) bad rap, and he is a great ambassador for that community
as well, in addition to his amazing technical prowess. With all the talk about
toxic communities lately, I just want people to recognize positive examples
that exist that could be used as guides, without being preachy about it. Again,
sorry for OT; disengaging.

------
geofft
I'm very confused at the argument here. The C code looks remarkably close to
idiomatic. Not "good," mind you, but "idiomatic". The Rust code looks
significantly more contrived to my eyes. I'm reading the blog post as arguing
that they're equally contrived.

It's true that you can do terrible things in any language, but the test of a
language is _how easy_ it makes it to do the right thing in the common case
(plus _how possible_ it makes it to do the thing you want in the uncommon
case, without either goal compromising the other).

Is there a reason that reusing the buffer makes sense in Rust? (Zero
allocation?)

Also, is it not true that Rust lends itself well, probably better than C, to
abstractions like bounds-checked substrings within a single buffer? BoringSSL
has been doing this in C, and this _definitely_ would have stopped Heartbleed:

[https://boringssl.googlesource.com/boringssl/+/master/includ...](https://boringssl.googlesource.com/boringssl/+/master/include/openssl/bytestring.h)

------
steveklabnik
This is why I get a little uncomfortable when people suggest Rust fixes tons
of security issues. Yes, it will fix some of them. No, just because a Rust
program compiles doesn't mean that it won't have problems.

Rust is _memory safe_. Nothing more, nothing less.

~~~
jp_rider
I feel like you're undervaluing memory safety. Memory safety prevents most
(all?) exploits that lead to remote code execution. There can still be high
level vulnerabilities, but guaranteed memory safety is a huge improvement.

Rust's type system can be used to prevent high-level attacks too. For
instance, if an SQL library is set up properly, it can prevent SQL injection
by requiring that inputs be properly sanitized.
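
A minimal sketch of that idea, with made-up names (real libraries prefer parameterized queries; the escaping here is a toy):

```rust
// Sketch (hypothetical API): the query function only accepts `Sanitized`,
// and the only way to obtain a `Sanitized` is through `sanitize`.
struct Sanitized(String);

fn sanitize(input: &str) -> Sanitized {
    // Toy escaping for illustration only; real libraries use placeholders.
    Sanitized(input.replace('\'', "''"))
}

fn run_query(clause: &Sanitized) -> String {
    format!("SELECT * FROM users WHERE name = '{}'", clause.0)
}

fn main() {
    let q = run_query(&sanitize("O'Brien"));
    assert_eq!(q, "SELECT * FROM users WHERE name = 'O''Brien'");
    // run_query(&"raw input") would not compile: a &str is not a Sanitized.
}
```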

~~~
tptacek
Memory safety prevents three classes of vulnerability that lead to remote code
execution, not "most" or "all" of them. They're three very common and important
classes, though.

~~~
pcwalton
Based on incidence in Gecko, it is indeed most of them. It depends on your
project, of course.

~~~
tptacek
It's a bit tautological to suggest that fixing the most common RCE flaws in
C/C++ programs by replacing the language is the same as fixing all of the most
common RCE flaws. The clear point here is that memory corruption is an
affliction of C/C++ programs, but that other languages have other RCE-breeding
flaws.

~~~
MichaelGG
What are the other, common, RCEs? Command and SQL injection, upload and
execute, etc. -- all those would apply to any language, right?

Eval()/dynamic loading and little custom languages (like perhaps some
"business rules" type systems) probably aren't as common in C/C++ eh?

Same for overzealous serialization systems (like Ruby's YAML issues, and I
think .NET's binary serialization)?

What other kinds of things lead to RCE that don't or rarely occur in C/C++?

~~~
tptacek
You just hit a bunch of them.

The C/C++ RCE bugs are buffer overflow (heap, stack, heap/stack via integers,
&c), UAF (and double free), and uninitialized variables. It looks like there's
a whole menagerie of different C/C++ RCE flaws, but they really just boil down
to bounds checking, memory lifecycle, and initialization.

Metacharacter bugs apply to all languages, but since Rust doesn't eliminate
them --- virtually nothing does, with the possible exception of very rigorous
type system programming in languages like Haskell --- the metacharacter bugs
rebut the parent commenter's point.

Eval() is an RCE unique to high-level dynamic languages. Taxonomically, you'd
put serialization bugs here too (even the trickiest, like the Ruby Yaml thing,
boil down to exposing an eval-like feature), along with the class of bugs best
illustrated by PHP's RFI ("inject a reference to and sometimes upload a
malicious library, then have it evaluated").

Those are just two bug metaclasses, but they describe a zillion different RCE
bugs, and most of them are bugs that are not routinely discovered in C/C++
code.

~~~
MichaelGG
If you remove custom software like intranet apps and focus more on products
that have near-ubiquitous deployment (like common desktop programs, OSes,
basic server-level code), how do you think they come out? What about by number
of people impacted?

------
nickik
Some people wrote a completely new TLS stack in OCaml to combat this problem:

[http://openmirage.org/blog/introducing-ocaml-tls](http://openmirage.org/blog/introducing-ocaml-tls)

Here's a video about Mirage OS and this TLS stack from 31C3:

Trustworthy secure modular operating system engineering - [http://media.ccc.de/browse/congress/2014/31c3_-_6443_-_en_-_...](http://media.ccc.de/browse/congress/2014/31c3_-_6443_-_en_-_saal_2_-_201412271245_-_trustworthy_secure_modular_operating_system_engineering_-_hannes_-_david_kaloper.html)

Their goal is to reduce the trusted computing base to a minimum.

Rust could deliver some of the same benefits for writing high-performance,
low-level code.

~~~
MichaelGG
That page makes the same mistake I did, which caused Ted to write the article
in the first place. There's no memory safety issue at play, at least not in
the way memory safety is usually referred to. As the TFA shows, the problem is
explicitly reusing the same buffer. I don't think there's a general way to
prevent this kind of code.

I guess more than just me _assumed_ Heartbleed was a typical case of blindly
allocating and reading past the buffer's bounds. But that's not what happened.
Writing the same thing is totally possible in OCaml. And in a safe language
with GC, it's not unheard of to reuse objects for performance. So in fact it's
perhaps even somewhat probable to end up with a Heartbleed-like bug.

~~~
nickik
True, I still wanted to get the information out there.

Also, I think, if you watch the Q&A at the end of the talk, they claim that the
way you write and abstract code is different and leads to safer code as well.

I don't want to claim that it is true, just pointing it out.

------
pacala
If I'm reading the blog code correctly, the error is trusting user input:

    
    
        // Rust
        let len = buffer[0] as usize;
        // C
        size_t len = buffer[0];
    

I'm no Rust hacker, but can I expect the Rust type system to be able to encode
some form of tainting? Making the leaky sequence illegal:

    
    
        let len = buffer[0] as usize;
        // ERROR ERROR ERROR using unscrubbed user input ERROR ERROR ERROR
        buffer[0 .. len]
    

How exactly to encode tainting is left as an exercise to the reader :) But
ideally it should be able to identify that the buffer is reused between two
different requests, and that data tainted by the second request is used to
index an array tainted with data from the first request. This seems right up
Rust's alley, given the concurrency / allocation disambiguation support I've
read about (alas, superficially) elsewhere.

~~~
kibwen
You can absolutely express tainting via the type system. I have seen this done
before in Rust code in order to express functions that can only accept strings
that have been properly sanitized, along with a function that takes an
unsanitized string and returns a sanitized one. This particular example was
using phantom types, though you could obviously also define wholly separate
types for this sort of thing.
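
A minimal sketch of that phantom-type pattern, with made-up names and toy escaping:

```rust
use std::marker::PhantomData;

// Sketch of the phantom-type pattern described above (all names invented).
// The `State` parameter exists only at compile time.
struct Unsanitized;
struct Sanitized;

struct Input<State> {
    text: String,
    _state: PhantomData<State>,
}

fn receive(text: &str) -> Input<Unsanitized> {
    Input { text: text.to_string(), _state: PhantomData }
}

fn sanitize(input: Input<Unsanitized>) -> Input<Sanitized> {
    // Toy transformation; the point is the type change, not the escaping.
    Input { text: input.text.replace('<', "&lt;"), _state: PhantomData }
}

// Only sanitized input is accepted here; Input<Unsanitized> won't compile.
fn render(input: &Input<Sanitized>) -> &str {
    &input.text
}

fn main() {
    let clean = sanitize(receive("<script>"));
    assert_eq!(render(&clean), "&lt;script>");
}
```

The two states share one runtime representation; only the compile-time tag differs, so the check costs nothing.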

------
ajanuary
Wasn't the heartbleed issue that you could trick it into reading past the
memory it had allocated? That's different to explicitly reusing memory you've
allocated without clearing it in between.

The original claim was that rust would prevent the class of errors that caused
Heartbleed. No one claimed rust would prevent you from writing a program with
a different bug that just happens to exhibit similar behavior.

Buffer overruns are trickier to spot than explicitly reusing a buffer.

[Edit] An example of an actual buffer overrun, with no changes to pingback.

C:

    
    
        $:/tmp # cat bleed.c
        #include <fcntl.h>
        #include <unistd.h>
        #include <assert.h>
    
        void
        pingback(char *path, char *outpath, unsigned char *buffer)
        {
                int fd;
                if ((fd = open(path, O_RDONLY)) == -1)
                        assert(!"open");
                if (read(fd, buffer, 256) < 1)
                        assert(!"read");
                close(fd);
                size_t len = buffer[0];
                if ((fd = creat(outpath, 0644)) == -1)
                        assert(!"creat");
                if (write(fd, buffer, len) != len)
                        assert(!"write");
                close(fd);
        }
    
        int
        main(int argc, char **argv)
        {
                unsigned char buffer2[10];
                unsigned char buffer1[10];
                pingback("yourping", "yourecho", buffer1);
                pingback("myping", "myecho", buffer2);
        }
        $:/tmp # gcc bleed.c  && ./a.out && cat yourecho myecho
        #i have many secrets. this is one.
        #i know your
         one.
        Æ+x-core:/tmp #
    

Rust:

    
    
        C:\Users\ajanuary\Desktop>cat hearbleed.rs
        use std::old_io::File;
    
        fn pingback(path : Path, outpath : Path, buffer : &mut[u8]) {
                let mut fd = File::open(&path);
                match fd.read(buffer) {
                        Err(what) => panic!("say {}", what),
                        Ok(x) => if x < 1 { return; }
                }
                let len = buffer[0] as usize;
                let mut outfd = File::create(&outpath);
                match outfd.write_all(&buffer[0 .. len]) {
                        Err(what) => panic!("say {}", what),
                        Ok(_) => ()
                }
        }
        
        fn main() {
                let buffer2 = &mut[0u8; 10];
                let buffer1 = &mut[0u8; 10];
                pingback(Path::new("yourping"), Path::new("yourecho"), buffer1);
                pingback(Path::new("myping"), Path::new("myecho"), buffer2);
        }
        
        C:\Users\ajanuary\Desktop>hearbleed.exe
        thread '<main>' panicked at 'assertion failed: index.end <= self.len()', C:\bot\slave\nightly-dist-rustc-win-64\build\src\libcore\slice.rs:524

~~~
ksherlock
OpenSSL uses its own memory allocator (since malloc is slow on big-endian x86
Xenix or something), so they _do_ reuse memory without clearing it in between.
Had they used the system malloc, it wouldn't have been vulnerable (on OpenBSD
and probably elsewhere). Can Rust prevent you from implementing your own
(buggy) memory allocator?

~~~
IshKebab
Standard `malloc` doesn't zero memory either. The problem was not _caused_ by
their custom allocator. It was exacerbated by it because it allocated
everything really close together.

~~~
masklinn
Well, it's not really that it allocated everything close together in memory;
rather, since it would try to reuse existing memory from the freelist before
asking the OS for more, you were more or less certain to get memory OpenSSL
had previously used as scratch space to, say, store a private key for
temporary operations.
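
A hypothetical miniature of that freelist behavior (not OpenSSL's actual code):

```rust
// Sketch of the freelist hazard: buffers go back on the list un-cleared,
// so the next "allocation" hands out whatever the previous user left behind.
struct FreeList {
    buffers: Vec<Vec<u8>>,
}

impl FreeList {
    fn new() -> Self {
        FreeList { buffers: Vec::new() }
    }
    // Reuse an old buffer if one is available, otherwise make a fresh one.
    // (Real freelists are per-size; this sketch ignores that detail.)
    fn alloc(&mut self, size: usize) -> Vec<u8> {
        self.buffers.pop().unwrap_or_else(|| vec![0u8; size])
    }
    // Return a buffer to the list WITHOUT zeroing it -- the hazard.
    fn free(&mut self, buf: Vec<u8>) {
        self.buffers.push(buf);
    }
}

fn main() {
    let mut pool = FreeList::new();
    let mut key_buf = pool.alloc(16);
    key_buf.copy_from_slice(b"-----PRIVATE----"); // scratch space for a key
    pool.free(key_buf); // key material is still in there
    let reply_buf = pool.alloc(16); // "fresh" buffer for a heartbeat reply
    assert_eq!(&reply_buf[..], &b"-----PRIVATE----"[..]); // leaked
}
```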

------
leovonl
I fail to see the point of this whole discussion.

The code reflects exactly what the program is doing, and there's no undefined
behaviour anywhere. There's no way to access anything outside the very
delimited scope of "buffer" memory area, like stack variables or any other
part of the program.

What's the point of using a high-level language to re-define basic low-level
operations on buffers, recreating everything with low-level constructs and
without the proper bounds checks?

Of course, you can simply define a huge "unsafe" block and program everything
inside it, but what's the point? That you have a language powerful enough to
shoot yourself in the foot?

Compare that to C or C++: the unsafe block is always on. Any code block can
have unsafe properties anywhere. Not only that, but you have ZERO guarantees
of memory safety and other general operations. Summarizing: high-level and
low-level are totally mixed, with no way to isolate them.

Sorry, but if you can't see how Rust avoids a "Heartbleed" or any other kind
of similar issue, you have no understanding of programming or no experience
debugging anything.

And yes: security != safety, but please note you are the one mixing both
concepts.

------
kaoD
Slightly OT: while trying to understand the vulnerability I came across a Rust
question.

Why can you do this?

    
    
        let mut outfd = File::create(&outpath);
        match outfd.write_all(&buffer[0 .. len]) { ... }
    

According to `old_io::File`'s doc[0] it returns an `IoResult<File>` which is
an alias `type IoResult<T> = Result<T, IoError>` i.e. `Result<File, IoError>`.
How come you can do `write_all` directly on a `Result<File, IoError>` without
unwrapping the `File` first?

The example in the docs does something similar:

    
    
        let mut f = File::create(&Path::new("foo.txt"));
        f.write(b"This is a sample file");
    

So I guess I'm missing something here.

[0] [http://doc.rust-lang.org/std/old_io/fs/struct.File.html#meth...](http://doc.rust-lang.org/std/old_io/fs/struct.File.html#method.create)

~~~
masklinn
The explanation is at [http://doc.rust-lang.org/std/old_io/#error-handling](http://doc.rust-lang.org/std/old_io/#error-handling): IoResult
implements a bunch of IO traits so you don't need to unwrap it before using
it:

> Common traits are implemented for IoResult, e.g. impl<R: Reader> Reader for
> IoResult<R>, so that error values do not have to be 'unwrapped' before use.

------
tormeh
My take-away: low-level code will burn you eventually, and unnecessarily
low-level code will burn you unnecessarily.

~~~
MichaelGG
It's not low-level, though; that was my original misunderstanding. Heartbleed
was not a memory safety issue like I incorrectly assumed. It could happen in,
say, C# or Java. In fact, there's probably existing code with the same bug.
It's not uncommon to reuse objects in managed code as a performance hack.

~~~
tormeh
I'd argue that that's a bit low-level, actually. If you're starting to manage
your own memory like that then you're still at a higher level than C, but not
as high-level as the functional languages, for example.

Honestly, if the end of your collection is beyond the addresses of that
collection's valid data, then you've just malloc'ed. Is it worth the bugs?

How to malloc in a high-level language (assuming a typecast always succeeds):

1. Make a reference in main (this way its object will never be garbage
collected).

2. Make this a reference to an array of objects (hereafter referred to as the
"block"), where each object holds n integers.

3. Whenever you wish to save an object to the block, cast it to the class
which the block contains, and cast from that class when you want to retrieve
one.

Is the above idea good for performance? Possibly. Does it belong in a
cryptography library/program? Nope.

------
stevejones
No true blogger would wilfully misunderstand a buffer overrun vulnerability in
order to score some cheap pageviews.

To put it simply, his examples are the equivalent of doing this:

    
    
        unsigned char data[4096];
        #define X (*(int *)(&data[0]))
        #define Y (*(int *)(&data[4]))
        ...
    

Basically, he's explicitly re-using a buffer; no buffer was overrun. In Rust
you will not read something out of a buffer you didn't put there first; in C
you can, and you might even read several GB out of a 256-byte buffer.

~~~
masklinn
> No true blogger would wilfully misunderstand a buffer overrun vulnerability
> in order to score some cheap pageviews.

You may want to read up on Ted, and realise that when he writes

> if we don’t actually understand what vulnerabilities like Heartbleed are

he's probably talking about you.

> Basically, he's explicitly re-using a buffer, no buffer was overrun.

Which is essentially what happened in heartbleed. Heartbleed was _not_ a
buffer overrun at any point.

Here's the tl;dr: during heartbeat, OpenSSL would malloc both input and output
buffers at the caller-specified size (65536 bytes), copy a caller-provided
input (1 byte) to the input buffer then copy the whole input buffer to the
output buffer.

Anything besides the overwritten byte would likely be previously written data,
since neither malloc nor free zero out their memory by default[0], essentially
leaking 65k of random data every time.

This was compounded by OpenSSL doing its own memory management via freelists,
making it even more likely interesting data would be present in the input
"garbage" _and_ precluding OS mitigations (such as BSD's malloc.conf
framework[1]), not to mention the unmitigated (no freelist) codepath had
bitrotted and didn't actually work even if you knew how to enable it[2]. Note
that [1] and [2] are by TFAA, and that he's an OpenBSD and LibreSSL core
contributor.

[0] [http://www.seancassidy.me/diagnosis-of-the-openssl-heartblee...](http://www.seancassidy.me/diagnosis-of-the-openssl-heartbleed-bug.html)

[1] [http://www.tedunangst.com/flak/post/heartbleed-vs-mallocconf](http://www.tedunangst.com/flak/post/heartbleed-vs-mallocconf)

[2] [http://www.tedunangst.com/flak/post/analysis-of-openssl-free...](http://www.tedunangst.com/flak/post/analysis-of-openssl-freelist-reuse)

~~~
IshKebab
What he's done is actually very different to heartbleed. The heartbleed flaw
was made much worse by the custom allocator used, but that wasn't the source
of the flaw. The source was the fact that dynamically allocated memory in C is
not bounds checked.

That isn't true in Rust, and he had to basically implement a deliberately
unsafe memory allocator to show this flaw. His argument that you can't say "no
rust programmer would write this code" is flawed. Of _course_ any programmer
can write insecure code in any language. The point is that Rust makes it far
less likely.

If he had ignored the custom allocator and used the defaults in both languages
(e.g. malloc in C, whatever it is in Rust), then you would have seen the
difference.

~~~
iso8859-1
> The source was the fact that dynamically allocated memory in C is not bounds
> checked.

When you say "bounds-checked", what are you talking about? To me, it means
that `x[somenumber]` makes the program abort if somenumber is out of range.
However, as you can see, the reads and writes were never out of range. As I
see it, the issue is that uninitialized memory is being read. This is not
"unsafe", because it will never crash your program. I don't know the exact
definition of "undefined behaviour", but since we are using a custom
allocator, even if reading from freshly malloc'ed memory is undefined, it may
not be in this case.

Rust doesn't let you get uninitialized memory without using "unsafe", so to
construct a program with the issue, he had to reuse the buffer. I think it's a
_lot_ less likely to happen with Rust, since it is visible to anyone that it
is the same buffer.

> If he had ignored the custom allocator and used the defaults in both
> languages (e.g. malloc in C, whatever it is in Rust)

How do you know the default allocator isn't using memory that was previously
used for the private key? And why do you think the default allocator wasn't
used in the Rust code?

PS: Taint analysis also has a much better chance of working in Rust, since we
are not working with libraries, but standard language constructs. In C, your
taint analysis would have to taint every byte straight from malloc.

------
yk
Well, of course this is possible. You can port a bug-compatible version of a
program to any other language. That's called Turing completeness (and may
involve writing an x64 emulator in VBScript). /snark

A bit more seriously, I wonder which security problems Rust would have if it
were as well studied as C.

------
krick
So, let's say I'm on drugs and I'm writing a TLS implementation while not
being a "real Rust programmer". What "rules of thumb" should I follow (let's
assume I have that much self-control) to not end up with something like this?

~~~
Sanddancer
The biggest thing is to let the memory allocator do its job. Don't cache
buffers, etc., between uses to speed things up; once a buffer is used, throw
it in the dumpster and get a new chunk of memory. Your nifty performance hack
will succeed in leaking vital information much faster than the stock memory
allocator would. Other things: if your allocator doesn't do it for you, zero
out your memory before you use it, and if you really want to get fancy, zero
it out when you're done using it. Also, test on more than one OS/architecture.
Your code may work beautifully on your Linux x86 box, but does it still work
under OpenBSD? How about running on an ARM board? Good, portable code that
doesn't rely on trickery is one of the best ways to ensure that your
assumptions won't cause the next security disaster.
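
The zeroing advice, as a minimal Rust sketch (a real implementation would use something like the `zeroize` crate, since a plain overwrite can be optimized away):

```rust
// Sketch: scrub a sensitive buffer before it goes back to the allocator.
// NOTE: a plain fill can be elided by the optimizer; production code should
// use a crate such as `zeroize` that guarantees the writes actually happen.
fn scrub(buf: &mut [u8]) {
    buf.fill(0);
}

fn main() {
    let mut key_material = *b"super secret key";
    // ... use the key ...
    scrub(&mut key_material);
    assert!(key_material.iter().all(|&b| b == 0)); // nothing left to leak
}
```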

------
lmm
"Code no true C programmer would write", eh? And yet one did, in a high-
profile, security-critical library. When you find Rust code like this in the
wild, I'll start to believe in some kind of equivalence.

~~~
protomyth
I guess we're going to get into a "No True Scotsman" situation, but given the
OpenSSL codebase, I don't think the OpenBSD folks regard them as "true C
programmers". Rust is only being used by enthusiasts currently, so I'm sure we
will see code like that once it gets into the general population.

------
qguv
Shouldn't that analogy read:

code no true C programmer would write : heartbleed :: code no true rust
programmer would write : (exercise for the reader)

------
mseepgood
Type / memory safety != security. The Rust people also mistake "no
segmentation faults" for "no crashes".

~~~
kibwen

      > The Rust people also mistake "no segmentation faults" 
      > for "no crashes".
    

I was a witness to one of the first public demonstrations of Servo to the
Mozilla community at large (Mozilla Summit 2013). At one point during the
demonstration Servo suffered a runtime panic, and the presenter (a Servo dev)
self-deprecatingly apologized for the crash. A Gecko engineer in the audience
raised his hand and asked if it was a segfault. The answer was that it was
not, to which the Gecko engineer replied, "well then it's not actually a
crash". So yes, now we're arguing semantics, but in a systems programming
context a segfault is most usually what one means by "crash".

