
Rust means never having to close a socket - steveklabnik
http://blog.skylight.io/rust-means-never-having-to-close-a-socket/
======
bobfunk
Awesome post. Best explanation I've seen so far of Rust's resource
management. Always impressed with Yehuda's ability to explain complex
techniques and abstractions clearly.

Having the compiler eliminate resource leaks is really, really handy. This is
one thing that's very easy to get wrong in go. Will be interesting to see how
much of a cost the upfront typing has during quick prototyping or explorative
programming, but it's great to see this kind of type system make it into
something that's well on track to becoming a mainstream programming language.

------
mwcampbell
I agree that automatic, deterministic resource management (as opposed to GC
plus a dispose pattern of some kind) is highly desirable. But I think for the
vast majority of projects, I would prefer automatic reference counting with
the occasional weak reference to break cycles, particularly if that were the
default for the language. I can appreciate that being explicit about resource
ownership is an advantage in situations where one needs to absolutely minimize
performance overhead. But for most applications, I think a system of ownership
and borrowing like Rust's would just add extra cognitive load for the
programmer for no appreciable gain. Of course, Rust isn't a language for most
applications AFAIK; it's a systems language for performance-critical
infrastructure like Servo. And Rust is probably not a language for blub
programmers. But I think this blog post is absolutely right about the problems
with GC plus a system for disposing of resources. It sucks that mainstream,
blub-friendly languages have mostly chosen GC over reference counting; I think
automatic reference counting provides the right set of trade-offs for most
applications.

~~~
kibwen
I agree that Rust isn't the be-all, end-all language. But I think this line is
mistaken:

    
    
      > I think a system of ownership and borrowing like Rust's 
      > would just add extra cognitive load for the programmer 
      > for no appreciable gain.
    

In my experience, the fact that the compiler can check this stuff ends up
_subtracting_ cognitive load. Of any language I've used, Rust code is the code
that I worry least about.

Is there syntactic overhead? Yep. Does it impose constraints on design? Yep.
But when it comes to resource management, everything that Rust does is just
something that you'd need to use a pen and paper to keep track of in other
languages (including pervasively reference-counted languages, since AFAIK
there's no way to automatically enforce the proper use of weak pointers).
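
As an illustration of what the compiler checks for you, here's a hedged sketch in present-day Rust (the thread predates 1.0, so syntax has shifted; the names are mine). The use-after-move is rejected at compile time, so there's nothing to track with pen and paper:

```rust
// `consume` takes ownership of its argument; the Vec is freed when it returns.
fn consume(v: Vec<i32>) -> usize {
    v.len()
}

fn main() {
    let data = vec![1, 2, 3];
    let n = consume(data); // ownership moves into `consume` here
    // println!("{:?}", data); // error: use of moved value `data`
    assert_eq!(n, 3);
}
```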

~~~
Dewie
> In my experience, the fact that the compiler can check this stuff ends up
> subtracting cognitive load. Of any language I've used, Rust code is the code
> that I worry least about.

That's interesting. So this holds even compared to languages and applications
where you can get away with not knowing the automatic memory management
implementation and not adjusting your design around it? For example, where you
don't have to worry about allocating too many values on the heap, at least in
most of the code?

Would you prefer using Rust for problems and domains where you strictly don't
need the efficiency and determinism (like memory usage) you get from using
Rust?

~~~
kibwen

      > Would you prefer using Rust for problems and domains 
      > where you strictly don't need the efficiency and 
      > determinism (like memory usage) you get from using Rust?
    

It would highly depend on circumstance. How often is my code going to be run?
How long will it be maintained for, and how often will it be updated, and by
whom?

I think we all have a hierarchy of sorts: bash or Perl for one-off, disposable
scripting tasks; Python or some other light dynamic language for when things
start getting more serious, but are still personal projects; C# or some other
relatively heavyweight hammer for when we're writing code that will live
beyond our control, maintained by others and run for years and years.

I don't write production code in Rust yet (heavens no, not until 1.0 at
least... godspeed to wycats :P ) but right now it's in that third category of
languages where I trust that it can be maintained by a team and trusted to
exist for years. However, I _would_ consider Rust for that second category of
somewhat-serious personal projects, because I know too well how such "personal
projects" can vault unexpectedly into that third category of "mission-critical
team projects". But before that, I'd need to wait for the library ecosystem to
mature.

EDIT: I should clarify too that resource management is only part of the reason
why I don't worry about Rust code. There's also stuff like the ability to
_know_ that a given global variable is immutable, and can't be changed out
from under me. Or knowing that an innocent-looking block of code can't
unexpectedly kill the process (divide-by-zero notwithstanding).
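
As a small sketch of that first point (modern Rust syntax, illustrative name): a plain `static` is immutable, and the compiler rejects any attempt to reassign it, so you can _know_ it won't change out from under you.

```rust
// An immutable global: reassignment is a compile-time error.
static GREETING: &str = "hello";

fn main() {
    // GREETING = "goodbye"; // error: cannot assign to immutable static item
    assert_eq!(GREETING, "hello");
}
```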

------
crazychrome
Great post. I think it's the first thing I've read about Rust that clearly
states that the owner in Rust's ownership system is a scope. It may sound
ridiculous, but everything else I've read talks about ownership without saying
who exactly the owner is.

~~~
andrewflnr
Yeah, I sort of had to figure that out for myself too. Everyone talks about
ownership being recursive without specifying the base case.

------
spion
If borrowing is the default, why does it require additional syntactic noise?
Why isn't _transfer_ achieved by adding sigils such as "&"?

~~~
kibwen
1\. Familiarity to C++ programmers, for whom taking a reference via `&` is a
familiar operation.

2\. There are two different kinds of references: `&` is immutable and `&mut`
is mutable. One way or the other you'd need some sort of differentiation.

3\. Borrowing, though crucial to the idea of Rust, is still less fundamental
than ownership. You can have a language with ownership and without borrowing
(indeed, this is how most prior languages with ownership work). But you can't
have borrowing without tracking ownership.
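
A small sketch of point 2 in present-day Rust syntax (illustrative function names): the two kinds of borrow are distinguished at both the definition and the call site.

```rust
// An immutable borrow (`&`) grants shared, read-only access.
fn length(v: &[i32]) -> usize {
    v.len()
}

// A mutable borrow (`&mut`) grants exclusive, writable access.
fn push_one(v: &mut Vec<i32>) {
    v.push(1);
}

fn main() {
    let mut v = vec![0];
    assert_eq!(length(&v), 1);
    push_one(&mut v); // the `&mut` at the call site marks the mutable borrow
    assert_eq!(length(&v), 2);
}
```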

~~~
wycats
Also, there are a whole host of very common types (integers, etc.) that
actually want to be passed by value and copied.

There's no point in making a pointer to an integer and then dereferencing the
pointer in the callee when an integer fits into the same amount of space as a
pointer in the first place.

Very simple types (without heap pointers or destructors) can be passed by
value and efficiently copied. You end up with a bit of an intuition for "value
types" (simple types whose identity is bound up in the shallow memory that
they contain).

In other GC'ed languages, there is a fixed set of these kinds of values (often
called "primitives"). In Rust, any simple user-specified type can be a
"primitive".

~~~
Animats
If something is immutable (and doesn't own something else), it doesn't matter
whether it's passed by value or reference. That's a decision the compiler can,
and should, make as an optimization. In Modula, where pass-as-read-only was
the default, small items were passed by value and large ones by reference.
"Small" is a CPU-dependent optimization and should be left to the compiler,
not "user intuition".

The default should be pass as read-only reference. If you want a copy of
something, you should have to explicitly make one. If you want write access,
you should have to say so. Rust has all the right concepts, but the defaults
aren't in the safest direction. The experience of "const" in C/C++ teaches us
that they should be.

~~~
wycats
There is an important distinction in Rust: ownership.

Copyable objects are (by definition) simple enough that copying them does not
introduce aliasing. Objects with heap pointers or destructors cannot be
copied, because now there would be two "owners" of those values.

For what it's worth, I've found that a lot of these assumptions ("it doesn't
matter, let the compiler optimize it") really take important power away from
the programmer.

And before you find yourself saying "are you saying everyone should write
everything in assembler?":

1\. There is a distinction between writing code in assembler and having to use
a `&` sigil for "by-reference" in a language that has lambdas, traits and many
other high-level features.

2\. You should generally use Rust for cases where performance characteristics
are important, and where control over memory is important. If you're sure it's
not important, by all means go with a higher-level language that abstracts it
away.

I made this point (perhaps more clearly) in my talk at GoGaRuCo:
[http://www.youtube.com/watch?v=ySW6Yk_DerY](http://www.youtube.com/watch?v=ySW6Yk_DerY)

------
shadowmint
As someone who's been using (and really liking) rust for about 6 months, there
are two things in this article which are, I'd say, 'common misconceptions'
about rust:

    
    
        One of the coolest features of Rust is how it automatically manages 
        resources for you, while still guaranteeing both safety 
        (no segfaults) and high performance.
    

In rust there are two types of code: 'safe' and 'unsafe'. The above is only
true if your entire code path contains _only safe code_.

NOT if you merely write safe code yourself.

That is, if you write entirely 100% safe code, and use some library that uses
'unsafe' code at a low level (this is very common; for example, a C binding
or memory optimization), then rust does _NOT_ guarantee safety.

Furthermore, there is _no way_ to know if your code path includes unsafe code.

Rust is relatively safe; it's just not totally safe. Using rust does a
reasonable job of protecting you from segfaults most of the time; but it's not
a silver bullet.

    
    
        As soon as the program stops using the resource, its cleanup logic gets 
        invoked.
        
        ...
    
        Because only one scope owns an object at a time, you can tell just by 
        looking at it which objects will be destroyed when it's done executing.
    

This isn't always true.

Rust uses 'drop flags', which are extra bytes at the end of struct instances
in memory, to track objects and keep meta information about them to determine
if they should be dropped.

At the end of scope when the drop checker runs it checks against these drop
flags to determine if an object should be dropped or not.

(For example, if a closure captures a variable, it may not be dropped at the
end of the scope it would normally have been dropped in.)

There seems to be this myth that resource deallocation in rust is magically
determined at compile time and has 'zero cost'; it's simply not true.

There is a runtime cost to the drop checker; it's not particularly large, but
it's certainly not just a sequence of drop operations.

Don't get me wrong; I really enjoy rust, and I like it a lot. ...but those two
ideas pop up repeatedly in discussions about it, and they're not exactly
correct.

~~~
pcwalton
> That is, if you write entirely 100% safe code, and use some library that
> uses 'unsafe' code at a low level (this is very common; for example, a C
> binding or memory optimization), then rust does NOT guarantee safety.

What do you consider a "guarantee"? Do you consider pure JavaScript to
"guarantee" safety, even though there have been exploitable bugs in the JIT
and interpreter? Do you consider pure-computation, non-allocating Rust to be
safe, even though LLVM may contain codegen bugs that allow for undefined
behavior? Do you consider seccomp-isolated code to be safe, even though there
have been, and will likely be in the future, kernel bugs that allow sandboxed
processes to escape?

The point is that instead of talking about "100% safe", which is never the
case in the real world, I think it's better to talk about _trusted computing
bases_. That is, instead of saying "X is safe" or "X is unsafe", we talk about
what effect X has on the memory-unsafe _surface area_ of a program. Rust's
goal is to reduce the trusted computing base of an application written in it
to the hardware, compiler, unsafe blocks, and type system. This gives us
essentially the safety properties of "managed languages" (except I don't like
that term because it tends to imply garbage collection and lack of fine-
grained control over memory, which don't apply to Rust).

A specific example I like to give: Sure, we could hardwire (for instance) Vec
into the language, and therefore reduce the number of lines of unsafe code in
the libraries. But doing so would just be moving the unsafe code from the
unsafe blocks in the libraries to the compiler itself. There would be no net
gain in safety from it—compiler code can have bugs just as library code
can—and there would be a decrease in maintainability and flexibility.

You can think of Rust's safety properties as establishing a type of sandbox if
you'd like. All sandboxes have trusted computing bases, and Rust's compile-
time sandbox is no exception. But "safety" as applied to a sandbox still has
an important meaning. Chromium/Firefox OS's sandbox, for example, is a safe
sandbox, even though its security depends on strong assumptions about the
trustworthiness of the OS kernel and the IPC layer between the trusted and
untrusted processes.

> There is a runtime cost to the drop checker; it's not particularly large,
> but it's certainly not just a sequence of drop operations.

After the drop optimizations are done, this will only be true if you
_conditionally move_ an object on one or more branches. It's already true
today if the object gets SROA'd on the stack, as LLVM can then optimize out
the drop flags. We've talked about adding a lint so the compiler can
optionally warn and allow you to fix it if this performance cost is a concern
to you. (It's a very minor cost—one test and branch on a stack byte, and again
only if the object is conditionally moved.)

Note that C++ move semantics incurs essentially the same cost.
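
The conditional-move case can be sketched as follows (present-day Rust; the counter is just scaffolding to make the drop observable). Because `t` is moved on only one branch, whether it still needs dropping at the end of the scope is a runtime question, which is exactly what the drop flag answers:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts destructor runs so the drop behaviour is observable.
static DROPS: AtomicUsize = AtomicUsize::new(0);

struct Token;

impl Drop for Token {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

fn demo(flag: bool) {
    let t = Token;
    if flag {
        drop(t); // `t` is moved (and dropped) on this branch only
    }
    // Whether `t` still needs dropping here depends on `flag`, so the
    // compiler tracks its liveness with a runtime drop flag.
}

fn main() {
    let before = DROPS.load(Ordering::SeqCst);
    demo(true);
    demo(false);
    // Either path drops the Token exactly once.
    assert_eq!(DROPS.load(Ordering::SeqCst), before + 2);
}
```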

~~~
jerf
"We've talked about adding a lint so the compiler can optionally warn and
allow you to fix it if this performance cost is a concern to you. (It's a very
minor cost—one test and branch on a stack byte, and again only if the object
is conditionally moved.)"

Might I just make a meta comment that while that may not be a huge use case,
the ability to assert that some particular optimization is firing and get
warnings if I've done something to break it is something I've wanted out of a
language for a long time. I know it's easier said than done, but I'd love it.

------
kazinator
In garbage-collected languages with macros and an "unwind protect" operator,
we avoid explicitly closing files and other resources by using scoped binding
constructs implemented with macros.

    
    
       (with-locked mutex
          ;; ... critical section
       )
       ;; mutex is released here
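
Rust gets the same effect without a macro, because the unlock lives in the lock guard's destructor. A minimal sketch (present-day Rust, std `Mutex`):

```rust
use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0);
    {
        let mut guard = counter.lock().unwrap(); // critical section begins
        *guard += 1;
    } // `guard` goes out of scope here, and the mutex is released

    // The lock is free again, so this second lock succeeds.
    assert_eq!(*counter.lock().unwrap(), 1);
}
```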

~~~
ben0x539
That only works if the lifetime of a resource is a lexical region, right?

~~~
agumonkey
Good question. I wonder if there was some Lisp with linear logic that would
allow lexical and ~borrowed (sorry if I misuse the term) resource management.

~~~
malisper
It shouldn't be very hard to create a basic ownership system. Here is a
version I just wrote, which works by defining a new version of "defun" that
uses dynamic variables to keep track of ownership:

    
    
      (defvar *transferring* '() "Objects which are going to be transferred from one owner to another.")
      (defvar *resources* '() "The resources which need to be freed when the current procedure is done.")
    
      (defmacro new-defun (name args &body body)
        "Define a procedure which will have ownership over the resources it creates using any of the 'new' procedures."
        `(defun ,name ,args
           (let ((*resources* *transferring*))
             (setf *transferring* '())
             (unwind-protect (progn ,@body)
               (mapcar #'free *resources*)
               (mapcar #'free *transferring*))))) ; Resources which were supposed to be transferred but weren't, due to an error.
    
      (defun transfer (resource)
        "Transfer ownership of a resource from the current one to the next procedure called which can accept ownership."
        (push resource *transferring*)
        (setf *resources* (remove resource *resources*))
        resource)
    
      (defun new-open (&rest args)
        "Open a file whose owner is the first procedure up the stack which can accept ownership."
        (let ((file (apply #'open args)))
          (push file *resources*)
          file))
    

All that is needed now is a method "free" which will free any given resource.
With this, any resource allocated with a "new" procedure (the only one I wrote
was "new-open") will by put under ownership of the last procedure on the stack
which was defined with "new-defun". Transfer between owners can take place by
using "transfer". The best part is that more facilities are easy to build on
top of this. The "drop" procedure would be trivial to add. The only problem I
can see here is that there may be some issues if "free" throws an error, since
"unwind-protect" doesn't protect the cleanup-forms. This should be fixable by
just wrapping every call to "free" with "ignore-errors".

------
rcthompson
So in the first example, "from_file" is never used after the first call to
"io::util::copy". Is Rust smart enough to realize that this is the last use of
"from_file" and release it right then, or does it wait until it goes out of
scope at the end of the function?

~~~
steveklabnik
Currently, borrows are lexical. We don't want them to be that way forever.
[https://github.com/rust-lang/rust/issues/6393](https://github.com/rust-lang/rust/issues/6393)

~~~
wycats
To be more specific, automatic dropping will likely always be scope-based. In
the rare case where you need to drop something before its owning scope
completes, you can explicitly use `drop` as people have said downstream.

Borrows are a different story. Especially with mutable borrows (which are
basically a static lock on the value), the current lexical restriction is too
coarse-grained. There are always workarounds, but it will be nice when it's
fixed :)
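
A minimal sketch of that explicit early drop (present-day Rust; a `String` stands in for a resource like a socket). `drop` takes ownership, so the compiler rejects any later use:

```rust
fn main() {
    let data = String::from("hello");
    let len = data.len();
    drop(data); // ownership moves into `drop`, so the String is freed now,
                // before the end of the scope
    // data.push('!'); // error: use of moved value `data`
    assert_eq!(len, 5);
}
```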

~~~
rcthompson
Is there a specific reason for automatic dropping to be scope-based? Is it
because dropping things could have side effects (e.g. closing a filehandle)
and scope-based dropping makes it easy for someone looking at the code to
determine when these side effects will occur?

------
dengnan
Does anyone have good learning resources for Rust? I find it a very
interesting language, but the documentation on its website is quite limited.
Many topics are not covered, or are explained in vague terms. The new guide is
a good starting point but still needs some improvement.

~~~
doe88
For newcomers I think the documentation is quite good. But there is currently
a big problem with intermediate/advanced-level documentation: it is very
limited. Sometimes you'll read some code and say to yourself "I didn't know I
could do that, cool". To gain deeper knowledge you have to read RFCs, commit
logs, issues, and source code, and because that certainly isn't your main job,
it's really time consuming.

~~~
dengnan
Exactly! I think there's a lack of documentation that quickly goes through the
basics and spends more time on Rust-specific features. (Or am I too lazy to
find it?) For developers who have been programming for decades, we don't need
too much repetition about integers, characters, etc.

I personally like to use slides to learn things. I'm not sure if this is
typical, but slides are usually concise and easy to read.

~~~
steveklabnik
This kind of thing is coming. I wanted to let the Guide sit for a while before
I got to it.

------
chaotic-good
_In Rust, unlike in garbage collected languages, you never1 explicitly close
or release resources like files, sockets and locks_

GC languages try to emulate a machine with infinite memory so you don't need
to bother yourself with _memory_ management. That's not compatible with Rust's
resource management system, which manages all resources and treats memory as
just another type of resource. The same is true for C++, BTW: in good C++ code
you never release resources by hand, and use ownership for this instead (RAII).

------
illumen
For files, Python[0] does not need an explicit `with` to close the file. The
file gets closed when its reference count reaches zero.

[0] For the pedants repeating each other in the replies... Where by python, I
mean the implementation that 98% of people use, called python, which the
competition calls CPython. Yes it is a great feature of the implementation,
like the deterministic memory management through reference counting.

~~~
wycats
What happens if there's a cycle? It will get closed when the cycle is finally
collected.

You can rely on a GC (and refcounting + a cycle collector is absolutely a GC)
to finalize your resources for you, but it is quite difficult to reason about
when the file will eventually be closed.

Also, the reference counting semantics of CPython are implementation-specific.
From the PEP that introduced `with`:

> Note that we're not guaranteeing that the finally-clause is executed
> immediately after the generator object becomes unused, even though this is
> how it will work in CPython. This is similar to auto-closing files: while a
> reference-counting implementation like CPython deallocates an object as soon
> as the last reference to it goes away, implementations that use other GC
> algorithms do not make the same guarantee. This applies to Jython,
> IronPython, and probably to Python running on Parrot.

~~~
phunge
Props to Rust for tackling this head on, more languages should provide
resource allocation that's deterministic, predictable and syntactically
convenient.

CPython's behavior is nice, but it seems to me it came about by accident. Big
heavy resources use refcounting because _everything_ uses refcounting. Plus,
if CPython had true concurrency, across-the-board refcounting probably
wouldn't have lasted nearly as long (reason being: multithreaded refcounting
requires atomic ops which make them much more expensive).

But in the general language design world, IMHO, refcounting is actually a
pretty good compromise for resource management. Because:

1) The efficiency lost compared to Rust's approach is probably immaterial
(since the Big Expensive Object that you're managing dwarfs the refcounting
-- syscalls and file descriptors are expensive). So I don't think _zero_
overhead is critical.

2) Cycles are pretty easy to avoid when you're only refcounting Big Expensive
Objects (FDs, mmapped buffers, etc. etc.) For a general runtime, refcounting
is tricky. But if it's a special part of the language that's small, simple,
and only for resource management, it's pretty convenient. I think it's worlds
better than the approach of mixing GC and resource management -- language
designers should admit that's a terrible idea.

~~~
kibwen
I agree that refcounting is a great and usable compromise in this space, but
your own comment flirts with why I don't think modern languages are chomping
at the bit to base resource management on it: concurrency.

Granted, I have no idea how much overhead is imposed by atomic operations vs.
a stop-the-world or concurrent GC (if anyone has some data, I'd love to see
it!). But given how it's become de rigueur for new languages to come with a
baked-in concurrency story and emphasize concurrent applications, I don't
blame them for not wanting to tie themselves to the RC cart.

~~~
chetanahuja
_" Granted, I have no idea how much overhead is imposed by atomic operations
vs. a stop-the-world or concurrent GC."_

An atomic read/write/increment/decrement is a non-blocking (from the point of
the view of a user code running on the CPU) CPU operation. The CAS operations
required to do atomic inc/dec are relatively expensive compared to normal
memory read/writes of course but nowhere near the scale of stop-the-world GC
(where _all_ threads in the runtime have to be blocked for a potentially large
amount of time, sometimes measured in whole seconds or even 10's of seconds
for large heaps).

Anyone who has had to troubleshoot JVMs handling large heaps (in the ~10GB
range) quickly learns to love deterministic costs of refcounted resource
management.

~~~
pjmlp
That is an implementation issue.

There are plenty of JVMs to choose from, even pauseless ones.

~~~
chetanahuja
Hah.. "Implementation issue". That's a good one. It's like saying an SUV not
being landmine-proof is an implementation issue. Just because a landmine-proof
humvee exists somewhere doesn't mean it's attainable or practical for an
ordinary user.

------
jaekwon
This works well for system resources, but what about destructors for custom
objects?

~~~
steveklabnik
You implement the Drop trait yourself, and it Just Works.
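
A minimal sketch of that (present-day Rust; the type and the counter are illustrative scaffolding to make the destructor observable). The custom cleanup in `Drop::drop` runs automatically when the value goes out of scope:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts destructor runs so the cleanup is observable.
static CLOSES: AtomicUsize = AtomicUsize::new(0);

struct Connection; // stand-in for a type wrapping a real handle

impl Drop for Connection {
    fn drop(&mut self) {
        // Custom cleanup logic goes here; it runs automatically.
        CLOSES.fetch_add(1, Ordering::SeqCst);
    }
}

fn main() {
    let before = CLOSES.load(Ordering::SeqCst);
    {
        let _conn = Connection;
    } // `Drop::drop` runs here, with no explicit close call
    assert_eq!(CLOSES.load(Ordering::SeqCst), before + 1);
}
```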

------
hubert77
really interesting stuff. small typo.

fn person(person: &Person) -> &str { ... }

should be:

fn first_name(person: &Person) -> &str { ... }

------
spullara
"Rust achieves both of these features without runtime costs (garbage
collection or reference counting), and without sacrificing safety."

If it is freeing everything at the scope boundary it is incurring a huge
runtime cost relative to garbage collected languages. GC scales with the
number of live objects during GC while this scheme scales with the number of
allocated objects. It is far more predictable than GC but I don't believe that
it isn't more expensive.

~~~
masklinn
> It is far more predictable than GC but I don't believe that it isn't more
> expensive.

A GC has to determine liveness of objects, so it has to traverse the object
graph (or at least the most recent generation if the GC is generational) to
know which objects are still alive and which aren't. That's what "GC scaling"
refers to. Then it still has to deallocate dead objects, and pay the same
deallocation cost as scope-bound free.

~~~
spullara
It doesn't have to deallocate dead objects. That is sort of the point of most
collectors.

~~~
pcwalton
All GCs have to deallocate objects. It may be very cheap to do so, for example
when you have a semispace collector and no finalizers, but switching the
spaces is still a form of deallocation. Production-quality GCs don't use
semispace for everything because of the huge memory use overhead, and all
practical GC'd languages I know of need finalizers in some form.

~~~
spullara
Finalizers are certainly the exception rather than the rule for objects.
Deallocation to me represents some action taken for a particular object and
scales with the number of objects deallocated. Semi-space copying collectors
do not have this property. I think we only disagree on the semantics of
"deallocation" — which is why I specified my argument in terms of how things
scale. Similarly compacting collectors like Java's CMS also have this scaling
property without semi-spaces.

------
kolev
I love Rust, but really can't use it at work until it has a fully-featured AWS
API. I had similar issues trying to use D and Dart in the past. Go doesn't
have an official SDK either and although goamz isn't complete, the situation
there is much better.

~~~
walrus
You could start writing it.

~~~
kolev
Maybe in a parallel universe where I have time for this kind of thing... It
shouldn't be too hard to reverse-engineer botocore, use these JSON definitions
here [1], and generate code based on them.

[1]
[https://github.com/boto/botocore/tree/develop/services](https://github.com/boto/botocore/tree/develop/services)

------
m0th87
There are a couple of bugs in the sample code: `is_thirties` won't compile
because it's missing a type declaration, and the `person` function should be
called `first_name`.

------
yoodenvranx
Does anybody know if Rust will reach all those nice goals or are there some
theoretical / comp sci reasons why it might fail?

~~~
Rusky
Everything in this article already works (and has for quite some time). The
author is using it in production.

~~~
yoodenvranx
Yes, I know it has been working for quite some time; I was more wondering
whether there are some hidden inherent problems lurking below the surface, or
whether the theory behind Rust is sound, e.g. whether there are corner cases
where it breaks down or results in undefined behaviour.

~~~
kibwen
There's not yet any formal model that proves that the borrow checker is sound,
but such work is forthcoming.

~~~
dbaupp
[https://github.com/nikomatsakis/rust-redex](https://github.com/nikomatsakis/rust-redex)

------
justinsb
C++11 introduces unique_ptr, which has a lot of the same safety
characteristics. Rust's syntax does seem much nicer, though.

~~~
pcwalton
> C++11 introduces unique_ptr, which has a lot of the same safety
> characteristics.

Sure, other than not actually being safe (UB resulting from use after move of
unique_ptr being an obvious one; there are many others).

~~~
crantanplum
What part of that is undefined? When you move the pointer, the original
becomes null..

~~~
masklinn
> What part of that is undefined? When you move the pointer, the original
> becomes null..

Nope, the moved-from object is "valid but unspecified", not necessarily null.
And
[http://en.cppreference.com/w/cpp/utility/move](http://en.cppreference.com/w/cpp/utility/move)
specifically has this as an example of UB:

    
    
        std::vector<std::string> v;
        std::string str = "example";
        v.push_back(std::move(str)); // str is now valid but unspecified
        str[0]; // undefined behavior: operator[](size_t n) has a precondition size() > n
    

There's another fun UB, because std::move doesn't check for aliasing, so self-
assignment:

    
    
        v = std::move(v);
    

is also UB.

And of course, even if it _were_ null, pcwalton is talking about using it
after the move. Dereferencing a null pointer (to use it) is still UB.

~~~
lbrandy
OP is correct. You've quoted the general language around move semantics, but
unique_ptr itself has a stronger guarantee.

From 20.7.1
From 20.7.1

    
    
      Additionally, u can, upon request, transfer ownership to
      another unique pointer u2. Upon completion of such a
      transfer, the following postconditions hold:
    
      — u.p is equal to nullptr,
    
    

unique_ptr isn't what's unsafe here. Null pointers are. The difference,
admittedly, is mostly semantic. There is nothing undefined about using a
unique_ptr after moving unless you use the internal pointer in undefined ways.

~~~
pcwalton
> There is nothing undefined about using a unique_ptr after moving unless you
> use the internal pointer in undefined ways.

If you dereference (i.e. use) a moved unique pointer you get undefined
behavior.

When talking about "using a pointer" most people mean dereferencing it, not
for example comparing against null. It may be somewhat imprecise language, but
it's what security people mean when they talk about "use after free", for
example.

~~~
crantanplum
This is definitely outside the scope of my knowledge, but if you dereference a
null pointer, won't you get an illegal memory access?

Undefined behavior, at least colloquially, is about dereferencing a pointer to
memory that may or may not be accessed and may or may not work/crash-the-
program. That's what makes it undefined and an absolute disaster to track
down.

I'm pretty sure that if you try to dereference memory address zero, the OS
will bark at you.

Again, it's not my area of expertise, so please correct me if I'm wrong

~~~
dbaupp
Dereferencing null is not guaranteed to produce any particular behaviour. On
many systems, getting the machine to actually dereference address 0 will give
a segfault/illegal memory access, but the compiler optimises on the assumption
that this never happens, and so can break a program that accidentally "relies"
on it.

e.g.

\-
[http://stackoverflow.com/q/6793262/1256624](http://stackoverflow.com/q/6793262/1256624)

\- [http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html](http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html)

------
kevingadd
Using automated resource management to drive socket (and handle, in general)
lifetime is a terrible idea. Sometimes you really, really want to close a
handle at a given time. The _disposal_ of associated resources is another
matter, and automating that is fine, but leaving a socket open 5 seconds too
long is simultaneously 'correct' from a garbage collection perspective and
totally, totally not the right thing to do.

The key observation here is that things like closing sockets are actual
information channels, so tying them to implementation details like object
lifetime is a terrible idea. This, coincidentally, is why some attempts to tie
the lifetime of long-running tasks/threads to object lifetime are a bad idea.

~~~
ef4
There is no garbage collection in rust. There is no possible "five second
gap".

Wherever you would have written "socket.close()", in rust you can just say
"drop(socket)". The difference is, it takes advantage of the ownership system
to statically prove that you won't try to use the socket after close.

Or just let it go out of scope and it will be dropped immediately. _NOT_
eventually garbage collected.

~~~
monocasa
> Wherever you would have written "socket.close()", in rust you can just say
> "drop(socket)". The difference is, it takes advantage of the ownership
> system to statically prove that you won't try to use the socket after close.

Does a drop somehow say something to the type system as well, disallowing
reads and writes afterwards? I'm new to rust, sorry if that's a stupid
question.

~~~
pcwalton
> Does a drop somehow say something to the type system as well, disallowing
> reads and writes afterwards?

Yes.

~~~
msopena
I'm curious about the drop implementation, which I found here
([http://doc.rust-lang.org/src/core/home/rustbuild/src/rust-buildbot/slave/nightly-linux/build/src/libcore/mem.rs.html#348](http://doc.rust-lang.org/src/core/home/rustbuild/src/rust-buildbot/slave/nightly-linux/build/src/libcore/mem.rs.html#348)):

pub fn drop<T>(_x: T) { }

I understand that the drop function is simply taking ownership of the passed
value so Rust knows that once "drop" finishes, the _x can be 'dropped'.

But I thought that T had to be bound by the "Drop" trait (i.e. T: Drop) so
that Rust knows it's possible to insert the call to 'drop', like _x.drop()?

~~~
kibwen
The names are a bit confusing here, and I might petition to change them.

The `drop` here is just a simple library function and isn't technically
related to the `Drop` trait, which is implemented with magical compiler pixie
dust and allows you to define a destructor via a `.drop` _method_. Anything
that goes out of scope has this `.drop` method called automagically.

You're correct in that if you were calling `x.drop()` explicitly, you would
need to have a `Drop` bound on `T`.

