Announcing Rust 1.19 (rust-lang.org)
318 points by steveklabnik 12 months ago | hide | past | web | favorite | 80 comments

Great running into you at Shizen @steveklabnik!

Just started using Rust in a serious capacity this month to secure some C++ functions that are called by our Erlang apps, with great assistance from Rustler [1]. Several people have complained to me about the decision to remove async IO from Rust, but I'm really grateful that it happened, because it lets Rust focus on being the best at what it is. Erlang's concurrency primitives and Rust's performance & security are a match made in heaven.

[1] https://github.com/hansihe/rustler

Hey hey! You too. Sorry I had to skip town before we could grab a drink, it was a hectic trip!

> the decision to remove async IO from Rust

I'd be interested in hearing more about this, did they maybe mean green threads? https://tokio.rs/ is the big async-io project, it's certainly not removed in any sense!

I should clarify: by "remove async IO" I meant the decision not to include an event-loop primitive in the core language or stdlib, which is tantamount to complaining about GC not being part of Rust core :)

Metal IO is an awesome library as well:


Ah yes, I forgot that we had libuv at some point, duh! Thanks :)

Tokio depends on mio.

Incredibly excited to see unions available in stable Rust now!

The release notes mention my RFC, but a huge thanks also to Vadim Petrochenkov for the implementation, and all the myriad RFC contributors.

Now that unions have stabilised, are there any significant inexpressible things left in C APIs? Bitfields?

Yes, to the best of my knowledge, bitfields are the last thing in C APIs you can't express conveniently in Rust; you can handle them with careful macros, but that's annoying, just as it was for unions.

In terms of things you can express now but that could still use some work: anonymous structs and unions. You currently still have to name all the intermediate aggregates, even when the C API doesn't. See https://internals.rust-lang.org/t/pre-rfc-anonymous-struct-a... .

The tough thing about bitfields in C is their poor portability: they can get laid out differently in memory depending on platform and compiler. They're often avoided even in C when a good degree of interoperation is needed (e.g. structs used in networking or on-disk formats, etc.).

The scary thing about bitfields is also that in the context where they make the most sense, writing bits to CPU registers, they are also the easiest to misuse: writing one bitfield requires both reading and writing the whole byte (the minimum addressable unit), which can have unexpected effects when writing to registers. On some chips, reading a register can yield a different value than what was written, so the value you write the second time might cancel out what you wrote the first time, or each write may trigger something you only wanted to trigger once.

    myregister.tx_baudrate_bitfield = 1;  // Writes the whole myregister
    myregister.tx_now_bitfield = 1;   // Reads the whole myregister, then bitmasks and finally writes.
The same mistake can of course be made with manual masking, but then it's more obvious that you are also reading the register. When the code looks like the above, it's very easy not to realize that it also requires a read.
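For contrast, here is a rough sketch of what the explicit read-modify-write looks like with manual masks (the register and field masks are hypothetical; a plain `u32` stands in for a memory-mapped register):

```rust
// Hypothetical masks for two fields packed into one register word.
const TX_BAUDRATE_MASK: u32 = 0b0000_1111;
const TX_NOW_MASK: u32 = 0b0001_0000;

// Read-modify-write: the read is explicit in the source, unlike a
// bitfield assignment, which hides it.
unsafe fn set_field(reg: *mut u32, mask: u32, value: u32) {
    let old = reg.read_volatile();
    reg.write_volatile((old & !mask) | (value & mask));
}

fn main() {
    // Stand-in for a memory-mapped register.
    let mut fake_reg: u32 = 0;
    unsafe {
        set_field(&mut fake_reg, TX_BAUDRATE_MASK, 0b0101);
        set_field(&mut fake_reg, TX_NOW_MASK, TX_NOW_MASK);
    }
    assert_eq!(fake_reg, 0b0001_0101);
}
```

With a real register you'd use the actual mapped address instead of a local, but the shape of the access (one visible read, one visible write) is the point.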

Right. And we can't just say "we need bitfields", we need 100% C-compatible bitfields, for FFI.

But not very important for FFI, since bitfields rarely seem to show up in C APIs (presumably for exactly the reasons mentioned above).

A kind of similar issue appears even with enums, because it is hard to guess the size of the integer that a given enum is implemented with. This is why it is often advised not to use C enums in the public interfaces of libraries.

Gecko (and many other large C++ programs that care about memory usage of certain heap allocated types) uses bitfields often enough, and we've had to teach bindgen how to generate bitfields so that we can access them from Rust.

Were these intended as part of an external interface for Gecko? In which case I would say using bitfields (as opposed to explicit masks) was a mistake.

But yes, I can imagine that for Mozilla there is a need to mix Rust and C++ right inside an application at boundaries that are not usually considered external, in which case all kinds of non-portable C constructs are acceptable.

> A kind of similar issue appears even with enums, because it is hard to guess the size of the integer that a given enum is implemented with.

True; fortunately, Rust has a means of declaring the size of an enum, with #[repr(u8)] and similar. But you do have to figure out the size of the C enum.
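A minimal sketch (the C-side enum here is hypothetical, and I'm assuming the C compiler gives it a 4-byte representation):

```rust
// Mirror of a hypothetical C `enum color { RED, GREEN, BLUE };`.
// The repr attribute pins both the size and the discriminant values.
#[repr(u32)]
#[derive(Clone, Copy, PartialEq, Debug)]
enum Color {
    Red = 0,
    Green = 1,
    Blue = 2,
}

fn main() {
    assert_eq!(std::mem::size_of::<Color>(), 4);
    assert_eq!(Color::Blue as u32, 2);
}
```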

I have run into issues with bitfields, and flexible array members.

I'm not familiar with flexible array members, what are they?

Having an empty array at the end of the struct which is really in-line data:

    struct foo {
      int num_elements;
      char data[0];
    };

    struct foo *d = malloc(sizeof(struct foo) + num);
    d->num_elements = num;
    /* Now d->data is effectively an array char[num_elements] */

IIRC that should work in Rust, Foo would be a DST.

So this is tricky.

The way custom DSTs work in Rust is super annoying at the moment, but slightly less annoying for generics.

You are free to define a custom DST that is like:

    struct Foo {
        header: u32,
        flexible: [SomeType],
    }
However, there is no way to construct this type. The most you can do is calculate field offsets and write a ton of unsafe code.

If the type was instead

    struct Foo<T: ?Sized> {
        header: u32,
        flexible: T,
    }
you would be able to construct a `&Foo<[SomeType]>` from a `&Foo<[SomeType; N]>` via DST coercions.

This only works if you know the size of the array at compile time.
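A sketch of that compile-time-sized case, using `u8` as a placeholder element type:

```rust
// Generic header-plus-flexible-tail type; `T: ?Sized` allows the
// last field to be a slice.
struct Foo<T: ?Sized> {
    header: u32,
    flexible: T,
}

fn main() {
    // Fully sized: the array length is known at compile time.
    let sized: Foo<[u8; 3]> = Foo { header: 7, flexible: [1, 2, 3] };
    // DST coercion: &Foo<[u8; 3]> coerces to &Foo<[u8]>.
    let dynamic: &Foo<[u8]> = &sized;
    assert_eq!(dynamic.header, 7);
    assert_eq!(dynamic.flexible.len(), 3);
}
```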

For a runtime sized thing you have to implement a bespoke vector-like thing. You can find an example of that code in https://github.com/servo/servo/blob/e19fefcb474ea6593a684a1c... where we have a generic "HeaderWithSlice<H, [T]>" type which can be heap allocated as a header followed by the flexible DST [T].

This could be improved. The HeaderWithSlice thing could probably be a useful crate for implementing completely-heap-allocated vectors (where the len/cap are on the heap) or shared reference counted types with flexible members. We haven't really split it out as a crate because we don't actually ever mutate it so the amount of code we need is significantly less (but it's not as useful as it could be).

So yeah, flexible array members can be implemented in Rust, but it's a lot of hacky work. It's an equivalent amount of work and unsafety to write a custom Index impl on a repr(C) type with a zero-length array at the end. The DST doesn't actually get you much here.

I've never quite understood this one.

For example-- what happens if I create an array of foos and then malloc data of each element to some arbitrary size?

You don't do that. Each one is usually allocated on its own, and you keep a pointer to each. Though you could store them contiguously if each contains a length member or they're delimited in some other way.

This page[1] gives an explanation of flexible array members. They were introduced in C99.

[1] https://en.wikipedia.org/wiki/Flexible_array_member

I was pretty surprised, how the heck are unions different from enums?

I read that they're essentially the same as C unions (~untagged enums), but I have no clue how `match` works in that case.

A pattern on a union just interprets it as the specified field. It doesn't try to determine which one was actually written to last.
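A small sketch of what that looks like (field names are hypothetical):

```rust
// An untagged union: `i` and `f` share the same four bytes.
union IntOrFloat {
    i: u32,
    f: f32,
}

fn main() {
    let u = IntOrFloat { f: 1.0 };
    // Reading any field is unsafe; the compiler doesn't track which
    // field was written last.
    let bits = unsafe { u.i };
    assert_eq!(bits, 0x3F80_0000); // the IEEE-754 bit pattern of 1.0f32
    // A union pattern with a binding is irrefutable: it just
    // reinterprets the bytes as the named field.
    unsafe {
        match u {
            IntOrFloat { i } => assert_eq!(i, 0x3F80_0000),
        }
    }
}
```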

One thing that recently surprised me is that Rust lacks default values and named arguments for functions.

Some consider it an antipattern, or a source of mistakes, to be able to leave parameters unfilled or to reference them by name. However, some interfaces easily require something like 50 different function parameters that cannot be removed in a simple way, and they all make sense in different configurations. Without default values and named parameters you're lost there. I don't get this design decision on Rust's side at all.

Coming from Python I agree that the lack of keyword/default arguments is initially conspicuous, though in practice library authors seem content to implement builder patterns to work around their absence. While I do think the builder pattern is less elegant than supporting keyword arguments, in practice it avoids the unfortunate pattern I've seen in some Python APIs of just crying #yolo and cramming a hundred parameters into a single function call. I've never seen a good interface that requires "like 50 different function parameters"; call me zealous, but if I were to design my own programming language, I'd make it a hard compiler error to have more than four parameters to a function. :P

> I'd make it a hard compiler error to have more than four parameters to a function

There's already such a language, it's called Haskell and it only allows one parameter per function.

Haskell has currying, which is cheating. :P

Saying library authors are content with their absence is probably a bit too strong. There's not really another option, and I doubt many folks would choose not to author libraries for this reason alone. I personally have found their absence annoying when writing Rust libraries.

I'd probably be in favor of a hard cap on # of function arguments. :)

But there is another option: a configuration struct, potentially combined with an implementation of Default. Indeed, part of the reason this conversation has gone on so long is that there's one camp who just wants to emulate keyword arguments via additional sugar for structs and Default.

(There's also another another option, which is to have a different function for each combination of parameters. This obviously doesn't scale in the large, but it's perfectly acceptable for functions that take only a single optional parameter, which IME is a plurality of APIs that want optional parameters).

While you can misuse a feature, you don't have to. My example is extreme and you probably don't run into something like that often depending on what you do. There could be other solutions. There surely are other solutions that are neither more clear nor better in any way. Sometimes you just got to do what you got to do. What surprised me about Rust is that it has all those cool features but lacks a very basic one that most programming languages have.

> a very basic one

With my language-nerd hat on...

While they may seem basic as a user, designing a language means you need to think about edge-cases. Methods in Rust have some special rules around dispatch, and in order to implement one or both of these features, all of that stuff needs to be considered and designed.

In other words, a lot of work goes into new features, even ones that are easy to use.

In Rust's case, we haven't ruled out adding these features completely, but nobody has put in that work to come up with a proposal. Part of that is that while people tend to see the lack of these features as a mild annoyance, it's not enough to prioritize over other work. We'll see how it all shakes out.

Want to echo this part:

> nobody has put in that work to come up with a proposal.

Features like this mainly need a champion who really cares about getting it into the language & can work with the language & compiler teams to complete that process.

My example of misuse is not to forswear the issue entirely, only to demonstrate that there do exist philosophical objections to keyword arguments. Trust me, I've long been an advocate of keyword/default arguments for Rust. :P The last time that anyone made a push for them was in the run-up to 1.0, when it was decided there were more pressing issues than keyword/default args (alongside a bevy of other features), and the decision to implement them was officially postponed. I might get around to submitting an RFC to revive them sometime this year, though I'm still unsure about some of the semantic details that we might want to consider.

Additional note to static_noise: would you like me to contact you if I eventually revive one of the postponed RFCs for keyword/default arguments, so that you can weigh in? Twitter, Github, etc.?

When a feature is present, without strong disincentives, it will be misused; consider for example matplotlib: many functions have over 10 possible default arguments, and discoverability is almost nil (stack exchange is the place to look for answers).

If you do have the struct implementing `Default`, as suggested from sibling comments, and it's fully public, you can also use the "functional update syntax" (where the `..` is):

    #[derive(Default)]
    struct Config {
        interesting_thing: bool,
        a: i32,
        b: i32,
        c: i32,
    }

    fn main() {
        let config = Config {
            interesting_thing: true,
            ..Default::default()
        };
    }
In addition, the builder pattern is especially useful for larger configuration objects like this.

I think the derive_builder crate is worth a mention here:


The main reason I haven't used derive_builder much is that I'd much rather have a compile-time guarantee that all the field that I want are filled out than have to check for an error at runtime (or worse, just `unwrap/expect` everywhere). I'm not sure there's currently a way around this on stable, but I'd personally prefer writing boilerplate for my types than having to handle errors in places that shouldn't need them.
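For what it's worth, you can get that compile-time guarantee today with a "typestate" builder, where the missing-required-field state lives in the type. A sketch (all names hypothetical):

```rust
use std::marker::PhantomData;

// Marker types tracking whether the required `host` has been set.
struct Missing;
struct Present;

struct RequestBuilder<HostState> {
    host: Option<String>,
    port: u16,
    _state: PhantomData<HostState>,
}

impl RequestBuilder<Missing> {
    fn new() -> Self {
        RequestBuilder { host: None, port: 80, _state: PhantomData }
    }
}

impl<S> RequestBuilder<S> {
    fn port(mut self, p: u16) -> Self {
        self.port = p;
        self
    }
    fn host(self, h: &str) -> RequestBuilder<Present> {
        RequestBuilder { host: Some(h.to_string()), port: self.port, _state: PhantomData }
    }
}

impl RequestBuilder<Present> {
    // `build` only exists once `host` has been provided, so forgetting
    // it is a compile error, not a runtime Result.
    fn build(self) -> (String, u16) {
        (self.host.unwrap(), self.port)
    }
}

fn main() {
    let (host, port) = RequestBuilder::new().host("example.com").port(8080).build();
    assert_eq!(host, "example.com");
    assert_eq!(port, 8080);
}
```

It's more boilerplate than derive_builder, but nothing can fail at runtime.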

Maybe they'll get there with time, but with 50 different parameters (... for one function? really?) I'd say put most of them in some kind of an options object, which certainly can have defaults and also checks your values at compile-time.

Yes, the currently-considered-idiomatic solution here is the builder pattern.



I count about 40 plus more which are not listed.

You surely could first make a dictionary or structure and fill it with these arguments. But then "kwargs" are just that, a dictionary.

I find the interface of the plot function pretty straightforward. There are a lot of options but it's pretty clear what they do and when to use them. Most often you use only some of them - but often different combinations. Splitting this up into multiple helper objects that need to be constructed and filled beforehand would turn the default one-liner into a ten-liner which is not better.

> But then "kwargs" are just that, a dictionary.

Maybe Ruby on Rails is an exception, but while I used to be a fan of default/keyword arguments, especially in combination, seeing how they were used there made me very much not like them any more. It's impossible to tell what's going on.

How about how Smalltalk, Obj-C, Swift uses them?

It's very clear there...

FWIW, Obj-C doesn't have keyword arguments. It just has infix method names. But all arguments in an Obj-C method are required, and the order is significant.

Similarly, in Swift, the order of defaulted arguments is significant. So you wouldn't see anyone do something like that crazy matplotlib method in Swift, because remembering the order of all the arguments in that function is impractical.

Yeah when we do named args it will very likely be based on Swift, but a bit different due to backwards compat and the desire to integrate it into existing methods like Vec::from_raw_parts and ptr::copy.

> However some interfaces easily require like 50 different function parameters

These interfaces are poorly designed; as in natural language, you almost never need more than a subject, direct object, and one or two indirect objects, any of which may themselves be compound entities.

A call with more than about four parameters has usually decomposed at least one thing that should be composite in the argument list.

Builder pattern/defaults. I personally find it much more straightforward/organized than using default values and/or named parameters. Python loves named parameters and sometimes their over usage drives me crazy.

I agree about default values, but named parameters have other benefits beyond acting as makeshift configuration options.

1) More readable code. The classic is foo.Bar(true). It breaks the flow of reading to have to hover and see what that 'true' means. Much nicer to see foo.bar(launchMissiles: true).

2) Protects from a particular class of dumb mistakes. You have a function foo(x, y) where x and y have the same type. You refactor it so that one of the parameters isn't needed anymore. It's surprisingly easy (read: I've seen it, and I've done it), when you clean up the function calls, to accidentally delete x instead of y or vice-versa. Named parameters prevent that.

> 1) More readable code. The classic is foo.Bar(true). It breaks the flow of reading to have to hover and see what that 'true' means. Much nicer to see foo.bar(launchMissiles: true).

This is a case where I've started using Enums in Java. For example:

In Rust, you could have a macro to make defining these types easier (and have it automatically generate to_bool and from_bool methods):
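As a rough sketch of the idea (the macro and all names here are hypothetical):

```rust
// Hypothetical macro generating a two-variant "named boolean" enum
// with to_bool/from_bool conversions.
macro_rules! bool_enum {
    ($name:ident, $yes:ident, $no:ident) => {
        #[derive(Clone, Copy, PartialEq, Debug)]
        enum $name {
            $yes,
            $no,
        }

        impl $name {
            fn to_bool(self) -> bool {
                self == $name::$yes
            }
            fn from_bool(b: bool) -> Self {
                if b { $name::$yes } else { $name::$no }
            }
        }
    };
}

bool_enum!(LaunchMissiles, Launch, HoldFire);

fn fire(launch: LaunchMissiles) {
    if launch.to_bool() {
        println!("launching");
    }
}

fn main() {
    // The call site now reads like a keyword argument:
    fire(LaunchMissiles::Launch);
}
```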


> However some interfaces easily require like 50 different function parameters

Please tell me you're not responsible for this monstrosity https://salilab.org/modeller/9.18/manual/node315.html

Can't you just do that with passing a struct + Default[1]?

[1] https://doc.rust-lang.org/std/default/trait.Default.html

It seems like you can... but then this could be done implicitly at no loss and lead to less clutter. It's not hard to understand what default parameters or named parameters do.

Hm. I'm not sure about the addition of unions. Why add something that is unsafe to read or write? You need an additional mechanism to let you know which type it is OK to access.

They mention the case where the type can be distinguished by the least significant bit, but wouldn't it be better to handle that case as an enum? That is, the least significant bits define the enum tag, while the remaining bits define the associated value.

(By the way, I really mean this as a straight question, not a criticism in the form of a rhetorical question. I really don't know enough about it to be criticizing it.)

As other commenters have mentioned, the demand here is almost all due to the desire for smoother interoperation with C code. What's gone unsaid so far is that, despite still requiring the `unsafe` keyword for many operations, this feature helps make Rust code more safe when calling into C, because it simplifies the interface and eliminates the need for hacked-up handwritten workarounds. IOW, a little bit of standardized unsafety to replace a larger amount of bespoke unsafety.

FWIW the hacked up workaround that bindgen uses is _beautiful_


Basically, if you have a union of A, B, C, you create a struct with three zero-sized fields using BindgenUnionField<A>, and then add a field after that containing enough bits to actually fill out the size. Because the BindgenUnionField is zero sized, a pointer to it is a pointer to the beginning of the struct, and it has an accessor that treats the pointer as the contained type.

This makes the API for field access `union.field.as_ref()` instead of `union.field`, but that's still pretty clean.

It's still a hack, and I'll be happy to see it go, but it's a really fun hack.
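A rough reconstruction of the trick, simplified (real bindgen output differs in the details):

```rust
use std::marker::PhantomData;

// Zero-sized marker standing in for one union field.
#[repr(C)]
struct UnionField<T>(PhantomData<T>);

impl<T> UnionField<T> {
    fn new() -> Self {
        UnionField(PhantomData)
    }
    // Since the field is zero-sized and sits at offset 0 of a repr(C)
    // struct, a pointer to it is a pointer to the union's storage.
    unsafe fn as_ref(&self) -> &T {
        &*(self as *const Self as *const T)
    }
}

// C's `union { uint32_t a; float b; }` emulated as a struct.
#[repr(C)]
struct FakeUnion {
    a: UnionField<u32>,
    b: UnionField<f32>,
    _storage: [u8; 4], // enough bytes for the largest field
    _align: [u32; 0],  // forces 4-byte alignment, as bindgen does
}

fn main() {
    let u = FakeUnion {
        a: UnionField::new(),
        b: UnionField::new(),
        _storage: 1.0f32.to_ne_bytes(),
        _align: [],
    };
    assert_eq!(unsafe { *u.b.as_ref() }, 1.0);
    assert_eq!(unsafe { *u.a.as_ref() }, 0x3F80_0000);
}
```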

If only English already had a word for "beautiful, yet reprehensible". :)

Well "awful" etymologically means "inspiring awe"...


You know this word in another language?

There must be one in German. :)

Probably just the concatenation, like beautifulyetreprehensible.

>Because the BindgenUnionField is zero sized, a pointer to it is a pointer to the beginning of the struct

Is that a stable assumption?

Because it's repr(C) IIRC yes.

Unions are used in C APIs so having them is nice for FFI.

Besides, enums aren't always optimal for some things. Let's say that you have an array of 32-bit values whose prime-numbered indexes contain either a signed or unsigned 32-bit integer, depending on a single global boolean tag. You can't encode that invariant into a (non-dependent) type system. If you want to do it safely, using enums, you are going to have a tag for every value, and that's going to cost you. (A bit crazy example, but bear with me.)

Unions are a way to circumvent the type system a bit. They allow you to keep track of the stuff inside memory slots using the way you see the best. But there's a reason they're unsafe - the responsibility is on you!

You don't need to get as tricky as that.

Rust doesn't let the programmer specify the layout of enums in detail right now, so you can't specify where the compiler should place the discriminator.

The example I care about is a word which is a pointer if the bottom bit(s) are 0, but otherwise contains a bunch of packed fields, where the least significant bits are a nonzero value I can do something useful with.

I believe that they've framed it in this article as mostly for compatibility with C/C++. A simple example: JS objects are often represented as a single f64, with the NaN space punned for 32-bit pointers and 53-bit integers. If you wanted to put some Rust code in that environment, you could define a union that exposes a safer API over the bit twiddling. Presumably if you have the memory to spare or are unconstrained in design you won't use this feature, which is part of why it took so long to get into the language.


TL;DR: FFI with C is much harder without unions. There are smaller reasons as well.

They're largely meant as an easier interop with the C ABI and C APIs that use unions. You wouldn't be using them in your higher level Rust library.

Other commenters have mentioned the use case for FFI, and there's a very small niche use case for type punning, but there's one thing that this enables you to do that was absolutely not possible with the language previously.

This is zero cost stack allocation of data in a way which avoids destructors.

Basically, currently, in Rust, if you want to allocate a type and avoid destructors being run, you have to write a wrapper around `Option<YourType>` that nulls the option in its destructors. Or you heap allocate it and turn the box into a raw pointer after allocation. Both have overhead. The zero-overhead way of doing it is to stack allocate an array and cast pointers, but then you need to know the size beforehand.

With unions, you can stack allocate a `union Foo {x: YourType}` and just use that. Unions don't have destructors, so this stack allocates enough space for your type, and lets you unsafely but conveniently access it as your type (no ugly pointer casting hacks), but you can guarantee that destructors won't be run.
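Modern Rust ships this trick in the standard library as `std::mem::ManuallyDrop`, which is itself implemented as a single-field union. A small sketch:

```rust
use std::mem::ManuallyDrop;

struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    {
        let _normal = Noisy("normal"); // destructor runs at scope exit
    }
    {
        // Stack-allocated and fully usable, but its destructor will NOT
        // run at scope exit -- even during a panic unwind.
        let suppressed = ManuallyDrop::new(Noisy("suppressed"));
        println!("using {}", suppressed.0);
    }
}
```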

The obvious question here is -- why is this even necessary? Surely you can just call mem::forget to avoid destructors right before the function returns.

However, destructors also get run whilst panicking, so if your function accepts a callback, and that callback panics, you can't avoid destructors without the overhead mentioned before.

For a concrete example of this use case, check out ArcBorrow::with_arc() (https://doc.servo.org/servo_arc/struct.ArcBorrow.html). ArcBorrow<'a, T> is basically a borrowed reference to a T that is known to be backed by an Arc (atomic reference counted type). You can obtain borrowed references to an Arc normally, which is great -- lets you share the data without bumping atomic reference counts all the time and paying the atomic overhead. But if you have an &T -- a borrowed reference to a T -- there's no way to bump the reference count on that if you need to escape the borrow; since there's no guarantee that the &T borrows from an Arc allocation and not something else. So you must pass down an &Arc<T>, and that has double indirection. There are other reasons (pertaining to the existence of RawOffsetArc, which has to do with FFI constraints) as to why &Arc<T> won't work for us there, but I won't get into those. ArcBorrow<'a, T> lets us freely pass around borrows of &T which can be cloned as an Arc if necessary.

Anyway, ArcBorrow has a with_arc method (https://doc.servo.org/servo_arc/struct.ArcBorrow.html#method...). This takes a closure and passes an &Arc<T> to it.

But we don't have an Arc<T>, we have an ArcBorrow<T>, which has a different representation (in particular, ArcBorrow contains a pointer to the data, whereas Arc contains a pointer to the allocation, which starts earlier because of the refcount).

So we construct a fake Arc<T> on the stack (https://doc.servo.org/src/servo_arc/lib.rs.html#884-907), and share it with the closure. Because it's a fake Arc<T> we can't actually let its destructors be run, so we put it inside NoDrop, which on nightly uses unions (but on stable uses the non-zero-cost methods I mentioned above).

I use this same trick in array-init (https://github.com/Manishearth/array-init/blob/a0cb08928b42d...), where I stack allocate an uninitialized array and let you fill in the elements with a closure. If the closure panics the destructor of the _partially_ initialized array should not run, so again, it's in a NoDrop.

In general when writing unsafe abstractions you often need escape hatches like these.

Stupid question, since I'm not sober...why doesn't ArcBorrow just store a reference to the Arc and also a reference to the underlying T? That would solve the double indirection and also give the ability to clone the Arc?

Great effin post btw, i spent about 40 minutes readin that shit

Not a stupid question! That would be the way to do it in a Rust program with far fewer constraints or interacting tradeoffs.

> why doesn't ArcBorrow just store a reference to the Arc and also a reference to the underlying T

That's two words you're copying around on the stack.

Admittedly, that's a negligible cost. We don't care about that cost. I bet that cost never shows up in profiles. It would be premature to optimize for that cost :)

The real reason is within the "There are other reasons" I mentioned above ;)

These other reasons have to do with RawOffsetArc; it's a long story. The short version is that you may not always have an Arc<T> that you're creating an ArcBorrow from, it may be something else.

So basically this code is Servo's style system, and it is being used by Gecko (Firefox's browser engine). Gecko is in C++.

Servo's style system is quite parallel. So certain things are shared via Arc<T>. Pretty normal.

However, some of these things are shared with C++ code too! We've taught Gecko's refcounting setup about what an Arc is, and it does the appropriate FFI call when it needs to addref/decref it. This is all great. It works. These types are otherwise opaque to Gecko, and it does FFI to get to each.

However, we have one struct, ComputedValues, which stores all the "style structs" (where computed CSS styles go). It's basically a bunch of Arc<T>s of these style structs. ComputedValues is a Rust-side thing, and it's stored in a heap allocation dangling off a "style context" in the C++ code. It's otherwise opaque to C++.

The main operation Gecko does with ComputedValues is fetch a style struct. The style structs are C++ structs which both C++ and Rust can understand. So these getters are a bunch of FFI calls that take ComputedValues. This FFI call turns out to have an overhead that turns up in profiles, and there's an extra cache miss involved in hopping to the ComputedValues allocation (which also turns up in profiles). Both are major.

The fix is to store ComputedValues inline in the style contexts, and make it non-opaque so that C++ can actually read the types. Basically, C++ should see some regular pointers to the style structs. Rust will see Arc<T>.

But Arc<T> is a pointer to the allocation of the Arc. Arc is allocated with the refcount first, and the type T next. And the Rust struct layout isn't something C++ can understand, so code that assumes the offsets and does pointer arithmetic will be brittle and can change in a Rust upgrade. Thus arises RawOffsetArc<T> (https://doc.servo.org/servo_arc/struct.RawOffsetArc.html), which is represented as a pointer to the T, but it has the foreknowledge that T is arc-allocated and has a refcount preceding it. RawOffsetArc<T> is the same as an Arc<T> in all other aspects.

So these structs are now stored in a RawOffsetArc<T>, to make the pointers match with the C++ side representation.

However, there's also pure servo code that uses Arc<T> for this. So we can't just pass around &RawOffsetArc<T> because the servo code doesn't have that. It's not easy to migrate, nor do we really want to (Arc<T> has some more APIs and I don't want to add support for all that to RawOffsetArc). So it becomes easier to create ArcBorrow<T> as something that is guaranteed to have come from either a RawOffsetArc<T> or an Arc<T> (both are the same in behavior and heap representation, just that their stack pointer representation is offset. Converting between the two is a simple pointer bump on the stack). Because they're the same, ArcBorrow<T> can just be a pointer to the T, and the rest works out.

This is one of the reasons -- the other reason is that unlike Rust, where the refcounting is done by the wrapper (you can stick anything in Arc<T> and Arc<T> will handle the refcount), Gecko puts the burden of refcounting on the inner type. This means that if you use RefPtr<Foo> in Gecko, RefPtr will not create a refcount for you, Foo is expected to have AddRef()/Release() methods, which usually bump a refcount field it defines. Furthermore, it's taken as a given that if you have a `Foo`, it is heap allocated (and thus can be refcounted).

This means that having Foo instead of RefPtr<Foo>* is pretty common in Gecko. And it gets passed over FFI a lot to Servo, which again has to either construct transient Arcs, or treat it as an ArcBorrow. We currently do both, but I'm planning on migrating stuff to be more reliant on the ArcBorrow model since it leads to cleaner code.

(A lot of this complexity stems from the fact that browser engines are pretty tightly coupled codebases, and thus the "style system" doesn't have a clean API boundary. There's a lot of reaching into each others' guts that is necessary to make this work)

Wow, a break yielding a value from within a loop is awesome! Do any other langs have that?
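(For reference, the new Rust 1.19 form looks like this; `loop` is an expression, and `break` can now carry its value:)

```rust
// As of Rust 1.19, `break` can carry the loop's value.
fn first_over_nine() -> i32 {
    let mut i = 0;
    loop {
        i += 1;
        if i > 9 {
            break i;
        }
    }
}

fn main() {
    let x = first_over_nine();
    assert_eq!(x, 10);
}
```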

Sure, Ruby:

     x = (1..100).each do |i|
       if i > 9
         break i
       end
     end
Evaluating that gives

    > x
    => 10
You can also supply a value to next within a loop, which is occasionally useful

If you're using the loop macro, you can do it from Common Lisp (see [0]).

  (block outer
    (loop for i from 0 return 100) ; 100 returned from LOOP
    (print "This will print")
    200) ==> 200

  (block outer
    (loop for i from 0 do (return-from outer 100)) ; 100 returned from BLOCK
    (print "This won't print")
    200) ==> 100
[0] - http://www.gigamonkeys.com/book/loop-for-black-belts.html

EDIT: I always do formatting wrong on here.

One can also break out of the LOOP and return the so far accumulated value:

  CL-USER> (loop for i from 1 below 10
                 sum i
                 when (= i 6)
                   do (loop-finish))

Not just with LOOP, RETURN works with every standard iteration construct.

I feel like this has to have precedent from somewhere, most likely from another expression-oriented language, but I can't seem to find one right now. I've checked the RFC expecting to see a discussion of precedence from other languages, but no luck so far.

ruby does this

have code around my app doing things like:

  loop do
    code = random_string(6)
    break code unless code_already_used?(code)
  end
