Hacker News
Rust once, run everywhere (rust-lang.org)
340 points by steveklabnik on Apr 24, 2015 | 105 comments



Not sure why this post doesn't mention it, but for nontrivially sized C APIs, you don't have to convert the function signatures to Rust manually; instead you can use rust-bindgen:

https://github.com/crabtw/rust-bindgen

(Caveat: I think it would need to be fixed up to avoid unstable APIs in order to run on stable releases of Rust, as opposed to nightlies.)


rust-bindgen is an amazing project, dating all the way back to Rust 0.1 and dutifully updated by various community members throughout more than three years of catastrophically breaking changes. It's an invaluable boon to anyone binding a C API from Rust.

  > I think it would need to be fixed up to avoid unstable 
  > APIs in order to run on stable releases of Rust, as 
  > opposed to nightlies.
The code generated by rust-bindgen won't be using any unstable features, so even if rust-bindgen itself happens to require a nightly build of Rust (which would be less than ideal), it can still be used to generate interfaces that can be used by stable Rust.
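For a rough sense of what that output looks like: given a small C header, a generator like rust-bindgen emits plain `#[repr(C)]` structs and `extern` declarations, none of which need unstable features. The header and Rust below are a hand-written approximation for illustration, not actual bindgen output:

```rust
// Hypothetical C header being bound:
//
//     typedef struct point { int x; int y; } point;
//     int point_dist_sq(const point *a, const point *b);
//
// Hand-written approximation of the generated Rust:
use std::os::raw::c_int;

#[allow(non_camel_case_types)]
#[repr(C)]
#[derive(Clone, Copy, Debug)]
pub struct point {
    pub x: c_int,
    pub y: c_int,
}

extern "C" {
    pub fn point_dist_sq(a: *const point, b: *const point) -> c_int;
}
```

Because the output is ordinary stable Rust like this, it compiles on stable releases even if the generator itself needs a nightly.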


Interesting. Can rust-bindgen support object-oriented C?


Not sure what you mean by "object oriented C". It doesn't support C++ presently (something I'd love to see improved); there are some projects that do Objective-C interop; if you just mean C that uses structs in an OO-like way, it shouldn't be that different from any other C.


I assume parent meant something like GObject, where you get opaque reference counted structs. (Generally something like gobject-introspection is used to generically bind it to other languages, not auto-generated wrappers.)


Yes, when using structs in an OO-like way. Does it generate a C-style Rust wrapper, or can it generate a class-based Rust wrapper, perhaps with some guidance?


AFAIK, rust-bindgen doesn't currently have support for translating C functions and structs into anything other than the same things in Rust. It's somewhat idiomatic for manually-written bindings, layered on top of the generated ones, to take care of things like providing a memory-safe interface (which can't really be done automatically), converting between native Rust types and C representations if necessary (perhaps including things like Result vs. error codes or nulls), and adding object orientation. If you want to go full automatic, though, you can do a lot with Rust macros, so give them a try.

Examples of manual wrappers on top of bindgen - there are many:

http://erickt.github.io/blog/2014/12/13/rust-and-mdbm/

https://github.com/kenz-gelsoft/wxRust/blob/rust-servo/src/b... (I don't like the design of this one, actually)

https://github.com/rust-gnome/gtk/blob/master/src/widgets/ac... (better)
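The layering described above can be sketched roughly as follows. The C API (`db_open`/`db_close`) is invented for illustration and is stubbed out in Rust here so the sketch stands alone; in a real binding the `ffi` module would be bindgen output linking the actual library:

```rust
mod ffi {
    use std::os::raw::c_char;

    // Opaque handle type, as a generator would emit for `struct db`.
    #[allow(non_camel_case_types)]
    #[repr(C)]
    pub struct db { _private: [u8; 0] }

    // In a real binding these would be `extern "C"` declarations
    // generated by rust-bindgen; they are stubbed out in Rust here
    // so the sketch compiles and runs on its own.
    pub unsafe fn db_open(path: *const c_char) -> *mut db {
        if path.is_null() {
            return std::ptr::null_mut();
        }
        Box::into_raw(Box::new(0i32)) as *mut db
    }

    pub unsafe fn db_close(handle: *mut db) {
        drop(Box::from_raw(handle as *mut i32));
    }
}

use std::ffi::CString;

// Hand-written safe, owning wrapper layered on top of the raw bindings.
pub struct Db {
    raw: *mut ffi::db,
}

impl Db {
    pub fn open(path: &str) -> Result<Db, String> {
        // Interior NULs are rejected here rather than truncating the path.
        let c_path = CString::new(path).map_err(|e| e.to_string())?;
        let raw = unsafe { ffi::db_open(c_path.as_ptr()) };
        if raw.is_null() {
            Err(format!("failed to open {}", path)) // NULL becomes Err
        } else {
            Ok(Db { raw })
        }
    }
}

impl Drop for Db {
    fn drop(&mut self) {
        // Ownership guarantees db_close runs exactly once.
        unsafe { ffi::db_close(self.raw) }
    }
}
```

The wrapper is where the memory safety and Result-vs-NULL conversion mentioned above live; the raw layer stays a mechanical translation.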


This ownership stuff is such a blessing. I'm currently working on a contract where we're trying to fix up some code that crashes under heavy load, as well as troubleshoot some perf issues. The root cause is that it's not clear who actually owns what until when, so in some cases an object is destroyed while some code still thinks it can use it. Fun.

For perf, 30% of the CPU is burned in malloc/free, due to them copying strings around. The system has an arena allocator built in, and many of these strings might be able to go in there. Except no one is sure exactly how long the lifetime is on these things. So everyone copies everything just to be sure. Rust would force addressing this kind of thing up front. (And this project could have added a refcounted structure or something, but it's a few hundred kloc of C and C++ so...)
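To make the connection concrete, here is a minimal sketch (illustrative only, not the project's code) of how Rust turns the "who owns what until when" question into a compile-time check:

```rust
// `consume` takes ownership: the String is freed when it returns.
fn consume(s: String) -> usize {
    s.len()
}

fn ownership_demo() {
    let header = String::from("Content-Type: text/html");

    // Borrowing: no copy, ownership stays with `header`.
    let len_by_ref = header.len();

    // Moving: ownership transfers into `consume`.
    let len_by_move = consume(header);

    // Using `header` here would be a compile error ("value moved"),
    // not a use-after-free discovered under heavy load:
    // println!("{}", header);

    assert_eq!(len_by_ref, len_by_move);
}
```

Any copy that remains is an explicit `.clone()`, so "copy everything just to be sure" stops being the path of least resistance.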


Have you considered using something like boehm-gc to profile the app to get a better handle on item lifetimes, leaks, etc? It seems fairly ideal for figuring out where things are going cactus on those leaky strings.


That's a cool idea. Honestly though, a lot of the core is rather grotty, plus there are dozens of modules loaded in any given configuration, each of which can hook into anything. It's not particularly fun... (I don't name anything because the software does work for some scenarios and provides a free basis for what would otherwise be expensive code... and the core devs are nice, so I don't want to sound ungrateful or rude.)


Uhh... you do realize C++ std::string has small string optimizations right? That would probably take care of most of the 30%. const string& would probably take care of the rest. No program should have performance problems due to strings.


Small string optimization doesn't help that much. Chrome's use of std::string accounts for almost half of all allocation: https://groups.google.com/a/chromium.org/forum/#!msg/chromiu...


As always, whether the allocations are problematic depends on the context of use. SGI STL's `rope<T, Alloc>` [0] has helped me in the past in a similar scenario. It is a part of the SGI STL implementation that never made it into the standard library specification, although it should have, imho.

Essentially, ropes are character strings represented as a tree of concatenation nodes, optimized for immutability. A rope may contain shared subtrees, so it is really a directed acyclic graph where the out-edges of each vertex are ordered; in effect, a kind of search tree indexed by position.

[0]: https://www.sgi.com/tech/stl/Rope.html

[1]: [pdf] http://www.cs.rit.edu/usr/local/pub/jeh/courses/QUARTERS/FP/...
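Sketched in Rust rather than C++, the core of the rope idea fits in a few lines; this is a toy illustration of the structure, not SGI's implementation:

```rust
use std::rc::Rc;

// A string as a tree of concatenation nodes; subtrees can be
// shared, so the structure is really a DAG.
enum Rope {
    Leaf(String),
    Concat(Rc<Rope>, Rc<Rope>),
}

impl Rope {
    fn len(&self) -> usize {
        match self {
            Rope::Leaf(s) => s.len(),
            Rope::Concat(l, r) => l.len() + r.len(),
        }
    }

    // Concatenation copies no characters: it just points a new
    // node at the two (possibly shared) operands.
    fn concat(a: Rc<Rope>, b: Rc<Rope>) -> Rc<Rope> {
        Rc::new(Rope::Concat(a, b))
    }

    // Walk the tree to materialize a flat String when needed.
    fn flatten(&self) -> String {
        match self {
            Rope::Leaf(s) => s.clone(),
            Rope::Concat(l, r) => l.flatten() + &r.flatten(),
        }
    }
}
```

Concatenation is O(1) in copied bytes, which is exactly what helps when the workload is "copy strings around just to be sure."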


That doesn't mean that allocations are a big part of Chrome's performance profile, and in fact I would guess that they aren't, since avoiding allocations is a large performance win and Chrome is heavily optimized.

Also, chrome being a web browser means a huge part of the program is strings.


Most of the code is C, and uses APR, but with its own wrappers. E.g. "project_sprintf" and such. Some of the strings aren't quite small enough, too. They're like HTTP header strings, unseparated. I agree no one should have this problem, but if you naïvely start passing char* around and have a complicated ownership model (that's probably technically unsound, but works in practice most of the time), you can end up duplicating and getting hurt.


std::string's small string optimization is automatic, you don't have to do anything with it or about it. Bigger strings create allocations, small ones don't, and statistically most strings are small. Also if you use const references you can be sure that you aren't modifying the strings and aren't copying them. With C++11 you can move strings and so the ownership is explicit.


Unless the entire program is based on manipulating and parsing large strings....


But then you could reference parts of large strings, copy small sections and manipulate those, etc. Modern programs can do 10 million heap allocations per second. If heap allocations are really a problem, that is a gigantic design flaw, and if it is happening with strings, simply using std::string should make a large impact. Surely not every string passed is both huge and needs to be copied.
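In Rust terms, "reference parts of large strings" is exactly what `&str` slices give you: views into the original allocation with no copying. A small sketch, using a hypothetical header-splitting helper:

```rust
// Both returned halves borrow from `line`: zero heap allocations.
fn split_header(line: &str) -> Option<(&str, &str)> {
    let idx = line.find(':')?;
    Some((&line[..idx], line[idx + 1..].trim_start()))
}
```

The borrow checker then guarantees the slices can't outlive the big buffer they point into, which is the part that's hard to get right with bare `char*`.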


Cases with primitive arguments are easy to understand, but I think it would be beneficial for the Rust book to have way more detailed info about working with buffers and heap-allocated structs that the other language holds onto after the foreign call returns. Looking at examples on GitHub (I don't mean the ones from this blog post), one can believe the examples work, but it also looks like there's undocumented knowledge about deep implementation details involved when e.g. allocating something in one language and then letting the other language hold onto it.


> it would be beneficial for the Rust book to have way more detailed info

I was just talking with someone on Twitter about this today, actually. I agree with you 100%, and want to fill that kind of stuff out. You gotta pick priorities, and since we'll have more new people than experienced people at first, I've been focusing on the basics. Now that the language isn't changing all the time, shortly after 1.0, I'll be able to start writing stuff like this, that covers topics in more depth.

I also think that libraries will pop up to help with this specific kind of issue, too. We'll see.


While we're making doc requests on this topic, could you address passing function pointers between C/Rust at some point? Sorry if this is already obvious to more experienced Rust users.


I don't think it's obvious at all, and yeah, absolutely.


I'm looking forward to seeing the major security-critical libraries being rewritten in Rust, even if they are then often called from C. Especially OpenSSL/LibreSSL, which is a mess inside, has way too much legacy code, and too many ways to turn encryption off, log keys, or weaken checks.


Up to this point we've been encouraging people to not go wild implementing crypto et al in Rust simply because memory safety is not the only attack vector and we need to carefully consider language extensions and guarantees that would accommodate security-critical code.

That said, I do personally think that Rust will one day be an excellent language to take the place of C in implementing fundamental cryptographic libraries. But first we need domain experts to hammer on the language and provide feedback.


In the case of LibreSSL, it's probably not going to happen very soon; the OpenBSD devs are very much C programmers (and Ted Unangst already addressed this in the case of Heartbleed, though his methodology is the subject of some contention: http://www.tedunangst.com/flak/post/heartbleed-in-rust ).

While Ted's post has been criticized by a few Rust programmers (see: http://tonyarcieri.com/would-rust-have-prevented-heartbleed-... ), it still brings up the argument that one's language choice isn't a magic bullet. There's an adage (I have no idea who originated it) along the lines of "every 'idiot-proof' system underestimates an idiot's ability to break things", and that's worth considering here. Rust prevents a lot of bugs, but it's not a guarantee of security. One needs to understand the actual problem that needs to be solved, and this is where the OpenBSD devs are focusing with LibreSSL (and their various other projects); their security track record stands as a good testament to that notion of understanding trumping language choice.

This isn't to say that I disagree with you. Rust is readily poised to replace C in a lot of contexts (IIRC, one of the obstacles right now is the use of a different allocator, so trying to use a Rust library from C results in some performance overhead), and it would be nice to see it used in a lot more projects. Just bear in mind that moving to Rust isn't the only step that would be required to fix these sorts of security bugs.


Of course moving to Rust (or Go) won't fix all security bugs.

It will, though, stop ordinary code from having remote code execution or spurious data leak bugs.

That is really significant. It needs cheering and encouraging now, and soon it needs forcing.


Quite true.

Memory safe languages aren't bug free in terms of security bugs, as there are also logical errors, as shown by many Java exploits.

However, removing the typical C memory corruption errors out of the picture is already quite an improvement.

Rust and other systems programming languages have a better chance of succeeding outside classic UNIX systems, though, as UNIX and C go hand-in-hand and I doubt that will ever change.

The only non-classic UNIX is Mac OS X and it already has its own "Rust".


Well, the nice thing about Rust is that it's at least theoretically possible to swap out C with Rust without affecting very many things (at least not negatively, other than the duplicate allocators IIRC), so a Rusty Unix is at least plausible.


I would love to see this too, but it's not going to happen until Rust is as portable as C, which won't happen until it compiles to C, IMO.


I suspect you can have this already if you want it badly enough, given that Rust generates LLVM bitcode, and there is an llvm-cbe "target" architecture that just generates C.

The results probably won't be well-optimized, though; C isn't actually very portable at this level, there's too much implementation-defined behavior that needs to be taken into account. You'll probably spend as much effort trying to robustly handle the quirks and deficiencies of some obscure platform's compiler as you would writing an LLVM backend for the processor...


The LLVM C backend was removed in LLVM 3.1. But the Monster has been reanimated by [0]/[1]!

I wonder if anyone has updated the Cell SPU backend that was eventually removed. That is one of my favorite architectures. =)

[0]: https://github.com/draperlaboratory/llvm-cbe

[1]: http://thread.gmane.org/gmane.comp.compilers.llvm.devel/7594...


This could give a kick to Rust. According to recent news they are going to have a stable syntax + features for a while. http://blog.rust-lang.org/2015/04/03/Rust-1.0-beta.html http://blog.rust-lang.org/2014/10/30/Stability.html


> Rust makes it easy to communicate with C APIs without overhead

Zero overhead vs. the case where you'd also have a function call in C.

But in some cases, especially with numerics, if you have a C library that is designed to be inlined, you get slowdown even calling C from C, if you prevent inlining.

A random number generator, the SIMD-oriented Fast Mersenne Twister [1], was such a case when I tried. Just annotating with `__attribute__((noinline))` makes it 2x slower, even when calling from C.

[1] http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/
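The same experiment is easy to reproduce in pure Rust: `#[inline(never)]` plays the role of `__attribute__((noinline))`, and for a tiny function the call overhead dominates. A sketch, using a toy xorshift step as a stand-in for one SFMT call:

```rust
// Forbid inlining, as an FFI boundary effectively does today.
#[inline(never)]
fn xorshift_step(state: u64) -> u64 {
    // One xorshift64 step: three shifts and xors, i.e. cheap
    // enough that a real call/ret around it is a large overhead.
    let mut x = state;
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    x
}
```

Timing the loop with and without the attribute (e.g. via `cargo bench` or `std::time::Instant`) shows the same inlining effect the SFMT numbers above demonstrate in C.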


I don't see any reason (though I'd love to be enlightened) why the Rust implementation couldn't someday inline C functions. I'd actually be a little surprised if the underlying LLVM implementation can't use LTO to do it out of the box.


As mentioned downthread, this is exactly what Servo intends to do to interop with SpiderMonkey. I'm very excited to see that project proceed.


For small functions that cannot be inlined, have them operate on arrays as large as you can make them, so the per-call overhead becomes trivial.


Can't link time optimization help with that? I assume it's language agnostic.


Bindings can leverage the ownership and borrowing principles in Rust to codify comments typically found in a C header about how its API should be used.

Could the Rust compiler make use of design by contract style annotations in the C header? Maybe it could reason that an unsafe block is in some ways safe.

There is a specification language for C called ACSL (which was designed for and is mainly used by Frama-C). Annotations look like:

    /*@
        ensures \result >= 0;
        assigns \nothing;
     */
    int foo(int x)
    {
        if (x < 0)
            return 0;
        return x;
    }
I noticed there are annotations for Rust with the LibHoare library (https://github.com/nrc/libhoare). The example above might look like:

    #[postcond="result >= 0"]
    fn foo(x: int) -> int {
        if x < 0 {
            0
        } else {
            x
        }
    }
It would be really cool if ACSL could be translated to LibHoare.


I don't think it makes sense to build this functionality into the language or compiler directly, at least not at this time. However, it certainly makes a lot of sense to have tooling that uses annotations like that to generate safer Rust bindings than just the raw function declaration.


I recall something about Rust's libc crate being glibc-specific. What's the status on that, and can one build the compiler on Solaris/illumos, NetBSD and OpenBSD?


Here's an experience report by someone who managed to get Rust running on musl, for the purpose of creating fully static Rust binaries: https://gist.github.com/cl91/bb927df2525738502131#file-stati... (discussion at https://www.reddit.com/r/rust/comments/33boew/weekend_experi...).


UPDATE: Official (though still experimental) support for using musl with Rust code is landing in the compiler as we speak! https://github.com/rust-lang/rust/pull/24777


> What's the status on that

We still use glibc, yes. Being able to use musl instead is something that the community and team have been experimenting with lately.

> can one build the compiler on Solaris/illumos, NetBSD and OpenBSD?

The last two have community members that keep the build green. I _think_ illumos works, but I'm not sure.


Oh, look what just popped up: https://github.com/rust-lang/rust/pull/24777


Why does rust need a libc?

I'm still new to Rust, so forgive me if this is a stupid question.


No, it's not a stupid question. libc (or the CRT on Windows) really is the library that exposes all the user-space system interfaces. It contains the functions to do IO, sockets, threads, and so on. So we use it to expose that functionality to Rust users.

Now there are some languages, namely Go, that skip libc and just implement directly against the syscall interface. Go has the advantage of being able to draw from Google's vast experience interacting deep within the system, so it was comparatively cheap for them to do this.

For Rust, it never really felt like the effort was worth the benefit we'd get out of it. It was more important to get the language done.
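To make "libc exposes the system interface" concrete, here's a hand-declared binding to libc's `write` (in real code you'd use the `libc` crate instead of declaring it yourself; this sketch is Unix-only):

```rust
use std::os::raw::{c_int, c_void};

extern "C" {
    // C prototype: ssize_t write(int fd, const void *buf, size_t count);
    fn write(fd: c_int, buf: *const c_void, count: usize) -> isize;
}

fn raw_print(msg: &str) -> isize {
    // fd 1 is stdout; libc's write is a thin wrapper over the
    // write(2) system call. Returns the number of bytes written.
    unsafe { write(1, msg.as_ptr() as *const c_void, msg.len()) }
}
```

The symbol resolves because every Rust program on such platforms already links libc; this is exactly the dependency the Go approach avoids by issuing syscalls directly.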


Programming languages on Windows other than C and C++ typically link with kernel32.dll and so on, and not a C runtime. This gives a stable interface at a slightly higher level of abstraction than syscalls. Relying on libc is more a Unixism; without duplicating a bunch of the work in libc, you simply can't use many system services in ways the end user expects - network name resolution comes to mind (NSS), but other things, like threads, don't have portable standards at a lower level.


There have been some efforts to make Rust use the native Windows API instead of libc on Windows. While this is ongoing, the lack of experienced Windows developers participating in the Rust project has been slowing progress. We'll be very happy to get your contribution!


Python links the VC runtime (so not even the system CRT).

MinGW links the system CRT by default, which is one reason that it can be fussy to compile Python extensions with it.

(I'm more curious about what a survey would reveal than I am trying to argue with you about what you said)


Software originating on posix systems is most easily ported at the libc interface level, so it's natural to target some libc on Windows. But it usually creates awkwardness, either because of version dependencies, or the system libc not having everything they need and using some specific VC runtime, like you say.

I was a developer on the Delphi compiler for some years. Statically linked, a minimal Delphi executable had no dependencies other than the *32.dll libraries. And there is no system functionality that requires use of some libc, or significant duplication of effort. The Win32 API doesn't even use the C calling convention.


In theory you could later implement libc (libcrust, calling it now) in Rust as well, so you get the best of both worlds, couldn't you?


This is actually a project I have considered in the past, though not very seriously. Theoretically you could completely replace libc with a Rust implementation and use it system-wide, and the rest of your programs would be none the wiser. It's a non-trivial amount of effort, however. :)


I'm not sure if it makes any sense to implement libc in anything other than C.


It would be an interesting experiment, seeing as how Rust code can be called from C and vice versa.

There's already been some tinkering on various operating system kernels written in Rust (even if they don't do much besides printing "Hello, world!" quite yet), so I figure a pure-Rust standard library is the next step in that.


Yes, but the standard crates don't really use any of the glibc-specific stuff. I found it not too difficult when I ported Rust to use `newlib`, for use with PNaCl. I have even managed to build a PNaCl version of `rustc` itself.


> I recall something about Rust's libc crate being glibc-specific.

If this is true, it's because support for others hasn't rolled out yet, not because it has some inherent dependency on that version of libc.


What happens if the double_input Rust function is declared extern but not no_mangle? When would you want a mangled extern Rust function?


If you omit the `#[no_mangle]` attribute then the compiler will emit a mangled symbol (e.g. one with a long hash at the end that's hard to guess).

Many C APIs take a callback function pointer, which in Rust has the type `extern fn()`, and this is the primary use case for an `extern` function defined in Rust which doesn't have `#[no_mangle]` on it.
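A sketch of that callback case: the function pointer itself crosses the FFI boundary, so the symbol name, mangled or not, never matters. The "C" API here is simulated in Rust so the example runs standalone; in real code it would be a declaration in an `extern "C" { ... }` block:

```rust
use std::os::raw::c_int;

// Stand-in for a C function that takes a callback and applies it
// to each element, summing the results.
fn c_style_for_each(data: &[c_int], cb: extern "C" fn(c_int) -> c_int) -> c_int {
    data.iter().map(|&x| cb(x)).sum()
}

// No #[no_mangle] needed: only the pointer is handed to C, never
// the name.
extern "C" fn double_it(x: c_int) -> c_int {
    x * 2
}
```

The `extern "C"` on `double_it` is still required so the function uses the C calling convention the caller expects.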


Here's an example of that in a lib I've been working on: https://github.com/bluepeppers/libnetfilter_queue/blob/maste...


Thanks for the example. What is the advantage of using an unmangled extern fn as a C callback? That the user-friendlier unmangled function name will appear in stack traces or can more safely be called from other Rust code?


True, though it wouldn't be hard to write a GDB extension to unmangle the names (I know D had this from quite early in its lifetime). In general, I leave unmangled names to things that are part of the external interface: by leaving a name unmangled, I'm explicitly saying "this symbol's name is part of the contract I have with users of this library".


There's already something that ships with Rust that unmangles some stuff: http://michaelwoerister.github.io/2015/03/27/rust-xxdb.html


Can you make extern "C" callback functions generic? The function itself, I mean, not the signature of the FFI function that takes the callback. If so, there's a great use case for not mangling C callbacks.



Thanks. I guess using `extern "C"` to imply `#[no_mangle]` would be too cute. :-)


Does using an FFI call eliminate the possibility of the compiler inlining the call?


In most cases, yes. However, with some work we should be able to allow inlining across language boundaries as long as you're using clang to compile the C and C++. This is because rustc uses the same backend as clang (LLVM).


Though it may seem like this would be brittle, it's worth mentioning that this is how Servo eventually intends to interoperate with SpiderMonkey, so it will be a strategy that is well-researched and officially supported.


SpiderMonkey has an embedding API. Is the overhead between Servo (or Gecko) and SpiderMonkey so great that cross-language inlining is important?


> SpiderMonkey has an embedding API.

Yes, but it's a C++ API and includes a bunch of inline bits in the form of RAII classes and so forth.

Making those non-inline would be a noticeable performance hit.

Similarly, using the non-inline jsapi.h versions of the various inline stuff in jsfriendapi.h would be a noticeable performance hit: those were added for Gecko to use based on performance measurement and profiling.

The main place where this inlining is needed is in the DOM bindings, where pretty much anything you're doing is overhead on the path from JS to the actual implementation. This is especially noticeable when the actual implementation is fast (e.g. many getters in the DOM). In modern browsers the binding overhead is on the order of two dozen instructions or so. A non-inline function call takes... well, it depends on the ABI. On x86, with cdecl, you only have 3 caller-save registers but have to push all the args on the stack. On x86-64, with the AMD64 ABI (so everything except Windows), there's a ton of caller-save registers that might need to get pushed/popped around the call. Either way, the chance that you add noticeable overhead to an operation that's already <30 instructions is high. And that's if you only have to make one call. If you have to make _several_ such calls as part of the binding code, you're just screwed. And then you start adding APIs that compute and return all sorts of stuff in a single call (see the SpiderMonkey typed array APIs) and other such ugliness.


So it is simple to interface with C, presumably because the C ABI is well established. But it is not a type safe interface, so you have to give it the function signature.

It is not so easy to interface with the other languages, you have to do it through the C ABI.

Doesn't that suggest a good opportunity for all these actively developed languages (Python, Ruby, Julia, Rust, Go, plus the output of GCC and LLVM) to get together and agree on some kind of type safe ABI for calling functions in a generic way? Even if it is just in the form of some metadata annotation of function signatures to go along with the C ABI?


That said, I think Rust probably will need to improve on two fronts:

1) Better binding to C++ API

2) Compiling via MSVC on Windows. Windows is a bit of a red-headed stepchild for lots of projects, and Rust won't have that luxury


Improving our Windows story will be an enormous priority post-1.0, as mentioned here: http://internals.rust-lang.org/t/priorities-after-1-0/1901

It's especially important as Mozilla intends to start shipping minor pieces of Rust code in Firefox this year, but Rust-based components will be relegated to nightly Firefox until Windows support improves.

Here's a patch-in-progress for the first Rust code to be integrated into Firefox (Servo's URL parser): https://bugzilla.mozilla.org/show_bug.cgi?id=1151899


Microsoft stepped up and contributed to Node/libuv to get those projects working well under Windows -- given the new leadership, I'd venture it would be in Rust's interests to reach out and see how willing they are to help. Maybe the result would be surprising :)


I remember hearing somewhere that servo is focusing first on things that are nicer to do in Rust than in C++. Do you know if there is something that makes Rust more suitable for URL parsing than C++?


Servo is trying to answer ambitious questions like "can layout be done as a sequence of top-down and bottom-up parallel tree traversals", "can we GC Rust DOM objects safely", and "how much of a browser engine has to be infected with support for non-UTF-8 strings".

Gecko, on the other hand, will gain Rust in modules that can be changed without replacing the entire browser. I'm happy that one of the first will be the code that supports the URLUtils DOM API. When the C++ code was changed from being just "URL parsing" to supporting segment changes, it came to have way more than its fair share of memory safety bugs. It needs a rewrite and it might as well be in Rust.


Calling C++ from "ffi" is very hard. With "C" there is an almost one-to-one mapping of the function you wrote to what the compiler produced and the linker sees (there could be an "_" or more added, or something similar, but that's it).

With C++, mangling (decorating) is specific per vendor, per compiler, and it might even vary per compile options. Then you have different ways of handling exceptions, new/delete, the location of the this pointer, deep/shallow virtual tables, etc. etc. And for templates it's very awkward to support them in any good way.

It could be that wrappers like SWIG might help there, but they would require some amount of work, or rewrapping the C++ interface to be "C" like with "objects". Some API/library writers go to the extent where, even if the library is written in C++, a "C" interface is provided, and a new "C++" interface is written on top of the "C" one (i.e. not using the original C++ one). This might be due to problems with dynamic (RTTI) symbols and different linkage versions (since you asked about MSVC: different runtime libraries loaded in your application, which is pretty normal for Windows).


Having great C++ support is good for using libraries from the C++ ecosystem, I guess. This might help Rust to get traction, but there might be other features with more interest from the community that get prioritized.


> Better binding to C++ API

You realize that if this is perfect, you've basically implemented C++. It's easy to implement trivial bindings, but much beyond that requires an immense, immense effort and development.


And that's why I said better. Perfect is the enemy of good (enough).

I know both are areas of experiments in Rust. I think the intent is to use llvm to link(?!) C++ code in a format Rust can use.


> I think the intent is to use llvm to link(?!) C++ code in a format Rust can use.

It's difficult to see this working except with trivial bits of code. For instance, how would you link to a type generated from a template with a string parameter? The most I can see is linking with an explicit subset of C++.


#1 is very hard. D is the only language I know of that's reasonable about this, and even then, it's not everything.

#2 is something we care about, and want to have the option to do in the future.


You do realize that 1) can be non-trivial to achieve even if one is using C++ :)


Re 2), my current fantasy is that Microsoft wakes up and gets behind Rust. Imagine having full VS support with Rust (not that Racer and vim are bad), and jump-starting tons of users to start writing _safe_ low-level code.


Just yesterday there was a new release of RustDT, a Rust IDE that works on Windows (http://users.rust-lang.org/t/rustdt-0-2-0-released-auto-comp...) and at least one Windows user seems to be raving about how amazingly everything in it seems to Just Work (https://www.reddit.com/r/rust/comments/33p4oh/rustdt_ide_rel...).


Oh, that looks fantastic. Eclipse-based, so I'm sure you can get a vim plugin too... You may have made me very happy.


Does the call from C to Rust also have zero overhead?


It does indeed! Due to Rust's lack of a garbage collector or runtime altogether, when C calls into Rust there is no overhead. In the example in the blog post [1] you can see that the Rust function has no extra syntactic burden, and the compiler also will not add any form of conversion one way or another. The C code then just emits a direct call to the symbol, so everything is totally free!

[1]: http://blog.rust-lang.org/2015/04/24/Rust-Once-Run-Everywher...
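For reference, the blog post's `double_input` example is roughly the following on the Rust side; the C caller just declares the symbol and calls it directly:

```rust
// Exported with an unmangled symbol so C can find it by name; the
// C calling convention comes from `extern "C"`.
#[no_mangle]
pub extern "C" fn double_input(input: i32) -> i32 {
    input * 2
}

// The C side needs only a declaration, no glue code:
//
//     extern int32_t double_input(int32_t input);
//     int32_t answer = double_input(21);  /* direct call */
```

Since `i32` is exactly `int32_t`, no conversion happens at the boundary in either direction.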


I was hoping this was an announcement of Emscripten support for Rust :(



Using my fork [1] (you'll have to build it yourself, sorry), you can create Emscripten bins. After building your program, run `$NACL_SDK_ROOT/toolchain/linux_pnacl/bin/pnacl-thaw $RUSTC_OUT` on the output (rustc emits stable pexe's by default) and then run `$EMSCRIPTEN/emcc -o $RUSTC_OUT.html $RUSTC_OUT`. Voilà.

I make no guarantees that it'll all work however; I don't test Rust->JS like I do Rust->PNaCl.

[1]: https://github.com/DiamondLovesYou/rust.git

EDIT: Grammar.


We need Emscripten to upgrade their LLVM, last I heard. We're running a pretty cutting-edge version at the moment.


Likewise I keep hoping someone will make a compile-Rust-to-portable-C story realistic also! The LLVM C backend has been revived!

https://github.com/draperlaboratory/llvm-cbe

http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-August/07593...


Last October I tried using that repo to compile Rust to C and it proved to be totally broken (i.e. not Rust's fault). There have been a bunch of commits since then, so maybe it works these days... shouldn't be hard to test.

However, don't expect it to generate code that's portable between architectures with different pointer widths.


> However, don't expect it to generate code that's portable between architectures with different pointer widths.

That's a limitation I can live with. If my only concession is that I have to have two builds (32/64), and everything else is portable, I'll be a very happy camper.


I was hoping it was about cross-platform compilation.


Being based on LLVM, the details are there, but it's not as easy as we'd like. This is something we want to significantly improve the UX of in the future, for sure.


Please become a popular language once 1.0 comes out. Please.


C bindings are more straightforward. What about C++ however?


See elsewhere in this thread for discussion on that.


Can anyone post a link to a RubyGem written with a Rust extension?


Here is how we bind the Skylight gem to the Rust bit: https://github.com/skylightio/skylight-ruby/blob/master/ext/...


https://github.com/steveklabnik/rust_example is one way to do it, albeit the most complex one: a thin C wrapper over the Rust, which is then a C extension for Ruby. Easier ways use FFI or Fiddle.


I think they are referring to this: http://blog.skylight.io/bending-the-curve-writing-safe-fast-... which is mentioned in the blog post


Here's a PyCon 2015 talk on how to call Rust from Python: https://www.youtube.com/watch?v=3CwJ0MH-4MA



