Hacker News
How I Wrote a Modern C++ Library in Rust (hsivonen.fi)
406 points by hsivonen 12 days ago | 116 comments





Is there some kind of target or plan for converting C/C++ to Rust in Firefox? Or is it just happening as there are people interested in working on parts of the stack?

These progress numbers are very interesting:

https://twitter.com/eroc/status/1061049330574884864

and I was wondering how directed the effort was.


There's no specific target of replacing C++ just for the sake of replacing C++.

The way Rust code gets added includes:

* A new feature needs an identifiable library, so the new library can be written in Rust to begin with. (Example: U2F token USB integration.)

* Old code needs a rewrite anyway, so the rewrite can be in Rust. (Example: Character encoding converters.)

* Servo has proven a component, so it makes sense to bring it over. (Examples: Stylo and WebRender)

* History of vulnerabilities in code that was replaced. (Example: MP4 metadata parser)


Is anyone aware of secondary effects this has had? E.g. removed C++ code that was later found to have bugs, or newly rewritten crates that are now more useful to the wider community than the same code locked up in C++?

> e.g. removed C++ code that was later found to have bugs

The article links to a longer article (https://hsivonen.fi/encoding_rs/) about encoding_rs. The longer article mentions a bug that got fixed in Firefox ESR after the code had been replaced with encoding_rs in non-ESR Firefox. (I wrote the bug, too, though.)

> or newly re-written crates now more useful to the wider community than the same code locked up in C++.

encoding_rs is an example of a crate developed for Firefox but also developed as a crates.io crate from the start. ripgrep is probably the best-known Rust-only app that uses encoding_rs. Since Visual Studio Code bundles ripgrep, I believe Microsoft shipped encoding_rs before Mozilla did!


In terms of deploying Rust directly to make money, Microsoft is probably the leader right now, with Actix used in Azure IoT.

Dropbox are also using Rust in both their storage layer, and their desktop client.

You do realize that Dropbox has very small profits, right? They made all of $50M in Q2.

Did you mean this actix? https://github.com/actix/actix

Is it mature? Can you shed more light on how it is used in production?


They're referring to https://news.ycombinator.com/item?id=17433142

I don't believe that it uses Actix, though. Actix was created by and is maintained by a Microsoft employee.


He has mentioned in an HN comment that they are using it internally, but isn't allowed to disclose how.

https://news.ycombinator.com/item?id=17191454


We use Actix at Azure IoT.

Ah, something that’s not in that repo? That’s awesome!

> removed C++ code that has later found to have bugs

I doubt this would ever be discovered; who would analyze code that was formerly a part of Firefox?


There's actually a lot of people who want actual, experimental data to back a language's claims about safety. A subset of them use C and C++. I occasionally argue with them about safety benefits of other languages. They demand more proof than the design, esp field data. I do keep stuff like this as experimental evidence that will add up for such empiricists over time. Although, I prefer controlled experiments where you teach amateurs C, modern C++, and Rust over a specific time followed by testing (esp fuzzing) of their code to test the safety claims. Run it in a dozen different places to see if results are consistent.

There's also folks that just study these things to identify patterns in problems created, prevented, or detected (at what effectiveness) in various languages and techniques in software development. Along similar vein, each bug report also provides (in theory) a test case for automated tools that detect bugs. It's very important to have a huge, diverse pile of code to test those tools with. That's because each one's algorithms might have blind spots missing bugs. The more code and bugs we have, the better we can assess those algorithms' accuracy. And then build better algorithms. :)


It seems fairly obvious that if Language A's design prevents a certain class of bugs possible in Language B's design, A is safer than B. If someone isn't satisfied by this, they probably are biased against Language A. I'd tell the hypothetical person to try writing a buffer overflow vulnerability in safe Rust.

That said, I can't find the source right now, but I believe the quote is something along the lines of: a sizable percentage of Firefox's security bugs would be less severe or nonexistent in Rust vs. C++.


Usually, with the argumentation that nicksecurity is referring to, they eventually switch to "but Language A still allows for bug class X, so why bother".

So one then needs to resort to statistics and other stuff as argument validation.

For example, even after being shown on Godbolt that it is possible to write safer code in C++ while keeping the same or even lower hardware requirements, many embedded C devs still argue that safer code is not worthwhile.

Rust, just like other (almost) memory-safe systems languages, will get the same human judgement.


Don't get me wrong, I love Rust, but I think any programming beginner starting with Rust as a first language is pretty likely to fail.

Ownership is a hugely important part of designing programs, and it's something people need to come to terms with eventually, but a language where you can't do even hello world without understanding ownership adds a lot of mental overhead to the learning process when someone is still not even comfortable with for loops and function calls.


`cargo new $project_name` literally generates a hello world program so it is misleading to say you need to understand ownership to write `hello world`.

That being said, ownership is rather hard, but liberal usage of `.clone()` can get you pretty far.
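To make that concrete, here's a minimal sketch (hypothetical beginner code, not from the thread) of the kind of move error a clone sidesteps:

```rust
// Passing an owned String moves it; a beginner can clone instead of
// learning borrowing on day one.
fn shout(s: String) -> String {
    s.to_uppercase()
}

fn main() {
    let greeting = String::from("hello");
    // Without `.clone()`, `greeting` would be moved into `shout`
    // and the println! below would not compile.
    let loud = shout(greeting.clone());
    println!("{} -> {}", greeting, loud);
}
```

The idiomatic fix is to take `&str` instead, but the clone keeps a beginner moving.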


That's a different subject entirely. Beginners should learn from resources teaching fundamentals, both primitives and patterns like abstraction/decomposition, using a simple language with minimal, incidental complexity. Once they grok that, my next recommendation is using another simple, but real-world, language like Python or Go with lots of code they can practice on. Read, modify, and debug. Then, with key skills, they can tackle hard stuff like C, C++, and Rust.

Pyret at pyret.org is a good candidate since there's a group called Bootstrap successfully teaching it to middle schoolers. That's bootstrapworld.org.

As far as Rust goes, I'll also note that people exploring the language or doing quick-and-dirty coding can just use reference counting if they want. Rust supports that. There's a performance hit, but that's probably fine in those use cases.
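For example (a toy sketch, not from the comment above), `Rc` gives shared ownership with a runtime reference count instead of compile-time lifetime tracking:

```rust
use std::rc::Rc;

fn main() {
    let data = Rc::new(vec![1, 2, 3]);
    let alias = Rc::clone(&data); // bumps the refcount; no deep copy
    assert_eq!(Rc::strong_count(&data), 2);
    // Both handles read the same allocation; it is freed only when
    // the last handle is dropped.
    assert_eq!(data.len(), alias.len());
}
```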


> who would analyze code that was formerly a part of Firefox?

People looking for bugs in Firefox ESR.


I know it's "extended support release" but I keep reading this as "Firefox Eric S. Raymond"

People looking for bugs in Waterfox or Pale Moon.

Why is Mozilla trying to replace components of Firefox with Servo, rather than trying to complete Servo as a full and compliant engine? I.e. it looks like browser.html is not very usable, since Servo itself is quite behind in actual feature support compared to Firefox.

I would say that there are only downsides in going that way.

If you take on a full rewrite you lose a lot: you can't show that you are incrementally better, you cannot show that it will be a good long-term investment, you must rewrite even well-maintained core parts that work fine, you don't get to improve the original engine with the good parts, and essentially you get nothing in return.

For a much better answer than mine: https://www.joelonsoftware.com/2000/04/06/things-you-should-...


As far as I know there is no overarching plan, but there are many people who do hold that motivation - some are more aggressive about their intent and others are more lax, and yet others are skeptical - as with any community of developers.

The broad sentiment seems to me to be that it's a good thing to move over, as pragmatism allows. Personally I'm a converted skeptic - I had my doubts about the language and my initial stabs at it left me somewhat frustrated - but as I've grown to internalize its semantics and behaviour, the benefits in terms of safety, clarity of intent, and optimization potential have become clear (e.g. aliasing semantics are just so much better).

A mix of factors enter into whether a component moves over or not, including the views of the developer in question, the complexity of the API boundary between the main codebase and the subcomponent, and the complexity of the component logic itself.


It’s the Oxidation project: https://wiki.mozilla.org/Oxidation

So the uncrustification is done by adding rust :P

It's a protective layer.

So like aluminum then.

I have started on this more than once, and gave up on the sheer complexity involved just as many times, but I would really love to have attributes to auto-generate a C ABI from a Rust API, alongside headers and then high-level Python and C++ bindings to it. With the recent improvements to wasm-bindgen I looked into whether anything there can be repurposed, but it seems very custom-crafted, sadly.

If someone is interested in that as well, maybe there are some ways to join forces.


Are you aware of cbindgen[0]? It's not exactly what you want, but it's certainly far closer than wasm-bindgen is.

[0]: https://github.com/eqrion/cbindgen



We are using that atm, but it requires manually writing a C ABI.

It still requires you to expose a C API yourself, instead of it being generated automatically with proc macros or something.

I’ve longed for something similar when writing JavaScriptCore bindings. Something lighter-weight than SWIG, but which still has an IR describing the full API surface for codegen plugins, would be immensely useful.

C ABI to the rescue!

That may be the one thing left of C in 50 years' time.


A dead lingua franca, like Latin. Could happen, but I suspect embedded programmers will stick with C forever.

This is one reason I don't really like embedded programming. There seems to be a general fear of newer systems languages like Rust, and a fear of newer features in C++. An embedded programmer told me he doesn't use the C++ STL because he doesn't want functions allocating stuff behind his back. That doesn't seem very convincing, as you can always redefine the global `operator new`.

There is a fair amount of interest in Rust from embedded programmers, though at the moment you can only really use it for ARM cores (which are probably among the most common embedded cores but there are a huge number of alternatives). There is a dedicated working group within the rust community for embedded which is pushing towards having a good user experience for those wanting to use Rust in such an environment.

But even within embedded Rust you won't find much appetite for dynamic memory allocation, and a fair amount of the work has been stabilizing the mechanisms by which you can build Rust code that does not use the standard library ('no_std'). This has nothing to do with fear of the new and lots to do with predictability of code. Using a single heap for all allocations is not at all ideal in an embedded context: your allocations become much harder to predict (both in terms of failure and in terms of time taken), errors become harder to recover from, and it becomes harder to reason about the amount of memory your system will use (especially in edge cases). For embedded work you will generally try very hard to allocate everything statically, and use some kind of pool allocation if you cannot (often combining the allocation and whatever data structure you are placing the objects in).
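As a host-runnable sketch of the pool idea (names and sizes are illustrative, not a real embedded API): all storage is reserved up front, so exhaustion becomes an explicit, recoverable error instead of a surprise out-of-memory condition.

```rust
// Fixed-capacity pool: N slots reserved up front, no heap growth.
struct Pool<T, const N: usize> {
    slots: [Option<T>; N],
}

impl<T, const N: usize> Pool<T, N> {
    fn new() -> Self {
        Self { slots: std::array::from_fn(|_| None) }
    }
    // Returns a slot index, or None when the pool is exhausted.
    fn alloc(&mut self, value: T) -> Option<usize> {
        let idx = self.slots.iter().position(|s| s.is_none())?;
        self.slots[idx] = Some(value);
        Some(idx)
    }
    fn free(&mut self, idx: usize) {
        self.slots[idx] = None;
    }
}

fn main() {
    let mut pool: Pool<u32, 2> = Pool::new();
    assert_eq!(pool.alloc(1), Some(0));
    assert_eq!(pool.alloc(2), Some(1));
    assert_eq!(pool.alloc(3), None); // full: an error, not a panic
    pool.free(0);
    assert_eq!(pool.alloc(4), Some(0)); // slot is reusable
}
```

In an actual no_std build you'd reach for `core::array::from_fn` and skip `std` entirely; the structure is the same.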


> Using a single heap for all allocations is not at all ideal in an embedded context

The same is true in game development. Using a per-frame arena allocator for short lived objects can make a big difference to performance. And Rust’s lifetime rules can be used to make this sort of thing completely safe.

It’s a shame that Rust's std completely relies on the hidden global allocator instead of accepting an allocator as a parameter. It means that while you can write your arena allocator, you can't use it with any of the built-in Box/Vec/HashSet/etc. types.

I actually really like Zig’s answer here of just passing in an allocator as an argument in all data structures. I’d love to see that in rust!
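A rough sketch of the lifetime point (a simplified stand-in for a real bump/arena allocator, not actual game code): anything borrowed from the arena is tied to it, so the borrow checker rejects keeping a frame-local allocation across the frame boundary.

```rust
// Per-frame scratch buffer; a simplified stand-in for a bump allocator.
struct FrameArena {
    buf: Vec<u8>,
}

impl FrameArena {
    fn new(capacity: usize) -> Self {
        Self { buf: Vec::with_capacity(capacity) }
    }
    // The returned slice borrows the arena, so the borrow checker
    // rejects any attempt to hold it across a call to `reset()`.
    fn alloc(&mut self, n: usize) -> &mut [u8] {
        let start = self.buf.len();
        self.buf.resize(start + n, 0);
        &mut self.buf[start..]
    }
    // End of frame: drop everything at once, keeping the capacity.
    fn reset(&mut self) {
        self.buf.clear();
    }
}

fn main() {
    let mut arena = FrameArena::new(64);
    let scratch = arena.alloc(8);
    scratch[0] = 42;
    arena.reset(); // fine: `scratch` is not used past this point
    assert_eq!(arena.alloc(4).len(), 4);
}
```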


We didn’t have an allocator API at the time, or we would have liked to do that.

Global allocators have finally stabilized. Working on it!


It's not a problem of how things are allocated, but rather when. In an embedded environment, you need to be very aware of what is being used at any given time and how, and complex templates like those in the STL have a tendency to allocate and deallocate in ways that can be difficult to follow, or might have a worst-case memory footprint that blows through your RAM.

Not disagreeing with your point about STL classes having a tendency to make egregious allocations (heck, a lot of them are fundamentally designed around using allocators for pretty much everything), but the fact that they’re often complicated templates is totally orthogonal to that, and I think it’s disingenuous to say that there’s a causal relationship between the two. If anything, templating encourages statically-verifiable, compiler-enforced code invariants and outputs, which can be pretty darn nice for embedded.

> There seems to be generally a fear of newer systems languages like Rust

It's not fear of newer languages (most of the time). I would love to use Rust for embedded in a professional environment but it loses out quickly based on supported platforms.

As leader of an embedded development team, I can't choose a language that will limit the processor parts we can use when we get closer to manufacturing and cost reduction. At volume, many millions spent on software development is dwarfed by cost savings from swapping in the right part.


I've always been curious about this, could you give more details? It would be interesting to know what a 8-bit chip costs vs a 32-bit one, for example.

8-bit chips tend to be anywhere from 1 cent to $3 in volume. 32-bit chips usually run from $1 to $20.

It doesn't tend to be that extreme a part swap - if you're doing signal processing and UI then you're probably always going to use a 32-bit part. If you're just doing some basic logic with a couple of digital inputs then you're probably always going to use an 8-bit part. You might move from a 32-bit ARM to a 16-bit MSP430 to hit a power budget or similar.

Leaving aside processor architecture, the factors that affect cost from most to least (approx) are: flash size, number of peripherals/features, clock speed, RAM size. Basically, sometimes another manufacturer will have a part that meets your feature requirements much more cheaply, and then you port the code. The rest of the time, you're just squeezing program space, memory usage and processor usage to reduce the required specs.


From what I recall, SpaceX doesn't allow any external libraries in their embedded rocket code.

Yeah, most of them are still on C89, even if the compilers do C11.

Real C programmers use C89, huh.

As someone who works on a large OS consisting of ~200m lines of C: I really don't think so.

On modern OSes the C ABI is largely irrelevant.

ChromeOS, Android, Windows/UWP, Fuchsia, ...

The C ABI is actually the OS ABI when the OS happens to be written in C and exposes syscalls in C.

As for C in 50 years, it depends how much we keep on using UNIX.


And how many of those are defined in terms of the underlying C ABI?

For example, WinRT - used by Windows/UWP - can be expressed entirely in terms of C structs and function pointers.


By exploiting how Windows C++ compilers lay out vtables.

Yet this is much less relevant in UWP, which allows for a richer type system.

Additionally the COM/UWP ABI requires type libraries for properly accessing the objects and marshaling.


COM is a C ABI first and foremost - somewhere in MSDN, there's actually a page that exactly specifies vtable layout. It's not really surprising that it's C++-compatible, given that it's the most obvious way to do a vtable when you have just a single base class. But yes, of course, Win32 C++ implementations would maintain that compatibility - you could consider it the C++ bindings to COM, in a sense.

And yes, WinRT does require its metadata for marshaling (COM doesn't - you can compile proxies/stubs from IDL). It does not require that for accessing objects, and there are many WinRT components which, in fact, cannot be marshaled at all. But all that doesn't make it any less of a C ABI - the metadata format is documented, and the standard WinRT APIs used to access it are themselves WinRT-compliant, and therefore the whole thing can be consumed from C, if painfully.
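To illustrate the "expressible in C terms" point, here's a toy sketch of the vtable pattern (modeled in Rust for this thread; not a real COM interface, and real COM additionally mandates the three IUnknown methods first in every vtable):

```rust
// Toy COM-style layout: the object's first field points at a table of
// C-calling-convention function pointers.
#[repr(C)]
struct Vtbl {
    get_value: extern "C" fn(*const Obj) -> i32,
}

#[repr(C)]
struct Obj {
    vtbl: *const Vtbl,
    value: i32,
}

extern "C" fn get_value_impl(this: *const Obj) -> i32 {
    unsafe { (*this).value }
}

static VTBL: Vtbl = Vtbl { get_value: get_value_impl };

fn main() {
    let obj = Obj { vtbl: &VTBL, value: 42 };
    // A caller in any language with C FFI dispatches the same way:
    // load the vtable pointer, index the slot, call through it.
    let v = unsafe { ((*obj.vtbl).get_value)(&obj) };
    assert_eq!(v, 42);
}
```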


That is like saying anything can be consumed from assembly, and thus has an assembly ABI, even if painfully.

Any ABI is an assembly ABI, by definition, since you are not constrained at all in assembly.

Similarly, the definition of a C ABI is "ABI that can be expressed in C terms" (not "ABI that is easy to code against in C") - i.e. an ABI that any language with C FFI can use. That last part is what makes this categorization useful.

And it has some significant implications beyond "assembly ABI" - for example, C doesn't have guaranteed optimized tailcalls, and therefore any such ABI can't provide such a guarantee, either. Or - C doesn't have exceptions, and so COM and WinRT have to use error signalling mechanisms that are expressible in C (HRESULTs and thread-local error description objects). Or take async - WinRT offers CPS-style futures via IAsyncResult, and because that's just a WinRT interface, any language that can do C FFI can do WinRT async (unlike fibers, goroutines etc, which require intrinsic support).


The parts of Android that don't use the C ABI are not an OS though.

Try to use that C ABI and see how far you get writing an Android application, a user-space driver in Android Things, or a Treble driver.

If I'm reading this correctly, the "modern C++" part of the library is in fact supplied by a C++ wrapper around a C-style API.

Correct. The Rust API is recreated in C++ using the corresponding C++ facilities with a C API in between.

"with a C API in between."

I smell vulnerabilities miles away with that sort of implementation. Hope the Rust programmers remember basic garbage collection in C, since C itself doesn't have it automated.


Just because it's using a C-style API doesn't mean there needs to be any C code involved at all (I don't know if there is in this case, but in general it's not necessary). You have one side say "hey, pretend this code I've got here is C", and the other side says "hey, let me call this function that I think is C". Every language has a way of calling C functions, so you don't even necessarily need a C compiler.

> I don't know if there is in this case

In this case, there indeed is no .c compilation unit between the C++ and Rust code that see each other via C linkage.
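A minimal sketch of what that looks like on the Rust side (the function name here is illustrative, not from encoding_rs):

```rust
// Exported with C linkage: no .c file anywhere, but a C++ caller can
// declare it as:  extern "C" { uint32_t add_sat(uint32_t, uint32_t); }
#[no_mangle]
pub extern "C" fn add_sat(a: u32, b: u32) -> u32 {
    a.saturating_add(b)
}

fn main() {
    // The attribute only fixes the symbol name and calling convention;
    // the function remains callable from Rust as well.
    assert_eq!(add_sat(u32::MAX, 1), u32::MAX);
}
```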


You should really consider reading the article before commenting on it. The mechanisms for deleting heap memory obtained from Rust are discussed in depth.

The epilogue alone was worth the price of admission. Having one compiler tell the other what to do via code generation is a great way around the lack of ABI compatibility.

I wish Rust, C++, Go, D... could get together and agree on a common ABI/module format. It doesn't need to be complete (which is to say, I'm fine with having to use C occasionally), just enough that 90% of my program can be written/rewritten in whatever language makes sense and used in the other without having to drop to C.

For starters, it needs to have classes (probably using PIMPL-like things internally by default). It needs to have some sort of error handling. It needs to support some basic data types like std::vector (but they can start from scratch).

Edit: my fingers typed API not ABI first...


What you're asking for is a common ABI. To have shared classes, basic data structures, or even error handling, a binary format needs to be defined and adhered to. Of course, to bake all that into languages (and their runtimes), is a great responsibility; but more than that, it can easily be a severe limitation in the way any language may like to do things.

So we have things a little more basic than that, at various levels:

1. API: C. This takes care of a bit more than just data interop and common functions; it maintains an interface, takes care of platform portability concerns (calling conventions, linkage, runtimes), and allows future growth.

2. Modules: From executables and shared objects to services running on remote machines, accessible over TCP/TLS/HTTP(S).

3. Conventions: From free-form (de)serialization formats like JSON/YAML, to read-optimised formats that come with their own implementation code like Cap'n Proto and FlatBuffers, to share-optimised formats that come with their own read/write/live-share implementation code like Apache Arrow. These come the closest to being a common "ABI", but (thankfully) restrict themselves from being ingrained into a language.


> What you're asking for is a common ABI. To have shared classes, basic data structures, or even error handling, a binary format needs to be defined and adhered to. Of course, to bake all that into languages (and their runtimes), is a great responsibility; but more than that, it can easily be a severe limitation in the way any language may like to do things.

Being able to speak a common ABI doesn't necessarily guarantee that your local ABI has to conform identically to it. In Rust, for example, you need to add a repr(C) to be able to actually guarantee that the layout conforms to the C ABI. CORBA IDL is something that tried to achieve this kind of capability.
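Concretely (a toy example, not CORBA), the attribute opts one type into the C layout rules while the rest of the program keeps the default Rust layout:

```rust
// Default Rust layout is free to reorder fields; repr(C) pins this
// struct to the platform C ABI so a C or C++ definition can match it.
#[repr(C)]
pub struct Point {
    pub x: i32,
    pub y: i32,
}

fn main() {
    // Two i32 fields under C layout: no reordering, no padding here.
    assert_eq!(std::mem::size_of::<Point>(), 8);
}
```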


> Being able to speak a common ABI doesn't necessarily guarantee that your local ABI has to conform identically to it.

No, but you still need to support the common ABI. E.g., for Rust, that's an easy job, compared to Go.


> For starters it needs to have classes

But, Rust doesn't have classes.

> It needs to have some sort of error handling

But, Rust doesn't use exceptions, just algebraic data types (`Result`).

> It needs to support some basic data like std::vector

But, these aren't primitives, these are just std library constructs. Which is arguably an advantage, if you get a good FFI you can just use the other language's libraries directly.

A higher level FFI is a good idea, I think you're going to discover that the intersection of languages is surprisingly close to C though.


Rust has things that at a binary level are the same as classes, so they could easily be part of a cross-language ABI.

And so what if std::vector isn't primitive? It's an extremely common construct that should be part of a cross-language ABI. Hell, it's simpler than strings, and those are part of the C ABI.

This is one of those things that everyone is going to nay-say until someone actually does it. The same happened with webassembly. I remember when it was deemed impossible and PNaCl would never work etc. etc.


> Rust has things that at a binary level are the same as classes, so they could easily be part of a cross-language ABI.

They are not "at a binary level the same as classes" (whose classes? C++'s classes?), at least not the moment things like vtables get involved.


Presumably this would be classes in whatever sense the ABI would define them. Which could be defined in such a way that they can be mapped straightforwardly to Rust structs and traits.

> Presumably this would be classes in whatever sense the ABI would define them. Which could be defined in such a way that they can be mapped straightforwardly to Rust structs and traits.

Which would then not map straightforwardly to C++ classes, because the vtable pointer would be in the wrong spot.

You can't have an ABI that makes two things that represent things in concretely different ways somehow resolve that inherent contradiction.


> For starters it needs to have classes

Rust doesn't have classes. And as discussed in the article, Rust puts the vtable pointer on the reference while C++ puts it on the pointee, so even fully abstract C++ classes ("interfaces") and Rust traits (when used with type erasure rather than with generics) are implementation-wise different even if they look conceptually similar.
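The difference is directly observable (a toy trait for illustration): a Rust trait-object reference is a "fat" pointer that carries the vtable pointer itself, whereas a reference to a concrete type is a single plain pointer.

```rust
use std::mem::size_of;

trait Draw {
    fn draw(&self);
}

struct Circle;
impl Draw for Circle {
    fn draw(&self) {}
}

fn main() {
    // &dyn Draw = data pointer + vtable pointer, both on the reference.
    assert_eq!(size_of::<&dyn Draw>(), 2 * size_of::<usize>());
    // A plain reference is one pointer; the pointee stores no vtable.
    assert_eq!(size_of::<&Circle>(), size_of::<usize>());
}
```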

Go doesn't set up the stack in the same way as the others.


That could actually somewhat work for C++ and Rust (although getting things like generics/templates across would be a huge challenge and you'd lose Rust's safety guarantees at the unsafe C++ interface. Also good luck dealing with exceptions). For Go and D you have a bigger problem: garbage collection. Having two different runtimes play well with each other is far from trivial and could hurt performance.

The main reason the C ABI is de-facto the lingua franca for language bindings is because it's almost runtime-less. You basically only need to agree on things like stack layout and where the parameters/return values go. It's a super low bar.

Dealing with C++ objects, overloaded functions, namespacing, exceptions etc... Now that's a whole different can of worms. How would you automagically map something like std::cout and its operator<< in Rust or Go for instance?


This has somewhat been solved by cgo by having it documented clearly, and turning a runtime flag on that tells you whether you're doing something you're not supposed to.

> Dealing with C++ objects, overloaded functions, namespacing, exceptions etc... Now that's a whole different can of worms. How would you automagically map something like std::cout and its operator<< in Rust or Go for instance?

You'd define std::cout and operator<< in terms of a sane underlying reader/writer ABI?


Right, but my point is that then you're generating a complex FFI (which requires high-level info about how "operator<<" on std::cout implements streaming and not bit-shifting as usual), not a simple binding. Doing that automatically and "standardly" would be quite complex. Those aren't primitive types, they're abstractions built on top of the language's type system; g++ is not aware of what a "stream" is in C++, nor is rustc aware of what the Read and Write traits mean.

This was the dream of C#, Managed C++, F# and VB.Net on the .Net runtime. It worked out OK for some people, but IMO not all that great -- all the languages trended towards being syntactic sugar on top of C#. The hard part is rarely the language, but learning the semantics of the platform the language runs on, and those languages have very different opinions of what that platform should be.

That carries on as UWP, and as someone else pointed to me, Microsoft is going multiple platform with another iteration of it, xplat.

The ABI isn't the hard part. You need to figure out how to track memory ownership when those languages have radically different memory ownership models and often limited support for "foreign"/"unowned" memory.

Passing basic datastructures back and forth isn't hard and there are tools that will autogenerate the code to do that. But when you start using them you'll find they're not solving the hard part of the problem.


No, the ABI is actually the hard part. C++ doesn't even have a stable ABI within just its own ecosystem, ignoring interop with other languages.

Memory ownership is trivial by comparison, particularly in this case when all the languages have the equivalent of malloc/free, so you just make that part of the ABI surface.


On Windows it is called COM/UWP.

I think Microsoft might also be working on a newer system with that goal: https://github.com/Microsoft/xlang

I haven't really looked at that in detail, but I would find any tooling for better language interoperability very worthwhile.


Many thanks for the heads up, I wasn't aware of it.

Looks interesting, maybe another step into Microsoft Linux.


That may be portable across languages and runtimes, but it comes at the cost of portability across platforms.

There's nothing about COM or WinRT that is inherently unportable across platforms. It's just a standard ABI, one that can be expressed in C at that - so any platform that has a C ABI can have either of those implemented. It's just that no one ever really bothered.

GObject is a good example of a truly cross-platform ABI like that.


GObject (-introspection).

Interestingly enough, the Vala language compiles to C with GObject for portability:

https://en.wikipedia.org/wiki/Vala_(programming_language)

It was a clever decision. I'm not sure if the right one but a nice thing to try to see how well it works. I wonder if anyone did a "Looking Back"-style write-up on how it's worked out so far with pro's and con's.


The GTK+ and GStreamer bindings for Rust are already mostly generated code (and seem to work very well): https://github.com/gtk-rs/gir/blob/master/README.md

I'm not sure what the exact status of extending GObject hierarchies from Rust is. See https://github.com/gtk-rs/gobject-subclass


It works relatively well.

I just have two complaints.

The build times, as Gtk-rs seems to do code generation during the build, and the usual `Rc<RefCell<...>>` dance for accessing internal widget data in the callbacks.


Vala is.. good and bad. (Like most things.)

Certainly successful at bringing new developers into the Gtk world (especially thanks to elementary OS and its developer documentation). Familiar syntax for C# devs.

The downside is, well, memory bugs. For example, sometimes perfectly reasonable Vala code results in error messages like "Unable to parse accelerator '\u0008\x8dn\u000b\u0008': ignored request to install 501 accelerators" or downright segfaults: https://gitlab.gnome.org/GNOME/vala/issues/626


QObject (-introspection).

You cannot have it all.

Either you get a bytecode format, or a platform specific one.


> I wish Rust, C++, Go, D... could get together and agree on a common ABI/module format.

Don't forget C, with the amount of C code out there you'll have to make sure it can handle this new format. And by the time you limit yourself to the lowest common denominator between all those languages you'd probably just reinvent the C ABI anyway.


The point is that the C lowest common denominator is a bit low. I'd like something where, by dropping C, we can aim a little higher.

I'm fine with the ABI having an explicit `this` pointer (which Python already does for classes), which allows C to call/implement classes as well, but it needs to be in the ABI (languages with first-class classes can choose to hide the `this` pointer and vtable).

Operator overloading (which forces name mangling or something like it) is tricky though. As are exceptions. I don't have answers to that.


> The point is the C lowest common denominator is a bit low. I'd like something that by dropping C we can be a little higher.

I don't think it really raises the bar much. Rust, for example, isn't an OO language, so classes and vtables won't be included. Go doesn't have templates, so off they go. Operator overloading, multiple inheritance, exceptions, garbage collection: all gone.

What you're left with is essentially the C ABI. Being this lowest common denominator is why it's the common ABI, just as much as the success of the language itself.



Never ever ever gonna happen, although some progress can be made in areas: D is working its way (quite a lot done already) to being ABI-compatible with C++.

What could be done, however, is to provide an ABI-as-a-library at each end and then pass that over the C interface: a textual specification, statically processed at each end into some kind of parser. Still a borderline suicidal effort, however, as not all languages (Go, I think?) treat the address space equally, among a clusterfuck of other things. Any form of parametric polymorphism would be a disaster too, I assume.


This is a great idea (ignore the naysayers). It's been discussed before here and in the link 2 comments later: https://github.com/rust-lang/rfcs/issues/600#issuecomment-40...

Reminds me of COM.


Maybe, but COM's reputation is that it is too complex. From what I can tell, the reputation is well deserved.

It's too many things at once. The ABI is excellent and is used all the time, see DirectX, MediaFoundation, .NET, UWP, etc, they all use good parts of COM.

Scripting support (IDispatch, WSH) is OK, despite being a bit outdated.

Registration infrastructure (HKCR), GUI widgets (ActiveX), threading model (these apartments) and inheritance of implementation (aggregation) ain't good, way overengineered.

Compound document format (structured storage), RPC both local and networked, are just horrible.


Much of COM's horror comes from being an OO interop system designed to work in non-OO C. All sorts of manual reference counting. If you use COM objects in C# it's much less horrifying, although there's a certain amount of "body horror" involved in firing up one whole windows application inside another one. You can run IE in a cell of an Excel spreadsheet, but that doesn't make it a good idea.

> You can run IE in a cell of an Excel spreadsheet, but that doesn't make it a good idea.

Embedding a MS Works worksheet in a MS Works document however was sometimes a good idea in '95.

I've yet to see a better implementation than that and I find it really disappointing that the state of the art has moved backwards in this area (in addition to UX and performance relative to hardware specs).


I agree. Apple also had an effort called "OpenDoc" which they gave up on.

Working with VB made COM really nice. Both for writing COM servers and consuming them. C# is actually a step back because it doesn't do the reference counting right and instead relies on GC to free objects.

Core COM is basically predefined vtable layout and object reference comparison rules combined with IUnknown for lifetime and type queries. It can be verbose, but it's very simple.

Stuff like OLE Automation is very much not simple, but it's not a requisite part of it all.


COM works pretty well in Delphi, .NET, C++ Builder, MFC, C++/CX and now C++/WinRT.

The problems and boilerplate usually affect those that insist in using the C level APIs.


I have thought about this - how a REST API elides a lot of the problems in integrating disparate stacks...

How about something similar? Maybe use a ring buffer as a mediator and push/pull a binary encoding format (MessagePack?). You'd probably need a little C for the bridge.


Sounds like a huge effort just to get Rust in. I predict huge problems for the Firefox project if they continue this mixing. It's a huge amount of added complexity.

> ... just to get Rust in

You make it sound like they're just cargo culting a shiny new language when in fact they invented the language in the first place to solve their complexity problems with C++.


I mean, developing Rust in the first place is probably a huge added complexity on its own; clearly they're not doing this on a whim. There's also probably a huge upfront cost that's going to pay off as more and more Rust code makes it into Firefox.

It's hardly any more intrinsic complexity than adding a C library to a C++ project.


