Understanding Objective-C by transpiling it to C++ (jviotti.com)
94 points by ingve on Dec 2, 2023 | 68 comments



Nifty!

Of course, if one is interested in really understanding the nitty-gritty implementation of a lot of Objective-C (and UIKit classes), then there is probably no better place to poke around than Mike Ash's blog. He now works for Apple and hasn't posted much since, but the level of detail combined with the clarity of his writing made it a go-to source for everyone in the early days of iOS development. It demonstrates why his getting hired by Apple for systems/language engineering was highly overdetermined.

https://www.mikeash.com/pyblog/


Portable Object Compiler may be of interest, too.

It's an open source compiler for a more traditional dialect of Objective-C, and it generates C code:

https://sourceforge.net/projects/objc/

The bootstrap source release of Portable Object Compiler itself shows the kind of C code the compiler generates from Objective-C:

https://sourceforge.net/projects/objc/files/bootstrap/


POC isn’t really worth looking at as an Objective-C compiler; its author was kind of a nut who long insisted that NeXT “ruined” Objective-C with its additions back in the early 1990s.


I loved Objective-C when I was developing on the NeXT. I haven't done much with it since then. I don't think it really got the credit it deserved. Templates would have been nice to have with it. Messages were so useful.


Nah, template metaprogramming really doesn’t fit with the language philosophy; every message-send generates (essentially) the same code because it needs to be handled at runtime, so templates won’t buy you anything.

However, the lightweight generics that we added a decade or so ago are great for when you want to add additional type hints to catch compile-time bugs, just like explicit type declarations in general: they don’t affect the generated code, they just provide additional compiler diagnostics.
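
A minimal sketch of what that looks like in practice (the collection types and values here are just illustrative):

    // The generic parameters only exist at compile time; the generated code is
    // the same as for a plain NSArray or NSDictionary.
    NSArray<NSString *> *names = @[@"Alice", @"Bob"];
    NSString *first = names.firstObject;               // inferred as NSString *
    NSMutableDictionary<NSString *, NSNumber *> *ages = [NSMutableDictionary dictionary];
    ages[@"Alice"] = @42;
    // ages[@"Bob"] = @"not a number";                 // warning: incompatible pointer types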

Swift on the other hand goes far enough down the same terrible rabbit hole as C++ that, as with C++, its type system was recently demonstrated to be Turing complete. Thus long term maintenance of large, complex codebases is going to be much more difficult because you can no longer look at code in isolation and have a basic idea of what it does.


I disagree. Templates would have given messages a whole new world. I haven't looked at Objective-C generics. They didn't exist in 1989.


I know basically nothing about the Apple language ecosystem but why does no one ever mention or seemingly use Objective C++?

Is it that Objective C is a better version of C with OO and C++ features?

If that’s the case why was Objective C++ created?


I've always understood Objective-C++ to be a way to mix C++ source code into an Objective-C code base, not really a new language but more of a "mode" of linking / compiling.

> Objective-C++ does not add C++ features to Objective-C classes, nor does it add Objective-C features to C++ classes. For example, you cannot use Objective-C syntax to call a C++ object, you cannot add constructors or destructors to an Objective-C object, and you cannot use the keywords this and self interchangeably. The class hierarchies are separate; a C++ class cannot inherit from an Objective-C class, and an Objective-C class cannot inherit from a C++ class. In addition, multi-language exception handling is not supported

From a Stack Overflow comment / archived Apple docs: https://stackoverflow.com/a/3684159 https://web.archive.org/web/20101203170217/http://developer....


Yes, in my experience the most common usages are to pair an Objective-C Cocoa UI to cross platform C++ internals, to use C++ for more performance critical parts of an app, or even to just be able to take advantage of the plethora of C++ libraries without going all-in on C++.

Things might’ve changed with Swift being around now, but as I understand one place where the Obj-C UI + C++ internals setup used to be common was Audio Unit extensions due to the high latency sensitivity of anything working with audio.


That first one is probably the most common, you'll see .mm files for Apple-specific platform code in many big cross-platform C++ projects.


Author here. We do a LOT of Objective-C++ at Postman (https://www.postman.com), as our upcoming desktop framework is primarily C++.


I bet there's a ton of Objective-C++ code around that serves as a shim between a C++ codebase and macOS/iOS system frameworks. The difference from plain ObjC isn't all that big except that the "C part" uses C++ semantics (I guess that's why nobody explicitly mentions it as ObjC++ but just files it under the ObjC umbrella name).
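
The typical shape of such a shim is roughly this (a sketch with made-up names): the header stays plain Objective-C, and only the .mm implementation knows about the C++ type it wraps.

    // Engine.h -- plain Objective-C, importable from any .m file
    #import <Foundation/Foundation.h>
    @interface Engine : NSObject
    - (void)start;
    - (double)currentLoad;
    @end

    // Engine.mm -- Objective-C++: the implementation is free to use C++
    #import "Engine.h"
    #include <memory>
    #include "engine_core.hpp"              // hypothetical cross-platform C++ core

    @implementation Engine {
        std::unique_ptr<core::Engine> _impl; // C++ object held in an ObjC ivar
    }
    - (instancetype)init {
        if ((self = [super init])) {
            _impl = std::make_unique<core::Engine>();
        }
        return self;
    }
    - (void)start { _impl->start(); }
    - (double)currentLoad { return _impl->currentLoad(); }
    @end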


Objective C++ is just a way to mix C++ and Objective C code in the same file. It’s largely a convenience and not its own language so to speak.

It’s functionally the equivalent of how you can use C code within C++ code, and a hint to the compiler to allow parsing of C++ syntax as well as ObjC


Objective-C++ is used extensively, it’s just not covered in most discussions of ObjC because then you’re having to explain the exciting interaction between ObjC object semantics and C++’s; the complexity skyrockets.

The biggest public user of it would likely be webkit, but plenty of closed software both in and outside of apple use it.

But the reality is that it is at this point primarily a bridging language. After the introduction of ARC, one of the biggest wins/reasons to use it over raw ObjC went away.

But it’s still tremendously valuable as a way to have a nice API that is also ABI stable in a way that isn’t the C and C++ misery pit. Of course now swift has the same ABI stability support as well as a slew of other improvements so there seems ever less reason to write new objc.


It sees heavy use, most of it just isn’t public. WebKit is, but for example Facebook uses it for their apps.


What's there to mention about it?

Long ago, C++ exceptions and Objective-C exceptions didn't play well together, but they've been unified so that's not an issue anymore.

(In the beginning, Objective-C exceptions were macros around setjmp()/longjmp(). Objective-C 2.0 added actual language-level support with @try/@catch/@finally/@throw, which is compatible with C++ exceptions.)
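
For reference, the modern syntax looks like this (the parser object is just a stand-in for the example):

    @try {
        [parser parseData:data];            // hypothetical call that may raise
    } @catch (NSException *e) {
        NSLog(@"caught %@: %@", e.name, e.reason);
    } @finally {
        [parser cleanup];                   // runs whether or not an exception was raised
    }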


You shouldn't use Objective-C exceptions though; most code (and by default all ARC code) is not exception safe and will leak memory if you jump through it.


Objective-C++ was created to enable the creation of the Lotus Improv spreadsheet. The underlying spreadsheet engine was developed in C++, and bridging bidirectionally via C to Objective-C for the UI was judged to be too inconvenient, and the developer tools folks at NeXT realized the C++ and Objective-C grammars were easily reconcilable so they put them together to allow mixing at the statement level. It worked great!


I have started to do the same thing with go but it's largely unfinished. The next thing I will try to do when I have some time is to implement goroutine with the new c++20 coroutines.

(https://github.com/Rokhan/gocpp)


Slightly off topic, but did anyone ever use Objective-J [0], specifically Cappuccino, which was a great framework for building JS frontends?

I built an application using Cappuccino after having built a bunch of apps for iPhone and it was the most intuitive framework after getting involved in Xcode and Co.

Is Cappuccino still a thing since Apple went to Swift?

[0] https://www.cappuccino.dev/learn/objective-j.html


I built an app using Cappuccino and Objective-J back around 2010.

We even built an EOF/CoreData clone on top of it. Good times.

It was incredible to work with, but ultimately not very “webby”.

I’d rather have WebObjects back. It should have never died.


Looking back, it's amazing how much effort the authors made to make it Objective-C-like and not at all like any other web technology.

I guess that made it too niche for a broader audience.


I saw a few presentations from the 280 North guys. It was clearly impressive what these 3 guys built. Having learned the lessons of Cocoa/Objective-C/Interface Builder well, they realized they needed to build three separate but interconnected things: the framework (Cappuccino), the language (Objective-J), and the GUI builder (Atlas).

They sold to Motorola for $20 million in 2010, and that was kind of the last I heard.

https://news.ycombinator.com/item?id=1631002


I had no idea Clang had this rewriter, what an odd feature to keep around for all these years.

I wonder if Apple actually uses/used it for anything.


Windows iTunes if memory serves.


Do you have a reference for this? Would love to include it in the post


No, just folklore from folks I work with. I used to work at Apple and now work somewhere where we do a lot of iOS work, with a lot of former Apple people who worked on Foundation, AppKit, clang, etc. in another life.


It isn’t really supported anymore so I don’t think so.


Cool! You might even be able to run Rellic [1,2] on the LLVM IR produced by Clang when compiling Objective-C code. If it works, this will spit out goto-free C code, not C++.

[1] https://github.com/lifting-bits/rellic

[2] https://blog.trailofbits.com/2022/05/17/interactive-decompil...


It’s mostly just objc_msgSend and a couple of other methods in that header.


I’m not sure if the author’s comment that the rewriter isn’t used is accurate.

It might have been used for the windows port of iTunes.


Objective-C is interesting. The problem I had back when we were using it was lack of performance. For us, in many cases, refactoring code down to C easily improved performance over 400 times. Code that struggled to deliver output in a usable time scale quickly became better than real time.

Today, in computing, we are using far more energy than necessary because of the proliferation of inefficient languages in server, desktop and mobile codebases. I would guess we could achieve a non-trivial improvement in carbon footprint if computing were to have the same kinds of energy efficiency requirements we impose on appliances and vehicles.

This study [0], published in 2017, evaluated the time, energy and memory efficiencies of some thirty languages. Sadly they did not include Objective-C. Swift did make the list.

Table 4 shows the normalized results. C was the most energy- and time-efficient and very close (third) in memory efficiency. Swift consumed nearly three times more energy, was four times slower, and used nearly three times the memory. Python requires 76 times more energy, 72 times more time, and three times more memory.

Clearly there is a cost to choosing a language along these vectors. Given the ubiquity of computing in human life today, I am sometimes surprised that we have not become far more critical about permanently baking-in terribly bad energy inefficiency into systems and products. Developing energy-efficient software using languages like C, C++ and Rust isn't really that difficult.

[0] https://greenlab.di.uminho.pt/wp-content/uploads/2017/10/sle...


ObjC can be plenty fast, if you know what you're doing. And there really isn't that much to learn; it's just that most people don't learn and so they keep running into landmines. I'll cover the main points right now, in no particular order.

1. Allocation and deallocation involve calls to calloc/free, which are rather costly. It's easy to accidentally create and destroy an NSObject on each iteration of a hot loop. Be aware that this comes with a performance hit, and can easily come to dominate the time taken by that loop.

2. Retains and releases are pretty fast, especially on Apple's processors (which are optimized for non-contended atomic operations). However, they are atomic operations – some fancy compare-and-swap thing usually – and so you should try to avoid doing them on every iteration of a hot loop.

3. ObjC method calls are slower than C++ vtable dispatch, which in turn is slower than static function calls. Also, there's no inlining opportunity. There is caching to make method calls faster if you call the same method a lot, and the code for cache lookups is impressively fast, but if you need some code to be Fast, consider using C-style functions in critical parts.

4. When in doubt, use the Instruments profiler to find bottlenecks. It's really good.

And now a small grab-bag of more minor ones:

5. +[NSString stringWithFormat:] is surprisingly slow, or it was when last I checked. Its CoreFoundation counterpart is similarly slow. (It's the same underlying code.)

6. When you load a plist, most of which will be deallocated almost immediately, surrounding this with an @autoreleasepool block often does wonders to reduce heap fragmentation. Same goes for other operations which will tend to produce large memory spikes of objects that will stay allocated pending an autorelease pop.

7. The Accelerate framework defines a number of SIMD data types; e.g. simd_float8 is a vector of 8 single-precision floats. You can do arithmetic on them like a normal numeric data type, and the compiler will automatically emit cross-platform SIMD code. If your needs are fairly basic, this is a remarkably easy way to do SIMD programming. I've managed to get ~8x speedups with it before. (A pity that the docs are so lacking.)
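
To give a flavor of point 7, a minimal sketch (plain C, so it drops straight into an Objective-C file, assuming Apple's <simd/simd.h> is available):

    #include <simd/simd.h>
    #include <stdio.h>

    int main(void) {
        simd_float8 a = {1, 2, 3, 4, 5, 6, 7, 8};
        simd_float8 b = a * a + 2.0f;   // element-wise multiply-add; the scalar is broadcast
        for (int i = 0; i < 8; i++) {
            printf("%g\n", b[i]);       // lanes are indexable like an array
        }
        return 0;
    }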


The object-oriented part of Objective-C was never meant for high performance code. Apple documentation always recommended using C when performance is an issue, which can be easily done because Objective-C is in fact C plus a bunch of object-oriented concepts.

However, in the end if you want every bit of performance you have to use SIMD intrinsics or write asm directly, so even C can be 8x or more slower than manually optimized code.


Objective-C isn’t actually that bad as long as you’re aware of what constraints it puts on the compiler. People like to go “oh method calls are so slow!” but you can do a billion of them a second. The real performance win comes from selective use of optimization-friendly code on hot paths that don’t act as a black box to the compiler. Some teams move at Lightspeed to some weird janky architecture that just sucks in every aspect because they think it will help, and it usually doesn’t.


The real slowdown of Objective-C method calls is at startup, where the caches haven't been filled yet. Pure Objective-C apps typically spend 5-15% of startup time in objc_msgSend.


I haven’t looked into this for a while, but are method caches not included in the dyld closure?


I’m not sure


The key point is:

As a percentage of the totality of software engineering, virtually nobody looks at energy optimization.

Smart phones? Well, the manufacturers likely pay attention to this to some degree. However, I can guarantee you that almost none of the millions of app developers working on third party apps ever look at energy as an optimization vector.

That was my point, the carbon footprint of computing systems is dominated --and permanently marked-- by the energy-dominant language running on it.


It’s actually dominated by the time complexity of the code running on it.


> It’s actually dominated by the time complexity of the code running on it.

No. Please read the paper I posted. It’s the language.


I’ve read it; it’s a garbage paper. I don’t fault the authors for doing the study but applying the results to anything in reality is almost tantamount to malpractice. It just means AWS people go up on stage and make fools of themselves pushing Rust for environmental impact or people like you try to explain how iOS apps kill your battery to one of the few people who actually has worked to optimize this. The first thing we always investigated was doing less work and fixing bad algorithms, followed by very localized language replacement on hot paths, then as a last resort wholesale rewrites. The ratio of these options was close to 90:9:1.


You sound angry.

You also sound like you are talking to a bunch of morons who don't understand the difference between time complexity and the computational/implementation realities of different languages.

On a personal note, I started my hardware/software development career in the early '80s, writing non-trivial applications in assembly language for a dozen different processors and, of course, designing entire computers from scratch. From there I went on through languages such as C, C++, Forth, APL, LISP, various Basics and, eventually, pretty much all of the "modern" languages. During that time I designed and manufactured custom bit-sliced micro-coded processors, with discrete chips in the early days, then PLDs/PALs and, later on, FPGAs. Sure, I use Python a lot today, yet I know what I am walking into when I do. I also use lots of C/C++ and, these days, ARM assembler.

Slow code that is both time and energy inefficient is a modern reality. Virtually nobody coming out of a typical university CS curriculum today is exposed to coding without objects. Bloated, slow and, yes, high energy-consuming code is almost normal these days. Look at almost any open source codebase for evidence of this. And, frankly, as machines become faster, nobody cares, because inefficient code works just fine.

Sometimes I compare it to sailing. When there's lots of wind everyone is a sailor. When wind is minimal, you better know what you are doing. Today people benefit from processors running multiple cores at multi-GHz clock rates with tens of gigabytes of memory and terabytes of storage. Of course that leads to bloat and inefficiency at every level! Why wouldn't it?


Yes, that paper does make me angry :) I understand and agree with you that most code that exists today is inefficient and slow, but there is room for "let's go after the lowest-hanging fruit" in the discussion. Almost all projects these days are object-oriented in some form because complexity demands it. Performance and maintainability for modern software can be at odds along some axes and we pick the latter because our hardware gets better, and we might as well use it for that. That doesn't mean that there aren't places where you can see easy wins or that there isn't horribly inefficient software that ought to be optimized regardless, but when you look at the broader picture, "let's rewrite everything because it's in an inefficient language" is generally not actually an optimal strategy.


All things being equal, no. It's the language.


All things are never equal, so this isn’t particularly relevant or useful.


That philosophy isn't particularly relevant or useful. How else can you truthfully compare 2 things? Do you always pre-emptively give up and say "well things are always kind of different anyway so it doesn't matter how they compare"?

PS, big O isn't everything. The constant factors that big O hides are in fact meaningful and can make or break an algorithm. Some languages make some constants worse by having extra overhead in some places or better when they have special facilities.

See also: sorting algorithms deferring to insertion sort O(n²) for small runs, galactic algorithms.

See also: languages where everything is a pointer to dynamic memory, vs languages that have actual values. The language default "dictionary" implementation and its time and space complexities. I could go on. For energy efficiency, or memory efficiency, or speed, choice of language does matter, and depending on the size and nature of the data, can overshadow algorithmic choices.
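
To make the "everything is a pointer" point concrete in the language this thread is about: both loops below are O(n), but the boxed version pays a heap allocation per element plus a message send and an unbox on every read (a deliberately contrived sketch):

    #import <Foundation/Foundation.h>
    #include <stdlib.h>

    int main(void) {
        const int n = 1000000;

        // Flat C array: contiguous doubles, no heap object per element.
        double *values = calloc(n, sizeof(double));
        double sum1 = 0;
        for (int i = 0; i < n; i++) { values[i] = i; sum1 += values[i]; }

        // NSArray of NSNumber: one heap object per element, plus an
        // objc_msgSend and an unbox on every read.
        double sum2 = 0;
        @autoreleasepool {
            NSMutableArray<NSNumber *> *boxed = [NSMutableArray arrayWithCapacity:n];
            for (int i = 0; i < n; i++) { [boxed addObject:@(i)]; }
            for (NSNumber *num in boxed) { sum2 += num.doubleValue; }
        }
        NSLog(@"%f %f", sum1, sum2);
        free(values);
        return 0;
    }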


I'm not saying those things don't matter, or that it's not possible that you can overcome some algorithmic complexity just by brute forcing it with a smaller constant. I'm saying that it is exceptionally rare that you are ever in a situation where you can go "oh I have these two identical pieces of software, let's just pick the one that uses the efficient language". Really, it comes down to this question: "this code is slow, what should I do to improve it?". The answer is usually "fix the time complexity somewhere or do less work" and it becomes good enough. Maybe in some isolated cases I might rewrite a bit of it in a different language if appropriate. When I say "time complexity dominates footprint" I mean that most of the code changes I make are along the lines of changing something to take "100 ms → 2 ms" where the other language would be "25 ms → 0.5 ms", so you can see why I would not say that the language dominates here.


NeXTSTEP drivers were written in Objective-C.


> Rust isn't really that difficult

Fun Rust novice exercise: Write a linked list implementation in Rust.

Regarding the carbon footprint stuff, I think any runtime performance efficiency stuff might often be outweighed by the massive compile times for development and CI.


I am sure you know enough about Rust to know that writing a linked list in Rust is not a “novice exercise”. It is also not something you need in order to use the language.

This is like a C# dev trolling a C developer by saying, “first novice exercise: HTML-encode a UTF-8 string.” I mean, it is a single line of code in C#, so how hard could it be in C?

Or, it is like a C dev telling a Java programmer that their “first novice exercise” should be to write a fixed binary layout to a device driver or even just to call directly into an OS system call. Both are trivial in C after all. How about using a hash table for something though? Reasonably big job in C but new HashMap in Java.

Rust is designed to prevent exactly the kind of thing you do to create a linked list and it is designed to make it difficult for a good reason. It is a cherry picked example meant to sound smart but, in reality, it is an eye-rollingly dumb thing to say.

In DOS, I can write a program to dynamically overwrite and extend the behaviour of the operating system in RAM. In Linux, I cannot easily do that. I guess that means DOS is more advanced? To me, this sounds like the argument you are making about Rust.

I do not care which language people like. There are pros and cons of each and legit arguments to use one over the other. Why not use one of the valid arguments instead of dumb gotcha comments that only tell us the languages are different and not which one is better?


Writing a linked list is a novice exercise in most programming languages. It’s also a novice exercise given in first year of CS at universities. It might not be the most efficient data structure, but it’s a really trivial one.


In assembly it’s really easy to write self-modifying code. It used to be the norm in fact. But today’s high-level programming languages generally make it impossible, and most operating systems also prevent mutating code in memory (no-execute page protection, etc.) Yet there are still use cases like JIT compilers.

The linked list is the data structure equivalent of self-modifying code. It’s easy for a novice to understand, but there’s no great reason for anyone to actually do it today outside of a specialized library.


>This is like a C# dev trolling a C developer by saying, “first novice exercise: HTML-encode a UTF-8 string.” I mean, it is a single line of code in C#, so how hard could it be in C?

It’s not like that at all. The difference you’re identifying is that C# has it in the standard library while C does not. But it’s a conceptually intricate problem, and implementing it from scratch in both C and C# would actually be somewhat similar, though C# obviously has some convenience features that would make the code tighter.

Linked lists are very simple and GP’s complaint is that Rust makes it very difficult to implement them.


The difference is that both C and C# let you use pointers and make it your responsibility to use them correctly. C# provides the facilities, both in the naked language and in the standard library, to not need pointers, whereas in C you need to use pointers to do even trivial tasks, and so the standard library makes extensive use of them. Pointers are a core language feature of C so obviously it is trivial to make use of them (though not at all trivial to use them safely). Rust’s core principle is that unsafe memory access should be prevented at the language level. Unsurprisingly, a data structure completely reliant on potentially unsafe use of memory will be difficult to implement.

Both C# and Rust include features in their standard libraries so that implementing your own linked lists is unnecessary precisely because you are not supposed to. This is “non-idiomatic” as they say. C does not include linked lists in the standard library because this is exactly the kind of data structure you are meant to create when needed as the language makes it completely trivial to implement them. In C, a linked list is completely idiomatic and many, many C projects use them.

If you write an OS in C, you will almost certainly create a linked list structure. Linux did. If you write an OS in Rust, you do not need to do that.
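
For what it's worth, the "completely trivial" C version really is about this much code; the flip side is that nothing stops you from leaking nodes or using them after free (a minimal sketch, no error handling):

    #include <stdlib.h>

    struct node {
        int value;
        struct node *next;
    };

    // Push a new node onto the front of the list and return the new head.
    static struct node *push(struct node *head, int value) {
        struct node *n = malloc(sizeof *n);
        n->value = value;
        n->next = head;
        return n;
    }

    static void free_list(struct node *head) {
        while (head) {
            struct node *next = head->next;
            free(head);
            head = next;
        }
    }

    int main(void) {
        struct node *head = NULL;
        for (int i = 0; i < 3; i++) head = push(head, i);
        free_list(head);
        return 0;
    }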


Your point would be valid if Rust had code examples that were actually simpler to write in Rust than in other recent languages.

My understanding is that rust provides safety guarantees that no other language matches, but at the cost of pretty much everything else. In that regard, the "linked list" example is a pretty good (although maybe over-pathological) example of the hard parts of the language.


> My understanding is that rust provides safety guarantees that no other language matches, but at the cost of pretty much everything else. In that regard, the "linked list" example is a pretty good (although maybe over-pathological) example of the hard parts of the language.

The build system is easier to use than any other language I know of. Integrating third-party libraries into a project is night-and-day between C++ and Rust, and the Rust ecosystem benefits tremendously from it.

Energy efficiency is also extremely good, as is performance.

https://aws.amazon.com/blogs/opensource/sustainability-with-...

And as far as code confidence, I don't know of any other language that compares, either. You can dodge memory safety errors in Java or C# thanks to the GC, but exceptions keep flow control from being as explicit as Rust since your function can be silently pre-empted.

I see Java or C# as paying a small (and in many cases fully justified) cost to smooth things over so there's less friction to development. Go additionally adopts an extreme focus on keeping the language small and simple so it's easy to learn.

Rust declines to pay that runtime cost, but requires you to explicitly address those friction points, while C++ lets you hang yourself.

Rust also has a carefully curated standard library and idiomatic patterns, while C++ has addressed the problem of lessons learned by continually adding to the standard library without keeping different features compatible or consistent with each other.

So, yes, Rust will have a steeper learning curve, but it levels off long before C++'s does. In addition, the compiler messages are far more helpful in Rust, and the default tooling is far better than C++.


The relative learning curve between Rust and C++ is strongly dependent on the type of software you are building. I use both. They each have domains where the other is not particularly close in being equally effective (or safe!) as a programming language.

One area where C++ still runs rings around Rust, for example, is modern database kernels. Rust’s safety features largely don’t apply, so no benefit, and Rust has large expressivity gaps that make it more complex and less safe to implement things that are straightforward in C++. Also, the relatively weak generics and metaprogramming features of Rust causes this type of code to balloon in size and invites bugs.

But for other types of code, this barely matters at all because you don’t have many uses for these C++ features. Horses for courses.


I think a few things can be true at the same time. A (doubly-)linked list is an elementary data structure. It is reasonable to expect implementation of such data structures to require no special magic, since they are one of the first data structures many people learn. There is minimal use for these data structures in modern programming so the exercise is a bit academic. Every doubly linked list I have written in vaguely recent memory, and there haven’t been many, used indirect addressing (like indexes) and not pointers.

While I don’t think it matters that much in practice for many types of systems software, I think it is fair to recognize that this lack of basic expressivity is occasionally troublesome and an unexpected gap in the context of classical computer science for someone learning Rust.


> Fun Rust novice exercise: Write a linked list implementation in Rust.

    struct LLItem<T> {
        next: Option<Box<LLItem<T>>>,
        value: T
    }
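    // Singly linked and uniquely owned is the easy case; doubly linked lists
    // (or removing from the middle) push you toward Rc/RefCell, arena indices,
    // or unsafe.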


You can do it safely by importing a ghost-cell crate. That particular pattern is not yet part of the Rust standard library because it comes in so many varieties, and it's not yet clear how to best support it in the language. But I assume it will get there at some point.


Fun C exercise: write a linked list impl that's guaranteed leak-free, thread-safe and type-safe at compile time. Good luck!


[edit: oliver used wall of text. Was it effective?]

No need to get defensive, no one is arguing that rust doesn’t do a lot of things well.

They’re saying that a lot of the restrictions make things much harder than in other languages. Hence the general problem Rust has where a lot of tasks that are trivial in other languages are extremely challenging.

The fact that these things are much harder in rust due to design decisions in the language to ensure that everything is guaranteed to be safe in all cases does not change that the language can be harder to use.

You’re talking up getting a safe implementation in C, but what matters is “can I get the same level of safety with less complexity in any language”, and the answer is yes: Java and C# implementations of a thread-safe linked list are trivial. If I wanted, I could do it in C++; though the complexity would be more than C# and Java, it would be easier than Rust.

We know that Rust makes some things more complicated than other languages for the same level of safety and correctness, but that’s OK because complexity is a trade-off. Rust has increased the complexity of some “simpler” things to reduce the overall complexity of larger systems. This is an OK choice.

But it is still a trade-off, and part of that trade-off does make some things harder.

People can point that out without it being an attack on the language.


> They’re saying that a lot of the restrictions make things much harder than in other languages. Hence the general problem Rust has where a lot of tasks that are trivial in other languages are extremely challenging.

Like what? So far the discussion has revolved around rewriting a linked list, which people generally shouldn't ever need to do because it's included in the standard lib for most languages. And it's a decidedly nontrivial task to do as well as the standard lib when you don't sacrifice runtime overhead to be able to handwave object lifecycle management.

- C++: https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-...

- Rust: https://doc.rust-lang.org/beta/src/alloc/collections/linked_...

> No need to get defensive, no one is arguing that rust doesn’t do a lot of things well.

That's literally what bsaul is arguing in another comment. :)

> You’re talking up getting a safe implementation in C, but what matters is “can I get the same level of safety with less complexity in any language”, and the answer is yes: Java and C# implementations of a thread-safe linked list are trivial.

Less perceived complexity. In Java and C# you're delegating the responsibility of lifecycle management to garbage collectors. For small to medium scale web apps, the added complexity will be under the hood and you won't have to worry about it. For extreme use cases, the behavior and overhead of the garbage collector does become relevant.

If you factor in the code for the garbage collector that Java and C# depend on, the code complexity will tilt dramatically in favor of C++ or Rust.

However, it's going to be non-idiomatic to rewrite a garbage collector in Java or C# like it is to rewrite a linked list in Rust. If we consider the languages as they're actually used, rather than an academic scenario which mostly crops up when people expect the language to behave like C or Java, the comparison is a lot more favorable than you're framing it as.

> If I wanted, I could do it in C++; though the complexity would be more than C# and Java, it would be easier than Rust.

You can certainly write a thread-safe linked list in C++, but then the enforcement of any assumptions you made about using it will be a manual burden on the user. This isn't just a design problem you can solve with more code - C++ is incapable of expressing the same restrictions as Rust, because doing so would break compatibility with C++ code and the language constructs needed to do so don't exist.

So it's somewhat apples and oranges here. Yes, you may have provided your team with a linked list, but it will either (1) Perform less efficiently, due to needing the GC to check whether to free things (2) Require more expertise to use safely, due to C++ being incapable of expressing constraints

> Rust has increased the complexity of some “simpler” things to reduce the overall complexity of larger systems. This is an OK choice.

This is sort of right and sort of not.

In the context of Java and C#, Rust hasn't "increased complexity", it makes it explicit rather than paying runtime cost to try and hide it. In the context of C++, Rust hasn't "increased complexity" either, it makes it mandatory to deal with things that C++ lets slide.

I'd look it more as Rust requires a higher degree of confidence in the code. Rust is more likely to take the programmer to task about "What did you mean?" or "Are you sure about that?". It's like doing a code review with an extremely pedantic developer.

As a product of this, the performance is better because the programmer has pre-answered questions which would otherwise need to be disambiguated by a garbage collector at runtime. The less ambiguous behavior makes it faster to integrate modules, because the compiler can point out discrepancies between what the code owner said they expected and how something is being used by itself.

But it's not like Rust added that complexity - it was always there. C++, C#, and Java just let you ignore it, at the risk of adversely affecting software stability or performance, respectively.


You seem to be mixing up "designing" a linked list and just implementing a known design.

> which people generally shouldn't ever need

Agree!

However, we can still use the implementation of a "reasonable" linked list as a good yardstick to measure things across languages, since it usually involves good coverage of basic language features (collection, traversal, lifetime management, etc.).

Also, looking at the Rust implementation of the LinkedList you linked, we do have the magic unsafe keyword somewhere in there... which negates a lot of your argument about provable safety.

> Less perceived complexity.

This seems a very strange thing to say. Isn't reducing perceived complexity the name of the game in language design? Reducing the complexity the dev has to carry in their mind by moving some of the decisions to the toolchain is, in my opinion, a very valid approach.

> In Java and C# you're delegating the responsibility of lifecycle management to garbage collectors. For small to medium scale web apps, the added complexity will be under the hood and you won't have to worry about it.

> For extreme use cases, the behavior and overhead of the garbage collector does become relevant.

First, designing for the non-extreme case is a valid approach in language design. Make the common case dead easy, and for the complicated/rare cases, provide APIs and customization points. In .NET/Java it is possible (though not always easy) to beat the GC into submission.

Second, I think the comment about GC not being adequate for large-scale web apps is very 1990. New garbage collectors are able to manage those use cases easily. A lot of the largest backends are in Java.

> If you factor in the code for the garbage collector that Java and C# depend on, the code complexity will tilt dramatically in favor of C++ or Rust.

Why would you factor the code of the garbage collector in the equation...

> However, it's going to be non-idiomatic to rewrite a garbage collector in Java or C#

We are not talking about idiomatic vs non-idiomatic. We are talking about simple and not simple. Writing GC-friendly code in Java (even better in C#, in my opinion, with structs) is still relatively simple and clean.

> You can certainly write a thread-safe linked list in C++, but then the enforcement of any assumptions you made about using it will be a manual burden on the user. This isn't just a design problem you can solve with more code - C++ is incapable of expressing the same restrictions as Rust, because doing so would break compatibility with C++ code and the language constructs needed to do so don't exist.

The unsafe keyword in the implementation pretty much negates all of this...

> Yes, you may have provided your team with a linked list, but it will either (1) Perform less efficiently, due to needing the GC to check whether to free things (2) Require more expertise to use safely, due to C++ being incapable of expressing constraints

1) is a very strong statement; GC code can perform better than manual memory management, and does in a lot of very real cases.

GC vs non-GC is not about performance or efficiency, it's about control. Do you want to do it, or do you want the system to do it? The best parallel is register allocation: can you do a better job than the compiler in some cases? Maybe. But on average (especially when you have things like cross-function register allocation), in most cases the compiler beats every dev except the top 5%.

> In the context of Java and C#, Rust hasn't "increased complexity", it makes it explicit rather than paying runtime cost to try and hide it. In the context of C++, Rust hasn't "increased complexity" either, it makes it mandatory to deal with things that C++ lets slide.

Making it explicit is increasing the complexity...

> it makes it mandatory to deal with things that C++ lets slide.

And thus making it "harder" to produce the same code...

> But it's not like Rust added that complexity - it was always there. C++, C#, and Java just let you ignore at the risk of adversely affecting software stability or performance, respectively.

I don't have a nice way to say it, so I'm just gonna say it: this is what fanboyism sounds like. C++/Rust and Java are different points in the language design space. Now, they are not perfect, as in it might be possible that, for each of their respective domains, we could with the benefit of hindsight design something that works better. But to think that Rust magically found a point in that design space which doesn't also add another set of compromises is not realistic.


The Swift optimizer has matured dramatically since 2017, to the point where I think this benchmark would almost certainly not result in a 3x power usage/4x slowdown compared to C.

On the current benchmark game, many of the purely algorithmic benchmarks are close to C performance, but many of these benchmarks seem to be stale and haven't been touched in quite a while. It would be nice if someone went through and rewrote these using current idiomatic Swift (where many of the "use unsafe bits for speed" tricks are unnecessary) to see how it really stacks up.


> It would be nice if someone went through and rewrote these using current idiomatic Swift

It's always nice when someone else does what we can see is required, but mostly we have to be the ones to do that work.


That's good to know.

I might have one or two projects in the next few months that will require jumping into Swift. Not touching Objective-C if I can help it.



