
Jumping through hoops and hoping you put weak pointers in enough places. Such structures, if they are being mutated, also tend to take significant performance hits even when you get it right, because of all the synchronized incrementing and decrementing of reference counts.
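A minimal Swift sketch of the problem being described (the types here are invented for illustration): a parent/child pair where forgetting a single `weak` turns clean deallocation into a leak.

```swift
// Hypothetical parent/child pair. Without `weak` on the back-reference,
// the two objects would retain each other and never be freed.
final class Parent {
    var child: Child?
}

final class Child {
    weak var parent: Parent?  // forget this `weak` and you have a leak
}

weak var watcher: Parent?
var p: Parent? = Parent()
watcher = p
p!.child = Child()
p!.child!.parent = p
p = nil
// watcher is now nil: the weak back-edge let ARC free both objects
```

In a two-class example the weak edge is obvious; the comment's point is that in a dense graph, picking which edges get this treatment is the hard part.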

Another place where this can really hurt is "reactive" approaches, where cycles are very hard to avoid. Weak delegates (like what WinRT uses for events) can help here, but they raise a number of complications. The biggest is that they compose poorly: if you have an intermediary in your event chain, you need to make sure the subscription is not the only place where a reference to it is being held, or the weakly-held intermediary gets collected out from under you.
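A sketch of the weak-delegate idea in Swift (these types are invented, not WinRT's actual API): subscriptions hold observers weakly, so subscribing never extends a lifetime on its own — which is exactly why an object referenced only by its subscription silently disappears.

```swift
// An event source that holds subscribers weakly (assumed design).
protocol Observer: AnyObject {
    func onEvent()
}

struct WeakObserver {
    weak var value: Observer?
}

final class Event {
    private var subscribers: [WeakObserver] = []
    func subscribe(_ o: Observer) {
        subscribers.append(WeakObserver(value: o))
    }
    func fire() {
        // Observers that nobody else holds strongly are simply skipped.
        for s in subscribers { s.value?.onEvent() }
    }
}

final class Counter: Observer {
    var count = 0
    func onEvent() { count += 1 }
}

let event = Event()
var c: Counter? = Counter()
event.subscribe(c!)
event.fire()              // delivered: c is still strongly held
let delivered = c!.count
c = nil                   // drop the last strong reference
event.fire()              // dropped: the weak slot is now nil
```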




FWIW I will try to summarize the Swift architect's comments on this:[1]

a) Declaring weak pointers does take some more reasoning about your object graph. A silver lining is that it makes the intent of your code more explicit, which helps with maintenance.

b) Swift emphasizes value types which reduces the need for pointers vis-a-vis Java.
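Point (b) in concrete terms: a Swift value type is copied and stored inline, so there is nothing to reference count and no possible cycle. A trivial sketch:

```swift
// Structs are value types: assignment copies, no references involved.
struct Point {
    var x: Double
    var y: Double
}

var a = Point(x: 1, y: 2)
var b = a     // independent copy; no retain/release traffic
b.x = 99
// a.x is still 1.0: the copy does not alias the original
```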

c) Most garbage collectors also add overhead when updating object references.

d) Swift is massively more memory efficient. Fast GC requires 3-4X the memory.

[1] https://www.quora.com/Why-doesnt-Apple-Swift-adopt-the-memor...


a. This one smacks of sour grapes. It doesn't help with maintenance when you're dealing with a dense graph and every time you add a relation you have to check whether you need to make something weak. Reasoning about reference graphs is often not intuitive, because indirect references are hard to see. Also note that some cyclic structures cannot be broken by statically placing weak references, and require special measures like specialized manual reference counting (I recently had to implement such a system).
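An illustration of why `weak` can't always be chosen statically: in a general graph, any edge may close a cycle, so no single field can safely be the weak one. One workaround (a sketch under assumed names, not the system the commenter actually built) is to make a container the sole owner and every edge non-owning:

```swift
// Hypothetical design: Graph owns all nodes strongly; edges are
// uniformly weak, so cycles among nodes can never cause a leak.
struct WeakEdge {
    weak var target: Node?
}

final class Node {
    let id: Int
    var edges: [WeakEdge] = []
    init(id: Int) { self.id = id }
    func addEdge(to other: Node) {
        edges.append(WeakEdge(target: other))
    }
}

final class Graph {
    private var nodes: [Node] = []
    func makeNode(id: Int) -> Node {
        let n = Node(id: id)
        nodes.append(n)
        return n
    }
}

var g: Graph? = Graph()
weak var probe: Node?
do {
    let a = g!.makeNode(id: 0)
    let b = g!.makeNode(id: 1)
    a.addEdge(to: b)
    b.addEdge(to: a)  // a cycle, but the edges don't own anything
    probe = a
}
g = nil
// probe is now nil: destroying the graph freed every node
```

The cost of this design is that every edge traversal must handle a nil target, which is one of the "special actions" a tracing GC would make unnecessary.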

b. Very true, but these are the exact opposite of the cases that I'm talking about.

c. Two things:

First, memory has gotten incredibly cheap and dense for servers. Server GCs are generally more cache friendly than reference counting (due to both allocation strategies/compaction and the lack of need to touch reference counts), which is important because CPU performance gains are slowing down.

Second, the graph he shows (I've seen it before) looks at older GCs and, more importantly, GCs that were not optimized to release memory outside the working set. In a paged system, there is little benefit to ensuring that memory outside your working set is released.

Obviously on mobile the situation is different, but there are also GCs that have far lower memory usage because they have been optimized for constrained scenarios. Android's new ART collector is sort of an example of this, but their efforts were massively undermined by the fact that they had to adapt to all of the existing Android library/app code which uses tons of object pooling. Manual object pooling with GCed objects creates all kinds of problems for GCs, particularly generational ones.

I agree with the decision to go with ARC for Swift from a lot of perspectives: Swift's target market is not usually dealing with complex interconnected models, using a GC would make ObjC interop difficult, etc. But it is far from obvious based on this information that it is beneficial to have a language stick with only a GC or only reference counting.


Without knowing specifics it seems like you're focusing on complicated "dense graph" scenarios. Perhaps Swift is a bit harder for those.[1] However your initial claim was that dropping GC was bad for "most server-side code" (emphasis mine). Does most server-side code involve dense graphs?

As a point of logic, optimizing to make 95% of cases a lot less "painful" would be worth a tradeoff of a bit more pain in the already difficult 5%. Thus if Swift fits that description it stands to reason that it could displace server-side Java.

[1] (Although one could argue those are exactly the scenarios where reasoning more carefully about your object graph is helpful.)


I focused on that somewhat in this thread because that was what you initially asked about, but note that only my first response to Swift's creator's justifications even mentions it. I do think that moving servers from GC to RC would not be very advantageous, but not for that reason (or at least, as you said, that reason applies only to a minority of cases).

The biggest reason is that while some server-side applications are very latency sensitive, many (most?) are not. For most websites, significant GC pauses (i.e. collections of the older generations) are going to be relatively infrequent, and having a few requests slow down a bit for less than a second is not a huge problem. In these cases (which, if they aren't the majority, are still close) throughput is more important than latency, which is where a GC shines. And ease of programming and maintenance is likely to be more important than both.

As far as "helping you reason about your object graph" goes, I don't see how forcing you to reference count helps you do that at all. All of the extra reasoning is centered on something that would otherwise be irrelevant. Even in cases where you do something that is not amenable to GC collection (e.g. events/observers that keep objects alive which you didn't intend to), there is little benefit in having to constantly think about all of your other references at the same time.
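The "events/observers that keep objects alive" case, sketched in Swift with invented names: a subscription that captures its target strongly keeps it alive long after everyone else has let go — and neither a GC nor ARC will reclaim it, because it is still reachable.

```swift
// A broadcaster that (deliberately) holds its handlers strongly.
final class Broadcaster {
    private var handlers: [() -> Void] = []
    func subscribe(_ handler: @escaping () -> Void) {
        handlers.append(handler)
    }
}

final class Screen {
    var title = "home"
}

let broadcaster = Broadcaster()
weak var leaked: Screen?
do {
    let screen = Screen()
    leaked = screen
    // The closure captures `screen` strongly: the subscription now
    // owns it, whether or not anyone still wants the screen.
    broadcaster.subscribe { _ = screen.title }
}
// `screen` left scope, but `leaked` is still non-nil: a live leak
```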


> GC pauses are going to be relatively infrequent and having a few requests slow down just a bit for less than a second is not a huge problem

I respectfully and strongly disagree. Outlier performance issues are a huge problem when scaling businesses.

Additionally, "ease of programming/maintenance" is exactly an area where, overall, Swift may have some major advantages over Java. Without further debating the cost-vs-benefit of declaring dense graph relationships, I believe you've conceded that this is a small part of the overall picture.


> I respectfully and strongly disagree. Outlier performance issues are a huge problem when scaling businesses.

For server-side GCs, a major collection is well within the bounds of incidental network latency, and happens far less frequently. Outlier performance is a huge problem, but from what I can tell outlier performance specifically due to GC pauses is an issue in far fewer cases (one that comes to mind is servers for certain financial services).

> Additionally "ease of programming/maintenance" is exactly an area where, overall, Swift may have some major advantages over Java.

That's true, but these advantages are unrelated to reference counting, and they diminish if you compare against a language that has aged better (C# comes to mind). If you're really concerned about high-performance, low-latency server programming and you're willing to switch to a less common language, you'll be much better served by something like Erlang, which is specifically designed for that.

Erlang's memory model is fairly similar to what I have described here, but still runs into some issues that I feel could be solved by giving more control over where exactly things are allocated. It also resists any kind of mutable state; I agree that mutable state can complicate concurrency heavily, but having an option to use it sparingly would help (process dictionaries are too limited).

My point is basically that just like GCs aren't the solution to every problem, ARC isn't either. I think a language that allows for both will beat both Java and Swift handily when it comes to making both easy things easy and hard things possible.



