Swift Proposal: Move Function (github.com/apple)
98 points by todsacerdoti on July 28, 2022 | 99 comments



The move function is not move semantics (though the latter is being discussed by the community). Its main purpose is to explicitly document where performance-sensitive code depends on a specific type of compiler optimization, and to provide diagnostics if those conditions change in a subsequent revision of the code.

Swift’s focus on progressive disclosure means most Swift users won’t need to worry about this.


Swift’s focus continues to be “how do we make the compiler happy so it can emit faster code,” IMO. Very much in line with the original vision of Lattner, a compiler guy par excellence.

Unfortunately for me, it's not aligned with my preferences. I like small languages and frameworks that I can master and build upon. Tough luck for me, of course, who cares.

But I'm not convinced Swift is also aligned with what should be the language's main purpose: build great iOS and Mac apps.


Lattner wanted it all: a language great at systems programming, scripting, web frameworks and building apps. It's very ambitious and unheard of, but so was LLVM, so who knows, maybe he was onto something.

8 years later, it's safe to say it's not the language of choice for new kernel or driver code, it's not the one for the hot new web framework, and it's not replacing Python.

The only thing it's got going for it is that it's the only officially supported choice for making iOS apps going forward. Ultimately, no one uses Swift voluntarily, and that's a bad sign.

I write all of this with sorrow, I was cheering with joy when Swift was released.


> Ultimately, no one uses Swift voluntarily, and that's a bad sign.

Sorry, but this couldn't be more wrong. I love Swift so much that I can tolerate and forgive its compiler's sluggishness.

Swift is now available on Linux (has been for a while) and there's Apple's server framework called SwiftNIO. It's designed for fast and efficient multithreaded server-side code. I voluntarily picked it for my next backend project and am very happy so far.


Swift on Linux / on the server exists, but it's far from a pleasant experience.

I admit that I last worked with it in early 2020, but at the time there were numerous annoying open bugs with the Swift compiler on Linux (such as the installer messing with the Python path), and then there was the issue of divergence between Foundation (which is closed-source) and its Linux replacement. Meanwhile, the stack size for secondary threads was ridiculously low and there was no good story for fault recovery (because you don't want to crash your server if you accidentally access an invalid array index).

Maybe things have improved, but I'm doubtful.


Would you say more about Swift for things other than Mac apps? Last I heard, Swift was limited to Ubuntu on Linux, and Windows support was experimental. Which is too bad, because it seems like a really nice language, and I like the ARC-based automatic memory management. But I'm hoping the cross-platform story has improved since I last heard.


According to this page [1], it's Ubuntu, CentOS and Amazon Linux at the moment, and Fedora is underway. I don't know much about Windows because I'm not considering it as a backend OS, and I can imagine it would be problematic to support. Swift grew up in the UNIX environment, and it usually takes serious effort to port UNIX stuff to Windows. And even then, I don't think it will ever be as reliable.

On the Linux front, there's Apple's SwiftNIO, which is a generic lower-level framework, not an out-of-the-box web or API framework. You need another layer on top of it, for example the community-driven project Vapor [2].

You can, in principle, build e.g. a JSON API on barebones SwiftNIO, and you'll save a lot of 3rd-party dependencies that way, but this path is a bit masochistic; you really need to know what you are doing. Vapor is much easier to jumpstart any kind of server project with.

[1] https://www.swift.org/blog/additional-linux-distros/

[2] https://vapor.codes
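
For a sense of how little it takes to jumpstart with Vapor, its hello-world looks roughly like this (from memory of the Vapor 4 getting-started docs; may not match the current API exactly):

    import Vapor

    var env = try Environment.detect()
    let app = Application(env)
    defer { app.shutdown() }

    // GET /hello returns a plain-text response
    app.get("hello") { req in
        "Hello, world!"
    }

    try app.run()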


> Ultimately, no one uses Swift voluntarily

I would use it in more places if I could, it's a language I feel extremely productive in. Apple however has shown zero interest in helping popularise it anywhere outside of their ecosystem and left it up to the community.


You could even say they're basically hostile to it.


I personally love Swift and find it a joy to use. To be fair, I’m ultimately doing {i,mac}OS dev with it [1] but the parts that actually touch SwiftUI/UIKit/AppKit are just a small part, and I usually start by writing packages with all the business logic/model data.

I write Go for work and my all-time favorite language is probably still Rust, but I find that a ton of really thoughtful and smart choices have gone into Swift-the-language as well as how it’s used to build apps.

Happy to expand more on what I like about using Swift if you’re curious.

[1] which you can now write as the same codebase using SwiftUI


> {i,mac}OS dev

I like that. I'm stealing it :)

> Happy to expand more on what I like about using Swift if you’re curious.

Please! I was very excited with it at the beginning, but the few times I dipped my toe it was frustrating and slow.


>I like that. I'm stealing it :)

You led me to finally track down the name of this feature in Bash: "brace expansion"! I often wish my non-techie friends recognized this syntax, along with sed replacement syntax for fixing typos.

> I was very excited with it at the beginning, but the few times I dipped my toe it was frustrating and slow.

Okay, it definitely can be slow to compile, especially with large blocks of type-inferencing ResultBuilder DSLs like SwiftUI or RegexBuilder. I find this mostly occurs when there's a type error within the DSL, and it often takes forever for the compiler to stop trying to do inference and give up with "Type of expression is ambiguous without more context" or simply "The compiler is unable to type-check this expression in reasonable time". But in practice I rarely run into this issue unless I'm creating a massive hierarchy that should be split into smaller components anyway. And I haven't really run into issues with runtime performance yet, though I haven't made anything too performance-critical with it. In any case, Swift makes it decently easy to call out to C/C++/Objective-C if you had a performance-critical routine.

What I like about the language:

It gives me the satisfaction of languages like Rust that let me express types and logic in a way that feels "correct". I rarely feel like I'm hacking stuff in due to deficiencies in the language. Swift encourages a standard approach of record types with polymorphism but still supports legacy NSObject-based stuff in a logical way, so I never feel like I have to jump through too many hoops if I need to touch dusty old frameworks.

I like the type inference, even if it's not perfect category-theoretic Hindley-Milner magic and gets stuck sometimes as mentioned previously.

I like the `guard` statement as someone who's done a lot of Go programming and always instinctively used early returns even when my CS professors told me it was a bad practice ;)
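
For example, a minimal sketch of the early-exit style guard enables (names made up):

    func greet(userID: Int?) -> String {
        // guard forces the failure path to exit the scope immediately
        guard let id = userID else { return "Hello, stranger" }
        return "Hello, user #\(id)"
    }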

SwiftUI is really nice to use and is the only UI framework that I've ever enjoyed using (and am hoping to write it for my job soon)--it sometimes feels more like a UI prototyping tool to me because it's so straightforward, except that at some point while prototyping, you realize you've already made the finished UI without writing a dreadful mountain of glue code and interface files!

I really like the syntax in general. Though I also appreciate Rust's terse, symbol-based approach, I think the English-like labels actually work really well despite being a bit verbose. The syntax definitely benefits from an editor with autocompletion, but I wouldn't say it's strictly necessary.

I appreciate the strict Unicode-correctness of the language that makes it harder to do bad stuff with strings.

I think the ARC-based approach to memory management is a good balance between Go's GC and Rust's borrow checker that makes Swift feel like a very pragmatic choice for lots of different problems (though UI was obviously first in mind).

I really like the ArgumentParser and RegexBuilder frameworks, and I think they're the best implementations of each in any language I've used.

In general, Swift just feels good to write and easy to read while being extremely powerful in what you can create with it, and coupled with the ease of use and fast feedback of SwiftUI, feels like a very complete approach to creating many different sorts of software, for my own use or to sell.


Apple stated at WWDC that Swift is now being used at the system level in Ventura.

They also dedicated some time on the state of the frameworks session, to put it clearly out there that Objective-C is done and it is time to move on.

It took about 20 years for C++ to take over some key areas from C, and even then it wasn't a success in POSIX and embedded, where C still rules.

8 years isn't that long.


> They also dedicated some time on the state of the frameworks session, to put it clearly out there that Objective-C is done and it is time to move on.

Is that the "Platforms State of the Union" session from the first day of WWDC 2022? I'm looking in the Developer app right now and can't find the "frameworks" one you're talking about.


Yes, that is it, sorry about the wrong naming.

Around 4 minutes into it.


I believe I found the part you're referring to, in Apple's transcript of the talk, and I'm afraid I have to agree with your assessment. What I've found starts like this:

> But designs evolve, hardware advances, and what was once cutting edge becomes the expected baseline. The Objective-C language, AppKit & UIKit frameworks, and Interface Builder have empowered generations of developers. These technologies were built for each other, and will continue to serve us well for a long time to come, but over time new abstractions become necessary.

https://developer.apple.com/videos/play/wwdc2022/102/

It gets worse from there, for Objective-C. The speaker goes on to basically say that the new triumvirate of "integrated language, frameworks, and tools" is Swift, SwiftUI, and Xcode Previews, and that these will be evolving together, and influencing one another's evolution, as Objective-C, the "kits," and IB once did for one another.

I love coding in Objective-C, and did so for work for almost 5 years. (I've within the last year switched to a front-end developer job.) Swift, to me, whatever its upside, feels like it was written by someone who hates Objective-C. That makes me sad.

But, putting aside my editorializing, I read what Apple is saying as basically this: Objective-C, etc., is all but deprecated. At long last, it's being left behind. You may get away with writing CRUD apps in it for a good while longer, but all the new stuff will be Swift-only, and at some point any UI written in Objective-C is going to start to look dated, once the inevitable innovations come.


> 8 years later, it's safe to say it's not the language of choice for the new kernel or driver code, not the one for the hot new web framework and it's not replacing Python

Rust has become all that, and for a change, I am happy to observe that we're stuck with a pretty good language.


Not quite a Python replacement, but yeah, Rust hits a really interesting spot on the language spectrum.

I sense a lot of Rust envy among Swift people. Just my impression, of course. They have much in common, but I think Rust has more merits on its own, particularly in memory management.


Your remark is particularly timely since a Swift proposal to add a move() function just came out.

Basically, a timid but interesting step toward ownership semantics in Swift, on top of its current ARC-based memory management.
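
Roughly, the pitched usage looks like this (a sketch; the helper names are made up, and per the proposal thread the spelling of move is not final):

    var x: [Int] = makeArray()
    let y = move(x)   // transfers x's value; x's lifetime ends here
    // print(x)       // from here on, using x is a compile-time error
    process(y)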


Honestly, it seems like the association with iOS (or more precisely, the requirement for smooth interop with Objective-C) is what held Swift back from those other goals. I admit that I don't really have any Swift experience personally, but from my conversations with people who have tried to use it for non-iOS purposes, a lot of the pain seems to come from the fact that Swift mixes fresh new ideas with some unfortunate compromises for compatibility. There's a lot of support for functional paradigms, and yet there's still OO/inheritance, presumably for interop with Objective-C; there are modern string types, and yet sometimes you have to deal with NSString; there are functional ways of dealing with errors and optionals, and yet there's still nil and exceptions.

I recognize that a lot of people prefer OO and exceptions, but I think having to support both the traditional paradigms and the current trends watered the language down a bit and added friction to learning what could otherwise have been a much more streamlined alternative to C++. That said, it's also possible that the lack of first-class Linux support, with so much of the innovative work tied to the Apple ecosystem, would have been enough to stop mass adoption on its own. Then again, without the Apple tie-in the language wouldn't have had to care so much about Objective-C interop, so it's hard for me to reason about whether the dual-paradigm design alone would have hampered it.


> Ultimately, no one uses Swift voluntarily, and that's a bad sign.

Not true for me, I've used it successfully for server side work. Being able to use the same language is nice since I can pull in the same routines/data structures and not re-implement them.


"Voluntarily" can be relative: being pushed to use Swift as the recommended language for new iOS development can influence decisions to use it "voluntarily" for the backend.


I think the low adoption of Swift outside development for Apple platforms is because of the immaturity of the necessary tooling and libraries, not because of issues with the language. There's a sizeable community of people who are passionate about the language and work on that tooling and library support, myself included, but there's not much institutional support outside of Apple, so progress is slow.


I was also really rooting for Swift -- it seemed like a cousin of Rust geared more towards developer experience and ergonomics and less towards performance. Unfortunately, I think they squandered their impressive start by being a macOS-only language for the first several years and only becoming cross-platform relatively recently.


> a language great at system programming, scripting and web framework and building apps.

Sounds like D.


I had hoped ObjC's successor for app development would be something like F-Script: something that removed the footguns and unnecessary ceremony from ObjC but had good interop with existing code.

Instead we got Swift, which ironically, given its uptake by external devs, I think was aimed at complaints from internal Apple users. Internal users at Apple were frustrated by how easy it was for external devs to introspect and modify private ObjC framework code. They also wanted performance features like value types and method inlining.

I don’t think many external app devs cared about those things, or to the extent they did, C++ was an attractive option, because a stable ABI doesn’t really matter internal to application code.

I continue to use ObjC for Mac and iOS dev because it’s a much simpler language than Swift, the system APIs I interact with are still implemented in ObjC, and C++ interop is easier via ObjC++. I suppose the day will come where I need something that is only accessible from Swift, and I’ll write Swift then, but for now Swift doesn’t solve any problems that I have.


I was very happy when Swift was introduced, and it eliminated many classes of bugs that were very common in Obj-C. It was very much a big help for external developers.

Swift apps don't crash randomly, and thanks to type safety you don't have nils running rampant.

You can create types that nicely convey reality using rich enums, and you don't have to define objects with 18 fields that are almost always nil except that, when propertyA == 16, this other field happens to have a value.
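
For example (a made-up sketch of the rich-enum style):

    // Associated values carry data only in the cases where it exists,
    // instead of 18 mostly-nil fields on one object:
    enum PaymentMethod {
        case cash
        case card(number: String, expiry: String)
        case voucher(code: String)
    }

    func describe(_ method: PaymentMethod) -> String {
        switch method {
        case .cash: return "Cash"
        case .card(let number, _): return "Card ending in \(number.suffix(4))"
        case .voucher(let code): return "Voucher \(code)"
        }
    }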

Swift also is not an official superset of C, as Objective-C is, so you don't have all the footguns of the C language, like integer promotions and assignment being an expression.

You just get so many improvements over Objective-C if you want to ship an app to hundreds of thousands of users and be sure that it works correctly and that unhandled edge cases won't blow up.

Maybe it is more complex to write a compilable piece of Swift than a compilable piece of Objective-C, but I am happy that this is the case, because it means that the compiler has my back and finds bugs and issues for me.


I agree with the premise "you don't have nils running rampant because of type safety", but the claim that "Swift apps are not crashing randomly" is far from the truth.

Right now, I have a research project using Swift Charts, and it definitely "crashes randomly", partially because Charts expects a subset of type-safe values and does not communicate which subset, except by chucking EXC_BAD_ACCESS with no usable error.

Sure, that's an example of a beta framework. However, SwiftUI is the proposed Swifty replacement for Objective-C's UIKit, and it is loaded with examples of "the type system we moved to should catch this but can't", followed by the even more common "the type system will catch this error in 11,000 years, when it's finished failing to compile code on an M1".


I actually see it in the exactly opposite way. A couple years ago, Swift was a nicely designed language, with a manageable number of composable features which could be used to do just about anything.

With the features which were released alongside SwiftUI, it seems like that went out the window, and Swift pivoted towards being a DSL for iPhone apps.


I honestly don’t understand what they were thinking. Choose a random language’s random feature and Swift has it. That’s hardly a good thing for a programming language, and it shows — there are some very nasty interactions between different features.


I still think there's a lot to like about Swift, and actually programming in some subset of Swift is some of the most fun and expressive programming I can think of, but it has undoubtedly become kind of a mess. A lot of magic happening, and not in a good way.


I agree; I was very enthusiastic at the beginning of learning it. And if nothing else, it is a great refresher on PL features.

I’m just afraid that the language can’t be properly maintained given the insane matrix of all these features (I once got a "type inference timed out" error..)


> I once got a type inference timed out error..

I used to work on a large Swift program. "Type inference timed out" was an incredibly common error for us (we used a lot of operator overloading and generics, which had something to do with it). I have never encountered any other language where this was such a regular problem.


I find it difficult to empathize with that reaction. Other languages have complicated corners that I can safely ignore, why not Swift?

Rust has its macros, Swift has its result-builders.

The latter wins you a nice way to declaratively describe UI widgets, but if that's not what I'm doing, I don't need to think about it.
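
For illustration, a toy result builder that has nothing to do with UI (a minimal sketch):

    @resultBuilder
    struct ListBuilder {
        // Collects the bare expressions in a builder block into an array
        static func buildBlock(_ items: String...) -> [String] { Array(items) }
    }

    func makeList(@ListBuilder _ content: () -> [String]) -> [String] {
        content()
    }

    let groceries = makeList {
        "milk"
        "eggs"
    }
    // groceries == ["milk", "eggs"]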


It's not about the features, it's about priorities. In principle, as you say, you can just ignore the bits you don't use, but there are still some glaring holes, like variadic generics, which didn't get attention presumably because the core team was focused on SwiftUI.

And due to the governance of the language, the "community" priorities will always take a back seat to Apple's product priorities, so it feels a bit funny to invest too heavily in the language for that reason alone.


Thank you for elaborating, that does help me appreciate your point of view better. I personally preferred that they prioritized result builders higher — I guess I agree with them that the needs of the UI layer were more urgent.

It would be nice to have variadic generics, though.


> Rust has its macros, Swift has its result-builders.

The difference is that Rust macros are general — they're useful for everybody, and they can be used to implement nearly any kind of custom syntax or code generation.

There was talk on swift-evolution going back to the Swift 2-3 days about how Swift was eventually going to get “hygienic macros” that looked something like Rust’s proc-macros. But this was a very long-term goal; everyone knew it would take years to design and implement such a system that had the flexibility, safety, and expressivity we all wanted.

In the meantime though, Apple started to add a lot of complicated features using compiler magic. This started with Codable, which added language support for serializing and deserializing objects to formats like JSON. This was awesome, and became one of Swift’s best features IMO. But the implementation bothered me a little: it’s a protocol where the compiler will synthesize a default boilerplate implementation if you don’t implement it manually. But this is exactly the kind of thing that a real hygienic macro system would be better for; e.g. Rust's serde does the same thing as Codable, but it's way more powerful and more flexible. IIRC some people pointed this out on the forums, but this was dismissed with “we want to ship this now, and macros are years down the road.” Oh well, don’t let perfect be the enemy of good, right?
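
For anyone who hasn't used it, the Codable synthesis being described looks like this (a minimal example):

    import Foundation

    struct User: Codable {
        let name: String
        let age: Int
        // init(from:) and encode(to:) are synthesized by the compiler
    }

    let data = try JSONEncoder().encode(User(name: "Ada", age: 36))
    let decoded = try JSONDecoder().decode(User.self, from: data)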

But the Swift 5 release had even more compiler magic — a bunch more protocols gained automatic compiler synthesis, and then “result builders” happened. Again: a feature that’s helpful in isolation, and solves a real problem, but greatly increases the language surface area. And since these features are implemented using compiler magic, only the compiler team can add new features, and it takes a long time. So in practice, the only magic features that get implemented are those that Apple needs for their libraries. Whereas if they had spent that time and effort working on a hygienic macro system, they could have achieved Codable, result builders, and every other use case library authors can imagine. Chris Lattner has indicated that this sort of thing was part of the reason he left the core team: "It is obvious that Swift has outgrown my influence, and some of the design premises I care about (e.g. "simple things that compose") don't seem in vogue any more." [0]

It became clear (partly through this, but also because of a LOT of other stuff) that the evolution of the Swift language was being driven by Apple's internal schedules, and not the other way around. Controversial or half-baked features were frequently rushed through review in order to be ready in time for WWDC -- or even worse, shown off in surprise reveals at WWDC, with the core team saying things like "yeah we know this is a bad design, but we showed it in the WWDC keynote so we can't change it now" [1].

At this point, I started to experiment with Rust because Swift's memory-management overhead was causing me too many performance headaches. It felt like a breath of fresh air: the language feels like it evolves at a snail's pace compared to Swift, but every feature that makes it into stable is extremely polished and very well-thought-out, especially with regard to how it fits with the rest of the language, and how it works in "weird" situations like embedded environments.

[0]: https://forums.swift.org/t/core-team-to-form-language-workgr...

[1]: https://forums.swift.org/t/se-0258-property-wrappers-second-...


To clarify, while result builders may not be as general as macros, it's not accurate to imply that they're not general. The SwiftUI DSL is but one example application.


I tend to agree with much of the criticism of SwiftUI, namely that it and the underlying mechanism (result builders) is over-indexed on the UI-for-iPhone-apps use case. But I think Swift's greatest strength is what I mentioned in the parent comment: progressive disclosure. You don't need to use result builders and if you don't, the complexity of the language stays the same. I contrast that with C++ which has added some powerful, complex features that underpin most of the language, making it more necessary to understand the full scope of the feature set to get anything done.


I wish this were true. But for all the things Swift supports, it doesn't make the compiler or compiler authors happy.

I wish there were more languages focused on the happiness of compiler authors, so that they could write efficient, small compilers with well-thought-out language features that intentionally limit how much they need to implement in the compiler. Today we have a huge compiler-maintenance burden, and as a result you see inconsistencies everywhere in these languages, even after many hundreds of person-years spent.

(Yeah, Go is probably one of the languages that focused on compiler authors' happiness.)


The language's main purpose was always to be a replacement for C, C++ and Objective-C.

It has been in Swift's documentation from day one.


Officially stated goal, sure.

But read/listen to what Lattner says and you get a different take.


I do and he says exactly the same thing.

>...the goal was really to build something that you could write firmware in or that you could do scripting in, that you could write mobile apps or server apps or low-level systems code, and have it be great at all of those, not just some terrible compromise.

Taken from https://oleb.net/2019/chris-lattner-swift-origins/ to save you listening to the whole interview.

Feel free to actually listen to the episode as well, and also read/listen to the sections on how Swift is a complete Objective-C replacement without the bad C parts.


I misread your first post, sorry. Yeah, it's one heck of a goal.


This is a helpful clarification.


Not sure what this means, but I'm learning Swift and SwiftUI right now, since I'm in the Apple ecosystem, and I'm hoping I've made the right choice, because my time is fairly limited: I'm an adult in my late 30s with a full-time job. It does seem a bit of a mess, though.


I tried to get into learning Swift/SwiftUI several months back and was pretty disappointed by how buggy the whole thing is, even when you're trying to do pretty basic things.

It seems like a lot of the iOS veterans are still not interested in switching to it so who knows, maybe I'll come back to that in the future when it's actually production-ready like Apple claims it is.


Strange, but this was not my experience. It took me a while to wrap my head around the way SwiftUI works, and the documentation was really bad. But buggy it was not.


I wouldn't say I found it buggy, but rather limited. For instance, even something simple like a list view has limited customisability, and some of the styling is mandatory unless you want to rebuild the entire component from scratch. It's a far cry from UITableView, which can do just about anything.

Also I found that once you get into stuff like animations, things get messy very quickly since everything has to be expressed declaratively. It's a nice idea for simple static views, but I really missed the ability to have separation of concerns the way you can with UIKit.

Also I did find the tooling quite buggy. Like certain things would break the preview, and not give meaningful errors or just crash the Xcode process, so I would have to fix it through trial and error with commenting out sections of code until it would work again.

I am sure it will improve, but with UIKit you can basically achieve anything, and SwiftUI still seems like it has a lot to figure out.

I think it's a shame FRP has basically won front-end at this point, because while it's been a revelation on the web (since it solves a lot of the problems that make the web a mess), it carries a lot of its own problems, and I think there are going to be a lot of apps that will be horrific to maintain over the next few years.


All good points. Thanks for your thoughtful reply.


Don't worry, most consumer facing software development is a mess, there's constantly changing frameworks and shiny new ways of doing things.


I have a published App written in SwiftUI and I have written many thousands of lines of Swift. But I just scratch the surface of the language, and that is OK with me. There are many parts of the language that I almost certainly do not understand. But my code works, it runs fast and it does what I want. Yes, Swift does have a large "surface area" and that is too bad. But you can get by well without bothering with many of the darker parts.


I think the fact that advanced features are available, but their existence doesn't impact most developers writing most apps, is a great thing. (Not that Swift - especially SwiftUI - is perfect.)


I’m learning Swift and SwiftUI recently, and man, as a Flutter dev I’m super overwhelmed by how many features Swift has compared to Dart… it has too many ways to do things (not necessarily bad ofc)


I too am in my late 30s, and I've been writing Swift for 7 years now; the language itself is not buggy, at least not anymore.

SwiftUI has its issues, and one is platform support. If you want to support anything before iOS 13 (and even that has limited support), then you are out of luck. The only SwiftUI I have shipped is for WidgetKit and watchOS, both of which somewhat force it upon you.

While it can be cumbersome to do simple things in SwiftUI that were easy to do in UIKit, I haven't found too many bugs. Most issues I have run into are on the Xcode side.

If you work with any Apple platform it is worth pursuing. If you want to just learn and use Swift it does have decent support on Linux thanks to being able to bind with C libraries. Windows support is there, but is very much in its infancy.


Swift semantics can be unduly confusing and complicated.

"copy on write" value semantics mean you can have two variable references to the same memory for a long time before one mutation causes the space to be copied and mutated, leaving the other unaffected.

The function name "move" here is beyond terrible. It makes sense only to compiler writers, and even then it is ambiguous.

The function's goal/effect is to end the lifetime of a variable binding, to essentially force the COW.

(So much for scoping.)

More generally, Swift grows in fits and starts, mostly driven by Apple's internal needs. So if Apple needs `move(x)`, you might, too.

Poor Swift team: fully exposed in open-source, but driven and locked to Apple's needs. All the best to them.


> The function name "move" here is beyond terrible. It makes sense only to compiler writers, and even then it is ambiguous.

These are exactly the sorts of issues that an evolution proposal is supposed to shake out. It gives users a chance to weigh in on new features, including naming, before they land in the language or standard library. The name is very much not finalized yet, and comments pointing toward a better binding for the functionality would be welcome.


They can have some beers with Microsoft, Oracle, Google, IBM language teams.


Swift epitomizes easy to pickup, difficult to master.


And they keep moving the goalposts.


I like the idea.

It's a fairly fundamental change, though. I'm not a compiler person, so I don't know how much cheese it would move. It also may conflict with some of the built-in optimizations.

If you want to see these in action, run the symbolic debugger in release mode.


I have wasted many, many hours tracking down reference cycles, so in some sense, I welcome this change.

However, the scope creep of Swift is becoming nightmarish - nothing just makes sense anymore.

My hopes for Swift are that it teaches a whole generation how to program really compelling applications, full stack, in a profitable ecosystem (though open source, Swift is a 'gateway drug' to get people into the ecosystem). We will see if these decisions advance or detract from that.


I had been feeling that way until I worked on a project that uses modern React and realized that a lot of other stuff is becoming this way. At least with Swift the compiler helps you along a lot, and you can look up most language concepts easily without them being so out of date.

That said I really wish the tooling would keep up. Debugging stuff like SwiftUI and async/await code is really painful right now. It feels like they keep adding this stuff without updating the tools we need to have available to really support it. This is in stark contrast to what was/is available for Objective-C and UIKit with the debugger and instruments. I'm not sure it's a good tradeoff... The high level code becomes a lot easier to read and write but keeping a mental model of what's happening is a lot harder and it becomes very thorny when things go sideways.


This is my biggest problem with the state of Apple's developer ecosystem right now. The tooling is just pathetic — I love writing in Swift, UIKit, AppKit and SwiftUI, but it can really feel like the tools are fighting you. They're just not dumping nearly enough effort into making their tooling top-notch.

There's an extremely stark contrast between Xcode and Android Studio any time you need to do refactoring, for example. Xcode's refactoring tools are both limited and very, very finicky (to the point of being basically useless).


Xcode is a bug farm.

A HUGE bug farm. Whenever it gets an update, I might as well just go out and catch a show.

When I switched to an M1-based computer, the debugging became much finickier.

I suspect that this is an OS problem, as opposed to an Xcode problem (because it happens with other System-calling apps), but sometimes spun processes seem to hang, and I can bang away on "Force Quit" until I'm blue in the face. Nothing but a hardware restart (I can't do a soft one) fixes it.

But Xcode brings the beast out on that one. It hangs and stutters quite frequently.


I live in the Apple ecosystem as a user. But as a developer, I could not imagine tying myself to it. It would put my career too much at the whims of Apple.


Well, I have had my career pinned to it since 1986.

It has not always been fun. Lots of atomic wedgies, in the recess yard, but things have worked out OK, in the long run.


Rust and Kotlin are much more advanced than Swift when it comes to writing full stack, deployable, multi platform, and performant applications...


I have encountered very few of these, with Swift. The reference counter seems good.

I tend to declare many of my object references as weak, anyway; even if not necessary.

Of course, I'm old-fashioned (and I primarily work with UIKit), so I use classes a lot.

If I am doing it the "current" way, most of the program uses value semantics, so it should not be an issue (in theory, anyway).


Aye - same here. I was once just a novice who had no idea how an application could hog 62 GB of RAM - and have weird changes to objects that shouldn't change. Then I learned about references, and what a biological messaging system really means (thanks, Alan and NotificationCenter!).


> I tend to declare many of my object references as weak, anyway; even if not necessary.

Why?


Because they don't need to be strong. I like to establish basic Quality habits. This is one.

I declare object references as weak (or, sometimes -quite rarely, unowned) by default. If they need to be strong, either the compiler will let me know, when I try assigning to it, or the runtime will let me know with a bad access crash. Much quicker debugging than chasing reference loops. Crashes are actually good. I find the cause fairly quickly. I usually don't declare local scope variables weak; only ones that escape the scope.

I have found that I seldom need to switch to strong.

I also do other "useless" things, like setting the constant before the variable, in comparisons:

    "Good Grief" == whatCharlieSays
I also tend to "in-name" incoming arguments, like so:

    func compare(thisValue inThisValue: Int, toThisValue inThatValue: Int) -> Int
I also write @IBOutlets as explicit optionals, like so:

    @IBOutlet weak var someLabel: UILabel?
as opposed to the default:

    @IBOutlet weak var someLabel: UILabel!
This forces me to unwrap everywhere. I avoid implicit optionals like the plague. I use a lot of guards. I will unwrap implicit optionals in system-provided resources, even though it isn't necessary.

I don't really care whether or not they are "necessary," or what others think of me. Every little bit helps. I'm very conscientious about the code I write.

I write it very quickly, too. Once something becomes habit, it moves rapidly.

"We are what we repeatedly do. Excellence then, is not an act, but a habit"

- Incorrectly attributed to Aristotle


> Crashes are actually good. I find the cause fairly quickly.

I agree, but I can't really square that with the rest of your comment. When you declare everything as weak, things don't crash; you just lose objects, and they turn into nil out from under you. Like, the reason your IBOutlets can be weak is that your view controller holds onto its view, and that either directly or indirectly references the thing you actually care about. But if you actually care about it, you should also take responsibility for its lifetime, no? Why rely on some implicit chain to keep the object alive? If you're not careful, you might end up silently dropping the object if you temporarily remove it from the view for layout reasons. It seems very much the opposite of the part where you seem to like manually unwrapping things rather than letting this happen implicitly (which FWIW I don't really agree with, but I don't really want to get into it unless you want to hear it :P).

I should also note that using weak references comes with a performance penalty over strong ones, but that's a tangential issue.


The performance thing is a valid point, but, if I were working on something with margins that tight, I’d probably use unowned, or strong, anyway.

There are a significant number of folks that would sneer at me for even using optionals, in the first place, because, in their world, we should not be using classes, at all. It’s always fun to encounter that.

And, like I said, it really only applies to escaped/closure arguments, or class/instance properties. Locals are almost always strong, and I always unwrap in a guard/if before accessing, anyway, even if it is a local optional (which could introduce performance issues in tight spaces).

Reference loops are a huge pain to debug. It takes a lot less time to find the cause of a crash or unexecuted code, than a loop.

An important companion to always using weak is using lots of guards and tested unwraps. Without those, always using weak is just a bad habit, sweeping bugs under the rug.

People are always happy to judge my coding habits, such as insisting on heavily documenting my code, and they are not always wrong, but I’ve been at this for a while. I get a lot done, quite quickly, and at a very high Quality level. It’s hard to argue with results.


I guess I just feel that the silent failures that come from things being weak generally outweigh the leaks that come from reference cycles


I'm not sure declaring objects weak all the time is a good practice; you can introduce silent bugs due to an object deallocating unintentionally while still in scope. Apple actually recommends using strong references with a proper architecture and relationships between objects.

See https://developer.apple.com/wwdc21/10216


Reference loops are also “silent” bugs. They are the worst kind of silent, because they usually stay hidden until long after the bug was introduced. My way forces me to deal with the bug, very close to causing it.

The bugs aren’t really silent. If the code doesn’t get executed (because of all my guards and whatnot), it’s still a bug. It becomes apparent, quite quickly, and is easier to find. I just put a breakpoint in the unexecuted code, and watch it not happen.

Lot better than trying to read tea leaves with Instruments, a month after causing the loop.

The issue still happens, but it does so, in a way that is predictable. It’s like deliberately filing weak points into a support strut, so that breaks happen in a known place. The problem isn’t the break, but the orthogonal shear that happens at that point, and the weak point simply forces the break to happen in a place that it can be contained.

It almost never happens, but when it does, it saves me a great deal of time.

That’s the old McConnell thing about finding bugs near the point of introduction.


Even if we ignore ownership semantics for a moment, I find myself missing an explicit per-identifier end of scope ("undeclare") surprisingly often. The workaround is anonymous blocks, at least if you're one of those who don't like chopping up code into countless single-use functions. But that forces you to do early declaration when aiming for overlapping scopes and I really don't see a reason for this limitation. I guess the main reason for the absence of "undeclare" in most languages is that features affecting only local scope are beneath most language designers?
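
In Swift, the anonymous-block workaround looks like this (a sketch with made-up helpers):

    var result = 0                         // early declaration, forced on us
    do {
        let temp = expensiveIntermediate() // hypothetical short-lived value
        result = transform(temp)           // hypothetical transform returning Int
    }
    // temp is out of scope here, but result had to be declared up front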


> I guess the main reason for the absence of "undeclare" in most languages is that features affecting only local scope are beneath most language designers?

Actually it's because approximately every damn compiler framework or textbook assumes that local variables go into and out of scope in a stack/LIFO order. (So if you declare `foo` and then `bar`, `foo` can't go out of scope until `bar` does, even if `foo` was a random temporary and `bar` is needed for the entire rest of the function.) And all the algorithms for working with local variables bake in this assumption, so you have to redesign everything to get back the sane semantics you get for free in assembly.

At least that's my experience with implementing proper local variable scoping; YMMV.
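
Concretely, the shape of the problem in Swift terms (made-up names):

    func f() {
        let foo = makeBigTemporary()   // hypothetical: large, needed only briefly
        let bar = makeLongLived(foo)   // hypothetical: needed until the end
        // Under stack/LIFO scoping, foo notionally stays live to the end of
        // the scope because it was declared before bar, even though nothing
        // past this point uses it.
        doLotsOfWork(with: bar)
    }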


Right, I completely forgot about the "stackness" of scope on the implementation side. Sides, actually, because both compile time and run time tend to be stacks. Of course they'd still be stacks, just with an "unavailable" marker attached to the undeclared variable.

Fortunately, these days languages that are imperative at their core pick up "functional spillover" like Kotlin's let, which seems to solve that nuisance-grade problem of a dangling `foo` temp just fine.

(Kotlin's let is basically a list transform applied to a single instance, without all the pointless ceremony of wrapping it in an Optional. I wanted to like Scala more than Kotlin, but I've come to respect Kotlin a lot for striking a balance between "language designer high" and the pragmatic mundane that must never be allowed full rein, or else we get another Groovy.)


> Sides, actually, because both compile time and run time tend to be stacks.

Actually, runtime does not use a stack in the relevant sense (on non-toy compilers anyway). It allocates a stack frame on the call stack on a per-function basis, with possibly some other bits to handle variable-length arrays, but the actual local variables are allocated inside the stack frame in mostly the same way they're allocated in registers (but with less spilling). So the same 'undeclare'/register-coloring semantics apply.


I'm not sure I buy the rationale for not choosing drop() here, but then I'm not a Swift programmer.

It's true that this is a bunch of magic, while Rust's drop() is so non-magical that its implementation, which you could copy-paste if you wanted, is supplied in the documentation:

  pub fn drop<T>(_x: T) { }
But the reason Rust programmers write drop(x) is much the same as the reason this proposal would have Swift programmers write _ = move(x) -- to explicitly tell the compiler they want to destroy x right here. So I think, as a programmer, drop(x) would make sense.


Why the word 'move'? Is there a historical, deeper or technical reason to use a word that, to me, doesn't mean what it's being used for?


Read the “Alternative spellings” section of the article.


It’s a term of art from e.g. C++.


Move should have been the default…


Swift is ref-counted, so I'm not sure how having move semantics as the default would really work with that. If it weren't, I'd agree, but ref counting was one of the core tenets of the language as a successor to ObjC.


What do you mean? Like using a variable automatically marks it as unusable unless you opt in to mark it as still usable? It kind of reminds me of linear logic, but wouldn't that be a real pain to program with?


So, Rust has this very awesome thing where all variables are move by default, but types may opt in to be copied by implementing the Copy trait (a trait being the Rust equivalent of a Swift protocol).

This works exceedingly well. It essentially maps to what I want to do in almost all cases: Things that can trivially be copied (because they don't contain a resource) will usually work as you'd expect in any other programming language, while things that cannot (because they contain a resource like a file descriptor, or an owned pointer to some allocated memory) will indeed mark the old variable as unusable.

This is a really pleasant model to work with. So much so that I miss it terribly whenever I work in another language.


I believe it was on the Swift roadmap to have opt-in borrow semantics for a block of code, if you want to do low-level systems programming - just like Rust.

But I guess it didn't fit the migration from the current Obj-C ecosystem, so they left it for later.


good idea, but it's stupid to put it in the standard library

it should be built in


Very, very little of the Swift surface language is not part of the standard library. "Basic" types like `Int` and operations like `==` are defined in the standard library, rather than built in.


They are in the runtime, not in the standard library

std lib means you'll have to import a module

    import Move

    move(a)


You are incorrect. Swift's standard library is named "Swift" – and it is automatically imported. You can refer to something in the standard library implicitly `let x = Array<Int>()` or by specifying the standard library's module `let x = Swift.Array<Swift.Int>()`.


No. The Swift stdlib is the API that is available by default without any import statements (consisting of the Swift, _Concurrency, and _StringProcessing modules). The runtime is a separate library that programs do not explicitly interact with; calls into the runtime are implicitly generated by the compiler when necessary to perform tasks like allocation or retain/release.


Thanks for the information


One of the stated goals is exactly this kind of enforcement: the compiler emits a diagnostic on any use that comes after the move. So they may say it will be part of the standard library, but the compiler will have to know what move does, just as a C compiler can assume what strlen does (if #included from the right system header) and inline it any way it wishes.


Are there existing tools that help debug compile-time properties (e.g. reference counting) easily? I know of many interactive debugging techniques for investigating runtime errors, but when I'm faced with a compiler error, it seems like the best interface I have is an error message and then meticulously reading the code. I'd find something like that really useful for type-related errors too.

Even something as simple as breakpoints and printlns let me inspect the intermediary state of these systems.


Since reference counting is automatic for the most part, the language tries to express the semantics of the references themselves - for instance, you can capture self weakly for a later callback, and you then have to check that you still have a valid capture before use.
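
A sketch of that idiom (service, fetchData and render are made-up names):

    service.fetchData { [weak self] result in
        guard let self = self else { return }  // the captured self may already be gone
        self.render(result)
    }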

After that, your issues are typically reference cycles, which can really only be found by tooling at runtime or by certain linting tools (e.g. ones that warn that the type definitions make reference cycles possible, even if your code isn't creating cycles). There are tools such as Instruments, included with Xcode, to help detect cycles at runtime.



