Swift Regrets (belkadan.com)
173 points by tempodox on Sept 22, 2021 | hide | past | favorite | 200 comments



I guess I was in the minority who loved Objective-C and its selector syntax. I think my main objection to Swift is that it seems to be written by people who hate Objective-C and made the calling syntax much more complicated. I dearly wish a more mature F-Script had been the next Apple language.

A lot of the other problems are Apple not providing good documentation and not making sure sample programs continue to compile. Bringing a new language into the world requires massive amounts of high quality documentation.


I'm with you.

Despite the verbosity of the NS/Cocoa/etc APIs, Objective-C is a small, flexible, and unambiguous language. It performs very well at runtime, and is amazingly quick to compile. With version 2.0 and ARC, it became productive in addition to everything else.

I get it, it was long in the tooth, and the C legacy was both a blessing and a heavy weight holding the language back, but we lost a lot of the simplicity in the move to Swift.


> Bringing a new language into the world requires massive amounts of high quality documentation.

Very much agree. Not just a new language; any complex endeavor requires this. I have written a server-based framework, including spending a great deal of time developing API documentation, and was relatively recently introduced to Postman. I have been using Postman to communicate the API to another engineer who won't RTFM (I've come to learn that folks don't like reading stuff, these days, so I am looking at ways of communicating information in formats other than longform text).

I like Postman. The main issue with using it as a documentation source is that it can easily become "cluttered," especially as I use it in "back-and-forths" over particular commands.

I would have been lost, in my Swift education, without StackOverflow, although I hardly ever consult it anymore (mostly because it's rapidly becoming less useful; not because I don't need the help).

SwiftUI has some of the worst documentation I've ever encountered. Obviously, it was headerdoc-style, and no one was writing header docs, so I was constantly encountering empty pages. It was so bad, a generous individual developed this site and companion app[0]. I understand that the SwiftUI documentation is being rapidly improved. I haven't really started into a big learning drive on SwiftUI, yet, so I hope it is in better shape, by the time I get to it.

[0] https://swiftui-lab.com


I was there with you too. Objective-C may have been verbose, but it was readable and easily modifiable.

It definitely seemed like the people who wrote Swift hated Objective-C. The "let", "func", etc syntax is ugly and unnecessary, along with the question marks.

I know I'm swimming against the tide, but I'll be writing Objective-C for as long as I can.


> along with the question marks

I'd take the question marks any day over tracking a nil passed around a dozen source files before landing in a dictionary literal.


Absolutely this. Having accidental null values floating around a program should not be a thing in the year 2021. And that’s exactly what some languages are fixing, like Swift. This problem is also one of the reasons why I got so frustrated using Clojure.


The question mark syntax just moves this burden to some other layer of abstraction.


I disagree. Knowing if something could be nil is a very different thing than something unexpectedly becoming nil. If you have values that are guaranteed by the compiler to never be nil, that removes the burden completely.
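A minimal Swift sketch of that difference (the function names are made up for illustration):

```swift
// A non-optional parameter: the compiler guarantees it is never nil.
func greet(name: String) -> String {
    return "Hello, \(name)"
}

// An optional parameter: the possibility of nil is visible in the type,
// and the compiler forces you to handle it before use.
func greetIfKnown(name: String?) -> String {
    guard let name = name else { return "Hello, stranger" }
    return "Hello, \(name)"
}

print(greet(name: "Ada"))      // Hello, Ada
print(greetIfKnown(name: nil)) // Hello, stranger
```

Passing nil to greet simply doesn't compile, so the "where did this nil come from?" hunt never starts.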


> that removes the burden completely.

In practice this gives you improperly constructed objects with bad state, returned from code some random person wrote in a hurry. Yes, it is bad code but so is most code in existence you have to interface with.


I'm by no means an OOP guru, but if anything is passed through a dozen source files, I start to feel a bit queasy.


> I think my main objection to Swift is that it seems to be written by people who hate Objective-C

I mean that's the vast majority of people

>and made the calling syntax much more complicated.

How is this true? Swift is very straightforward, and similar to other languages

Meanwhile Objective-C is slow, unsafe by default, with super ugly syntax only a mother could love. More than half of Apple's CVE/iOS vulnerabilities lately have been from parts of the OS that still use ObjC, the sooner they get rid of that garbage the better.


> Meanwhile Objective-C is slow, unsafe by default, with super ugly syntax only a mother could love. More than half of Apple's CVE/iOS vulnerabilities lately have been from parts of the OS that still use ObjC,

Objective-C is quite fast–in many cases faster than Swift–and as (memory) safe as Swift for the most part. Most of Apple's CVEs come from code written in C or C++, not Objective-C.


Right! If it’s so slow, how did the original iPhone work so slickly, when all the apps and most of the frameworks were written in Objective-C?


If you look at the original review videos (or even the keynote), you may notice that the first iPhone lags quite a bit. Of course, it's not Objective-C that causes it, but rather the limited hardware of the time.


It raised the bar a lot, at least. The lag was nothing compared to what you got on the typical Java-enabled flip phone, or the original Android G1!


Only for those that never had a Symbian or Windows/PocketPC phone in their hands.


That’s why those both continued to be successful, right?


Their failure was due to management failures, stupid decisions like the "burning platforms" memo, or introducing new WP versions without backwards compatibility.

Nothing to do with the supposed greatness of iOS that has created revisionists of mobile computing platform history.


I think you’re forgetting how barebones iOS1 was. I had to jailbreak to get MMS and copy-paste.


For the same reason python is fast for machine learning: because the performant parts were written in C.

Ignoring slick animations (which were written in C, if not in hand-tuned assembly), the typical UIs of apps on the original iPhone could have run (a bit slowly and in monochrome) on an original Mac, that is, in 128kB of RAM on an 8MHz CPU.


> because the performant parts were written in C

Objective-C is C. More specifically a strict superset of C.

> Ignoring slick animations (which were written in C, if not in hand-tuned assembly)

Nope. The reason the animations were smooth (not necessarily fast) is that they were processed by the GPU and orchestrated by a separate process.


Yep -- and more than that, I’d say that animation orchestration worked well because of the very elegant design that leaned on Objective-C’s strengths (like dynamic key-value observation to track animatable properties).

And using reference-counting rather than garbage collection helped to minimise jankiness even on low-memory systems.

I think Android in particular had a very hard time matching iOS here because Java just didn’t lend itself to that dynamic programming style, and because the GC was just inherently janky. (It’s a lot better now that hardware is vastly faster.)


Animations were slick (compared to other mobile platforms) because they were hardware accelerated.


The original iPhone was very slick mainly because everything else on the market at that time was so terrible.


Because the original iPhone was so stripped down that it could run on that tiny, slow CPU. It took Apple years and years to add features back into the iOS frameworks, every one carefully considered, and every attempt made not to destroy the battery of the iPhone. Multitasking only arrived around iOS 4, right? On iPhone OS 1, every app was closed when you hit the home button. That it worked so smoothly was the result of extremely focused UX.


> Objective-C is quite fast–in many cases faster than Swift–and as (memory) safe as Swift for the most part.

Ehhhh. If you write Swift the same way you write Obj-C, it will generally be as fast or faster. You only fall off the happy performance path when you use Swift features that don't even exist in Obj-C. That's not Swift being slower, that's Swift letting you use abstractions that aren't possible in Obj-C. But you don't _have_ to use them.

And given that Obj-C is a superset of C, you can't reasonably claim both that "most CVEs come from code written in C or C++" and also that Obj-C is as memory safe as Swift.


>That's not Swift being slower, that's Swift letting you use abstractions that aren't possible in Obj-C. But you don't _have_ to use them.

What happens in practice is that the libraries you're consuming use them, and then you have to use them.


You can use the existing Obj-C libraries instead, which by definition do not use them.

I find it hilarious that Obj-C devotees object to this, since it precisely mirrors their argument for decades in the face of bogus performance complaints: “you can just use normal C functions”. They were right then, and you can just use Obj-C libraries today.


I think it’s a fair comparison to look at the idiomatic way to do something in a language and use that as the reference. Doing things on arrays in Swift often means people will call filter and reduce and map a bunch, while an Objective-C programmer might write a loop. It just so happens that LLVM can sometimes optimize the latter better. I mean, I could make the exact same argument you’re making about Objective-C being “as fast as C”: it literally is C. But when people use it they write methods and create objects, which are not things they can do in C. So I think it’s fair to call it “slower” if it needs to go through dynamic dispatch to enable the code that a normal developer would write.
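A tiny sketch of the two styles being contrasted (the values are arbitrary):

```swift
let values = [1, 2, 3, 4, 5]

// Idiomatic Swift: chained higher-order functions.
let functional = values.filter { $0 % 2 == 0 }
                       .map { $0 * 10 }
                       .reduce(0, +)

// The ObjC-flavored alternative: a plain loop, which the optimizer
// often handles at least as well.
var imperative = 0
for v in values where v % 2 == 0 {
    imperative += v * 10
}

print(functional, imperative) // 60 60
```

Same result either way; the argument above is about which version a typical developer reaches for first, and what that costs.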


This shouldn’t be true, ObjC’s only innate advantage is compile time, because the compiler is simply doing less and is capable of producing far worse (unsafe) code.

At runtime Swift can utilize static dispatch, where objective C is mostly dynamic. Good swift code should generally be faster.


> This shouldn’t be true

Performance isn't about what you believe should be true, but about what actually is true. Kinda like science (versus religion).

> ObjC’s only innate advantage is compile time

Objective-C has a bunch of advantages. Compile time isn't really one of them, except when compared to Swift, which is ridiculously slow to compile.

And there are languages with very comparable feature sets to Swift that are way faster to compile.

> At runtime Swift can utilize static dispatch

"can"

> where objective C is mostly dynamic.

Not true. The C part of Objective-C (it is most of the actual language) is very static.

Also, Swift has some pretty amazing dynamic performance pitfalls. For example protocols. You see, protocols in Swift can be adopted by both structs and classes. Meaning that when you call a function via a protocol, the compiler doesn't even know the size of the arguments or how to access them or copy them into the function's scope. So even that has to be handled by a small vtable, and you haven't actually done anything with that argument yet!

As this is one of the many places where Swift can lose a cool few orders of magnitude of performance, you obviously need the optimiser to specialise the function for specific callers. Which it can do, sometimes, at some cost, as long as it can actually see the caller and callee at the same time, in the same compilation unit.
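A rough sketch of that difference (hypothetical types; the existential version goes through a witness table unless the optimizer can specialize it):

```swift
protocol Shape {
    func area() -> Double
}

struct Square: Shape {
    let side: Double
    func area() -> Double { side * side }
}

// Existential parameter: the concrete type (and size) of each element is
// unknown here, so values travel in existential containers and each call
// goes through a witness table.
func totalArea(of shapes: [Shape]) -> Double {
    return shapes.reduce(0) { $0 + $1.area() }
}

// Generic parameter: the compiler can specialize this per concrete type,
// turning the calls into direct dispatch -- but only when it can see both
// caller and callee, e.g. in the same module.
func totalArea<S: Shape>(ofAll shapes: [S]) -> Double {
    return shapes.reduce(0) { $0 + $1.area() }
}
```

Both compile and behave identically; the performance gap between them is exactly the specialization question described above.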

IIRC, Uber had an OOPSLA paper describing the extra compiler pass they had to write to get their app's performance to at least somewhat acceptable levels, because the existing optimiser wasn't good enough.

And when you want to do separate compilation, you're sort of hosed, because you don't have access to the caller when you compile the callee.

> Good swift code should generally be faster.

That turns out not to be the case.


Agree with most of your comment, but Uber’s thing was probably just as likely to be a team unable to say “no” to new code rather than one that “needed” people to anneal compiler passes.


Uber jumped on the bandwagon way too soon


> I mean that's the vast majority of people

Maybe...

https://news.ycombinator.com/item?id=15421073

It's an old article by now, but check out the responders: Marco Arment, Steve Troughton-Smith, Marcel Weiher; pretty eminent iOS devs among them.


Yes, that article is nearly four years old. Since then, Swift has evolved considerably.

Steve Troughton-Smith has tweeted recently about his work in converting all of his apps to Swift over the last year [1]. "I will remember ObjC and the times we had together fondly, but after a year of being Swift-only I prefer making apps with it" [2]. "I also wouldn’t change the timeline in which I adopted Swift ... I don’t feel I lost out on anything positive by waiting" [3].

[1] https://twitter.com/stroughtonsmith/status/14392341761636638... [2] https://twitter.com/stroughtonsmith/status/14216157982091100... [3] https://twitter.com/stroughtonsmith/status/14216193780265820...

Marco Arment has talked about using Swift in new development for Overcast, although I don't believe he's rewriting existing code. He seems more open to Swift these days than when that article was written. (Can't find a quotable source at present)

Marcel Weiher has continued working on a language now called Objective-S, which sounds like the "more Smalltalk-y" language that other commenters have wished for: "Objective-S includes an Objective-C compatible runtime model, but using a much simpler and consistent Smalltalk-based syntax." [4]

[4] http://objective.st/About


> > and made the calling syntax much more complicated.

> How is this true? Swift is very straightforward, and similar to other languages

Swift is objectively one of the most complex languages out there[1]. Just recently, they adopted basically the entirety of Smalltalk syntax as an edge case of an edge case[2].

Just the rules for initialisers are more complex than many languages and still don't cover all the cases[3].

> Meanwhile Objective-C is slow,

The only people who believe this are those who have never measured (and have uncritically accepted Apple propaganda on this topic)[4]. Objective-C is a language you can easily write very fast code in (languages by themselves aren't fast or slow)[5]. Objective-C code is almost invariably faster than Swift code, and often by quite a lot.[6]

> unsafe by default,

Also not true. The id subset is quite safe[7] (and pretty fast), as are primitives. The parts that make C dangerous, particularly strings and other raw pointer accesses are abstracted away behind safe NSString, NSArray, NSData and friends.

> ugly syntax

Keyword syntax is actually highly elegant (Smalltalk's fits on a postcard) and extremely functional. So functional that Swift just recently added pretty much all of it as an edge case of an edge case of its closure syntax. Oh, did I already mention that?

> More than half of Apple's CVE/iOS vulnerabilities lately have been from parts of the OS that still use ObjC

Citation needed. Also: way more than half of the OS is still written in Objective-C, so you'd expect that.

[1] https://www.quora.com/Which-features-overcomplicate-Swift-Wh...

[2] https://blog.metaobject.com/2020/06/the-curious-case-of-swif...

[3] https://blog.metaobject.com/2020/04/swift-initialization-swi...

[4] https://blog.metaobject.com/2014/09/no-virginia-swift-is-not...

[5] https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie...

[6] https://blog.metaobject.com/2020/04/faster-json-support-for-...

[7] https://blog.metaobject.com/2014/05/the-spidy-subset-or-avoi...


> The parts that make C dangerous, particularly strings and other raw pointer accesses are abstracted away behind safe NSString, NSArray, NSData and friends.

True, and just like in C++, the C tribe that came into the language just ends up using char * most of the time, instead of the safe variants.


I have jumped into several badly written ObjC projects and I skimmed the source code of every CocoaPod that I've used, and I have never seen ObjC code that used C string/memory handling "most of the time". Not even when ObjC required manual reference counting.

Except for a couple of malloc'd C arrays (as an optimization over using NSArray<NSValue>), most of it was for interop with C libraries, the same code where in Swift you'd have to use UnsafeMutablePointer.


What can I say, lucky you.

Now try the same with a couple of offshored projects from some famous consulting companies.


Fair enough, but I also don't see why we should judge languages based on the absolute worst projects written in them.

For example, the offshore Swift programmers I've encountered will add "?? 0", "?? []" etc. into expressions involving optionals until they compile. That doesn't mean that Swift's nullability is pointless, it's still a great tool for diligent programmers.


I agree in principle, but that doesn't change the fact that such kind of code might land on an application I depend on for some reason.

Hence why I always advocate for better industry regulation, as I get to see how the sausage is made, and would like everyone to get healthier.

Most managements don't care about diligent programmers, only delivery.


> How is this true? Swift is very straightforward, and similar to other languages

That's the argument: most people want to program in JavaScript or C++; learning is such a bother. The Swift syntax is more characters than ObjC's syntax.


> The Swift syntax is more characters than ObjC's syntax.

How so?


Well, no: Swift's call syntax actually results in the same or more characters:

  somePoint.moveBy(x: 2.0, y: 3.0)

  [somePoint moveByX: 2.0 y: 3.0];

  somePoint.moveBy(x: 2.0, y: 3.0, z: 4.0)

  [somePoint moveByX: 2.0 y: 3.0 z: 4.0];


Now do a string format example
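For comparison, a sketch of the Swift side (interpolation vs Foundation's printf-style API, with the ObjC equivalent shown as a comment):

```swift
import Foundation

let count = 3

// Swift string interpolation:
let s1 = "You have \(count) new messages"

// Foundation's printf-style API, bridged from ObjC:
let s2 = String(format: "You have %d new messages", count)

// The Objective-C equivalent, for comparison:
//   NSString *s = [NSString stringWithFormat:@"You have %d new messages", count];
```

Interpolation is where Swift's call syntax genuinely wins on character count; plain method calls, as shown above, mostly don't.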


C++ is of course a vastly more complex language than Swift. I don't think anyone would want to code in C++ these days if they can avoid it.


I want. I do it. It gives me a level of control and a number of libs that no other language can give me, including all the C libs available as well.

Yes, it is not pretty sometimes, but if you take a look at well-written C++ code you would be very surprised at how clean it can look. With its quirks from time to time, but very clean:

- mark virtual overrides with override

- use move semantics to increase performance

- take advantage of RVO

- write GUIs, CLIs, or servers

- use lambdas and ranges

- use smart pointers to get rid of most memory management

- tune your server to allocate and deallocate in the desired patterns via polymorphic allocators or just plain allocators

- take advantage of SIMD and parallelism (via parallel algorithms library, among many examples)

- create generic infrastructure in algorithms with zero overhead penalty that is not even possible in other languages

- keep things working for the next 2 or 3 decades without touching the code

- Program in HPC environments

C++ is much better than most people who do not use it all the time think. It is not as bad as they make it out to be, and, more importantly, it is very high performance. I admit this is one of the big reasons why people use it, but C++ for application programming does not look crazy to me either.


If besides best possible performance you also value ergonomics, security, maturity and tooling then IMO nothing comes close to C++, especially C++17 and later. Not that there is much choice really, only C and Rust can realistically get you comparable performance, each with their own problems.


a) I wouldn't be so sure that C++ is actually more complex at all, never mind vastly. It may be less forgiving... :-)

b) With C++, you're at least getting something for all that complexity: performance and control.


Actually plenty of people do want to and do so.


...and of course it was simple to interoperate with Objective-C in C++, in a way that it isn't for Swift, sadly


Which is super annoying when you have an engine written in C++, perhaps that runs on multiple platforms, and want to run it with a GUI under macOS.


You can use Scapix Language Bridge to automatically generate bindings for various languages directly from C++ headers:

https://github.com/scapix-com/scapix

Disclaimer: I am the author of Scapix Language Bridge.


No it’s not. If you have multiple parameters Obj-C is much longer and ugly.


Where are you getting this from?

Swift’s mandatory named parameters come from Objective-C. For the most part only the bracket placement is different.


Mandatory named parameters?

You have been able to provide anonymous arguments since I first started playing with Swift in 2.0. The function just has to define it that way.


Swift supports default arguments and ObjC doesn’t. This makes calls smaller. Swift also has argument hiding with the underscore before the parameter, and it has variadic arguments. Hidden parameter names and variadic arguments make calls much smaller. My point stands.
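A quick sketch of those three features (the function names are made up):

```swift
// Default argument values: callers can omit them entirely.
func resize(width: Int, height: Int = 100) -> (Int, Int) {
    return (width, height)
}

// An underscore hides the external argument label.
func double(_ value: Int) -> Int {
    return value * 2
}

// Variadic parameters collect any number of arguments.
func sum(_ numbers: Int...) -> Int {
    return numbers.reduce(0, +)
}

print(resize(width: 50)) // (50, 100)
print(double(21))        // 42
print(sum(1, 2, 3))      // 6
```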


That's not really "much" longer. And in some cases the brevity saved could really increase obfuscation and confusion.


Yes it is, especially if there are long parameter names. And I disagree about the obfuscation and confusion, but that’s a discussion about Swift’s goal of self-documentation, not code length.


You can define ObjC method parameters with empty selector names if you want.

This for example is perfectly legal ObjC, callable as [foo gimmeAString:10 :@"hey "].

  - (NSString*)gimmeAString:(NSUInteger)count :(NSString*)piece {
      return [@"" stringByPaddingToLength:count withString:piece startingAtIndex:0];
  }


Yes, indeed. Long time ObjC programmer (and C++).

However I've come to appreciate Swift a lot more recently. What had taken pages of code can now be coded in just a page of Swift. That's nice.

I still like ObjC, nonetheless. But I'm liking Swift a bit more. The fact that it's stabilizing is a key factor in this. My objections to Swift have been how changes would pull the rug out from under what you'd already learned and coded before 5.0.

I agree with the documentation issue. A lot of features can't be found short of watching WWDC videos. And even then...

The other issues are "bad faith" and "bad intent" with privacy lately. I've been actively removing and abandoning most of my use of iCloud because I can't really trust Apple even on that anymore.


I can't stand Objective-C, but I'm pretty apathetic towards Swift. As far as I can see, Swift simply swapped Obj-C's terrible syntax for its own, equally terrible syntax.

I probably would have been more enthusiastic about "Obj-C 2.0" than Swift.


I didn't mind Obj-C; it had its own elegance and logic, and Apple really made strides in making it more ergonomic with features like ARC and of course the excellent Xcode editor. I still miss using my Apple mouse's horizontal scroll to navigate between files.


I don’t understand why Apple’s developer documentation is so terrible


I think that Apple hated that, at the end of the day, a lot of the system frameworks had to be written in Objective-C++, which is not great.


NeXT had another point of view on that matter though, the original DriverKit was Objective-C.

Objective-C++'s main point of existence, just like POSIX's, was only to bring other software onto the platform.

It is quite telling that only old timers have access to Objective-C++ docs.


Are you and GP talking about the same thing?

The obj-c drivers from next/OpenStep were replaced with a restricted subset of C++ (not objective-C++) IOKit system. I'm not sure what system GP is referring to based off Obj-C++ but I assume it's not the drivers?


Not really, I am talking about NeXTSTEP and DriverKit.

Ironically I think IO Kit userspace replacement, DriverKit, got its name as homage to the Objective-C one in NeXTSTEP.


E.g. AVFoundation.


Unlike many commenters here, I came to praise Swift. Of course there will be regrets -- nothing's perfect -- but on the whole I think the designers (and community) did an amazing job with Swift. I say that after having written multiple apps in Objective C -- I would never consider going back.

Optionals and null safety, first class enums, first class functions are just lovely in swift, all in a language that is super readable when well-written.


Swift has way too many shorthand syntaxes. It feels nice from a language design perspective but when teaching Swift, you can see how confusing it is for the students. In larger codebases a lot of shorthands are forbidden by the organizations for consistency. So I'm not sure who's the real beneficiary for all those shorthands?


The shorthands mystify me as well. I have taught a few courses using Swift, and the shorthand notation, as well as implicit variables like oldValue and newValue in property observers, is confusing. You can try to ignore them, but eventually a student will find them.
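A minimal sketch of those implicit names (the class is hypothetical):

```swift
final class Thermostat {
    private(set) var lastOld: Double?
    private(set) var nextNew: Double?

    var temperature: Double = 20 {
        willSet {
            // `newValue` is never declared anywhere; the compiler provides it.
            nextNew = newValue
        }
        didSet {
            // `oldValue` is likewise implicit in didSet.
            lastOld = oldValue
        }
    }
}

let t = Thermostat()
t.temperature = 22
// t.lastOld is now 20, t.nextNew is 22
```

Students reasonably ask where newValue and oldValue are defined, and the answer ("nowhere, the compiler injects them") is exactly the kind of invisible magic at issue.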

In my experience, teaching courses using Racket has had better outcomes. At the end of the Racket courses, I can quickly transition students to other languages, including Swift. It hasn't worked out so well with Swift. I am still gathering data so I can come to a better conclusion as to why.


> Swift has way too many shorthand syntaxes. It feels nice from a language design perspective but when teaching Swift, you can see how confusing it is for the students.

100% agree. As a late adopter of Swift, I spent a lot of time looking at code that was so stingy with information that it was challenging to even formulate a Google-able question from it.


Very much this. The original discussions around Swift talked about the value of being explicit rather than implicit and reducing surface area (for instance, no ++/-- increment/decrement operators). That goal appears to have been completely lost recently.


My impression following swift forums is that original core team members were very concerned about relying a lot on first principles, whereas the new generation is more about building powerful constructs, in a fast paced way.


The new generation doesn't seem to have a need for extracting substrings from strings then. The existing Swift syntax for slicing strings [1] is horrible.

[1] https://www.cocoaphile.com/posts/string-slicing-in-swift


You can find a few extensions on stackoverflow that makes subscripting work with integers on strings. I'm still unsure why this isn't part of the stdlib..


This is a product of UTF-8 encoding, isn’t it? What’s the third index of “Füß”?


There is no reason they can't come up with a more readable API that works. Python says it's 'ß' and I'm fine with it as long as it's consistent:

  >>> "Füß"[2]
  'ß'


That article is terrible. Simply wrapping a string in Array(), converting it to characters, achieves what the author desired and makes the rest of the article pointless.


The article is only for demonstrating how standard library works (it was the first Google result for me). The point is we shouldn't need to convert a string to an array of characters to manipulate it.


> The point is we shouldn't need to convert a string to an array

Yes we should. Unicode strings have ambiguous lengths. I would encourage you to try slicing Unicode strings sometime to understand why this is true.

Consider the black Santa emoji. What is the length of that emoji? Backspacing an emoji deletes it, so it could be one. But then how would you substring slice out the skin color component of the emoji?

Converting a string to an array yields the symbols on the screen separated by the cursor position, but that’s not a true representation of the data.
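A small sketch of that ambiguity in Swift, using the same emoji:

```swift
let santa = "🎅🏿" // base Santa emoji plus a skin-tone modifier

// Swift's `count` is in user-perceived characters (grapheme clusters):
print(santa.count)                // 1

// The same string seen as Unicode scalars, and as UTF-8 bytes:
print(santa.unicodeScalars.count) // 2
print(santa.utf8.count)           // 8
```

Three different "lengths" for one string is exactly why Swift refuses to pretend an integer index is unambiguous.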


This is a typical example of overengineering.

When one speaks about "string slicing" they don't mean stripping the skin colour of an emoji. If there is a black Santa emoji in a string, I want to keep that Santa black. I am not interested whether it's a composite emoji or not.

The simple syntax should cover 99% of use cases and there should be a special, more advanced syntax for the rest, one that you'd use if you are writing an Emoji parser.


No, it's not overengineering; it's working with Unicode. You have to let go of the idea that a string is a sequence of characters separated by where your cursor pauses. It's not. Black Santa was a simple metaphor for more real-world scenarios such as the Chinese language, in which multiple characters combine to create a new character.

Imagine if you had a Color key on your keyboard, and a Santa key on your keyboard. Each key individually has meaning. When the two characters are positioned adjacent to each other, you get SkinColorSanta as one single character.

What is the new length of your string? Two, for each of the original Color and Santa characters? Or one, because there is only one icon that represents the two together? Remember Swift interops with C, so what is the size when you need to drop to char*? Hope you don't plan to use 1!

How would you solve this problem? Swift is open source. Make a PR if you have a better solution.


I'm not sure if we are on the same page. I am simply talking about syntactic sugar, not semantics.

To restate, I still don't understand why you think

    str.dropFirst(1).dropLast(2)

handles unicode better than

    str[1:-2]

What does dropFirst(1) drop if the first _element_ is a Black Santa? Why can't whatever it does be implemented using the bracket syntax?
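For what it's worth, integer subscripts are easy to layer on top of the index-based API as an extension; this is a hypothetical sketch of that sugar, and grapheme-cluster semantics still apply underneath:

```swift
// Hypothetical sugar: integer subscripts built on String's index API.
// Still O(n) per access, since String is not randomly addressable.
extension String {
    subscript(_ i: Int) -> Character {
        return self[index(startIndex, offsetBy: i)]
    }
    subscript(_ r: Range<Int>) -> Substring {
        let lo = index(startIndex, offsetBy: r.lowerBound)
        let hi = index(startIndex, offsetBy: r.upperBound)
        return self[lo..<hi]
    }
}

let s = "Füß"
print(s[2])     // ß
print(s[0..<2]) // Fü
```

So the bracket syntax can behave exactly like dropFirst/dropLast; the stdlib omits it on purpose, presumably to keep the O(n) cost visible.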


> So I'm not sure who's the real beneficiary for all those shorthands?

Coffee shop and college project app developers who are chasing the cool (be it tricks or frameworks).

That’s not a knock, but that’s who I see using these things.

We were hiring a guy for a Flutter app who had no previous experience in Dart, but he turned us down because Dart lacked some whiz-bang feature he had grown accustomed to. I did not see this as a loss.


Could I ask what feature they were looking for? I haven't used Flutter in a while, but it left a good impression. I remember being far less annoyed using Flutter than I am using other declarative frameworks like SwiftUI.


I’m an embedded C guy, so you’ll have to forgive my ignorance on this. All this high level stuff is magic to me.

But I want to say it was something larger than a conditional style or pattern matching and probably a state management framework. IDK, I had a good laugh about it and moved on.

EDIT: As to good impression, yes, we’re very happy with Flutter.


I think there's a growing concern about the language complexity. I hope the next releases are going to be about trimming down the language and consolidating the not-so-sexy but very important other aspects (multiplatform, compilation speed, SPM, tooling, etc.), following golang's example.


I don’t know if this is Swift’s fault, per se, but I find Xcode to be a real pain to use. There’s something about the Apple Way for UI/UX that just doesn’t jibe well with an IDE experience. I find myself constantly jockeying around the various windows and panes within Xcode that I usually never have to bother with inside of Visual Studio.


Probably depends a good deal on what you're used to. By far, the IDE I've spent most time in through my career is Xcode (or its predecessor Project Builder) and have found it's not too bad once you've got a grip on the things it doesn't "like".

In fact I enjoy working in it more than I do the much-vaunted Jetbrains IDEs, even when configured to use Xcode key shortcuts (thus short-circuiting the familiarity problem a bit). The faster autocomplete is nice, but a lot of the "smarter" parts end up getting in my way or doing the wrong thing more frequently than they are helpful. I can't really compare with Visual Studio since I've never really seriously developed for Microsoft platforms.

I will say though that Xcode got noticeably worse after Interface Builder got merged into it, and I think much of its woes stem from that merger even today. IB should've been left a separate tool, which is particularly evident now that XIBs and Storyboards are yielding to SwiftUI in the nearish future.


Agreed. I'd love to see Apple break the core of Xcode out into a separate app with a pared-down feature set (text editor, file explorer, debugger, console, etc.), give it LSP support, and then make it extensible and blazing fast. Basically VSCode, but Mac native.

I really don’t want all the visual editors and previews. Interface builder, Core Data Model Editor, even Swift UI previews feel so half-baked that they’re effectively unusable anyway. Swift packages are a delight because they’re purely text based.

As coders, we’re working with text all day. It doesn’t make sense to context switch to dragging boxes around a canvas to wire things up.

I’d rather they invested the engineer time into the core editor and made it rock solid.

My hunch is that they spend so much time on them because they look great in demos, but when it comes to the end of the beta cycle and the tool is still broken, it leaves a bitter taste. Simplify it.


If you develop for iOS|macOS professionally, you may be interested in Jetbrains' AppCode [0].

Note: I'm not connected in any way to Jetbrains, and I have not used AppCode myself, but I used other tools from them, and they're quite competent, I'd say.

[0]: https://www.jetbrains.com/objc/


Apple professionals use Xcode. Using different tools than the ones Apple uses leads to a lot of problems that Apple are not interested in fixing. The days when a company like Metrowerks or Symantec could be a viable third party IDE supplier are long gone.


I don't do Apple development as much anymore, but I used to write in AppCode a lot and used Xcode only to build.


Alongside Xcode I find BBEdit very useful for editing data files, diagnosing merge problems, writing shell scripts, etc. I still do almost all my real coding in Xcode to use the autocomplete. Tower is useful for anything non-trivial in Git. For basic operations the Xcode Git support is useful.


AppCode uses xcodebuild etc. under the hood, and Xcode projects to configure the IDE. You aren't going to get any more bugs in the compilation process than you would in Xcode.


I believe AppCode uses Xcode behind the scenes for most of the magic stuff anyway. It's just a more capable facade.


Xcode as an IDE... and, at a certain scale, a build system like Bazel or Buck.


Xcode's faults preceded Swift by many years.


Some, yes. But in my experience, Xcode is/was amazingly responsive right until the first Swift file or IBDesignable class is added to a project. Then something happens in the background and everything starts to crawl.


There is swift language tooling for vscode, if you’re so inclined.


Not an apple user, but I've heard that xcode is an ide+build system. Which means that you can't compile an application if you don't have the exact version of xcode installed. How true is that?


I'm not sure what you mean by "the exact version of xcode installed" but you can compile most applications with any recent version. Of course when new SDKs are released you need to update the Xcode version but it's not a big deal (and maybe a once a year thing).

It is an IDE + build system but you can totally write code in a different editor and compile it on the command line with xcodebuild.


They're part of Xcode package but they can also be installed separately IIRC.


I'm surprised he doesn't bring up minor features that lead to huge compile-time issues, like operator overloading, imports being module-scoped rather than file- or folder-scoped, and type inference in some cases. Not to mention that the language doesn't scale well with core count compared to many, many other programming languages. Throwing 64 cores / 128 threads at a C++ code base speeds up the compile step roughly linearly; in Swift it does not, and it tends to effectively max out at around 8 to 16 build threads. Even a toy app with 3 SwiftUI views on an M1 MacBook is slow to build & run relative to its size!
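To make the operator-overloading/type-inference point concrete, here's a contrived sketch (my example, not the article's): every `+` below is overloaded across Int, Double, String, etc., and every literal is untyped, so the checker has to search a combinatorial space of overload assignments. Expressions in this shape have historically produced Swift's "expression was too complex to be solved in reasonable time" error, though newer compilers handle many of them.

```swift
// Each `+` is overloaded and each literal is untyped, so the type
// checker must consider many overload/literal-type combinations.
let n = 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8
assert(n == 36)

// Mixing in other initializers grows the search space faster still:
let s = String(1) + String(2) + String(3)
assert(s == "123")
```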

Honestly, you would have gotten 80-90% of the benefits of Swift by giving Obj-C a new Swift-like syntax and continuing to improve it as a language. A lot of the ugliness of Obj-C could have been translated away with very simple 1:1 syntactic-sugar macros. And you would actually have fast, responsive compile and indexing times, unlike Swift.


> Honestly, you would have gotten 80-90% of the benefits of Swift by giving Obj-C a new Swift-like syntax and continuing to improve it as a language. A lot of the ugliness of Obj-C could have been translated away with very simple 1:1 syntactic-sugar macros.

This is a common talking point I hear in the Objective-C community, but nobody has come up with a credible design or implementation of "incrementally evolve Objective-C and drop the C part" beyond stating that it's trivial, etc.



“Hypothetical” being the key word here, though. These proposals are all thin on details, and if it was so easy someone would’ve made such a language by now.


No one is going to write a detailed estimate to paint your house if you've made it public knowledge you're not interested in your house being painted.


That's a bit bad faith; making a language is never easy and takes a ton of resources.

Swift is the bandwagon at Apple due to how Chris Lattner started the project and his clout, and I think it would be politically untenable to make an ObjectiveSwift going against that, especially now at Apple.

Nobody else but Apple would make ObjectiveSwift either, because for better or worse, ObjC & Swift are languages for the Apple platform and nothing else. If you don't need to make something for Apple, and you're going to do it without major tech company sponsorship, then you have freedom and you go make things like Rust, Nim or Elm instead.

With all of Swift's problems, I would never recommend it for a server-side platform, and its adoption shows that reality.

Because of all of the above, you'll only see hypothetical proposals, no implementations.


> Nobody else but Apple would make ObjectiveSwift either, because for better or worse, ObjC & Swift are languages for the Apple platform and nothing else.

What about all the work done to support Swift on Linux? I had thought that was mostly done by the community rather than from Apple itself, but I could be wrong


It is as successful as the work done to support Objective-C and GNUStep.

It is interesting that it works, and some folks might even create some products that use it, but it won't ever take the world by storm.

Similarly, Mono was never that relevant; Miguel and others ended up creating Xamarin and focusing on mobile instead.

.NET nowadays has a good story on Linux, because now it matters to Microsoft to make it relevant, and yet most UNIX shops would rather go with Java, Rust, Go,....


I agree that it isn't super commonly used, but I was responding to the assertion that no one would bother making an Objective-C replacement for non-Apple platforms. Clearly some people are interested in spending time extending Apple languages for non-Apple platform purposes, so the idea that the only thing stopping people from making a "better" Objective-C is lack of support for third party platforms seems kind of strange to me. If it really were not that difficult to make a better Objective-C and enough people were interested, it seems like it would have happened regardless of official support by Apple for third party platforms. Either the interest isn't there in the first place, or it's not nearly as easy a problem as suggested.




The main point of Swift was to create a memory-safe language Apple could use across its OSes. Trying to change Obj-C's syntax doesn't help with that goal.


Swift does not provide memory safety for concurrent code. It has comparable pitfalls to Go. (You can use Thread Sanitizer to try and diagnose these issues, but that's best-effort and runtime-only; it does not make your code totally safe.)


Concurrent memory safety is definitely a goal. Try the new ‘-warn-concurrency’ flag to see what I mean; it is comparable to Rust and quite different from Thread Sanitizer. There's also a new runtime sanitizer this year with Swift intrinsics, not best-effort like TSan was.

That said. Swift is in the tough position of trying to be a lot of things at once to users with competing needs. Applications, systems, performance, education, prototyping, etc. While there’s broad agreement concurrency safety is important, not everybody thinks it is important enough to bury your working build under a thousand errors (though that view is represented)

Ultimately swift’s philosophy is that safety is practice and not theory. Some people do turn on ‘-warn-concurrency’ and fix their errors, others would want to ignore them and find some escape hatch to squash them which doesn’t appreciably improve safety, still others might not upgrade if that was required and maybe the ecosystem as a whole becomes less safe for it. Swift feels responsible for these kinds of outcomes.

It’s a tough problem but it does lead to interesting ideas that make safety more practical and productive. Remains to be seen how much of both worlds you can have, but swift/clang/llvm have a long history of doing stuff like that better than you expect.


That is true, but the great majority of memory safety issues are not concurrent ones. In practice Swift and Go are huge improvements over C, C++, and Objective-C in terms of memory safety.

I'd also say they are in a reasonable space in the state of the art of safety in general. While they do not make concurrency safe and Rust does, in Rust you need to use unsafe to write things like doubly-linked lists and graphs (or resort to things like indexes-in-an-array), which you can do safely in Swift and Go. So there are interesting tradeoffs all around - our industry has not found a perfect solution here yet.
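To make the linked-list point concrete, here's a minimal sketch (my example): Swift's reference semantics let you build a doubly linked node chain in plain safe code, with a `weak` back-pointer to break the retain cycle, where Rust would require `unsafe` or index-into-array workarounds.

```swift
// Safe Swift doubly-linked nodes: no `unsafe`, no array-of-indexes tricks.
final class Node {
    var value: Int
    var next: Node?
    weak var prev: Node?   // weak back-link prevents a retain cycle
    init(_ value: Int) { self.value = value }
}

let head = Node(1)
let tail = Node(2)
head.next = tail
tail.prev = head
assert(head.next!.prev!.value == 1)
```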


In my experience post-ARC almost all memory safety issues are thread-safety related.


Except ARC only works for Cocoa like classes, everything else is traditional C like memory management.


Yes, but most of the time you’re not using that in your Objective-C code.


If I had Go's build speed in Swift, I wouldn't have minded a clean break language like Swift.


It does now, with actor-based concurrency.
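A minimal sketch of what that buys you (hypothetical `Counter` type, Swift 5.5+ concurrency, run as top-level code): the actor serializes access to its state, so racing increments can't corrupt it, in the same spirit as Go's "share memory by communicating".

```swift
// An actor's mutable state is only touched on its own executor,
// so 1,000 concurrent increments each land exactly once.
actor Counter {
    var value = 0
    func increment() { value += 1 }
}

let counter = Counter()
await withTaskGroup(of: Void.self) { group in
    for _ in 0..<1_000 {
        group.addTask { await counter.increment() }
    }
}
let total = await counter.value
assert(total == 1_000)
```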


What do you mean specifically about memory safety? A clean-break, new-syntax Objective-C could have dropped the C part of ObjC, added strict nullability, dropped header files, added non-ABI-breaking features like enum ADTs, and made a bunch of other tweaks, while still keeping most of the same compiler code without all the downsides of Swift. Swift became a silver-bullet Homer car as far as languages go.


That would not have helped with use-after-free, though, which Swift does solve.


That's solved by ARC, not Swift.


It was really solved by reference counting, way back when Foundation was introduced. The ARC increment was tiny.


Tiny? It made it so you’d never have to deal with an overrelease ever again…


Yes, tiny.

Not sure what you're on about with the "overrelease" bit, but all ARC did was automate, with additional compiler support, things that were already largely automated.

And it caused additional crashes in code that shouldn't even be able to crash:

https://blog.metaobject.com/2014/06/compiler-writers-gone-wi...


It solved it for Cocoa-like classes; it did not solve anything for the rest.


My stack-allocated integers don't need it.


Yeah, but those heap allocated structs surely do.

And although there is now partial ARC support for that scenario, partial is the keyword here, as it needs to obey specific access patterns to actually work.


But why on earth would I heap-allocate structs in Objective-C? That's what I have objects for. If I am going to dumb things down to structs, the reason is that I don't want heap allocation.

Unless I have no clue whatsoever as to what I am doing.

As in, I just wrote pure C (no Objective- at all) and for some reason changed the extension of the file to .m

Anyway, not using the solution that is there is not the same as the solution not existing, it certainly doesn't qualify as "the rest".


You "mpweiher" might not do it, but I assure you plenty of enterprise programming cogs do.

They code mostly in a C like way, and only use Objective-C at the level it is required to call into Apple specific APIs.

When using C++, then their code looks like what I call C+.

The same set of people that are to blame for Objective-C's conservative GC never working in practice, when mixing frameworks compiled with different modes.


Er...no.

I mean, sure, I believe you that you've seen this. But I've looked at quite a number of iOS/macOS projects and this was never a problem. Not even close. If anything, people were extremely hesitant if not actually afraid of using C features outside of the very basic ones like control structures, arithmetic and assignment.


Doesn't change the fact that ARC doesn't work with structs, and people actually allocate them on the heap, hence why in 2018 Objective-C got some support for also applying ARC to structs.

Maybe time to also review the WWDC session and the points made there?


Oh right, good point. Other types of memory safety are more relevant here then.


C++ pays an extraordinary price for its fully parallel builds, concealed in the "One Definition Rule".

The price is this: if you violate the ODR, the standard says your program is not valid C++ (and thus has no defined meaning), but there is no requirement for a diagnostic (i.e. a compile or link error), and the build might complete anyway.

This has sometimes been described as "false positives for the question: is this a program?", and it is a pretty serious penalty to pay for the benefit of improved compile times.


Modules mitigate ODR violations to a large degree; they will be quite a bit harder to commit. Compile times will also be better, at least incremental ones.


Could you have ODR and proper warnings / errors handling the issue when it pops up without much penalty?


No. The ODR allows the compile step to be embarrassingly parallel. Since all the separate definitions of X are, by fiat, identical, we needn't detect inconsistency at all, so there's no interaction; it's like merge-sorting the files: you can spin up as many threads or processes as you like and process more source files.

If we decide "Oh, but I do want to detect inconsistencies, my users would want me to warn them about that" then we can't have the parallelization because we need to rendezvous constantly to verify the definitions are consistent.

There are a bunch of tricks that people do today to get some semblance of ODR checking for not too high a cost, and to avoid some ODR pitfalls that might defeat the checks they have. C++ programmers have accepted this danger, if you don't like it then C++ isn't the language for you.


This is a whole series of articles, and he does address type overloading in one of them, which is maybe what you're looking for.

https://belkadan.com/blog/2021/08/Swift-Regret-Type-based-Ov...


Before Swift/Kotlin our iOS team was way more productive than android. Everything was done in half the time.

They both switched to new languages.

Now it has flipped. Android does things in half the time the other team needs.


I've found similar results, although at the end of the day, iOS development is still faster overall.

Kotlin does a better job of getting out of the way when you need it to (compared to Swift), and Jetpack Compose is the real game-changer in terms of productivity.

That being said, there's a lot that you 'get for free' in iOS' frameworks, and APIs are generally more interoperable / batteries included. Both of these factors are what ultimately speed up iOS development.


I haven't had a chance to use Jetpack Compose yet but this roughly tracks with my experience. Needing to hunt down third-party libs for everything slows development down considerably and makes for complications down the road when those libraries inevitably stop being supported.


Totally.

To add to this: when looking for third party libraries (Swift/Objective-C or Kotlin/Android Java), aside from those from large organizations (Square, Airbnb, etc.), I've found the selection and quality of those available for iOS to be better, on average.


Strong agree here. This is also tied into the "batteries included" approach of Apple platforms, because many if not the majority of Swift/Obj-C libraries are mainly wrappers around system frameworks that either bring them to a higher level (in the case of C or C++ APIs) or add a sprinkle of syntactic sugar and light-to-moderate gap filling functionality. So naturally, these libraries are easier to maintain in the long term simply because they're doing so much less.


I think it helps that C and C++ just work on iOS, so you can easily use or wrap existing code in those languages.

Unless things have changed a lot recently, C/C++ sort-of-works on Android but it’s a pain to use and the tooling support is very weak.


There's a tool called "scapix" that's popped up recently that fixes this and is really impressive.

It generates zero-boilerplate bindings (i.e., you don't need to manually write bindings like with most tools) from C++ code for both Swift/ObjC and Java. The idea is that you can write C++ code and then consume it on both mobile platforms, making your life easier.

  https://github.com/scapix-com/scapix
It also is really useful if you want to use Java from C++. Manual JNI invocation and type translation is awful, this just smoothes over all of it.

  https://github.com/scapix-com/scapix#java-link
A good use case is when you want to extend a Java app with some native capabilities.

I've had a scenario where another app handed me the pointer to a window to render to, and so the only way to use it from JVM was to write some bridge code. Where C++ started the JVM & invoked my Java app's entrypoint, sending over the window pointer, and then from there I could render/paint into the window.


Sounds great, I’ll check it out!

I’ve always wondered why Google didn’t make something like this themselves. I assume the answer is that they just don’t care about native code, or at least native/Java interoperability.


Because as per Android team, you should only be writing C or C++ code for implementing Java native methods, Vulkan, real time audio, integrating native libraries from other platforms, or anything else game development related.

But yes, not even caring about a better JNI tooling story is a big pain, especially given Android's flavor of Java; they surely could have come up with a better FFI story.


I find it hard to believe that anybody is faster coding in objc than swift, if you control for garbage code. It was easy to quickly write buggy messes in objc - swift made that a lot harder, and is also a much more ergonomic and productive language in my experience. I feel like there has to be some other factors at play if that’s the results you’re seeing. (Source: I’ve been doing objc since 2009, and swift since version 1.0)


A well-jelled team with senior engineers can be much more productive with Objective-C.

But for beginners, swift looks less intimidating due to its syntax familiarity. The language pitfalls become apparent as you dive into it.

Perhaps that's why Swift failed in ML and other areas. People don't like the language; in iOS you are forced to use it.

I hope the Swift team does some hard thinking and starts slashing features to make it simpler. What we got is something more complex than even Java. This is the same reason Scala didn't take off (even though some people like it).


This just isn't true. Given equal familiarity, Swift will be vastly more productive at any skill level. Of course you can always tunnel deeper into more sophisticated solutions using Swift, since Obj-C lacks most of its capabilities, but that's hardly an equal comparison.


I have a lot of experience in both languages, and TBH it's just not true. I'm less productive in Swift in many ways, mostly because of its slow build and indexing times. It's infuriating.

You are still coding against the same UIKit and other apple libraries in both. SwiftUI has a chance to make it better, but it's incomplete and has bugs/gotchas that make it not as productive as UIKit when you run into that, which is pretty easy. But in an imaginary world where we got ObjectiveSwift instead, SwiftUI could have existed there and given the same productivity benefits.

Binary sizes ballooned because of the language. Binaries were significantly smaller for equivalent Obj-C programs than the Swift ones, even after the Swift standard library was finally baked into the OS.


It would be impossible to evolve Obj-C to add most of Swift's capabilities unless you drop binary and source backward compatibility. At that point it's just another new language and you've lost the things you say made Obj-C simpler.

I have a hard time believing any engineer is productivity-bound by Swift's build times, and especially indexing times, since indexing doesn't affect your ability to actually write code. Typically developer productivity is bound by thinking of an appropriate solution to a problem and expressing that solution in a language. Given how much easier that expression is in Swift, especially if you spend a bit of time adapting the language to your problem, the build time becomes rather incidental. Add to that Swift's ability to catch far more issues at build time, plus its language features, and the Obj-C solution falls further and further behind. Even if your Obj-C solution builds faster, it will likely be an inferior solution anyway.


I see you haven't worked on a Swift code base with more than a few engineers. Work on a project with 20-80+ engineers and you slam into these issues very quickly. Last time I checked, the inflection point starts around 100k lines of code.

Also 'thinking' of solutions often involves writing something and then trying it out in a build-edit-run cycle, and then integrating your thing into a larger app context, which also involves many build-edit-run cycles. Most people do not think of their solution in whole cloth in their head other than in a broad strokes manner and then start writing code. This means indexing and building matters greatly for actual productivity.

Indexing issues mean your IDE is not autocompleting, click-to-definition doesn't work, and, in the past, even syntax highlighting failed. Also, when you have some half-formed code, a lot of indexing functions start breaking in an annoying way, whereas I don't really recall this behavior in Objective-C code bases.

In practice I've found that Swift being 'able to catch issues more' is mostly strict nullability, better array/dictionary strictness, and enums, which would actually be relatively simple to implement in an extended ObjectiveSwift. Strict generics would be more work, but you could also add them to the language as a version update. Dart showed how it's possible to add big changes like strict nullability to a language with their Dart v1 -> v2 update. Objective-C could have gone down that path instead.


  > Last time I checked the inflection point starts around 100k lines of code.
If you're using SwiftUI + generics for your view models, depending on what you're doing, be prepared to hit that limit at around 2,000 lines, and then your app won't even compile...


indexing doesn't affect your ability to actually write code

What! Of course it does. If it doesn’t impact coding at all, what is it for?


You are talking out of your arse… and so am I.

Unless we have well-researched data, it is just an opinion.

My opinion and experience are definitely different from yours, and I have seen that Objective-C leads to more productive teams if they are mostly senior people. (With a couple of libraries (just some helper categories on strings and arrays) and some sparse macros, Objective-C becomes a very productive language. But it takes a senior team to do that well.)

Meanwhile, with Swift you feel you have to please the compiler at every step, and it doesn't really lead to a much safer language anyway (Swift bugs are just different); you feel you have to fight the compiler in every way.

Objective-C is less fussy, and while it lets you do dumb/fast things (especially important during prototyping), it gives you a clear 'you are doing X wrong' warning, which gives you the best of both worlds:

Fast prototyping when you need it, and safety when you need to ship (you can have a production build fail on warnings).


Nullability isn't that big of an issue in practice. Every call in Objective-C is equivalent to a variable?.thing call anyway, and in Swift codebases most things become non-null fairly quickly. It's probably the top feature of Swift for me, which is ironic because it's relatively cheap (in compile time) to implement compared to many other things Swift has.

And if you're doing nullability, you might as well add ADT enums, since that is usually how it's implemented.
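Sketching out the connection drawn above: `Optional` is (modulo sugar) just a two-case generic enum with a payload, so a compiler that supports strict nullability already has most of the machinery for ADT enums, and optional chaining recovers Obj-C's nil-messaging behavior under type checking.

```swift
// Optional<Wrapped> is essentially this ADT:
enum MyOptional<Wrapped> {
    case none
    case some(Wrapped)
}
let wrapped: MyOptional<Int> = .some(7)
if case .some(let n) = wrapped { assert(n == 7) }

// The `?` sugar over the real Optional; `map` mimics Obj-C's
// implicit nil-messaging, but the compiler forces the nil check:
let parsed: Int? = Int("42")        // Int("x") would be nil
let doubled = parsed.map { $0 * 2 }
assert(doubled == 84)
```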


These are the same stupid points people brought up when Swift was first released. I used to believe it too. Then I actually used the language and integrated it into existing codebases. The jump in quality and performance was notable and immediate. What you state here about both languages doesn't match any experience I've ever had or seen from other teams.


I have a rather complex application that is a mix of C++, ObjC and Swift. I humbly consider myself an expert in all three languages and worked on AppKit while employed at Apple. Perhaps Swift is slightly more accessible when you begin development. If you aren't familiar with ObjC syntax, Swift may look more familiar at first, but ObjC syntax quickly becomes familiar too.

When you mention more sophisticated solutions using Swift, I am not sure what capabilities Obj-C lacks that would hamper it. Are you referring to the functional parts of the Swift language? Combine?


Lacking capabilities also means fewer decisions. If you want to model something in ObjC, it's a class, full stop. In Swift you have the choice between classes and structs. There are generally more degrees of freedom.

Just observe how much the iOS community talks about Swift patterns. Objective-C was more like Go, in that you accept your fate, keep things simple, and write some boilerplate code here and there. It seems to drive some people crazy, but it also avoids C++-style meta discussions about language features.

Whether this trade-off increases or decreases productivity depends on your team.
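The class-vs-struct degree of freedom mentioned above, in one sketch (my example): value types copy, reference types alias, and that modeling choice simply doesn't exist in Obj-C's everything-is-a-class object model.

```swift
struct PointValue { var x = 0 }       // value semantics
final class PointRef { var x = 0 }    // reference semantics

let v1 = PointValue()
var v2 = v1            // independent copy
v2.x = 5
assert(v1.x == 0)      // original untouched

let r1 = PointRef()
let r2 = r1            // second name for the same object
r2.x = 5
assert(r1.x == 5)      // mutation visible through both names
```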


You're correct that the post isn't completely true: Swift syntax isn't more familiar to beginners than ObjC is, because in their pursuit of taking the C out, they also dropped several long-standing C patterns, such as prefix/postfix operators and standard for loops, and changed parameter name and type ordering.


They're not saying developing in Swift is faster than ObjC; they're saying Java for Android was slower than ObjC for iOS, and now Kotlin for Android is faster than Swift for iOS.

Both teams move faster in the new languages, but the Android team moves more faster.


I code several times faster in ObjC than Swift, and this is after writing two released Swift apps. Just a simple thing like converting string to number to string in different bases is a nightmare in Swift.


Hope this eases your nightmares:

  String(Int("ff", radix: 16)!, radix: 10)


Actually that simplistic approach crashes if the value of the string goes out of range or the string contains invalid characters for the base. For commercial quality code dealing with user input you need to do your own checking char by char and assemble the result using multipliedReportingOverflow and addingReportingOverflow. The char by char processing is a pain given you can't really take the value of a Character the way you can in other languages and end up having to feed them through a map. In the end my correct bulletproof implementation is only about a page of code, but it was awkward to write.
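A sketch of the approach described above (my names and structure, not the commenter's actual code): char-by-char accumulation with the overflow-reporting ops, returning nil rather than trapping on invalid or out-of-range input. `Character.hexDigitValue` stands in for the per-character mapping and covers bases up to 16.

```swift
// Overflow-safe string-to-Int parsing: nil on bad digits or overflow.
func parseInt(_ s: String, radix: Int) -> Int? {
    guard !s.isEmpty else { return nil }
    var result = 0
    for ch in s {
        // Reject characters that aren't valid digits for this base.
        guard let digit = ch.hexDigitValue, digit < radix else { return nil }
        // Shift and add, checking both steps for overflow.
        let (shifted, mulOverflow) = result.multipliedReportingOverflow(by: radix)
        let (sum, addOverflow) = shifted.addingReportingOverflow(digit)
        if mulOverflow || addOverflow { return nil }
        result = sum
    }
    return result
}

assert(parseInt("ff", radix: 16) == 255)
assert(parseInt("zz", radix: 16) == nil)                      // invalid digits
assert(parseInt("9223372036854775807", radix: 10) == Int.max)
assert(parseInt("9223372036854775808", radix: 10) == nil)     // overflows Int
```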


You yourself said "Just a simple thing like...", so I thought you were looking for a simple solution. Your complaints are no different if you were using strtol/atoi, so what's unique about swift here? Are you expecting "value of character" to be in some range that's not within 0..<radix?

I have also written commercial code that does this, and I'm just going to paste it here because it's actually not that much code. It sums two strings of arbitrary length, well outside the overflow limit, in the given base, and returns the sum as a string in that base. No mapping or overflow checks needed. Invalid characters for the base are treated as 0, but you can easily modify it to throw an error or whatever.

   func sumStrings(_ left: String, _ right: String, _ radix: Int) -> String {
        var left = Array(left)
        var right = Array(right)
        var carry = 0
        var result: String = ""

        while left.isEmpty == false || right.isEmpty == false {
            let charLeft = left.popLast() ?? "0"
            let charRight = right.popLast() ?? "0"
            let intLeft = Int("\(charLeft)", radix: radix) ?? 0
            let intRight = Int("\(charRight)", radix: radix) ?? 0

            carry += (intLeft + intRight)
            result = String(carry % radix, radix: radix, uppercase: false) + result
            carry /= radix
        }
        if carry % radix != 0 {
            result = String(carry % radix, radix: radix, uppercase: false) + result
        }
        return result
    }


The Swift string/substring API is still incredibly convoluted (or poorly documented, or both), but my feeling is that it's because it does a lot.


I think GP is saying that ObjC is faster than Java for development, but Kotlin is faster than Swift. No comparison between ObjC and Swift.


I prefer Swift on balance, but there’s still a lot I miss from Obj-C.

For one thing, Obj-C compiles much faster. Swift compilation is ludicrously slow.


Recently I had to update an old ObjC iOS codebase, and the first time I built it I thought something had gone wrong, because it completed way too fast.

Swift compilation is an order of magnitude too slow compared to ObjC, especially for incremental builds.


Debugging with LLDB also often feels more broken or incomplete with Swift than with Objective-C.


That's just your team then. Swift is far superior to Objective-C(rap) in terms of both speed and overall productivity.


Don't forget Kotlin. It is a real game changer.


Yeah, this sounds like a tooling issue. I have no difficultly believing Kotlin + Android Studio might be faster or more reliable than Swift + Xcode. Hard to say with any certainty though, it could be any number of things, if the perceived difference is even real.


From my outside-looking-in perspective:

* Android studio is reliable and just works.

* Kotlin is simple, easy to learn, and productive.

* Xcode is buggy, and with their rapid pace of development it is just getting worse.

* Swift is complex. They had to rethink and redesign it multiple times, and the language was constantly changing under them.


> Android studio is reliable and just works.

How I wish this would be true.

There is a reason why so many would rather use IntelliJ with the Android plugins, or even drop into VSCode with Gradle instead, only starting Studio for workflows that have to be done inside it.

There is hardly a "stable" release without regressions.


Swift hasn't changed much in the last 2.5 years, and was stable for the year before that, so it really shouldn't be an issue any more.


Yes, but Kotlin has been much more stable, and for more years.


I think part of the problem with Swift is the Swift community's apparent devotion to the "look, ma, advanced Swift!" mentality. The shorter, the more compressed, and the more arcane the code, the better a programmer one is perceived to be. For example, I've been criticized by code reviewers for using simple, short if/else blocks, while convoluted, multiply-composed higher-order function chains with platform API mixed in were recommended instead, yielding entire micro-features that no part of the problem statement even asked for.

Moreover, if you also consider that Swift heavily uses type inference (if a compiler is allowed to infer something, your brain will have to as well, when you look at the code) and includes lots of edge-case syntactic variation (say, multiple trailing closures) or special-purpose features (say, SwiftUI builders), it is no wonder that even an advanced programmer can look at a piece of Swift and go: "WTF does this thing even do??". On top of all that, add a complicated generics system (i.e., Protocol-Oriented Programming) with conditional interface adoption and default implementation inclusion and you get a Frankenstein that only Apple fanatics can love.
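For what it's worth, the "multiple trailing closures" variation the parent mentions looks like this. (The `animate` function here is a hypothetical stand-in, not a real UIKit API; the point is what happens to labels at the call site.)

```swift
// Hypothetical two-closure API, loosely shaped like an animation call.
var log: [String] = []

func animate(duration: Double, animations: () -> Void, completion: (Bool) -> Void) {
    animations()
    completion(true)
}

// Swift 5.3's multiple-trailing-closure syntax: the `animations:` label
// disappears entirely at the call site, so the reader has to infer what
// the first unlabeled block is for.
animate(duration: 0.3) {
    log.append("fade")
} completion: { done in
    log.append("done: \(done)")
}
```

Neither block carries its full argument label, which is exactly the kind of reader-side inference the parent is complaining about.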

Swift seems to be great for only two kinds of programmers: the small snippet "Hello World" crowd and absolute Swift experts. Intermediate programmers probably just get lost in the language with a sinking feeling.

Despite having over 10 years of C++ under my belt, I experienced so much unexpected, recurring frustration with Swift over the course of a couple of years that I started migrating away toward Python. The devotion to simplicity, predictability, and elegance expressed by the Python community feels, by comparison, revitalizing to me.

Not a rant, just my $0.02.


My main takeaway is that Swift is complex and tries to solve many problems. As a result it is going to frustrate almost everyone in one way or another.

I'm not an expert in the Swift Evolution process, but I must praise it in terms of ease of reading. Back when I was invested in learning more about Swift, reading some proposals helped me understand it better. But I believe the Swift team/community will need to define a boundary some time in the future on what must or must not be added to the language as its complexity grows. In the long run, even if a proposal makes sense, is implemented, and was reviewed, it usually also adds a new thing to the language, which means more documentation and yet another way to code/solve a given problem.

In the AnyObject dispatch regret [0], the author wrote "As I’ve mentioned many times (though not so much on this blog), Swift had to be as good as Objective-C at making most Cocoa apps, or people wouldn’t use it no matter how good the new stuff was". My first real contact with Cocoa was while learning Swift, and I can imagine all the hard work that was put into it. Then SwiftUI came along together with a bunch of new features to a language that I was still getting used to. After getting frustrated with the lack of documentation (combined with my lack of knowledge about some patterns) and no clear way to solve small problems while studying it, I decided to dive back into Objective-C, to not only grasp most of it in one week but also find that I did not need most of the features Swift delivered.

And this is something that is part of Swift. By design it offers you multiple ways to tackle your problems, and by design it is only going to offer you more ways as time passes. I consider this a trait that is bad for the language itself in the long run. Looking back 7 years after the introduction of Swift, I guess Apple would have been better served if it had just kept investing in Objective-C. But who knows what Swift and its ecosystem will look like in 2028.

[0] https://belkadan.com/blog/2021/08/Swift-Regret-AnyObject-Dis...


A lot of these things are odd cases that I would never encounter, but I'm surprised that there isn't bigger, more fundamental stuff in here. For example, not having expression-oriented syntax is really hard for me (i.e., where everything returns a value, even if that value is void or whatever).
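A concrete illustration of the parent's point, as the language stood at the time of this thread: `if`/`else` is a statement in Swift, not an expression (Swift 5.9 later added `if` expressions), so conditionally initializing a constant needs the ternary operator or an immediately-invoked closure:

```swift
let temperature = 25

// Not valid Swift at the time of this thread -- `if` is a statement:
// let label = if temperature > 20 { "warm" } else { "cold" }

// Workaround 1: the ternary operator.
let label1 = temperature > 20 ? "warm" : "cold"

// Workaround 2: an immediately-invoked closure, whose body can
// contain full statements and still produce a value.
let label2: String = {
    if temperature > 20 {
        return "warm"
    } else {
        return "cold"
    }
}()
```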

Swift claims to have a lot of functional features, but it actually misses some of the most basic things.


I remember when the creator of Swift joined Google and there was a lot of noise about Swift and TensorFlow. I told people that while it seemed like a nice idea, it was never going to replace Python/C++. Eventually Lattner lost interest and the project died.


Honestly, I can't get my head around how anyone could prefer Objc over Swift. I suppose it's just years of writing Objc and not wanting the change?

Like Swift is not a perfect language and Apple's developer docs lack useful examples a lot of times, but Objective-C has to be the least approachable language in all of app development.


This introduction post to the Swift regrets series is placed somewhere in the middle of all the posts.

If you want to read these, it might be easier to start here: https://belkadan.com/blog/tags/swift-regrets/


So is Objective-C kinda going away after all?

If I did a greenfield iOS or OSX app would it be malpractice to do it in Objective-C?

Because I actually adore that language. Haven’t used it lately, but I do love it.


So long as interest in the language remains and Apple keeps pumping out incidental improvements to it as they update Swift, it will never be gone:

https://news.ycombinator.com/item?id=28623128

https://news.ycombinator.com/item?id=28582752

That, and the banal fact that large corporations including Facebook, Google, Netflix, Snap, Twitter, Pinterest, Spotify, Flipboard, and Apple itself continue to have large amounts of legacy code in Objective-C.


Yes, you should learn Swift. The naming of things in Apple's APIs still comes through strong in Swift (due to named parameters), so it feels at home to programmers who were familiar with ObjC.

Swift protects you from many crashes/bugs with its null safety and more strict compiler. In ObjC many clear coding mistakes are only marked with warnings, which is not helpful.
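A minimal sketch of that null safety, assuming a hypothetical `findUser` lookup (not a real Apple API): where an Objective-C message to nil silently returns nil/zero, Swift's optionals force the missing-value case to be handled at compile time.

```swift
// Hypothetical lookup that may fail; dictionary subscripts return optionals.
func findUser(id: Int) -> String? {
    let users = [1: "alice", 2: "bob"]
    return users[id]
}

// Using the result directly is a compile error, not a warning:
// let length = findUser(id: 3).count   // error: optional must be unwrapped

// The absence case must be handled explicitly...
if let name = findUser(id: 3) {
    print("found \(name)")
} else {
    print("no such user")
}

// ...or given a default with nil-coalescing.
let name = findUser(id: 3) ?? "unknown"
```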

Apple has clearly moved to Swift for all future development of new libraries, etc. So it would be foolish to stay in the past.


Metal is a mix of Objective-C and C++ (MSL C++14 dialect).

So it will stay around for a while.


What in MSL is based on Objective-C?


Nothing and I haven't said otherwise.

MSL is a C++14 dialect, while everything else in Metal is Objective-C.

Metal demos done in Swift make use of language bindings.


I thought Objective-C was dead. It'll still be supported (probably), but https://developer.apple.com/ calls out Swift a lot and makes no mention of Objective-C.


Then Metal is dead as well.


  > So is Objective-C kinda going away after all?
Did C and C++ go away when Obj-C landed?

(My guess is, Obj-C will be the same: not the main one, but still around for where it's appropriate.)


Looks like it, Apple has been pumping out new frameworks that you just can’t use from Objective-C



