My misalignment with Apple's love affair with Swift (monkeydom.de)
139 points by mgrayson 5 months ago | 140 comments



It's interesting how languages divide opinion. For me, Swift is the best language around. One issue not addressed in the article (I believe) is performance: Swift is faster than Objective-C because of the static type checking, which is a good thing.

It has the advantages of C# as a language, but with no garbage collection. Granted, the reference counting in Swift can be problematic, but it seems to be improving with each release.

The functional programming aspects of Swift I find useful. There are a number of features in Swift that decrease the amount of code, e.g. if let/guard, and the use of optional types increases program safety.
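
For instance, a toy sketch (the names are invented):

    // `guard` handles the nil case up front and exits early:
    func label(for score: Int?) -> String {
        guard let score = score else {
            return "no score"
        }
        return "score: \(score)"
    }

    // `if let` unwraps only when a value is actually present:
    let scores = ["anna": 82]
    if let s = scores["anna"] {
        print(s)   // prints 82; the body is skipped entirely on nil
    }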

The fact that you can use classes declared elsewhere without any consideration of dependency I find very powerful, and of course with great power comes great responsibility.

The power of Swift collections is fantastic, and the syntax is so succinct compared to Objective-C.

Many more...


I'm a big Swift fan as well, but the performance part simply isn't true. Swift was supposed to be potentially faster than Obj-C because of types, but in practice I have never seen that.

Since its arrays and strings are value types with copy-on-write rather than mutable reference types, your code often ends up copying a lot of memory around if you're not extremely careful, and you quickly get accidentally quadratic complexity.
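
A minimal sketch of the kind of accidental copying I mean (illustrative, not a benchmark):

    var a: [Int] = []
    for i in 0..<10_000 {
        a = a + [i]        // allocates and copies a brand-new array each pass: O(n) per step, O(n^2) overall
    }

    var b: [Int] = []
    for i in 0..<10_000 {
        b.append(i)        // mutates in place: amortized O(1) per step
    }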

A lot of APIs are just wrappers around ObjC APIs, with lots of memory management calls required for bridging. You really need to be careful what calls you put in a tight loop.

And that's before you even start with the good stuff. Protocols with associated types are so slow that they are unusable for anything performance sensitive. Theoretically the compiler should be able to optimize it incredibly well since it's all statically typed, but in practice the compiler seems to never be smart enough. Call stacks get very deep very quickly (protocol witnesses, closures, ...).
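
A sketch of the contrast (hand-written example, not a benchmark): the generic version can be specialized for the concrete type, while the type-erased one pays indirect calls on every element:

    // Generic over any Sequence of Int: the compiler may specialize and inline.
    func sumGeneric<S: Sequence>(_ s: S) -> Int where S.Element == Int {
        var total = 0
        for x in s { total += x }
        return total
    }

    // Type-erased: every iteration step goes through witness tables and closures.
    func sumErased(_ s: AnySequence<Int>) -> Int {
        var total = 0
        for x in s { total += x }
        return total
    }

    let xs = Array(1...1_000)
    _ = sumGeneric(xs)                  // specializable
    _ = sumErased(AnySequence(xs))      // indirect all the way down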

In my experience the easiest way to make Swift code fast is to rewrite hotspots in ObjC.


Sorry, but this is completely false; Swift is certainly faster than ObjC. There are several WWDC talks, independent reviews, and statements from the language designers themselves supporting that.

In practice I agree it's easy to write code that's slow, but you can't knock the language itself for user error. They are still working on improving the user experience, such as how 4.2 will remove redundant generated retain/release calls. And hopefully they make the compiler smarter at figuring out which specialization functions to output for generics/protocols.


Nothing the "language designers" claim can change the fact that I need to rewrite critical sections in ObjC.

There is a subset of Swift that is fast. But if you tell people that Swift is faster than Obj C, they will think that they can rewrite code in Swift and it will be faster. That may be the case at some point in the future, but right now that's just not true yet.

In my experience, the opposite is the case. Swift is a great language, it's safe, and nice to work with. But those bits where you are processing data in a tight loop -- you still may have to do that in Obj C.

Obviously this depends a lot on your problem, but I have seen 2x speedups and more for things like string processing (rewriting things from Swift in Obj C)

And as I said, that's without using the fancy parts in Swift (things like type erasure absolutely kill performance)


This is false, that is false, I don't know who to believe.


Simple: Swift is clearly fast for some applications, but slow for others. It is not a silver bullet. There are no silver bullets.


There is a lot of maturing left for the language to do. I typically say it takes 10 years for a language to become mature enough to avoid stupid crap like you describe.


> performance, Swift is faster than Objective C

No it's not.

> because of the static type checking, which is a good thing

That's a reason why you think it should be faster. Actual performance is measured, not argued.

When you actually measure Swift, it is horribly slow.

UPDATE: details in other posts here. In general, my measurements have found Swift code to be 50% to 50x slower.


I've found it to only be slower in Debug, not Release builds. Release is often significantly faster than the equivalent ObjC, and only a tiny bit slower than native C.


Hmmm...you do realize that Objective-C is a full superset of C, so "Objective-C vs. native C" is not a distinction that makes sense?

If you are comparing primitive types in Swift with objects in Objective-C, then you are comparing apples to oranges. You need to compare objects in Swift with objects in Objective-C and primitive types in Swift with primitive types in Objective-C.

Rephrasing, my findings are similar:

Swift in debug mode is so incredibly slow that even using primitive types with Swift is slower than Objective-C objects. In release mode, it approaches, but rarely reaches native Objective-C speed.

More details here:

http://blog.metaobject.com/2014/09/no-virginia-swift-is-not-...


> [...] no garbage collection, granted the reference counting in Swift can be problematic, but it seems to be improving with each release.

nitpick: reference counting is a form of GC in standard CS terminology. It saves some memory compared to other GC methods but is slowish.


Reference counting is deterministic, GC isn't, which was always the Apple engineers' objection to GC: no pause while the collector runs.


There exist pauseless GCs. Apple's objection to GC is that GCs need more RAM, and that would reduce their profit margin on iPhones. A fair reason, if you ask me.


Reference counting has pauses, as object deallocations cascade. You can address this by having a deallocation work list that runs a bit at a time, but then again there is also a large body of work on other kinds of GCs that address pauses.
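
A small sketch of what that cascade looks like (illustrative):

    final class Node {
        var next: Node?
        init(next: Node?) { self.next = next }
    }

    var head: Node? = nil
    for _ in 0..<10_000 {
        head = Node(next: head)   // build a 10,000-node chain
    }
    head = nil   // this single assignment synchronously releases all 10,000 nodes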


Faster than ObjC? Unless your ObjC code is written using nothing but id and reflection, I'm not sure how this can be the case...


> Unless your objc code is written using nothing but id and reflection

That's how all Objective-C code is written–every method call goes through objc_msgSend.
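
For contrast, Swift makes that dispatch opt-in rather than universal; a quick sketch:

    import Foundation

    class Counter: NSObject {
        @objc dynamic var value = 0   // forced through objc_msgSend: stays observable/swizzlable
        func bump() { value += 1 }    // plain Swift members use static or vtable dispatch instead
    }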


To be fair to objc_msgSend, it's orders of magnitude faster than most reflection APIs. https://www.mikeash.com/pyblog/performance-comparisons-of-co... has some benchmarks; it's a bit old, but I don't think objc_msgSend has gotten slower.


Yup, I'm not saying objc_msgSend is slow: it's just that it's not faster than a direct call, and I believe about on-par with a virtual function call.


And let's not forget that ObjC is backward compatible with C. If you need a particular loop or function to be fast, you can write it in C, in the same ObjC source file.


For one, ObjC can't treat objects as values or initialize them on the stack. Also, the everything-is-an-object paradigm is terrible for arrays of values, since every NSNumber or NSValue has to be individually malloced and released. (Though maybe not the case anymore? It was certainly a chokepoint several years ago.)

Both of these factors have forced me into using lower-level Objective-C++ code on numerous occasions.
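
To sketch the difference with a simple value type (names invented):

    // Swift: an array of structs is one contiguous buffer, no per-element heap boxes.
    struct Point {
        var x: Double
        var y: Double
    }

    var path: [Point] = []
    path.append(Point(x: 1, y: 2))   // the value is stored inline in the array's storage

    // The Objective-C analogue,
    //   [array addObject:[NSValue valueWithCGPoint:p]];
    // boxes every element: one heap allocation plus retain/release traffic per point.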


NSNumbers are generally tagged pointers, unless you’ve got a really large number.


This didn't help me in my app, unfortunately, because I needed to encode CGPoints as NSValues. I suppose I could have resorted to ordinary, interleaved CGFloats, but it was way easier and more productive to just use C++ vectors. I knew exactly what I was getting with those, and I didn't have to do the boxing/unboxing dance.


The performance improvements aren't exactly because of static typing. Obj-C method calls are slower because they use dynamic dispatch (literal message passing vs. a C-style function invocation, IIRC), whereas Swift generally only does this for interop with Obj-C.


As someone with a significant amount of experience in both ObjC and Swift (4+ full-time years in each), I don't agree with this article. Yes, Swift isn't perfect, it has its flaws. Totally agree there. It's not perfectly integrated with Cocoa either, as the author mentions.

However, to say that Swift is a regression from Objective-C seems very short-sighted to me. Can you imagine if Apple continued with Objective-C? There would be no way to reverse decade-old decisions that simply aren't the right decisions anymore. Every new feature would need to be built on top of code from 1984. Swift was very future-focused from the get-go, and is a bid to ensure that developing for Apple devices in 2025 is not an archaic mess.

Like anything new, it's not perfect at first. That is the world we live in now. However, Swift is getting incrementally better at a very good pace. There are some reasons left to prefer Objective-C right now, but I'm sure that in a few years these reasons will be far fewer.

My own opinion is that I'm much more productive writing Swift, after spending a similar amount of time with each language. If you think the same is true of Objective-C, then that's great - there are certainly still some upsides to developing with Objective-C. However, to say that Swift is a mistake is something that I can't reason with, since it's one of Apple's most forward thinking decisions to date.


> There would be no way to reverse decade old decisions that simply aren't the right decision anymore. Every new feature would need to be built on top of code from 1984.

Of all the reasons to prefer a new language, this is the worst one from a platform perspective.

C and POSIX allowed Unix to conquer the world. They allow code written since the 1980s to run with minimal modifications even on modern systems. The other major operating system family -- Windows -- is also strongly associated with backward compatibility.

Everybody always wants to throw away the legacy code, because maintaining it is expensive. But throwing it all away and starting over from scratch can be more expensive. Especially when you're dealing with millions of lines of third party code. Which is why the only platforms that are still popular after 30+ years are the ones that didn't force everybody to do that.


You mean allowed security firms and black hat hackers to prosper, turning the Internet into a Swiss cheese of security, with an endless stream of compiler sanitizers, process sandboxing, CPU extensions, and CS research trying to get us out of the mess.


> You mean allowed security firms and black hat hackers to prosper, turning the Internet into a Swiss cheese of security, with an endless stream of compiler sanitizers, process sandboxing, CPU extensions, and CS research trying to get us out of the mess.

I mean take over the world.

I know it's popular to hate C these days, but remember the context. The competition in the 80s and 90s was operating systems that didn't even have memory protection or the concept of user accounts. Unix was never going to lose over security.

And we're not talking about the language to write daemons or kernels in, just the system API. A system should make it easy to write your code in any language you want regardless of whether any other part of the system uses it. There isn't supposed to be a One True Language. System APIs that advantage any specific language are problematic -- which is why the C family is popular for APIs. It's simple enough that it's easy to wrap using nearly any other language.

For example, Java was the Next Big Thing around the time when IPv6 was new, and Java is "safer" than C. That would have been an opportunity for an operating system to deprecate the BSD sockets API, never extend it to support IPv6, and require internet-facing code to use a Java API in order to use IPv6.

The problem is that's just making trouble for everyone. People who want to keep using their C programs, or use any other language (like Rust) that isn't Java, will either get saddled with some awkward and slow kludge to use the Java API from another language, or will just not bother to support IPv6, making IPv6 less popular than it is already. People who do want to use IPv6 but not Java will prefer other operating systems, making a system that does that less popular in general.

It's no use to make something nominally more secure in a way that just causes people to not use it.


> The competition in the 80s and 90s was operating systems that didn't even have memory protection or the concept of user accounts. Unix was never going to lose over security.

Really? Apparently we lived on different planets.

VAX/VMS, OS/360, Burroughs, AS/400, ....

UNIX was available with source code for a symbolic price to universities, that was the difference.


> VAX/VMS, OS/360, Burroughs, AS/400, ....

On mainframe hardware similar in price to a single-family house. The competition on hardware individuals could hope to afford was the likes of DOS and MacOS Classic. And even on mainframes, the ones that have survived, like OS/400, are the ones obsessed with backward compatibility.


Just to be 100% clear, Rust can expose a C ABI for exactly this purpose. You still have to deal with unsafe around the boundary, but can keep the core fully safe.

Doing so was some of the earliest production uses of Rust, and is still among the largest.


That wasn't what the comment was about, though, was it? A newer code base in a newer language doesn't equate to better security. That's a completely different discussion.


I'd say a goal of Swift is to be more secure, so maybe not so far from this discussion.


It's expensive, but it looks like Apple can afford it.

Objective-C was hailed as the one true OO language by its smug fans; then Apple said one day that it's legacy software, and it became legacy. Many still don't completely understand that and try to argue that it should somehow be brought back.

To those I ask: did Apple bring back the floppy, DVD, thick laptops and non-butterfly keyboards? No. Objective-C is destined for oblivion and it's time for you all to accept that.


> Objective-C was hailed as the one true OO language

Where? When?

Objective-C was always a reasoned compromise. No more, no less.


And yet it’s also the source of a huge amount of severe vulnerabilities today. It’s not as cut and dried as you make it appear, even if I do agree with your sentiment.


I certainly agree that deprecating millions of lines isn't a great way to win over developers. Apple hasn't deprecated Objective-C yet though, and I think both Swift and Objective-C will be supported side by side for a long time yet.

Apple is usually quick to deprecate (which does have its own advantages), but given that a large part of their own tech stack is built on Objective-C, I think they'll play the long game here. I would be surprised to see it deprecated before the end of next decade (although I wouldn't put it past Apple either...).


We have 40 years of legacy code in C, but essentially all of it fails catastrophically at random. If we ever want a platform that actually works, we have to start from zero undefined behavior.


If you can’t think of any way that Apple could have created a modern Cocoa App development language with type safety, type inference, and a syntax that doesn’t stink then that’s your failing, not theirs.

No, wait… that was their failing as well. They took a compiler engineer’s vanity project to invent Rubberized C++, and made it their App development “platform of the future”. And in so doing, trashed three decades’ investment in their Cocoa Applications development platform, which is the actual valuable part, simply to reinvent the same old wheel they had before. While not actually adding any fresh value. To what end; stroking nerd ego?

Languages are ten-a-penny; you create a language to fit the problem space, not vice-versa. If you can’t get a 10x efficiency on developer time/LOC, don’t even bother. You’re just wasting everyone’s time.

Tony Hoare called NULL his billion-dollar mistake. Swift is easily a million man-hour mistake, and still counting. Google must be laughing right now.


> Every new feature would need to be built on top of code from 1984.

Um, no. Macs were programmed in Pascal in 1984.


So then 1985 (founding of NeXT) rather than 1984 (creation of Objective-C)?


IMO Swift is a language written by a compiler guy to solve compiler problems. The syntax is dense because it forces you to make a lot of decisions that could otherwise go unmade in ObjC.

If a variable is read-only, you're forced to think about that by deciding between let/var. Contrast this with ObjC, where all vars are writable unless you do the extra work of adding the const keyword. ObjC makes us do more work to get the faster (and safer) behavior.

It's similar with Optionals: you're now forced to decide right away whether a parameter can ever be nil, whereas in ObjC you didn't need to declare nullability. Again, Swift makes the safest and fastest choice the easier one, and allowing nullability is actually more work.
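
Spelled out as code (a toy example):

    let limit = 10    // read-only binding; `limit = 11` is a compile-time error
    var count = 0     // mutability is an explicit, visible decision
    count += 1

    // Nullability is part of the type, and the nil case must be covered:
    func title(for name: String?) -> String {
        return name ?? "Untitled"
    }

    // A non-optional parameter simply can never receive nil:
    func strictTitle(for name: String) -> String {
        return name
    }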

Generally, Swift forces you to pass as much information to the compiler as possible at compile time, and it does so with a delightfully readable syntax. This theme repeats itself throughout the language. More information at compile time is always going to result in safer and more predictable behavior.

Full disclosure: I'm an iOS dev working in ObjC and Swift. I love both languages, and I think Swift is the obvious path forward.


You've hit the nail on the head, but I take the opposite conclusion.

Swift is designed around premature optimization. In my experience, improving programmer productivity is more important. More time spent on compiler-enforced busywork means less time in Instruments. The same goes for compile times. If you want people to write fast code, then help them iterate quickly.


Great comment!!

One of the problems I have is that I love Python, but I was hoping that Swift would be like Python on steroids. Instead, I can't make the leap, because Python lets me iterate so freaking fast. If I could have something like that with the ability to incrementally improve my code and compile to a binary, I'd be happy. I thought Swift would be it, but your comment helped me understand why it isn't working for me.

Full disclosure, I’m not a full time dev, so YMMV.


Most developer time is spent finding and fixing bugs. That’s what makes developing in Swift much more productive than in Objective C.


Well yeah pretty much anything is better than ObjC.


> IMO Swift is a language written by a compiler guy to solve compiler problems.

I'm not convinced; the examples you give require _more_ work from the compiler writer, not less. In general, a stronger type system means that the compiler has to do more.


I see it in the sense that if the target for both compilers is to produce a correct program, then that target is much easier to achieve with more information (and fewer assumptions) passed in the source code by the programmer (i.e., Swift).


> require _more_ work for the compiler writer

Exactly. In Objective-C, the compiler can't do much, which is kind of a bummer if you are a compiler guy. Chris is a compiler guy.

Consider Swift a kind of public works program for your local compiler team.


I agree with the author.

"So now contrast that to Swift. First of all: Which question did it desire to answer? Think about it. There is no one clean answer."

I feel that Swift is just a language designed as a syntax swap of Objective-C, making a few things better yet some others worse.

Sure, it might be more welcoming to people used to Java/JavaScript-style syntax, but overall I think it is two steps backward in language design and functionality.

To folks that are proficient with Objective-C,

1) How do you like working in Swift?

2) Are you more productive (for medium or large projects)?

3) Do you enjoy it more?

For my part (if you are already very proficient with Objective-C), those answers are nos. If you are new to iOS, though, Swift is easier to get started with.

https://www.hackingwithswift.com/articles/27/why-many-develo...


I started developing in Objective-C back in the early 2000s on macOS (as a kid), worked at a big company on some macOS software, then worked on a million-line iOS Objective-C project at another big company for many years.

After I left, I picked up Swift. I use it for all of my personal iOS projects. I don't disagree that there are flaws in the language, but they definitely aren't the ones the author pointed out.

> Adding compile time static dispatch, and making dynamic dispatch and message passing a second class citizen and introspection a non-feature.

I can't express how happy I am that an object is exactly what it tells me it is at compile time. The number of nuanced bugs that have bitten me over the years, my lord. Let alone explaining this stuff to a new coworker? Dynamic dispatch / message passing / introspection are huge bug sources. If you can't rely on the compiler to tell you what something is, what can you rely on?! This alone dropped bugs by an order of magnitude.

> Define the convenience and elegance of nil-message passing only as a source of problems.

Nil messaging is nothing but a source of problems. Can confirm.

> Classify the implicit optionality of objects purely as a source of bugs.

Implicit optionality was a massive design failure in C. Those days are behind us, it's time to move on.

(1) I love working in Swift. It opens up all sorts of great design patterns -- lightweight view-model structs, enums with associated values, easy-to-manage protocols. Readability is dramatically better. No more copy-paste between header and source, no more dealing with unintelligible types; first-class generics, etc.

(2) Productivity boosts come from less boilerplate, less repetition, less typing, better design patterns being within reach. Static typing eliminates many classes of common bugs. No automatic nils is huge. I can't remember the last time my app crashed in something other than an Objective-C component or framework.

(3) Yes, I do.


1) I've been doing Cocoa development for 13 years, and I absolutely love Swift. I've switched to it for all my own projects (at least the ones where I would previously have used Objective-C), and I'm never looking back.

2) For medium projects: absolutely. For large projects: I haven't had the pleasure yet, but the truth is that Swift encourages a much better approach to modularity, and if you're working in a large project as a monolith, you're probably doing something wrong.

3) Hands down, yes.


I programmed mainly in Objective-C from ~2000 (Rhapsody beta) and resisted even learning Swift until Apple finally confirmed they were definitely going to open-source it. So about a decade of objc.

1.) I love working in Swift. It would not occur to me to start a new project in Objective-C. The only reason I could see myself writing more objc code in this life would be to make a quick fix on some old project. (Incidentally, this is the exact same kind of B-completely-replaces-A-for-me language experience I had with JavaScript→TypeScript. Can't think of another.)

2.) Swift confers upon me a quantum-leap kind of productivity increase -- the kind of "programmer time" optimization the Ruby guys used to talk about over Java. The core language is great. Swift enums concisely solve lots of problems that are easy-but-annoying in Objective-C. Protocol extensions are an absolute godsend and I hate the feeling (when programming in something else) of seeing how a protocol extension would solve the problem at hand, but not having them.

Incidentally, I remember vividly when there was talk of adding "concrete protocols" to Objective-C 2.0. (They let you do a similar thing; adopt a protocol to gain a bunch of implementation behavior, not just declare that you support an interface). I was really, really excited about that, too. But the rumor didn't pan out and concrete protocols were never added.

3.) Yep, love it. I changed jobs in 2016 and don't even do much Apple platform development anymore; I am mainly using Swift on Linux.

My comments apply to current Swift, so right now that means 4.2. As you go back in time to older versions, I liked it less. In the first year doing Swift, there were still times I enjoyed Objective-C more, but not anymore.

Aww, it's kind of sad when I think about it; after all those years, Objective-C is now dead to me, like NewtonScript...


> Incidentally, this is the exact same kind of B-completely-replaces-A-for-me language experience I had with JavaScript→TypeScript. Can't think of another.

I had my own instance of that with Java -> Kotlin, and that was on the backend. I can't imagine what that transition feels like on Android.


> Swift enums concisely solve lots of problems that are easy-but-annoying in Objective-C.

Curious as to what those might be, can you shed some light?

I was also puzzled by the emphasis on enums in Swift, as they are a language feature I rarely if ever use in Objective-C, not because I want to and they are inadequate, but because I generally don't see much of a need for them.


Swift enums are effectively discriminated unions, which are really, really different from C (and thus Obj-C) enums.

For example, in Swift, you can have an enum which says its value holds exactly one of:

A) several specific, semantically meaningful strings
B) some other object
C) several objects

In Obj-C, you would probably express this as a single object and write a bunch of error-prone logic to handle which of these states you're in, possibly missing an important last-minute addition in one of your methods for walking over all the possible states.
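
A rough Swift rendering of that example (the names are invented):

    enum Payload {
        case keyword(String)        // one of several semantically meaningful strings
        case object(AnyObject)      // some other object
        case objects([AnyObject])   // several objects
    }

    func describe(_ p: Payload) -> String {
        switch p {   // the compiler rejects this switch if any case goes unhandled
        case .keyword(let k):  return "keyword: \(k)"
        case .object(let o):   return "one object: \(o)"
        case .objects(let os): return "\(os.count) objects"
        }
    }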


Thanks!

In any OO language, I'd write different classes for cases A, B, and C that share a common message protocol (whether formal or informal) and use them uniformly/polymorphically.


I spent 20 years writing in object-oriented C, the last three in Objective-C. When Swift was announced I started my next project in it, and never switched back. That's saying a lot, given how rough Swift 1.0 was; I couldn't even release until Swift 1.2.

The entire language is designed to make the most costly part of development, quality, so much easier. That's been its main purpose and theme since day one. And it's not just optionals.

Today some code wouldn't compile because I forgot a switch case. I remember when I didn't have to worry about stuff like that: Objective-C would let me compile, and defer finding the obscure bug it caused to a painful multi-hour debugging session weeks later.


> 20 years writing in object oriented C

C++?


Think/Lightspeed C for much of it, using their object-oriented extensions.


So essentially C++?

"Version 3 and subsequent versions were essentially a subset of C++ and supported basic object oriented programming concepts such as single inheritance as well as extensions to the C standard that conformed more closely to the requirements of Mac OS programming. After version 6, the OOP facilities were expanded to a full C++ implementation, and the product was rebranded Symantec C++ for versions 7 and 8, now under development by different authors"

https://en.wikipedia.org/wiki/THINK_C


Nah, never developed using C++, always stuck to the original object oriented subset.


...which was "essentially a subset of C++".


Not really, C++ is a huge monster compared to the original ThinkC object oriented extensions. Beyond single inheritance, it was just a much simpler language. It's like saying C is a subset of C++, because it is.


To folks that are proficient with Objective-C, 1) How do you like working in Swift? 2) Are you more productive (for medium or large projects)? 3) Do you enjoy it more? For my part (if you are already very proficient with Objective-C), those answers are nos.

It's not about making the people who are already familiar with Objective-C happy. It's about the next generation of developers, who find Objective-C obtuse.

The current iOS developers can stay with Objective-C for the foreseeable future. Microsoft had to drag developers from C/C++ and VB6 to .NET. Apple has to do something similar.


I work with Swift on a daily basis, both on iOS as well as server-side. I absolutely love it. Historically, I've written a lot of Java, primarily for large enterprise apps. For a few years I took over enterprise iOS development and decided to use Swift for all new projects. I don't regret it for a second! Swift forces you to really think about your code. I can't count the number of times in other languages where developers fail to address null values. With Swift, you're forced to think about what you'll do if the value is null (unless you explicitly decide to say, "Ehhh, screw it!" and unwrap your values without checking). It makes it easier to write "safe" code.

Recently, I've begun re-writing some of my personal server-side projects from Java EE to Swift. While the new frameworks (Perfect, Vapor, Kitura, etc.) are good, there are still some areas where they are lacking. Cryptography, for example. There's a real lack of good crypto libraries for Swift that deploy on Linux. I'd LOVE to see the equivalent of the JCE (Java Cryptography Extensions) in Swift. Right now, most of them are just weak wrappers around OpenSSL. But, from a performance and resource perspective, Swift is GREAT compared to Java. Faster, and much less resource overhead.


I also agree with the author.

I think Swift solves the problem of jailbreaking. If iOS were written entirely in Swift, there would be no jailbreak tweaks, because there is no method swizzling.

I see Swift as an evolution of C++, similar to Java and C#. Plenty of people can program in Objective-C, but not many people actually understand it IMO. I think an expert Objective-C programmer can be more productive than an expert Swift programmer. But Swift is better for large teams of average, interchangeable programmers.

I think those who say they are more productive in Swift were simply not very productive in Objective-C. People who are productive in Objective-C will despise having to work with another layer of abstraction that provides no benefit for them (it prevents bugs that they never created in the first place).


Coming from ObjC, I love the death of PDP-11-age one-pass-compiler C-isms like header files. I also love the inference and strictness of types/mutability in Swift, and error-checking from optionals and exceptions.

These are all things that could have just been added to ObjC through language and IDE updates, but they ended up in Swift, and so there I go.


As an experienced ObjC programmer, Swift allows me to expose the foundation of my code far more rapidly, and in a much more stable form. It forces me to answer critical questions about my software that I would often postpone indefinitely in ObjC. It makes refactoring feel much more safe.

I like plonking down some quick ObjC, but my Swift programs feel much, much more correct.

(Of course, when dealing with C APIs, Swift is downright nauseating.)


Swift took a well-developed language, looked hard at what was lacking/worked-around/deprecated/old/annoying, and rebuilt the language from scratch to get it right/modern this time.

Would we could do this with other languages.


That well-developed language that they tried to improve on was C++, not Objective-C.


Apple doesn't agree with you.

"Swift is a successor to both the C and Objective-C languages."

-- https://developer.apple.com/swift/


> It seems to have been driven by the needs of the compiler and the gaps that needed to be filled for the static analyzer. Those seem to have been super-charged instead of catering to app developer's actual needs: efficient, hassle free, productive (iOS) App development.

This. Even then, Swift (the last I really looked was v2) was full of special one-offs and smacked of design by committee.

It would've been much more interesting if they'd worked on making an updated Objective-C. Maybe even broken the C-superset constraint and added some new syntax. It's even sadder because ObjC message passing can support distributed systems.

Probably time to move back to Linux. Too bad GNUStep never took off. Or Etoile (?).


Swift has changed a lot since v2, which was before it was open sourced and honestly before they had sufficient tool quality for medium and large projects.


Pity that KDE is the only environment with something that approaches the Framework stacks of other desktop environments.


> This. Even then Swift (the last I really looked like v2) was full of special one offs and smacked of design by committee.

How so?


Here's one link [1]. It's dated, so hopefully many of these have been resolved. Swift doesn't solve any problems for me, so I haven't really looked at it again in a long while.

Mainly, it seemed that instead of supporting full functional concepts, they one-offed features that _looked_ functional but weren't. Unfortunately I can't find the original article that reviewed the Swift type system.

1: https://www.quora.com/Which-features-overcomplicate-Swift-Wh...


A lot of those are either fixed, or being actively worked on.


As a mobile & full-stack developer, my road also takes me to different languages, but for very different reasons. To me, Swift is at its core the best language by far. I love the way it lets me model pretty much every problem using combinations of enum and struct, while keeping everything value-based and type-safe. If you add null safety, there is no other (mainstream) language that checks all those boxes.

But the OP is right in that Swift aims at being a silver bullet, yet the pace at which it evolves compared to that goal is absolutely scary. Concurrency hasn't moved at all, and in the meantime server side performance is a total disaster (https://www.techempower.com/benchmarks/#section=data-r16&hw=...).

There is also nothing for cross-platform development (real cross-platform, not just iOS and macOS).

So, personally, my next experiment is going to be full-stack Dart.


That benchmark seems to list only Kitura. I wonder what it would look like with Perfect, Vapor, or Zewo.

> Concurrency hasn't moved at all

Yeah, it hasn't moved because GCD is already pretty mature, and they don't seem to have made any effort at other concurrency paradigms. I'd love to see how something more like CSP would work (Zewo has an implementation, but I haven't had a chance to work with it yet).

> There is also nothing for cross-platform development (real cross-platform, not just iOS and macOS).

Not yet; it doesn't quite seem like it's ready to be a practical goal. I'm curious to see if that changes when they manage ABI stability.


GCD is a library, not a language construct. As such it is extremely raw, and I've got a strong suspicion that the fact that it wasn't built for server-side-style concurrency is one of the reasons Kitura is dead slow in the benchmarks. Those kinds of benchmarks usually spawn thousands of simultaneous connections, which is very far from what an app generally requires (but I could be wrong).

It also doesn't provide more parallelism around I/O than OS-based threading. Go has lightweight threads, which makes context switching fast, and Node has async I/O all over the place. Maybe Swift NIO would make a difference, but it would only bring Swift up to what the first version of Node.js provided.

My point was that this manifesto https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9... was written a year ago, and nothing has been implemented or announced at WWDC (but maybe I missed a talk...).


Vapor generally kicks ass in benchmarks FWIW, and it's using this now so clearly Apple is beginning to pay attention to stuff around server side concurrency: https://github.com/apple/swift-nio

Although as I mentioned in another comment, if you're looking for something more mature, Kotlin is a great option. Vert.x has the widest database support and is lean and fast, so this is probably what you're looking for. But if just Mongo/Cassandra/Redis is acceptable, Spring Boot w/ Reactive Web looks pretty cool, too.


I tried Dart out recently thinking it might make a bit of a comeback thanks to Flutter -- but I honestly just didn't enjoy it. I was doing a small POC project in AngularDart, and the experience I've had working with Angular w/ TypeScript was just much better.

If you want something like a more viable server-side Swift, with a much better story around concurrency, you should check out Kotlin + Vert.x

https://vertx.io


What didn't you like about Dart?

I've read that AngularDart is particularly little-used and unpopular. Maybe Flutter is where Google is placing their efforts now.


It is a minor issue, but I don't like that they copied the @Override annotation from Java instead of making it a proper keyword; it's not like they would be breaking backwards compatibility.


The whole situation around packages was kind of a pain in the ass, so that was a pretty immediate turn off.


> Swift code might end up being more correct in the end. It also might alert you to edge cases early on. However, the flip side is it inhibits your creativity while writing. When starting out to program, the way I enjoy working, I don't yet know how the API is best expressed. So I try out different ways to express it until I find a sweet spot. Then I go back and unify accordingly.

> Swift actively distracts me in that endeavor by making me answer questions I really don't want to answer right now. Yes, stuff might be less correct in the meantime, but heck that is what I want during the design phase. Find my concept, sweet spot, iterate, pivot quickly.

I have much of the same feelings with Rust.


It might make sense to differentiate between prototyping and implementation languages. As a design-first company, they don't prototype in code; they use code to implement prototypes. With Swift they're just serving their own methodology.

This also fits into their "we don't need to collect data" mindset; they seem to iterate on assumptions, not direct feedback from customers. They create many prototypes and then implement the most viable one, and at that point they know exactly what they need.


That's a fine approach when you prototype a UI or a process, but it doesn't help if, for instance, you want to try out a specific Rust library.


Would be neat to have a language with dynamic features for prototyping, but that become warnings or errors when compiling for release.


I find myself in the same boat. I've programmed in Objective-C since 1995 on NeXTSTEP. I love Objective-C, but realize it has quite a lot of warts.

My problem with Swift is my feeling that the people who developed it don't like Objective-C. It really feels like the Java direction Apple tried to take in the late 1990s. You can argue about nil messaging, but it's part of the landscape.

Plus, I find the way they adapted message passing to the C++/JavaScript-like syntax really weird. A simple obj.(selector: value, selector2: value) would have made it much easier to go back and forth. This mismatch affects things. Never mind the added punctuation that need not be there.

I should be much more productive, but I'm not and that sucks.


I love Swift, but at 4.2 it's still some distance from being usable for me.

Swift is supposed to be modern, only to be burdened by API-compatibility work, which is nontrivial.

This significantly slows certain important developments, like ABI stability (5.0), full generics (5.0), and concurrency (maybe 6.0?).

A year after the 4.0 release, what ships in the coming few months will be 4.2, not 5.0. This means the timeline for 6.0 gets pushed even further back.

While Swift is modern in areas like optionality, first-class immutable structs, and (my favourite feature) enums with associated values, it lacks many other features we've come to expect from a modern language. E.g., callbacks are still the way to do async control flow (one of Ryan Dahl's regrets in his JSConf EU talk https://www.youtube.com/watch?v=M3BM9TB-8yA).
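
The callback shape I'm referring to, using Foundation's URLSession (the URL is just a placeholder):

    import Foundation

    let url = URL(string: "https://example.com")!
    URLSession.shared.dataTask(with: url) { data, _, error in
        guard let data = data, error == nil else { return }
        print("got \(data.count) bytes")
        // every dependent request nests another level deeper in here
    }.resume()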

1.0 to 3.0 was spent getting the API right. This is a significant positive investment in the long run, but as someone who has to maintain code, it was not pleasant at all, and I still have code stuck in the 1.0/2.0 eras. I have crashes getting conditional conformance to work with generics. Some code that didn't crash on 1.0 or 2.0 crashes on 3.0. Swift clearly is a WIP.

---

At the same time, TypeScript happened. TypeScript turns JavaScript into an optionally typed language. I see JavaScript and Objective-C in a similar light. Since Objective-C started getting some syntactic sugar (generics, nullability), I wonder what would have happened if they had taken the TypeScript approach instead.

TypeScript had no choice but to be pragmatic (probably after seeing how Dart, which went the Swift way, was not adopted by the larger community).

Apple basically acts like a benevolent dictator: whatever direction they take is more or less the future, and we have to figure out how to work around the new "world" order, which gets updated every June.

The best iOS/Mac developers thrive in this environment and get handsomely rewarded (App Store rankings, recognition from Apple); I tried and failed miserably.


This post is lacking in details. What does this mean?

>> “It wants to be strongly typed, but at the same time so convenient in type inference, that it falls over its own feet while trying to grasp simple expressions, and they become too complex to manage. By design.”

What is an example of an expression that is tripping over its own feet?


This is one example:

   time swiftc too-complex.swift 
   too-complex.swift:1:49: error: expression was too complex to be solved in reasonable time; consider breaking up the expression into distinct sub-expressions
   let a:[Int] = [1] + [2] + [3] + [4] + [5] + [6] + [7]
              ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~

    real	1m20.639s
    user	1m12.459s
    sys 	0m5.249s

    cat too-complex.swift 
    
    let a:[Int] = [1] + [2] + [3] + [4] + [5] + [6] + [7]


I think the problem it solves is developer PR with Objective C, which looks like absolute hell the first time you see it. Swift looks like a sleek scripting language.


I never understood this point of view. The first time I saw it, I was like, "Brackets. That's slightly different than I'm used to. They look like arrays or something. Oh well. It seems to work." And from then on it was a non-issue. Why do developers get so hung up on this?


I'd think most people would agree that infix notation is far easier to read and write than the Polish-like bracket notation.


All the extra symbols did make ObjC more laborious to read and write. ¯\_(ツ)_/¯


Objective-C's [] over the :: and <> overload of C++, imo.


I prefer the :: and <> overload of C++ to having to type @ everywhere.


> Swift actively distracts me in that endeavor by making me answer questions I really don't want to answer right now. Yes, stuff might be less correct in the meantime, but heck that is what I want during the design phase. Find my concept, sweet spot, iterate, pivot quickly.

This is also what annoys me about TDD. Writing code is a process of refinement; if I knew exactly what the API I am going to program against looks like, what API I will offer, and what the internals of my code are before writing the code, then hell, I'd have written it before.

Oftentimes, coding is exploring.


The Eclipse Java compiler had a neat feature: it could compile wrong Java code into working bytecode. Wrong instructions were replaced by thrown exceptions. So you could run an incorrect program (it wasn't 100% bulletproof, of course; a curly-brace mismatch could render your program absolutely useless, but it worked in the majority of cases). This greatly helps with development. I can stop my unfinished work in one place and test another, for example.


Idris has a really neat feature called 'Holes' that does exactly this. I wish it could be implemented in every language on earth.
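
Swift's crude stand-in for that workflow (just an illustration) is to plug unfinished paths with a trap that satisfies the type checker:

    // `fatalError` returns Never, so this compiles against any return type;
    // the rest of the program runs fine until the 'hole' is actually reached.
    func parseHeader(_ line: String) -> (key: String, value: String) {
        fatalError("unimplemented")
    }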



Given the way Steven Sinofsky's team contributed to the political mess of Longhorn/Vista, followed by the WinRT split and the mess it brought to the Windows and .NET ecosystems, he is probably not the best authority to pay attention to about programming languages and ecosystems.


There have always been heated arguments about which language is better, but not many languages get as many "I don't like it" blog posts as Swift (maybe Go). It's probably because people are usually free to choose a language, but are getting their arm twisted to use Swift on iOS. That's fine for people who like Swift, but it leaves the others no option but to complain.


I also think it's a question of community management. Before Swift was introduced, the iOS dev ecosystem was naturally composed of people who were comfortable enough with Objective-C. Then Apple suddenly released a language that reversed almost every single design choice. Of course that was going to tear the community apart.


The author seems to not be up-to-date with modern Swift:

> Adding compile time static dispatch, and making dynamic dispatch and message passing a second class citizen and introspection a non-feature.

There was a proposal recently to make this first-class, and not just limited to Objective-C.

> Classify the implicit optionality of objects purely as a source of bugs.

Not purely, but often optionality is a source of bugs.

> At the same time it failed to attack, solve and support some of the most prominent current and future computing problems, which in my book all would be more important than most of the areas it tries to be good in:

> concurrency

Concurrency is being hashed out, and will probably be available in a later version of Swift.

> overall API interaction complexity

The Swift API is refreshingly small. Are you talking about Foundation?

> debug-ability

What's wrong with the debug-ability, especially with solutions like playgrounds?

> actual App and UI development

…this is literally the main use of Swift.

> developer productivity

Personally, I feel it's much improved.

> While Apple did a great job on exposing Cocoa/Foundation as graspable into Swift as they could, there is still great tension in the way Swift wants to see the world, and the design paradigms that created the existing frameworks. That tension is not resolved yet, and since it is a design conflict, essentially can't be resolved. Just mitigated. From old foundational design patterns of Cocoa, like delegation, data sources, flat class hierarchies, over to the way the collection classes work, and how forgiving the API in general should be.

This is almost a non-issue these days, unless you're interacting with C APIs.

> Just imagine a world where Objective‑C would have gotten the same amount of drive and attention Swift got from Apple?

The ten years before Swift existed?


> There was a proposal recently

> Concurrency is being hashed out, and will probably be available in a later version of Swift.

TFA:

> It keeps defering the big wins to the future


>The author seems to not be up-to-date with modern Swift:

> Adding compile time static dispatch, and making dynamic dispatch and message passing a second class citizen and introspection a non-feature.

There was a proposal recently to make this first-class, and not just limited to Objective-C.

> concurrency

>Concurrency is being hashed out, and will probably be available in a later version of Swift

Sounds to me like "modern Swift" doesn't actually have any of these things. Did you mean to say that the author wasn't familiar with some future version of Swift that doesn't yet exist?


Dynamic calls are available today. Concurrency is not.


Yea, author was way off base on productivity. I can write much faster in Swift than Objective C because I spend zero time running down wild pointer bugs, as well as a dozen other classes of obscure bugs the type system prevents.


A friend of mine has been porting some Android and web apps to the iPhone. I recently asked his opinion of Swift. He said he had no idea, as the essential libraries the apps use are in C or Objective-C, so there was no choice but to use Objective-C for the apps. This was surprising, as I thought that calling Objective-C from Swift should be transparent. But it turned out it was not, as many small but important details make the whole thing rather messy. So I also wondered what exactly Apple wanted to achieve with Swift if it lacks even in Objective-C interoperability.


I'm not sure where your friend was getting that information, or what small details he was referring to. Calling Objective-C from within Swift is remarkably transparent -- where you might have called `[foo bar:baz];`, you call `foo.bar(baz)`.

In some cases, wordy Objective-C names will get mapped to Swift's tidier conventions, but the public APIs are all documented in both languages, and Xcode's auto-complete is pretty good at helping you figure it out.
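
Even the NSError** convention maps over cleanly; a small example using Foundation's JSONSerialization:

    import Foundation

    // The underlying ObjC method takes an NSError **; Swift imports it as `throws`.
    let data = Data("not json".utf8)
    do {
        let obj = try JSONSerialization.jsonObject(with: data, options: [])
        print(obj)
    } catch {
        print("parse failed: \(error)")   // the NSError arrives as a Swift Error
    }

    // One genuine rough edge: Objective-C exceptions (@throw / NSException)
    // are not bridged and cannot be caught from Swift.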


The issue was with exceptions.


Swift was designed to have very good Objective-C and C compatibility, using the clang compiler to understand headers and generate any Swift binding/bridge at compile time. It will punt on trying to support some things, such as inline functions which are declared as C macros, but generally will try to expose everything.

Typically a maintained Objective-C library will evaluate their ability to be used from Swift, and use attributes/macros where necessary to tweak the code import to generate a higher quality Swift interface. Examples would be declaring that a method never returns a nil value, or creating a clearer function name, or switching to the newer Objective-C enum/set styles to have the enumerations generated as Swift types rather than having their API take a raw integer constant or string.

AFAIK, all of Apple's frameworks are C or Objective-C interfaces, and the Swift interface is nearly all generated based on the code and public annotations.


Calling Objective-C from Swift is trivial; don't rely on that friend for technical advice.


These complaints largely seem to come down to Swift not really being what the author expected it to be. His list of speculations as to Apple's motivations, in particular:

> It should scale from App/UI language down to system language.

Did I miss something where Apple or the Swift team said anything about writing a kernel in Swift? They wanted to make it fast enough for performance-critical situations, yes, but there are plenty of those in userspace -- and I haven't found Obj-C's inability to perform in those situations to be an asset.

> It should inter-op with existing Foundation and Cocoa, but still bring its own wide reaching standard library, adding a lot of duplication.

This is missing the forest for the trees. Bringing Swift-native Foundation APIs into the standard library means they're available on non-Mac platforms, which is huge. I'm also not sure what alternative would be preferable -- should it _not_ interop with Foundation/Cocoa, and totally fragment the ecosystem? Or should it _not_ reimplement them with native calls, and inherit the performance penalty of objc_msgSend?

> It is functional, object-oriented and protocol oriented all at the same time.

Apple describes Swift as protocol-oriented. It's flexible enough to support other paradigms, but I don't see how that's a liability, and I haven't seen Apple claim "Swift is an FP language".

> It wants to be strongly typed, but at the same time so convenient in type inference, that it falls over its own feet while trying to grasp simple expressions, and they become too complex to manage. By design.

[citation needed]

This complaint is so vague as to be impossible to address, except to say that I haven't managed to come across a case of it yet. An example would really help make the case.

> It is compiled and static, but emphasized the REPL and playground face that makes it want to look like a great scripting solution. Which it isn't.

Again, I haven't heard any claim that "Swift is a scripting language". Sure, it's not, but neither is Objective-C, so... I'm not really sure what the complaint is here?

> It seems to have been driven by the needs of the compiler and the gaps that needed to be filled for the static analyzer. Those seem to have been super-charged instead of catering to app developer's actual needs: efficient, hassle free, productive (iOS) App development.

This sounds like a complaint about strong typing and static dispatch in general. Yes, you have to code more carefully, but the flipside is that a huge number of errors that crop up at runtime in Objective-C become compile-time errors in Swift. I'd much rather bang my head against the compiler for a while before I ship than have reams of undebuggable crash logs pour in.

> It is meant offer progressive disclosure and be simple, to be used in playgrounds and learning. At the same time learning and reading through the Swift book and standard library is more akin to mastering C++. It is quite unforgiving, harsh, and complex.

That's true, but that's also kind of a big part of why playgrounds exist. So again, what's the complaint? Is the fact that playgrounds make the language more accessible a problem somehow?

Also, the only way I can view Objective-C as "simple" is by ignoring the C underpinnings and looking only at the relatively thin layer of classes and message passing on top. For somebody coming to coding with fresh eyes, both Swift and Objective-C are going to have a learning curve; the difference is that with Swift you don't have to pick up K&R first.


I agree with the author on all points except one: there isn't any reason not to just use Lua for everything.

Yes, that's right. Lua for everything. Lua on iOS, Lua on macOS, Lua on Linux. Lua on Windows.

I absolutely love the freedom, flexibility, and downright sexiness of using one language on all of the platforms. It's a beautiful, difficult, lonely place to be, but if you haven't tried it, you can't really knock it.


I am unfamiliar with mobile coding, but I presume there are really good reasons to use the "blessed" language on iOS/Android. I would assume any scripting interfaces leave many useful (security) areas unavailable? Could you access the Secure Enclave, for example, from Python or Lua?


Yes, you sure can. Anything you can do in Objective-C or Swift, you can do in Lua.


A custom JIT is forbidden on iOS, so how are you going to write performant code? I could see only JavaScript as a proper alternative.


> However, the flip side is it inhibits your creativity while writing.

The old adage of C devs against those of us on the strongly typed, Algol-family side of the fence.

We all know how the freedom for creativity ends up.

https://www.cvedetails.com/vulnerability-list/opmemc-1/memor...


> We all know how the freedom for creativity ends up.

You mean, it completely dominates the programming world?

C is a massive success story, no other language even comes close.


C is garbage, a loaded shotgun with no safety. I say this as someone who spent twenty years writing object-oriented C code. The only way we survived was to build safety into a massive set of core libraries and use them religiously.


Things given away for free are always a success story.

Had AT&T been allowed to charge for UNIX from the beginning, rather than only 10 years later after being split up by the government, C would have been a footnote in the history of systems programming languages.

In a world where many don't want to pay for software, free trumps technical merits.


All we are doing with Objective-C is gluing together a few JSON APIs and buttons. The assumption that a single language needs to be fit for both security-critical low-level code and high-level UI programming is exactly the problem with Swift.


There is no problem with Swift; it is just another language following the good principles of the safe systems languages of the 60s and 70s, developed outside AT&T's walls.

Kudos to Apple.


> So now contrast that to Swift. First of all: Which question did it desire to answer?

From Swift.org: Swift makes it easy to write software that is incredibly fast and safe by design. Our goals for Swift are ambitious: we want to make programming simple things easy, and difficult things possible.

> It should scale from App/UI language down to system language.

I don't see how, say, Go can be a system language and Swift can't.

Much of Apple's investment has been around making App and Framework development more productive on their platforms, but Swift is open and other companies like IBM have been focused on things like Web frameworks.

> It should inter-op with existing Foundation and Cocoa, but still bring its own wide reaching standard library, adding a lot of duplication.

I assume they mean the standard Swift package as the standard library, since they listed Foundation separately. Swift's standard library is tiny, basically holding core types like Int, String, Optional, and pointers; collections (Dictionary, Set, and Array); ranges like 1..<10; and essential common protocols like Equatable and Hashable.

It isn't until you get to Foundation that you get things like I/O, networking, or binary data types.
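
A quick illustration of that split (my own sketch, not from the article):

    import Foundation               // Data, URL, JSONSerialization, etc. come from here
    // Standard library only - no import needed for these:
    let counts: [String: Int] = ["a": 1, "b": 2]
    let evens = (1..<10).filter { $0 % 2 == 0 }
    // Foundation types:
    let payload = Data("hello".utf8)
    let url = URL(string: "https://swift.org")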

> It is functional, object-oriented and protocol oriented all at the same time.

Several of the languages the article lists as positives (like Ruby) are also multi-paradigm. Swift is hardly a functional language; it just has functional influences - like nearly all the languages listed.
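
Those influences mostly show up as first-class closures over collections, e.g. (my sketch):

    // Functional flavor without being a functional language:
    let total = [1, 2, 3, 4].map { $0 * $0 }.reduce(0, +)   // 30
    let odds  = (1...10).filter { $0 % 2 == 1 }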

> It wants to be strongly typed, but at the same time so convenient in type inference, that it falls over its own feet while trying to grasp simple expressions, and they become too complex to manage. By design.

Since no example was given, all I can really say is that nothing requires you to use type inference. My experience from multiple languages is that usually such an expression is wrong, and the compiler cannot suggest a fix because a broken expression gives no type-inference hints to work from. A dynamic language would just let it crash and burn at runtime, which (assuming you have a test suite capturing the issue) is such a different debugging process from fixing type inference at compile time that the two are hard to compare.
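
And you can always opt out of inference with explicit annotations, e.g.:

    // Explicit annotations leave nothing for the checker to infer:
    let ratio: Double = 1                  // would be Int without the annotation
    let twice: (Int) -> Int = { $0 * 2 }
    let names: [String] = []               // an empty literal needs an annotation anyway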

> It is compiled and static, but emphasized the REPL and playground face that makes it want to look like a great scripting solution. Which it isn't.

REPL and Playgrounds are both for experimentation, not necessarily for scripting. You can write command-line Swift scripts, but generally the compiler makes them a bit heavyweight for many things.
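
A minimal script looks like this (the file name is just my example):

    #!/usr/bin/env swift
    // hello.swift - run with `swift hello.swift Ada`, or chmod +x and execute
    let name = CommandLine.arguments.dropFirst().first ?? "world"
    print("Hello, \(name)!")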

> It seems to have been driven by the needs of the compiler and the gaps that needed to be filled for the static analyzer. Those seem to have been super-charged instead of catering to app developer's actual needs: efficient, hassle free, productive (iOS) App development.

Can't say much here, except that having a computer analyze your code is meant to be a boon, not a hindrance. My experience is that now when I write Java projects I wish I could get back the expressiveness, performance, and personal productivity I have writing Swift code.

> It is meant to offer progressive disclosure and be simple, to be used in playgrounds and learning. At the same time learning and reading through the Swift book and standard library is more akin to mastering C++. It is quite unforgiving, harsh, and complex.

If nothing else, someone is underestimating the effort of mastering C++.


> My experience is that now when I write Java projects I wish I could get back the expressiveness, performance, and personal productivity I have writing Swift code.

Try Scala. Or Kotlin if you don't want to have to learn too much along the way.


> incredibly fast and safe by design.

Except it actually is incredibly slow, not fast. The only thing fast about it is the marketing, and that is mostly fast and loose.

I ran a ton of benchmarks for my performance book, and despite the fact that I didn't expect much, I was still surprised by just how badly it misses on the performance front.

As an example see how Kitura compares to other web frameworks in performance: https://www.techempower.com/benchmarks/#section=data-r16&hw=...

And Kitura actually implements the central HTTP parser in C. Porting that same code to Swift made it (that code, not Kitura) 10x slower.

As a second example, see JSON parsing. Putting Swift on top of NSJSONSerialization typically adds a 25x performance overhead. Yes, that's twenty-five. And NSJSONSerialization uses NSDictionary and friends to represent the parse result, which is already very, very slow. 25x on top of that is no mean feat.
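
The kind of micro-benchmark I mean is nothing exotic; a rough sketch (not the book's actual harness, and numbers will vary):

    import Foundation
    // Time parsing a synthetic 100k-element JSON array:
    let json = "[" + (1...100_000).map { "{\"n\":\($0)}" }.joined(separator: ",") + "]"
    let data = Data(json.utf8)
    let start = Date()
    let parsed = try! JSONSerialization.jsonObject(with: data) as! [[String: Any]]
    print("parsed \(parsed.count) objects in \(Date().timeIntervalSince(start))s")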

Anyway, the good folk from Big Nerd Ranch figured the problem was the interop and wrote a parser in pure Swift, called Freddy. It was a "success": Freddy is only 8x slower than NSJSONSerialization.

Oh, and if you think Swift serialization fixes this: it doesn't. Still around an order of magnitude slower.

And I didn't even go into the role of the optimizer. Unoptimized code is much, much slower relatively speaking, sometimes 100x to 1000x (yes, that's a thousand) slower than optimized code. Note that the default debug mode in Xcode is unoptimized and that 1000x is the difference between 1 second and 20 minutes. And of course you don't get a diagnostic if an optimization is not applied.
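
(If you want to check this on your own code: -Onone and -O are the standard swiftc flags; Xcode's Release configuration passes -O for you. The file name below is a placeholder.)

    swiftc -Onone hotloop.swift -o hotloop-debug     # what Xcode debug builds do
    swiftc -O     hotloop.swift -o hotloop-release   # what Release builds do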

The "Swift is fast" meme is pure Apple marketing at its most deceptive. It has never been true in the past and isn't true now.

As to safety: there is very little to no evidence that static typing actually increases safety. Studies go either way, and when there is a positive effect it is tiny. See also: http://blog.metaobject.com/2014/06/the-safyness-of-static-ty...

> [type inference falling over] usually such an expression is wrong

   time swiftc too-complex.swift 
   too-complex.swift:1:49: error: expression was too complex to be solved in reasonable time; consider breaking up the expression into distinct sub-expressions
   let a:[Int] = [1] + [2] + [3] + [4] + [5] + [6] + [7]
              ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~

   real	1m20.639s
   user	1m12.459s
   sys	0m5.249s
This compiles fine if you reduce the number of elements. But that's of course just one of the most egregious examples. Overall, compile times are horrendously slow, again we are talking order(s) of magnitude, and that is with the horrible C compilation model and its ridiculous header files as a baseline.
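
The workaround the diagnostic asks for does compile quickly, for what it's worth:

    // Breaking it into sub-expressions lets each line type-check in isolation:
    var a: [Int] = [1]
    a += [2]; a += [3]; a += [4]
    a += [5]; a += [6]; a += [7]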


    let a:[Int] = [1] + [2] + [3] + [4] + [5] + [6] + [7]
What's the reason, something like exponentially many candidates during overload resolution?

> there is very little to no evidence that static typing actually increases safety

Well, C is statically typed, Python or Common Lisp aren't. We need better categories than "static/dynamic".


Call me a purist, but I find no reason to have gone beyond C. Not C++, not Objective-C, nor anything else that followed. I find C++ and Objective-C add a lot of complexity, and the benefits of adding this complexity are very limited. I think it would be better to stick with C and pursue other ways to improve developer tools.


You’re not a purist, because there’s nothing inherently pure about C — it’s just another language.


Well, there is. It's the first "high level" language that replaced assembly programming. All the other languages have added even more high-level features, yet none of them have made its existence obsolete. The base compiler translates from C; most other languages just add a translating front end to a C compiler.


That is an urban myth from the C fan club.

There were quite a few system languages that replaced Assembly, some of them even 10 years before C was invented.

https://en.wikipedia.org/wiki/Burroughs_large_systems

https://en.wikipedia.org/wiki/PL/8

https://en.wikipedia.org/wiki/IBM_PL/S


Sorry, I meant in the microprocessor era, when programming became mainstream and stepped out of academia. Before that, there were so few programmers that languages didn't need to evolve.


Burroughs, Olivetti, IBM and DEC were hardly academia and their languages were quite advanced.


In terms of the number of programmers using them compared to now, I'd guess a factor of 10,000 at least.


The world has grown beyond C, thank god. For most of the programming world, quality is more important than low-level access and speed.


It's not about low-level access but about simplicity and pedal-to-the-metal programming. Sort of like the difference between stick and automatic.



