
My Swift Dilemma - jarjoura
http://owensd.io/2014/09/24/swift-experiences.html
======
coldtea
I'm not sure I understand the point of his post.

In all the examples he gives, Swift comes with better and more succinct syntax
plus more safety than Obj-C, and on par with his "Obj-C 3.0" idea.

And of course it has tons of other features and flexibility he doesn't delve
into.

The whole post for me boils down to his "I hate Generics" rant at the end.

~~~
dkarapetyan
This is just a side-effect of the blub paradox disease. One of the symptoms
for those afflicted with it is ranty blog posts about new languages, which the
afflicted person does not perceive as being better because they don't have
quite the same semantics as the one language they are used to.

I think it was Douglas Crockford who said this. We don't get new and better
technology because the old people accept it. We get new and better technology
because the people using the old crap just die out.

~~~
mpweiher
Post
[https://news.ycombinator.com/item?id=8367012](https://news.ycombinator.com/item?id=8367012)
suggests that it is actually Swift that is the Blub language...

~~~
the_af
That doesn't suggest Swift is a Blub-like language to me.

Though not explicitly stated in the Blub paradox, I'd say it implies that a
fresh programmer, unfamiliar with both Blub and the more advanced language,
would pick the more advanced language.

The Blub paradox doesn't automatically apply to every language whose syntax
you find bizarre. It also has to be a more advanced language, and the reason
that you don't like the language's syntax and constructs must be because you
don't understand the advanced features they enable.

This is clearly not the case with Obj-C and the post you mentioned.

~~~
lerno
No? You've never ever heard Java/C++ programmers complain about ObjC's syntax?
I find that... unlikely. [http://bit.ly/1CqrYY4](http://bit.ly/1CqrYY4)

~~~
the_af
Yes, I've heard it many times.

How is that a counterexample? _Bizarre syntax alone_ is not enough for the
Blub paradox. I don't know anyone who seriously argues that Obj-C is a more
advanced language than Java/C++, so Blub doesn't apply.

~~~
mpweiher
You should get around more.

Objective-C object-orientation via dynamic messaging is much more
advanced/powerful than the Abstract Data Types available in C++ and Java. It
enables such features as target/action, NSUndoManager, Higher Order Messaging,
distributed objects...and their concise implementation.

If you don't understand dynamic messaging and see Objective-C as just a way of
doing things that you would do in Java or C++, then I'd agree that it is less
advanced at doing those things.

Blub.

~~~
the_af
This is entering highly subjective territory.

The consensus is that Objective-C's brand of OO is not particularly advanced,
and Objective-C as a language is definitely _not_ considered advanced. Keep in
mind Paul Graham's essay had _Lisp_ in mind, not a C derivative.

Nevertheless, the post that sparked this thread was a comparison _between
Swift and Obj-C_. Now, even if you consider Obj-C an "advanced language"
(which most people do not), it's completely unreasonable to think that a
programmer looking at both Swift and Obj-C would disregard the latter because
of the Blub paradox. It would be highly... let's say nonstandard to consider
Obj-C more advanced than Swift.

Not Blub.

~~~
lerno
On the contrary, it is well known that ObjC and related languages represent a
different strain of OO than Java or C++. In fact, people have gone so far as
to say that C++/Java don't really represent "real" OO at all.

It should be obvious that ObjC belongs to the Smalltalk lineage, which is quite
different from C++/Java. Unless you have understood why Smalltalk is still
held in high regard, you haven't understood the language.

------
krilnon
"I look at the feature set of Swift, and I have to ask myself the question:
what’s the point? What is really trying to be solved? And does it provide
significant benefits over languages that already exist?"

One reason may be that large companies want _ownership_ of a modern,
C#/Java/Go-like language [1]. I was an intern at Adobe when it was trying to
develop such a language (ActionScript 4) + VM, and a primary reason not to
adopt an existing language is that they wanted control and to be free of
legacy baggage as much as possible. Obj-C is reasonable to use today, but its
C and Smalltalk underpinnings sometimes feel like anachronisms for people
writing App Store apps. This is probably especially true if you're a language
designer like Chris Lattner at Apple and are tasked with fixing the most
prominent pain points that your language users face.

[1] I realize these are somewhat diverse languages.

~~~
SideburnsOfDoom
> One reason may be that large companies want ownership of a modern,
> C#/Java/Go-like language

Quite true. A major reason why C# exists is that, circa 2000, Microsoft wanted
_ownership_ of a modern, Java-like language.

------
interpol_p
The author's comment on the first example is:

> _what is the type of $0 and $1?_

Surely a great thing about Swift is that the compiler _knows_ the types of $0
and $1 and will prevent you from doing stupid things with them like you can in
Obj-C?
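
That inference is easy to see in a toy example. A quick sketch (in present-day Swift syntax, not the 2014 beta the post discusses):

```swift
let words = ["swift", "objc"]

// In a map over [String], the compiler infers $0 as String from the
// array's element type, so String members are available on it...
let lengths = words.map { $0.count }   // [5, 4]

// ...and misuse is rejected at compile time: something like
// `words.map { $0 + 1 }` would not build, because $0 is a String.
```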

~~~
dirtyaura
Yes, and having lambdas and unnamed parameters is a great thing when the local
context is enough to explain it.

People are totally okay working with unnamed numbers and values, and it is
just as important with functions when you program in a functional way.

We write

    
    
      x2 = x1 + 10
    

if it makes sense in the context. When it needs a better explanation we name
the value

    
    
      width = 10
      ...
      x2 = x1 + width
    

If we had to name EVERY number, the code would be less readable. Same with
functions: sometimes a lambda with unnamed parameters (e.g. $0 < $1) is more
readable than naming the lambda and all the parameters. Each name adds a
conceptual burden.
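
As a small Swift sketch of that trade-off (using current `sorted(by:)` syntax rather than the 2014 free functions):

```swift
let numbers = [3, 1, 2]

// Named parameters: worth it when the names carry real meaning.
let byName = numbers.sorted { (lhs, rhs) in lhs < rhs }

// Shorthand: for a comparison this small, $0 < $1 reads at a glance.
let byShorthand = numbers.sorted { $0 < $1 }
// Both yield [1, 2, 3].
```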

------
fleitz
I felt mostly the same way, until last night.

Last night was one of the few times I've touched ObjC in months. Since I was
starting a project from scratch, it took me a while to recall the precise
details of @property syntax.

He's right to say that Swift is _mostly_ syntactic sugar. However, as he
points out, all we really needed was ObjC with a better syntax.

Also, the debugging tools suck right now. My biggest beef with the language is
that it didn't remove the return keyword. I was really hoping for a language
that was more OCaml, less C++.

In summary: is it perfect? No. Is it an improvement over ObjC? Definitely. Are
there areas in which ObjC shines? A few.

------
danielrakh
Great post. I'm not sure if it's because of my Objective-C bias or not but
Swift just seems messy to me. Everything from how it's spaced and structured
in the editor to how it's read (lack of headers, clear separations between
data structures, methods, etc.). Say what you want about Obj-C, but it was damn
organized.

~~~
zak_mc_kracken
> Swift just seems messy to me.

You probably mean "unfamiliar".

Give it some time; you'll soon realize how much more modern Swift is than
Objective-C.

------
untog
My instinct is that most well-versed Objective-C programmers won't get much
out of Swift. But for people like me - non-Objective-C programmers - it's been
great. I've long been put off by its alien syntax. I've been told numerous
times that once you get to know it you adjust, and I believe it, but iOS
programming is never going to be my full-time job, so I'm not interested in
spending that amount of time getting comfortable.

With Swift, however, I was up and running very quickly. Also running into
EXC_BAD_ACCESS errors, but hey, early days...

------
PieSquared
As he predicted, I was with him until his rant about generics. While the
examples he gives support his point, that's nothing specific to _generics_ but
rather to this particular implementation of generics. For example, he uses the
following example as "bad" generics:

    
    
        func reverse<C : CollectionType where C.Index : BidirectionalIndexType>(source: C) -> [C.Generator.Element]
    

and the following example as "good" non-generics:

    
    
        func reverse(source: CollectionType) -> CollectionType
    

However, you can have equally clean syntax _with_ generics. For example,
consider the hypothetical syntax:

    
    
        func reverse(source: CollectionType[a]) -> CollectionType[a]
    

in which `CollectionType` is parameterized by the type variable `a` [0].
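
For what it's worth, a working generic reverse can stay fairly readable even with a constraint. A sketch in later Swift syntax (where the old `CollectionType` protocols became `Collection`/`BidirectionalCollection`; the function name is invented to avoid clashing with the stdlib):

```swift
// Generic over any collection that can be walked backwards; the
// constraint clause is the signature's only extra cost.
func myReverse<C: BidirectionalCollection>(_ source: C) -> [C.Element] {
    var result: [C.Element] = []
    var index = source.endIndex
    while index != source.startIndex {
        source.formIndex(before: &index)   // step backwards
        result.append(source[index])
    }
    return result
}

// Works for arrays and strings alike, with full static typing:
// myReverse([1, 2, 3]) == [3, 2, 1]
// myReverse("abc") == ["c", "b", "a"] (an array of Characters)
```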

I also take issue with the idea that removing static checks isn't a big
penalty. In particular, I cringe a little at the following sentence:

> Because it does not actually matter. If an Int gets in my array, it’s
> because I screwed up and likely had very poor testing around the scenario to
> begin with.

The benefit of static typing is that you don't _need_ tests for things like
that. The compiler guarantees safety, allowing you to avoid writing test cases
that are mundane and boring, such as checking that you don't put an Int into a
String array.

The following paragraph also seemed questionable to me:

> Yes, in this example, I’ve moved the validation from compile-time to
> runtime. But you know what, that’s likely where many of these types of
> errors are going to be coming from to begin with because the content of the
> array is being filled in with dynamic content getting coerced into your
> given type at runtime from dynamic input sources, not from a set of code
> statements appending items to your arrays.

I think this is somewhat incorrect. You should never just be type-casting your
inputs. (In fact, I think it should ideally be impossible to do so without the
compiler generating really big flashing warnings saying "THIS IS DANGEROUS!").
The static verification here will prevent you from doing silly things, and
should ideally force you to do input validation at the location of input,
instead of blindly casting things to the type it needs.
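
In Swift terms, that boundary check is the difference between a forced and a conditional cast. A minimal sketch (the `json` dictionary here is a made-up stand-in for dynamic input):

```swift
// Hypothetical dynamic input, e.g. parsed JSON.
let json: [String: Any] = ["count": "not a number"]

// Blind cast: `json["count"] as! Int` would crash at runtime here.
// Conditional cast: validate at the point of input instead.
let count = (json["count"] as? Int) ?? -1
// count == -1: the bad input is caught at the boundary
// rather than propagating as the wrong type.
```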

All in all, not a bad discussion, but I think that this piece demonstrates
that bad implementations of static typing can severely detract from the good
qualities of static typing, and that it takes some getting used to to program
well in a statically typed language (not casting things spuriously is a good
example of that). That said, I know very little about Swift, so take all of
this with a grain of salt.

[0] It may at this point be clear that the inspiration here is Haskell and ML;
I am a big proponent of these languages, and believe that static typing can
eliminate many common errors.

~~~
mpweiher
>> Because it does not actually matter. If an Int gets in my array, it’s
because I screwed up and likely had very poor testing around the scenario to
begin with.

>The benefits of static typing is that you don't need testing of things like
that. The compiler guarantees safety, allowing you to avoid writing test case
that are mundane and boring, such as checking that you don't put an Int into a
String array.

This is a canard. You hardly ever (within an epsilon of "never") write tests
for types. You write tests for values that you expect, which the type-system
doesn't guarantee. Those values have types, so those get tested as a side
effect without any additional effort.

~~~
zvrba
> Those values have types, so those get tested as a side effect without any
> additional effort.

Not in the presence of implicit conversions, which are often insane in dynamic
languages (e.g., 12 + "34a" gives 46).

~~~
herge
Often? What language gives 46 for '12 + "34a"'? Not even Perl does that.

~~~
ahomescu1
PHP does, I just checked.

~~~
sitkack
Using PHP in a discussion about types evokes Godwin.

------
archagon
Regarding optionals, I've personally found myself using the
`property?.member = value` syntax quite a bit, which lets you avoid the nil
check in many cases. I don't think a nil optional should be an "error case",
as the author puts it, most of the time.
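
A minimal sketch of that pattern (the `Label` class is invented for illustration):

```swift
class Label { var text = "" }

var label: Label? = nil
label?.text = "hello"   // nil receiver: assignment is skipped, no crash

label = Label()
label?.text = "hello"   // non-nil receiver: assignment actually happens
```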

Overall, I've been finding Swift to be more coherent and faster to use than
Obj-C in my projects, which is nice. But I'm not a language power user.

------
spion

      toUpper array := map (x => uppercase x) array
    

Why not just `map uppercase array`? That way you don't even really need to
define a function `toUpper array`. Isn't it customary not to mix lifting and
program logic in functional programming languages?

Filter example:

    
    
      notPrefixedWithA name := not contains (prefix name) ["a", "A"]
    

then use

    
    
      filter notPrefixedWithA names
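
The same separation translates directly to Swift, for comparison (the names here mirror the pseudocode above):

```swift
let names = ["alex", "Bob", "Anna"]

// Mapping without a wrapper function:
let upper = names.map { $0.uppercased() }   // ["ALEX", "BOB", "ANNA"]

// Naming only the predicate, since it carries the program logic:
func notPrefixedWithA(_ name: String) -> Bool {
    return !(name.hasPrefix("a") || name.hasPrefix("A"))
}
let kept = names.filter(notPrefixedWithA)   // ["Bob"]
```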

------
tempodox
Good rant. I concur with respect to Optionals, but I don't mind the ObjC block
syntax that much.

It's a real pain trying to find out what functions are actually applicable to
a given value (bad reflection / discoverability / documentation).

My guess is that it will take ≥ 2 yrs until Swift can possibly replace ObjC,
and even then only partially. ObjC is still a fully grown C, after all.

------
aclissold
The author fails to realize one critical thing—that the imaginary "Objective-C
3" language he wants requires intentionally breaking the fact that Objective-C
is a strict superset of C… exactly what Swift has done.

~~~
lerno
I think you misunderstand, then. I think he's all for a new language; the
problem with Swift is that it's not quite the language he hoped for.

~~~
aclissold
"It’s the syntactic sugar that makes the syntax modern, not the feature set
within Swift itself. A cleaner ObjC could have done the same thing."

I think this statement is false—a cleaner Objective-C could be nowhere near as
clean as Swift!

~~~
lerno
Look at Eero for how a simple updated syntax could look. On top of that, many
of Swift's features could run on an updated ObjC as well. The point is that
very few of Swift's features actually required a whole new runtime and
compiler model.

In other words, we could have gotten almost-Swift by building squarely on ObjC
and would have avoided most of the runtime and performance issues that still
plague the language. Plus it would have been a simpler language than Swift is.

------
bsaul
My take on Swift: use it once Apple releases a real product made with it.

~~~
antimagic
Which is fair enough, but if you look at the amount of effort they are putting
in to bring Swift's documentation / tooling up to speed, it seems pretty
obvious that Apple themselves have high ambitions for the language.

Also, it's going to be hard to tell how much Swift code is being shipped by
Apple - the interfacing with existing Obj-C code is sufficiently clean that
it's _relatively_ simple to have parts of a product implemented in Obj-C and
other parts done in Swift, and we, sitting on the outside, may never know.

~~~
bsaul
I recently learned that Core Data isn't used internally at Apple. That
freaked me out, because it made me feel that choosing that technology for a
serious project was a mistake, despite the extensive documentation and support
Apple provides for it.

~~~
CGudapati
Really? Not that I'm doubting you. Could you please provide a link so that I
can educate myself?

~~~
bsaul
Unfortunately it's not a written statement (you can imagine that nobody
working at Apple would publicly say a thing like that). And it's not a direct
source either, so you can very well take it with a grain of salt.

Actually, I am still hoping that someone will contradict me by providing a
concrete example of Apple software using Core Data, because as I stated, I
invested a lot in that technology for an important project.

~~~
scrumper
Well, the Notes app on Mavericks uses Core Data. I know it's a trivial example
but it's the first one I found.

(~/Library/Containers/com.apple.Notes/Data/Notes/NotesV2.storedata is a Core
Data SQLite store.)

------
crazychrome
When I first saw Golang, I hated it, but now I think it's the best server-side
language. Golang is an insanely pragmatic language. It's not fancy; it just
works.

Facing Swift, I had mixed feelings. It's a beautiful language, probably as
pretty as Ruby, but what really makes it unique, or extremely productive? I
can't find anything.

Am I the only one who agrees with the author that generics probably do more
harm than good? I'd argue that generics give programmers a false feeling of
control and easily lead them to spend time on over-design.

~~~
zoomerang
> Am I the only one who agrees with the author that generics probably do more
> harm than good

It's really the same debate as 'dynamic' vs. 'static' languages, since without
generics your code is relying on runtime type checking in anything remotely
complex, and thus is effectively a dynamic language.

Personally, I'm of the opinion that people that don't like static typing only
feel that way because they've only used the shitty implementations in Java or
C#.

Regardless, this is a religious war that has raged for decades, and isn't
likely to be answered anytime soon. The answer really comes down to "It
Depends". I fall firmly in the 'static typing is good' camp, mainly because I
have hard evidence to back up my opinion that it results in significantly
fewer defects.

(Specifically, very clear reports from issue tracking systems showing our
defect rate in production dropping by 90% (!!!) when we switched from Groovy
to Scala, with a notable increase in productivity.)

Some of the developers complained, since they had to learn new tools. But
being professionals, they learned them and were better off for it.

Now while I'm an extremist religious zealot about proper static typing being
the one true way, I'm quite mindful that, for many developers, the tasks they
are working on just aren't complex enough for it to make much of a difference
in practice - they are able to test all edge cases and deploy stable software
to production, just with a little more runtime testing than they'd otherwise
need.

Some languages - such as Go, or pre-enlightenment Java - do not implement
generics, and thus require runtime casting in many cases. In these languages,
there's still a degree of compile time checking, just not as thorough as it
should be. As with dynamic languages, they can work with no perceived issues
for projects up to a certain size and provide a reasonable halfway point.
Beyond this, you're going to hit a wall.

As to your argument that generics do "more harm"? I'd strongly disagree. If
you're unfamiliar with the gotchas generics introduce (e.g. variance can be a
mindfuck), then they can seem difficult and problematic. But like any other
professional tool, once you've gotten over the learning curve you're more
productive with it than without.

tl;dr If I wanted to bang a few pieces of wood together, I'd feel comfortable
using a hammer. The learning curve is small, and I can connect those two
pieces of wood in no time.

My Uncle is a carpenter. As a professional carpenter, he bangs pieces of wood
together all day long, every day, for his entire career. As such, a nailgun is
a more appropriate tool. While being more complex to use and having a steeper
learning curve, he's a professional, and uses a professional tool to do a
professional job. Occasionally he might want to bang some quick project
together in his shed, and getting out the nailgun is overkill, so he uses a
hammer for the odd thing here and there.

I'm a professional programmer. I use professional tools, even if they have a
steeper learning curve and might be more complex. Occasionally I want to whip
up a quick script, so will just hack it together in Python.

~~~
modersky
There is a lot of interest in research on the benefits of static vs. dynamic
typing, but unfortunately there is not a lot of hard data. There have been
some experiments, e.g.

    
    
[http://dl.acm.org/citation.cfm?id=2047861](http://dl.acm.org/citation.cfm?id=2047861)
    

which seem to support the claim that dynamic typing is better for rapid
prototyping. So if you have data that correlate typing disciplines with bug
rates, it would be hugely valuable to share it.

~~~
the_af
On the other hand, there's the 1994 Navy-sponsored study whose (informal)
conclusion was that Haskell was better at rapid prototyping than other
languages of the time.

[http://haskell.cs.yale.edu/wp-content/uploads/2011/03/HaskellVsAda-NSWC.pdf](http://haskell.cs.yale.edu/wp-content/uploads/2011/03/HaskellVsAda-NSWC.pdf)

The experiment mostly compared Haskell with imperative languages such as C++
and Ada, but there was also at least one Lisp variant. There were several
informal aspects to the study, not the least of which being that there were no
clearly defined requirements for the system to be implemented (so it was up to
each participant to define the scope), but the conclusion is very interesting
nonetheless:

The Haskell version took less time to develop and also resulted in fewer lines
of code than the alternatives, and it produced a runnable prototype that some
of the reviewers had a hard time believing wasn't a mockup. Many of the alternatives
didn't even end up with a working system. It should also be noted that the
Haskell participants decided to _expand_ the scope of the experiment; i.e.
they didn't "win" because they implemented a heavily simplified solution, but
in fact added extra requirements to their system and still finished earlier!

Even though Obj-C wasn't included in the study, there were similar enough
C-like languages in it, so my bet is that Haskell would have won against it as
a rapid prototyping language as well.

------
Kurtz79
I wonder if it really makes sense to compare a language that is less than one
year old to others that have decades of improvements and fine-tuning behind
them.

If there was a perfect language, all of us would be using it: each one is the
result of compromises between features, performance, readability,
simplicity...

For an argument against a hypothetical ObjC 3, just look at the unending
transition between Python 2 and 3.

~~~
mattquiros
I'm not sure it's right to compare it to the Python versioning issues. Apple
can just break old Objective-C code in later versions of iOS/OS X if they want
to. Also, look at the adoption rate of ARC when it was introduced: pretty much
every popular third-party iOS library jumped on to implement it, and those
that didn't are usually abandoned projects.

------
mattquiros
I personally choose to give Swift a chance. I think it sucks largely because
Xcode 6 itself sucks, but that will improve over time.

