Swift – Why Annoying Is Good (seanhess.github.io)
38 points by embwbam on Sept 8, 2014 | 47 comments



> One new feature is Optional Typing. You can decide whether a variable is ever allowed to be null.

Ugh, Swift is really confusing terminology here. Swift has an "option" type, like "Maybe" in Haskell or "Option" in F#. It's a wrapper around an object that may or may not be present. You can use that to avoid having null everywhere.

This is totally unrelated to "optional typing", which is a style of static type systems that support mixing typed and untyped code.

Alas, Swift called their option type "Optional" and then talks about "an optional type".
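
To make the distinction concrete, here is roughly what the wrapper type looks like (current Swift syntax; the names are made up for illustration):

    // String? is just sugar for Optional<String>: a box that either
    // holds a String or holds nothing at all
    var nickname: String? = nil
    nickname = "sean"

    // The value has to be unwrapped before use, e.g. with if-let
    if let name = nickname {
        print("Hello, \(name)")
    } else {
        print("no nickname set")
    }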


This is why I appreciate that Haskell calls this type 'Maybe'.

I think this better captures the type's semantics. I mean, an 'Option' or an 'Optional' makes it seem like you are given a choice... but a 'Maybe' implies, rather, that what you are declaring exists only with some probability!

Q. Will this function give a result? A. Maybe.

Q. Will this function give a result? A. Option.

The choice is obvious.


The choice is only obvious because you tailored the question to the Haskell type name. But that's not the question you're asking. The question is "What result will this function give me?". The Haskell answer is "Maybe a". It will give me something that might be an `a`? The Swift answer is "Optional<T>". That makes a bit more sense than "Maybe".

The name makes even more sense when you talk about function arguments instead of function result types.

Q. What argument does this take? A. Maybe a.

It takes an argument that might be an `a`?

Q. What argument does this take? A. Optional<T>

Still not natural English, but the interpretation here is a bit cleaner: it takes an optional value of type `T`.


I'm not sure I totally agree. Well, I do agree that I tailored the question to suit my point! But as for your reply, I think you've changed the semantics of my meaning.

My questions were more from the perspective of the caller, and even then it was a bit abstracted from the actual type and implementation ... just more of a conversational "Does it? Maybe!"

Your rephrasing is still looking at it from the caller, but now takes into account the language semantics (which mine conveniently didn't :) ).

I would rephrase your questions, and look at them from the callee's perspective: "Will I get an argument? Maybe!"

And really, shouldn't we be saying that the function takes Just a to ... and takes Nothing to ... ?

Perhaps this has stretched as far as it might go. Maybe?


Totally agree. I wondered the same thing about why they chose Option over Maybe in Rust. There are so many things in the world that could actually be called an Option (I could imagine <select> children or the name of some business object), but pretty much none of them are about being null or not. Maybe, on the other hand, is so intuitive that I'm now using the word for similar cases even in dynamic languages.


Might have got it from Java (either the Guava library or Java 8 http://docs.oracle.com/javase/8/docs/api/java/util/Optional....)


It's been interesting and amusing to watch the reactions to Swift, especially among those without much exposure beyond ObjC or JavaScript or similar. Many people encounter difficulties and automatically blame the language, as a sort of reflexive action to blame whatever changed last. Sort of like the classic, "You removed that virus last week, and now the computer's on fire, so this is your fault."

Regarding JSON, it is annoying to work with in Swift, but when you get down to it, it's a fault of JSON in general and the available parsers in particular. There ought to be ways to describe and validate the desired structure to the parser, or failing that, ways to tell the parser to fetch a value of a particular type and nicely produce an error if it doesn't match.

Pretty much as the article says, it's hard to work with JSON correctly in any language, Swift just doesn't let you take the easy way out.
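
To make that concrete, here is a rough sketch of the "honest" version using nothing but Foundation (current names; in 2014 the class was NSJSONSerialization, but the shape is the same). Every step can fail, so every step is an optional you have to deal with:

    import Foundation

    let raw = "{\"user\": {\"imageUrl\": \"http://example.com/a.png\"}}"
    let data = Data(raw.utf8)

    // Each cast can fail, so each intermediate value is optional
    if let object = try? JSONSerialization.jsonObject(with: data, options: []),
       let root = object as? [String: Any],
       let user = root["user"] as? [String: Any],
       let imageUrl = user["imageUrl"] as? String {
        print(imageUrl)
    } else {
        print("unexpected JSON shape")
    }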

In the ideal language, anything that's hard to write is so because the problem is hard. No language is ideal, but whenever we're working with any new language and something seems harder than it ought to be, we should at least stop and wonder if maybe the "easier" languages we're comparing it to are only easier because they let us do things wrong.


You're probably already aware of this, but Haskell's JSON parser (named Aeson) does exactly what you ask for. In particular, it can be operated in two modes:

1. Untyped mode. Here we parse out values of type, amusingly, Value which reflect the total, untyped structure of JSON. We can then express queries deep inside of that structure and see what gets pulled out. For instance, using the `lens-aeson` package

    v :: Value
    
    x :: Maybe Integer -- Maybe means it might fail
    x = v ^? key "foo" 
           . key "bar"
           . key "baz"
           . ix  3
           . _Integer  -- looks a bit like XPath, right?
2. Typed mode. Now we suggest to Aeson that some of the types of our program have direct translation into or from JSON (or both) and let the parser produce these translations in a type safe way. For instance, should we have a type like

    data Point = Point { x :: Double, y :: Double }
then we can introduce a translation like

     instance FromJSON Point where
       parseJSON = withObject "Point" $ \o ->
         Point <$> o .: "x"
               <*> o .: "y"
which indicates that JSON objects of the form `{"x": 1, "y": 1}` can be parsed into `Point`s. Technically, the parser defined there is minimal in that JSON objects with extra fields can still be properly parsed, although that can be fixed.

Finally, you can easily anoint your untyped mode XPath-like queries with typed fragments. This is especially powerful if your typed fragments include untyped JSON Value fragments inside of them. It means that we can use lenses to dive into an untyped JSON Value, parse some fragment of it into a Point in a type-sensitive fashion, then dive further into that Point. If Points contained more JSON values then that "further dive" could be untyped or typed again.


I really wish that Aeson's lookups and fromJSON returned Eithers, and not Maybes. If you try to parse a deeply nested structure represented in JSON, and just get a Nothing back, it really doesn't help you very much. Having an Either is much more helpful.

(If anyone is aware of a canonical library for this, let me know. I wrote my own JSON library which has this functionality but it's not particularly robust or performant).


Aeson's «fromJSON» returns a «Result» which is basically «Either String». Most other functions operate on «Parser», which also includes an error message if things go wrong.

The only functions returning «Maybe» are «decode» and friends, and those have «Either String» variants.

You can use «modifyFailure» and «typeMismatch» (and the «with» wrappers) to get more descriptive messages; without a bit of help they are quite useless.


You can use eitherDecode to get that behavior. http://hackage.haskell.org/package/aeson-0.8.0.0/docs/Data-A...


Thanks, I wasn't actually aware of any of this. It's cool to see how other languages handle this stuff. Better JSON parsing has been a constant thing of mine and I'm hoping to build some good tooling for Swift once it stabilizes, so it's great to get ideas from other stuff that's out there.


Rust has a good approach here as well, that's related to Haskell's approach. In Rust, a type can implement the traits `Encodable` and `Decodable`, which are general traits that represent the ability to encode to and decode from some arbitrary format, using objects of type Encoder and Decoder. Each supported format then provides implementations of Encoder / Decoder. The benefit of this approach is the Encodable/Decodable implementations can be automatically derived in a lot of cases.

For JSON specifically, the serialize::json module provides Encoder/Decoder implementations, as well as a streaming JSON parser type that can be used to consume JSON outside of the Encodable/Decodable system, and a Builder object that consumes the streaming parser to produce a generic untyped "Json" value that represents the entire structure of the Json. And of course it has some convenience functions that simplify common operations like encoding a value into a string.

Given all this, the equivalent of the Haskell operations looks like:

    let v: Json = ...; // get the Json value
    // the following traverses the Json to extract the same path as the Haskell
    // and treats the result as a 64-bit integer.
    // note: this API is slightly old and has some unnecessary stuff in here,
    // such as the find method taking allocated strings instead of slices.
    let x: Option<i64> = v.find_path([&"foo".to_string(), &"bar".to_string(), &"baz".to_string()])
                         // this should eventually become .find_path(["foo", "bar", "baz"])
                         .and_then(|obj| obj.as_list())
                         .and_then(|l| l.as_slice().get(3))
                         // the .as_slice() will go away
                         .and_then(|val| val.as_i64());
There's talk of adding support for `do` sugar which would make this much simpler as well.

The typed version is much simpler. We can define the struct with a derived Decodable like so:

    #[deriving(Decodable)]
    struct Point {
        x: f64,
        y: f64
    }
And then actually decode some Json like

    let x: json::DecodeResult<Point> = json::decode(source_str);
    // DecodeResult contains either the result, or an error.


To be clear, you can autoderive the JSON encoding for Aeson, too. I just wanted to demonstrate the full thing.

Also, the XPath implementation I noted is special over Rust's because it forms a Lens (Prism, really) so you can get and set at arbitrary positions or even sequences of positions:

    v & ix 3 . each . _Integer %~ succ
The above increments each integer in the array stored at index 3 of `v` if one exists.


pardon my ignorance, but in the last line, how does json::decode know it should decode to a json::DecodeResult<Point> rather than a json::DecodeResult<Whatever>?


Because of the type annotation on the `x` declaration. Rust takes type constraints on the result type into account when determining what implementation to call.


FWIW, even Java allows something similar to 2, e.g. with Jackson

   public class Point {
     public String x, y;
   }
   Point v = new ObjectMapper().readValue(io, Point.class);
It also works ok with Lists and Maps.


> if let value = blah

Language constructs like that seem wrong to me (it's the "let"). That's just one of my mental blocks against Swift.


I see this sort of thing a lot and I still can't understand it. Syntax is, to me, completely trivial. Power through it and get on with life. For an experienced programmer, looking at code should be like that scene in The Matrix where the guy points to a screen full of symbols and says he just sees "blonde, brunette, redhead."

It goes the other way too: quite a few people have described disliking Objective-C's weird bracket syntax and much prefer Swift because of that, and I can't understand that either. It's just syntax.


I will say that I've always been uncomfortable with binding new values inside an if-statement's test.

As far as syntax goes, it's a good thing that it's possible: I'm assuming here that in Swift every statement is an expression, which is a good thing, and such a feature naturally means that doing this sort of thing should be possible. But stylistically it just feels ugly to me. I don't know if Swift has an equivalent to either of the following, but I'd prefer either of them as a cleaner idiom:

Null-coalescing a la C#:

    self.imageUrl = json["imageUrl"].string 
                 ?? "http://example.com/empty-avatar.png";
Pattern matching a la F#:

    self.imageUrl <-
        match json["imageUrl"].string with
            | Some(value) -> value
            | None        -> "http://example.com/empty-avatar.png"


Both of those work in Swift too. The ?? operator exists verbatim, and the matching can be done using a switch/case statement.
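
For example (a self-contained sketch with a stand-in optional rather than the article's json accessor):

    struct Profile {
        var imageUrl = ""
    }

    let fetched: String? = nil   // stand-in for json["imageUrl"].string
    var profile = Profile()

    // Nil-coalescing, like the C# version above
    profile.imageUrl = fetched ?? "http://example.com/empty-avatar.png"

    // The same choice written as a switch over the optional, like the F# version
    switch fetched {
    case .some(let value):
        profile.imageUrl = value
    case .none:
        profile.imageUrl = "http://example.com/empty-avatar.png"
    }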

Unfortunately, it is not true that every Swift statement is an expression. The if/let construct is actually a special case. That means that you can't do something like:

    if (let x = foo) && (let y = bar) { ...
Which is too bad.


I get the binding a value inside an if statement.

if (i=3*someFunctionResult()) { ... blah using i ... }

makes perfect sense to me. The "let" seems unnecessary and, for lack of a better term, unpoetic.

Aside from which, the use of let to define a constant in the middle of an if statement is about as wrong as you can get, in my opinion.

Define your constants at the top or in another file, FFS.


'let' doesn't define a constant, it binds an immutable value.

The semantic difference is enormous. A constant has infinite lifetime and must be known at compile time. A let-binding has limited scope and can be bound at run-time.

In a language like Swift that distinguishes between mutable and immutable variables, a 'let' or 'var' is needed to indicate to the compiler what kind of variable you're trying to create. Beyond that, it also makes for a cleaner language. Creating a special syntax for let statements that only applies inside the condition of an if-statement (and presumably also loops) just so you can avoid typing a few letters approaches PHP levels of unholy grammar pollution.
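
A small sketch of the difference (made-up example, current Swift syntax):

    func label(for input: String?) -> String {
        // `let` binds an immutable value; it lives only in this scope
        let fallback = "unknown"

        // `var` introduces a mutable binding
        var result = fallback

        // the `value` bound here exists only inside the if body
        if let value = input {
            result = value
        }
        return result
    }

    print(label(for: nil))       // "unknown"
    print(label(for: "swift"))   // "swift"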


Don't know about OP, but for me it flips the "GWBASIC" switch. I'll admit to my own biases, and one of those biases is based on some design decisions in BASIC. So, right or wrong, when I see old BASIC code I go "ugh". By extension, when I see a language construct that I haven't seen since I last looked at BASIC I also go "ugh".

You're right, I need to get over it and I will. But for the time being a little piece of me winces when I type "let foo = 3". "OMG, should I be numbering my lines, too?"


It's directly related to how familiar the syntax is. If a syntax is very foreign, it's difficult for your brain to read it, and it is uncomfortable. People don't like being uncomfortable :)

The more syntax you learn, the less new forms confuse you.


Except learning the syntax of a new language is trivial compared to, say, learning a language with new paradigms.

Don't train your brain to run away from the sight of something that is temporarily uncomfortable. It's intellectually lazy and promotes a passive mindset. It's not like all your programming skills just fall apart once you leave your blub language.


> Regarding JSON, it is annoying to work with in Swift, but when you get down to it, it's a fault of JSON in general and the available parsers in particular. There ought to be ways to describe and validate the desired structure to the parser, or failing that, ways to tell the parser to fetch a value of a particular type and nicely produce an error if it doesn't match.

That's what XML had and has. JSON became successful because it did not do this. Discuss.

> In the ideal language, anything that's hard to write is so because the problem is hard.

In my experience with Smalltalk, it made problems that I thought of as hard easy.


> That's what XML had and has. JSON became successful because it did not do this. Discuss.

XML is more self describing. I think what people are talking about is more like protocol buffers or capnproto -- described by an "oracle" (like a shared description file). That makes validation trivial, and you get known types instead of "is this a '1' or a 1 or a true or a 'true' here?"


> That's what XML had and has. JSON became successful because it did not do this. Discuss.

XML tried to describe the structure in the XML itself. mikeash is referring to describing the structure in the code, and that makes a heck of a lot more sense. The code definitely should know the structure, or at the very least know what type of value it's expecting when reading values out of a JSON blob.


I assume you're talking about XML DTDs, which are completely different from what I'm suggesting. DTDs are part of the XML spec and are written in XML, making them extremely unwieldy. My suggestion for JSON validation is that it should be part of the local JSON API and that it should be written in the language you're using to call the JSON API.

I'm also going to have to stick a stonking big [[citation needed]] on your claim that JSON became successful because it doesn't do this. It's about as believable as the common headlines of the form "Stock X Rises/Falls on Event" which are nonsensical post-hoc constructions and poster boys for "correlation does not imply causation".


>>describe and validate the desired structure

Something like this? https://github.com/abiggerhammer/hammer


Go also has a set of "annoyances" (some close to the same, some totally different from Swift) and while I was initially annoyed by some of them, over time I've grown to really love Go for it because there is a payoff to the annoyances (at least in Go, I don't have enough experience in Swift to say the same):

If the code compiles, it probably works. Not 100%, of course; it is always possible you made some stupid logic error or an off-by-one. But in my experience, compared to any other language I've used (including a lot of C, C++ and Java, not to mention dynamic languages), Go code that compiles is SO MUCH more likely to be functionally correct, and I believe this is because the language is such a nag about code correctness and about eliminating any ambiguity as to what the programmer intended.


what extra correctness does Go guarantee over Java?

I can imagine fewer concurrency bugs, but ATM I don't recall any particular language feature that should prevent more bugs than any other statically typed language of the last decade.


With Java vs Go, the little bits of forced correctness are subtler than when comparing against dynamic languages or some of the less "modern" static languages (and have less to do with types, though I could attempt to start a raging debate over the merits of interfaces versus traditional OO), but they do still exist. One example off the top of my head: if bracing.

Java:

  if (state) 
    doThis();
    doThat();
This type of error is not uncommon if you don't enforce really strict code-style standards or some sort of linting step that isn't part of the language proper.

Programmer A writes an if statement with one line because that's all that's needed at the time, programmer B comes in to expand the logic and is either just having a bad day or is more of a Python programmer dabbling in Java and so doesn't automatically see the error. Then programmer B dev-tests the code, but only when state == true, so he still doesn't notice the problem because he was expecting both doThis and doThat to be called anyway.

In Go this is an explicit compile-time error, all if statements require opening and closing curly braces. This is exactly the sort of thing, IMO, that might seem like an annoyance to someone who "knows what they are doing and just wants the compiler to stfu" (eg. younger, stupider me) but is, in the long run, actually really helpful for software correctness.


thanks for the example, it makes perfect sense to me.

That would be the same reason that when coding in java I moved from "how do I disable warnings" to "how do we automate findbugs+checkbugs for all devs" :)


TLDR: I just discovered that strict type systems are there for a reason


With all due respect to Stanley Elkin:

No, annoying is not good. Annoying is annoying.

Computer languages are there for me to get things done, not to get in my way. Attempts to force good behavior invariably fail. Burn some more midnight oil to figure out how the language/library can deal with it.

I once saw someone write that the 1st and foremost goal of a language was to disallow mistakes. That's nonsense. If it were, all we'd have to do is make a non-functional programming language that can't do anything. So I guess sort of the next step in functional programming?


> Attempts to force good behavior invariably fail.

Except this isn't forcing good behavior, it is forcing being explicit. Go ahead and use `!`; it will work like you are used to.

> No, annoying is not good. Annoying is annoying.

Different strokes for different folks. I for one think avoiding ambiguity is good given the ratio of code written to code read that most developers experience in their day to day lives.

> I once saw someone write that the 1st and foremost goal of a language was to disallow mistakes.

You are misinterpreting that requirement. What a strict language does is force the choice to be made when you write the code; with more flexible languages, the choice was already made by whoever designed the language.

Many people believe that you know best when writing the code, so stopping you there to correct things helps you avoid problems in the future; that is what "disallow mistakes" means.

As I said it is perfectly valid to instead say "I think the writers of the compiler have good enough defaults for these situations" but I for one don't trust them that much.


>Except this isn't forcing good behavior, it is forcing being explicit. Go ahead and use `!` it will work like you are used to.

That's not correct. `!` will crash without recourse where ObjC simply ignores the message and Java raises a catchable NPE.
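
A tiny illustration of the difference (made-up values, current Swift syntax):

    let maybeName: String? = nil

    // Optional chaining: the whole expression just evaluates to nil
    let chained = maybeName?.uppercased()

    // Nil-coalescing supplies a recoverable default
    let safe = maybeName ?? "anonymous"

    // Force unwrapping traps at runtime when the value is nil
    // let forced = maybeName!   // fatal error if uncommented

    print(chained ?? "no name", safe)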


If you actually do a null check, all of the languages work the same. Since in all but the prototype cases you will be doing a null check, does it really matter?

> ObjC simply ignores the message

Don't quote this like a benefit please, it is a horrendous design.

> Java raises a catchable NPE

Only difference between this and a crash is that in theory you can log. In the majority of cases an NPE is bringing you down anyway.


>> ObjC simply ignores the message

> Don't quote this like a benefit please, it is a horrendous design.

Hmm... so the ? operator for optionals should be removed from Swift?


If your NULL exception results in a 5am wake up call to debug... has that computer language now got in your way?


Does that happen to you? Hasn't happened to me.


Yes, it has happened to me that type errors (both NPE and ClassCastExceptions) caused problems in production that should have been prevented by a better type system.


The last back-end system I built[1] had 3 failures in a year. First was a JVM misconfiguration (JVM neophytes) so we ran out of memory. Second was a temporary network outage that caused us to get behind our feed without the network bandwidth to catch up. Third was user interface code that wasn't tested.

Considering the system it replaced had more like 3 failures a day, the reliability was seen as "pretty good", so much so that our relationship with ops was essentially: we drop a jar, they tell us nothing happened.

TDD for the win. Not sure how/where a better type system would have helped.

[1] http://link.springer.com/chapter/10.1007%2F978-1-4614-9299-3...


A better type system is a complement to unit testing/TDD/other techniques. It simply allows you to focus on other, more meaningful kinds of tests.

You can think of a better static type system as unit tests that the compiler writes for you. When you think of it, automating things is what computers are all about. Of course this won't cover all cases, but neither will the tests you write by hand.

Anyway, you asked if that sort of error happens. I answered it does.

edit: the problem with arguments like "well, I write software with few bugs and I don't need a better type system" is that it's essentially "just don't write buggy software" in disguise, which is a discredited line of thought in software engineering. People doing TDD (and every other technique/tool you can think of, including strong static type systems) still write buggy software. It follows that you should be using every available tool that can help minimize bugs and enforce correctness. Relative to the criticality of the software you write, of course -- sometimes you just don't care if the software has bugs.


> A better type system is a complement to unit testing/TDD/other techniques.

I agree that it could be such a complement. I haven't seen it in practice.

>It simply allows you to focus on other, more meaningful kinds of tests.

Again, I have seen this claim made many times, and it seems plausible at first, but I don't think it actually holds and have seen no real evidence for it. I don't think I've ever written a test for a specific type. I write tests for values. This automatically tests for types as a side effect, because values tend to have types. So these less meaningful tests that the type system would render unnecessary don't exist in the first place, and checking for the types separately adds no value (well, epsilon) in terms of safety. (Yes, I know it does a proof over all possible values, but this doesn't seem to make a difference AFAICT)

Note that TFA is about Swift, and many things that are actually benign in Objective-C will crash Swift...with no recourse as there is no exception handling. So I am not getting how being strict adds to safety.

> Anyway, you asked if that sort of error happens.

Actually, I asked whether that sort of error happens to you (the person I was responding to). My answer would be: write better code, TDD allows you to do that in such a fashion that you have no NPEs at 5am in production (without redundant tests for types). Of course, in Objective-C, nils get ignored when messaged, so you actually have to actively cause an NPE. Sadly, some of Apple's APIs do so. In fact, CFRelease() even throws a SIGTRAP.



