
Exponential time complexity in the Swift type checker - adamnemecek
https://www.cocoawithlove.com/blog/2016/07/12/type-checker-issues.html
======
im4w1l
Why does Swift use this weird constraint-based type resolution anyway? To me
it seems like whenever it manages to find a non-trivial solution, it would
be one that is unexpected to the programmer and leads to unintended
consequences. Anyone have a motivating example of when it would be useful?

What's wrong with a type resolver that resolves (or not, with an error!)
greedily in order of precedence?

Here's how I would solve it:

    
    
       let a: Animal = 1 + -(2)
    

OK, 2 is an int. If that gives problems down the line, too bad. -(2) is an
int; if that gives problems, too bad. 1 + -(2) is int + int, so the result
should be an int. a is an Animal, so try to convert int to Animal. Fail. Error!
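
Here is a toy version of that greedy scheme, just to make it concrete (a
sketch in Swift; all names are hypothetical and this is nothing like a real
compiler):

        // Each node resolves its type immediately; a mismatch is an
        // instant error, with no backtracking and no constraint solving.
        indirect enum Expr { case lit(Int), neg(Expr), add(Expr, Expr) }

        func resolve(_ e: Expr) -> String {
            switch e {
            case .lit:
                return "Int"                         // literals are just Ints
            case .neg(let x):
                return resolve(x)                    // negation keeps the type
            case .add(let l, let r):
                let (lt, rt) = (resolve(l), resolve(r))
                precondition(lt == rt, "type error") // fail right here, too bad
                return lt
            }
        }

        // resolve(.add(.lit(1), .neg(.lit(2))))  // "Int"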

One thing in particular that scares me about the constraint approach is that
it seems to me (though I may be wrong) that whether overflow occurs is not
immediately obvious.

    
    
        let a: Double = -(Int.max) + -(Int.max)
    

What's to say those Int.maxes won't somehow be interpreted as Doubles, making
a equal to -2 * Int.max?
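
To illustrate why I worry, literals really are that flexible in Swift:

        let x: Double = 1 + 2  // both literals resolve as Double; x == 3.0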

~~~
solidsnack9000
Swift is not unique in using constraint solving to provide bidirectional type
inference. Scala and Haskell provide the same level of flexibility, although
not with the same compile-time problems. In the case of Haskell, this is
ultimately because the language is intentionally simpler in some ways.

Swift brings together three features that can be hard on type inference:

* Overloaded literals. When overloaded literals like maps can themselves contain literals, there are quite a few possible overloadings to choose from (see the sketch after this list).

* Subtyping. We infer not types but type bounds (ranges).

* Overloading.
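
To make the first point concrete, here is a small sketch: every element of a
collection literal is itself up for grabs.

        // Four literals; from the annotation alone, the solver must discover
        // that the keys become Doubles and the values become Floats.
        let d: [Double: Float] = [1: 2, 3: 4]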

Notably, Haskell has no overloading at all and no subtyping. Somehow Scala
manages to combine all of these things, and a quite astonishing implicit
conversions system, too.

~~~
JoshTriplett
Haskell does have overloaded literals: integer literals have type `Num a =>
a`, and floating-point literals have type `Fractional a => a`. And GHC has an
extension OverloadedStrings to make string literals have type `IsString a =>
a` (which makes it easier to work with ByteString and other different string-
like types).
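
For comparison, Swift's counterpart is the ExpressibleBy*Literal protocol
family; a minimal sketch (the Celsius type is made up for illustration):

        // Swift's analogue of `Num a => a`: any conforming type can be the
        // type of an integer literal.
        struct Celsius: ExpressibleByIntegerLiteral {
            let degrees: Int
            init(integerLiteral value: Int) { degrees = value }
        }

        let t: Celsius = 21  // the literal 21 resolves as a Celsius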

~~~
cousin_it
AFAIK, Haskell doesn't really have overloading in the sense that other
languages use it. "Num a => a" is just one type. Contrast with Java, where a
class can have two methods named "foo" with completely different arguments and
return types.

~~~
minitech
“Overloaded literals” has only one meaning as far as I know, and what Haskell
has fits it: a literal whose type isn’t solely determined by its contents.

~~~
Sean1708
Essentially the parent commenters are talking about two different things: one
is talking about the first bullet point and the other about the third.

------
Someone
Swift is highly inspired by Standard ML. If you look at
[https://en.wikipedia.org/wiki/Standard_ML](https://en.wikipedia.org/wiki/Standard_ML),
you can see the similarities:

    
    
        fun dist ((x0, y0), (x1, y1)) = let
          val dx = x1 - x0
          val dy = y1 - y0
          in
            Math.sqrt (dx * dx + dy * dy)
          end
    

Because of that, it also inherits its exponential time complexity (see for
example
[http://spacemanaki.com/blog/2014/08/04/Just-LOOK-at-the-humongous-type/](http://spacemanaki.com/blog/2014/08/04/Just-LOOK-at-the-humongous-type/)).

What makes things especially bad for Swift seems to be that it has function
overloading and uses lots of it in its standard library, as that increases the
base of the exponential (for example, with 6 choices to make and 10 versus 2
possible overloads per decision point, you get 10^6 = a million options to
consider versus 2^6 = 64).
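
A toy illustration of that base (the overloaded function f is hypothetical):

        func f(_ x: Int) -> Int { return x }
        func f(_ x: Double) -> Double { return x }

        // Two overloads per call site: with n nested calls, a naive solver
        // has up to 2^n candidate typings to explore before the annotation
        // prunes them; with ten overloads per site it would be 10^n.
        let y: Double = f(f(f(f(f(f(1))))))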

~~~
trurl
I had thought about that, but I am pretty sure the reason Hindley-Milner is
exponential has to do with the generalization for let-polymorphism. I'm not
familiar with all the details of Swift, but I do not recall seeing that it
will infer polymorphic types for you.

That doesn't mean there isn't a good reason that Swift's inference is
exponential – I just think HM's exponential time behavior is a red herring.

~~~
gilgoomesh
The article focusses on function overloads, which are not a part of Hindley-
Milner (so yes, a red herring in this case) and not necessarily exponential.
The fact that Swift is exponential here is an implementation shortcoming and
little more.

Swift's type checker also has an exponential path for protocol-constrained
generic parameters, which is functionally equivalent to the exponential path
in classical Hindley-Milner. However, there are "pseudo-linear" optimizations
for this too that Swift should consider.
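
The classic trigger for that Hindley-Milner path, transcribed into Swift
generics (the pair function is just for illustration):

        func pair<T>(_ x: T) -> (T, T) { return (x, x) }

        // Each nested call doubles the size of the inferred type; this
        // value's type has 2^4 = 16 Int leaves, the same blow-up as in the
        // humongous-type post linked upthread.
        let p = pair(pair(pair(pair(1))))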

------
unmaintainable
I can't believe something like this makes it into "production". How could
anybody value such an asinine type inference system over _compile time_?

If your compiler needs a constraint solver to determine the type of an
expression, it belongs in a science exhibit, not in a real-world toolchain.

Don't try to be clever! Give me a compiler that's _predictable_ , _reliable_
and _fast_. If that means I have to add some type information here and there,
that's just fine!

~~~
Jweb_Guru
A very large percentage of compilers for statically typed languages include
algorithms with worst-case exponential time. For instance, the celebrated
Hindley Milner type inference algorithm, which is "almost linear" in practice,
has worst-case exponential time (the reason it rarely comes up is that
programs rarely have deeply-left-nested let bindings). Anyway, type checkers
that employ constraint solving can be a lot easier to reason about and extend
than alternatives (and give way better error messages, which is probably the
second-most-important thing a type checker does).

------
smegel

> _The following line will give an error if you try to compile it in Swift 3:_

        let a: Double = -(1 + 2) + -(3 + 4) + 5

Reminds me of when Golang said you should avoid using numbers greater than
about 20k in your program to avoid GC problems. I.e., farcical.

~~~
espadrine
> _Golang said you should avoid using numbers greater than about 20k in your
> program to avoid GC problems_

Isn't this unavoidable with conservative garbage collectors? The GC looks at
your fields and believes your integer is a pointer… It cannot know better.

Swift doesn't have that issue, but reference counting has its own risks:
memory leaks through retain cycles when weak references are mishandled, and a
performance overhead.
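
A minimal sketch of that failure mode (the class is hypothetical):

        class Node {
            var next: Node?       // strong reference
            weak var prev: Node?  // must be weak; if it were strong, two
                                  // linked nodes would retain each other
                                  // and never be deallocated
        }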

~~~
solidsnack9000
Scanning the heap looking for values that "look like pointers" in some
indefinite way is a fairly unusual strategy. There are many GC'd languages --
Haskell and O'Caml come to mind -- without this limitation. Haskell used
"pointer tagging" where every value had two bits reserved at the top end and
thus Ints were limited to 30 bits. The two bits were used to tell the GC what
it was looking at. Later developments side-stepped this limitation but I'm not
sure how.

Does ARC have an overhead relative to GC? Even if it does, it at least
prevents
latency spikes.

~~~
Someone
_" Does ARC have an overhead relative to GC? If if it does, it at least
prevents latency spikes"_

It doesn't prevent latency spikes. As an extreme example, create a linked list
or tree with a billion nodes, and let its root go out of scope. That will call
_free_ a billion times.
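
A sketch of that worst case (names hypothetical):

        final class ListNode {
            var next: ListNode?
        }

        var head: ListNode? = ListNode()
        // ...imagine a billion nodes appended here...
        head = nil  // releasing the head frees every node, right now, on
                    // this thread: the ARC equivalent of a GC pause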

~~~
solidsnack9000
Do that every request, and it will be the same each time.

With GC, you can see a latency spike when every request creates such a list
and they aren't freed until, due to memory pressure, many are freed all at
once. Some of the requests will get stuck waiting for a linked list node and
have very high latency relative to the other ones.

The strength of ARC for interactive applications is its consistency.

------
heisenbit
Considering how likely it is that developers will run into this particular
set of compiler bugs and scratch their heads figuring out what is wrong, it
should be fixed soon. Swift cannot afford to be called unpredictable from a
developer's perspective.

~~~
xiaoma
Apple has demonstrated again and again that developers will put up with just
about anything. As long as Apple has control of the platform with the most
valuable customers, it will stay that way.

~~~
Sean1708
Except the last time something like this happened[1] it was fixed within a
day[2].

[1]: [https://spin.atomicobject.com/2016/04/26/swift-long-compile-time/](https://spin.atomicobject.com/2016/04/26/swift-long-compile-time/)

[2]: [https://github.com/apple/swift/commit/2cdd7d64e1e2add7bcfd5452d36e7f5fc6c86a03](https://github.com/apple/swift/commit/2cdd7d64e1e2add7bcfd5452d36e7f5fc6c86a03)

------
Fiahil
At first I somehow misread the title, swapping "Swift type" for "Swype". It
made me wonder how an Android keyboard would be affected by an
exponential-time algorithm and why it would be a big deal.

Everything makes much more sense now.

