Why Type Systems Matter (matthias-endler.de)
79 points by somecoder on Aug 18, 2017 | 100 comments



Some languages mix static and dynamic typing. For example MACLISP used this to great effect to make MACSYMA super fast. All the numeric code was statically typed and the compiler built code that was as fast as hand crafted assembly. Yet you could write conventional Lisp code that called this static code just like any other code. MACLISP derivatives like lisp machine lisp also implemented this stuff and used it for system code. I used it heavily.

All that survived into Common Lisp but I am not up on the current state of lisp implementations and have no idea if people bother to take advantage of it any more.


Common Lisp does allow one to declare types in order to have the compiler optimize the code. SBCL[0] in particular is known to be pretty good at that. Macsyma also lives on in Common Lisp as Maxima[1].

[0]: http://sbcl.org/

[1]: http://maxima.sourceforge.net/


Though it does not really 'live' any more, the Symbolics Macsyma was also ported to Common Lisp in the 80s.


C++ is in this boat now.

Most of the time I'd recommend the new type-safe union std::variant, but std::any is there.

It is kinda a static type, but as a container type for anything in the type system, it may as well be a dynamic type.

---

> All that survived into Common Lisp but I am not up on the current state of lisp implementations and have no idea if people bother to take advantage of it any more.

Typed Racket [0] uses the early guard theories from CL [1], and it does have an optimiser [2], though it has some quirks.

And though I can't find it now, there was a fairly recent research paper on using a macro system to do static type checking at compile time and optimise for Scheme. But I should point out that Scheme doesn't need it for optimisation - compiling Scheme to C is easy, and it's easy to make the result fast. It's more about safety.

And there's always Shen, which uses sequent calculus [3] for its optional static type system.

[0] https://docs.racket-lang.org/ts-guide/

[1] http://www.cs.utexas.edu/users/boyer/ftp/diss/akers.pdf

[2] https://docs.racket-lang.org/ts-guide/optimization.html

[3] http://shenlanguage.org/learn-shen/index.html#10%20Sequent%2...


> Most of the time I'd recommend the new type-safe union std::variant

Neither g++ 7 nor Clang/LLVM 4.0.1 supports std::variant yet :-(. We started a new (blank buffer) codebase earlier this year and decided to use C++17 as the implementation language, which has exposed us to the gaps...which are surprisingly few! But sadly this is one of them.


> Neither g++ 7

Eh? I've been using it a hell of a lot!

std::variant is listed on GCC 7's page [0], and I know that it is there in at least 7.1

I don't use clang much, but they list it as supported [1], and I believe the patch [2] landed in 5.0

[0] https://gcc.gnu.org/gcc-7/changes.html

[1] https://libcxx.llvm.org/cxx1z_status.html

[2] https://reviews.llvm.org/rL288547


Hmm, have to check include paths again!


So I glanced over this and found that std::variant is in libc++, as you would expect, but in some cases, even when you get a nice modern compiler, you might not end up with it linked against a nice modern stdlib.

So maybe some sort of mismatch got in your way.

Anyways, here's hoping you get to use C++17 in its full glory!


> All that survived into Common Lisp but I am not up on the current state of lisp implementations and have no idea if people bother to take advantage of it any more.

There are implementations of Common Lisp, most notably CMU CL and SBCL, that take advantage of the (optional) type declarations of Common Lisp to increase efficiency and provide type checking.


A line from a novel by Georgette Heyer: "I look for trouble. I don't wait to have it brought to my attention."

That is, if there's a type problem, I want to know at compile time, not at run time.

But I'm in embedded systems. My stuff looks like just a machine to the customer. They don't want a type error at runtime messing them up. That may not be your world. If you'd rather things go happily along until the circumstances actually occur in execution, and if that ever happens, you then find out about the type problem, well, that's what is reasonable to you in your environment and circumstances, and that's fine. Use dynamic typing, and don't feel guilty.


Good post, but note that for all those things you don't need a statically typed language. Erlang for example is a dynamically typed language, but it has Dialyzer - a type checker. The more precisely you define the types, the more discrepancies and issues it will find in the code, exactly the kind of stuff the author mentions.

http://learnyousomeerlang.com/dialyzer

That's technically called "success typing" http://www.it.uu.se/research/group/hipe/papers/succ_types.pd...

Python has that too with MyPy. That came much later than Erlang and I haven't used it yet, so I'm not sure how well it works.

http://mypy-lang.org/

They seem to have copied success typing but don't actually mention it anywhere.


> Good post, but note all those things you don't need a statically typed language.

> Python has that too with MyPy.

I don't know much about Erlang's typing, but this claim is definitely untrue. I run a code base with five engineers and I'm pretty disciplined about asking for type annotations wherever appropriate, and I still miss static typing every damn day. The grafted-on approach is definitely beneficial, and I frankly don't understand how people ran Python engineering projects of any significant size without it, but it has just enough holes in it that the type inference chain breaks often (ie just drops to type Any). It also gives false positive errors just often enough that it lowers your sensitivity to real errors.
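
To illustrate the kind of hole I mean, here is a minimal sketch (the function names are made up):

    from typing import List

    def parse_sizes(raw: str) -> List[int]:
        return [int(s) for s in raw.split(",")]

    # Helper from a hypothetical unannotated third-party module: mypy treats
    # its parameters and return value as Any.
    def legacy_lookup(key):
        return {"sizes": "1,2,3"}[key]

    sizes = legacy_lookup("sizes")   # inferred as Any -- the inference chain breaks here
    totals = parse_sizes(sizes)      # accepted, even though nothing guarantees sizes is a str
    totals.append("four")            # this one mypy does catch: expected "int", got "str"

Once a value has passed through one unannotated function, everything downstream of it is unchecked until it reaches an explicit annotation again.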

Don't get me wrong, I'm happy it exists. But Python with types is a poor imitation of a proper engineering programming language.


I've worked in a modest Erlang codebase (~30k LOC) with a couple of other people, and we used dialyzer pervasively with pretty good success. I would actually say that in some ways it was an improvement over Java's approach to types:

* there is no implicit 'null'. I.e., if your type signature doesn't explicitly include the value "none", then it's not allowed

* you have union and intersection types

* it's super easy to create new records (and types from those records)

* even if your type signature is structural, you can optionally give more meaningful names to arguments and return values in the signature itself

* success typing is still good enough to catch many "uninteresting" bugs (which I consider to be a significant advantage of static typing. When I fix bugs, I want it to be bugs with my understanding of the requirements, not bugs with my understanding about the values returned/expected by a function)

Pattern matching also helps clarify the expected values of a function.


> and I frankly don't understand how people ran Python engineering projects of any significant size without it

Pretty much the first thing the Zope folks did was port Java interfaces to Python.

Considering that Zope projects were both relatively early and big in Python's lifetime, that might answer your question. (Plus, projects not using anything like it usually either have very large test suites, or are broken more frequently than working.)


Erlang with dialyzer is a great "mix": it allows you to move as quickly as you would with a dynamic language and, as long as you define the type specs, have reassuring type safety. Of course it's nowhere near Haskell's or Elm's type safety because, as you said, dialyzer follows the success typing model, sort of an optimistic one ("you are correct until I prove you otherwise"), while Haskell and Elm follow the opposite way ("you are wrong until you prove me otherwise").

It's a tradeoff, as everything else in computer science, and one that has worked for us. Can't recommend Erlang and dialyzer enough :)


The funny thing about "moving quickly in dynamic languages" is that it doesn't last. Refactoring code in a dynamic language gets difficult fast and is an entirely hopeless endeavour without a huge test suite (you obviously need tests on every level, because you can't refactor an interface and its test at the same time and still think it's working as intended).

Typed code may be slower and more cumbersome (to some) to write in the first place, but is usually much easier to maintain in my experience.


I find that anything beyond a very small program is faster to do with some sort of manifest typing (with static analysis/type checking).

Now, I think that type inference for local variables can be nice (especially if you have good IDE that allows you to see the inferred type).

I once wrote a 600-700 line application in PowerShell. I found myself adding a few type annotations, and I expect that a program any bigger than that (especially if more than one person started working on it) would benefit from a policy of always adding type annotations.


Success typing really seems both ideal for me and the only sane way to do optional, progressive typing. I wonder if Ruby could do it? I think there are too many ways for an expression to be invoked from friggin anywhere in Ruby to make that possible.


There is never a person more zealous (and vocal in their zealotry) than a new convert. You don't see the bad points, just the good.

That's how this post comes across: "Types are so awesome, let me show you how awesome types are!"

Come back in two years, show us how the types feel after the honeymoon is over.


It's been 7 years since I first learned to love type systems, and the honeymoon hasn't worn off.

They definitely have some tradeoffs. Depending on the type system sometimes something I think should be expressible in a particular way isn't and I have to do it differently. There is some extra cognitive load forced on me when reading the code.

But the upside is huge. Refactoring is easier and safer. The production launch of the application is less scary. I'm even faster at end to end development with a type system backing me up.


I think recognizing that there are tradeoffs is the best way to get me to consider your stance. It says you've used it long enough to be honest about its flaws. Anything else just feels disingenuous to me.

I totally agree with you, though. The ease of refactoring alone is a big enough win to get me on board.


> Come back in two years, show us how the types feel after the honeymoon is over.

I have been programming for more than 20 years, with a lot of different languages.

Until six years ago, most of my languages were dynamically typed (PHP, Bash, Perl 5, Python, Ruby, ...). I used C sometimes, but it was not the common case.

Then, 6 years ago, a friend showed me Scala. I was lucky because I could use it in my daily job. Two years ago I started to use Rust for some stuff.

That said, after years using languages with good type systems, I can say dealing with a dynamic type system in a big project is a huge PITA. Almost every time I have to deal with legacy code (mostly Ruby) I really miss both Scala and Rust.

The honeymoon is over for me, and now I'm in a very stable and wonderful relationship :P


While I agree that type systems often get in the way, and that the author is only pointing out bad points and not good ones, aren't you doing the same? At least the author is trying to back up their claims (e.g. they provide an example where someone decided to put sizes in strings).

The honeymoon part is also an unfair assumption, not to mention incorrect. I just clicked on "About me", clicked on the author's github link, and after some scrolling and clicks, I found a Rust commit from August 2015 that the author made - thus, the author has been at this for 2 years, and is not some newbie.

You should provide a list of cons involved with type systems that aren't so glaringly obvious that the author has probably already ran into them, and that will serve as a much stronger argument (e.g. you could bring up the fact that in large codebases with lots of generics, type systems drastically increase time spent typing, as well as cognitive overhead when there are <T>s and <T, E>s and so on all over the place).


Because my comment was not about type systems, it was about the fervor and lack of usefulness of the post itself. Plus, I'd just be repeating myself and others much more eloquent than I - why do that?

Plus, arguments about typed languages rank up there with vi vs Emacs (or is it now vi/emacs vs. Atom/VSC?) in terms of my interest in joining them. I like statically typed languages. I like dynamically typed languages. I dislike weakly typed languages. What more to add?


You may be entirely correct, but the way you have presented your position looks extremely unconvincing.

What I can see: You chime in on a discussion with a strongly critical viewpoint, then refuse to back it up saying you don't want to be part of certain kinds of arguments. Implying religious level flamewars, while the rest of us are keeping a level head and discussing these things based on technical merit. It looks like you are making excuses because it looks like you have no point. Even if that wasn't what you intended and you have strong points, that isn't what it looks like from the outside.

Please remember that not everyone has exactly your experience, so if you have examples you should share them, because they will make your points stronger even if someone attempts to rebut them. It also artificially weakens your point to back out with what, from the outside, look like excuses instead of substance. Again, I must say you might be entirely correct, I just can't see that from here.


I thought this was quite clear, and it's what I've always been missing in languages like Python or JS. Yet I meet many devs, especially coming from such languages, making fun of Java for its verbosity and clumsiness.

On the other hand, it leads to some people abusing the static type system: they just randomly change types until it somehow compiles, without thinking about what they're doing.


Java is notorious as it has very clumsy syntax. A simple declaration of an ArrayList of maps looks like this:

ArrayList<Map<String,Object>> myArrayList = new ArrayList<Map<String,Object>>();

Could have been just:

ArrayList<Map<String,Object>> myArrayList = new();

or even better:

ArrayList<Map> myArrayList = new(); <- if you don't care what the type of the map is.


As others have mentioned, there's the diamond operator which has been around since Java 7.

> if you don't care what the type of map is

You can use raw types; it works perfectly fine (it just generates a compiler warning). For explicitly being unspecific, you can also use a type wildcard (`?`).


That is almost literally what Java does.

ArrayList<Map> myArrayList = new ArrayList<>(); works already today.


Note that in Java 8, you could just use:

ArrayList<Map<String,Object>> myArrayList = new ArrayList<>();


The diamond operator has been available since Java 7. Tripped me up too.


> making fun of Java for its verbosity and clumsiness

Java has a particularly verbose and clumsy type system.


What do you mean it has a verbose type system?


Person myPerson = new Person("My", "Name");

You need to specify the class twice on the same line, because Java can't figure it out on its own.


The left-hand side only reserves the memory; the right-hand side actually initializes the object. A bit redundant, I agree, but to the compiler it is two atomic operations.


It is two separate operations, but in many other languages the compiler is smart enough to figure out that they'll be the same type (and allow you to specify it when they're not)

That's what I mean—Java's type system is verbose and clumsy, because it doesn't bother trying to figure out things it could figure out, forcing the programmer to be redundant and repeat themselves all over the place. The fact that it's 2 separate operations to the compiler shouldn't dictate anything about the language.


I agree it could be cleaner, but the way it is designed allows for immediately understandable semantics when dealing with inherited types (which everyone thought was the future during Java's formative years).

This makes operations like this intuitive:

    Animal results = new Dog("Lassie");


It does, but they should have found a solution which allowed that, as well as not be redundant in the more common case where the two classes are the same. Hence Java's type system being verbose and clumsy.


Nice static type systems don't make you write out nearly as much crap as Java's does. It's pretty common for them to be able to completely infer the type of every expression in the program. If you're worried about the mere coherency of the concept of a "verbose type system", well, it was just a clever turn of phrase.


> They just randomly change types until it somehow compiles, without thinking about what they're doing.

I did that, too, when I was inexperienced with C++. Especially with const, but also with * and &. But somehow it "clicked" at one point and I don't have that problem anymore.

In recent times, I've found static type systems to helpfully nudge me in the direction of correct code. I was pleasantly surprised by TypeScript. Also, in modern C++, if you use unique_ptr, shared_ptr, try to get rid of naked pointers, and use value types and RAII if possible, the code ends up a lot cleaner. I've found a couple of places where ownership was unclear, and I previously had circular references or dangling pointers as a result.

And anecdotally, the way people program in Haskell is basically: write what you mean, and then fix it until it compiles; it will likely also be correct.


If you haven't thought much about type systems but want to understand what the big deal is, I wrote a post specifically for you [1]. It draws motivation for wanting a good static type system from first principles.

[1] http://gilbert.ghost.io/type-systems-for-beginners-an-introd...

(I posted this on a similar story and it was received very well, so I thought I might post it again for those who haven't seen it.)


Strong typing is great when I've already made sense of how the logic should flow and I'm ready to solidify my work into something stable and extensible over time. It's not so great when I'm trying something new and am still trying to work out if what I want to do is even possible, since I find myself trying to identify the right type to use in a given situation rather than proving that I'm even able to do it. A lot of my daily work is prototyping.


There's no reason why static type errors can't be deferred until runtime; GHC Haskell can do this. Personally I find types invaluable when prototyping. I often fill in the implementations after I work out the types.


I wish this was adopted more broadly, e.g. by Rust and other newer languages.


What would the semantics of that be? Something like "this function contains a type error, so if you call it your code will just panic?"


Look at Idris to see how this kind of thing works.

https://jeremywsherman.com/blog/2015/10/10/idris-metaprogram...


If I understand correctly, Ada packages provide the necessary architecture for this kind of stuff. A package is like a Python "module", but it is typically split into two files: a specification, containing just the declarations of all the types and functions exported by the package, and a body, which actually implements the functions working on those types. When writing a program which uses that package, if you only compile it without linking, the compiler will complain about wrong types even if no function has been implemented yet.

A poor man's version of the above scheme can be achieved in C/C++ as well. Just declare your functions in a header file and include it in your program, then compile without linking.


This is not the same thing as deferred type errors though. The programs are still type correct, it's just that some of the implementations are missing / producing errors.


At its simplest, I could see it being a compiler flag. No annotations needed. Default would be strict compile-time typing. But with a flag, you could turn type errors into warnings (and runtime panics), or even into not-even-warnings, just silence (and runtime panics).


Haskell can defer them by producing thunks (values) containing runtime exceptions. But unlike a dynamic-only language, you still get to know about the errors at compile time, by way of a set of warnings.


If I'm understanding what you want correctly, Rust kinda will get that once the `!` type is stable/used in type inference correctly.


> I often fill in the implementations after I work out the types.

OK, but that's the opposite of deferring type errors until runtime.

Using "undefined"/bottom as the implementation of a function is how we wrote Haskell before GHC added support for deferring type errors.


Oops. I still use undefined everywhere, while my code isn't complete.

Also, typed holes. Typed holes are awesome for building implementations based on types.


Yes, I don't use deferred type errors much. But the parent poster wanted the option to avoid worrying about whether programs are type-correct.


Can you give an example? I don't find this personally. I find static strong typing speeds me up because I don't have to restart the app, click some buttons etc. to figure out some code was wrong. This is true even for prototypes. Most of the time any type declarations you need are straightforward and once you know your code better you can start introducing more exotic types to capture more errors later.


A really basic example would be the "as GameObject" part of this statement in C#: `GameObject gameObject = GameObject.Instantiate(MyGameObject, Vector3.zero, Quaternion.identity) as GameObject;`.

It's annoying to have to type "as GameObject" when very clearly I am creating a GameObject type (the first thing in the entire statement). Programming languages should be smart enough to figure out that's what I'm trying to do rather than require me to type that out.


Like GP, I often wonder what untyped languages offer over typed languages for prototyping, and this "soooo much annotation!" thing just isn't a satisfying answer anymore. "Typed" does not mean "as verbose as C#/Java."


Have an upvote for being perfectly reasonable and attempting to provide examples and reason, even if I disagree with them.

Rather than deflect away from C#, I will take the unpopular stance of defending verbosity.

Does typing that really represent a large waste of time? Do you spend more time physically typing than doing anything else while coding?

I spend most of my time thinking or reading old code. I might type that once, read it 100 times, pass the debugger over it 6 or 7 times, and adjust the line a similar number of times over the lifespan of that code.

For reading it is entirely clear and leaves little ambiguity about what kinds of operations are allowed on GameObject (presuming I know something about the GameObject class and the Entity Component System in place). I know its location and its rotation and I know that those are unlikely to be the source of bugs.

I might need to reference `MyGameObject` to get the specifics of the behavior, but if I am troubleshooting location or rotation errors, I know what code I am not looking at. If I am troubleshooting any other behavior I again know about whole regions of code I don't need to look at.

It can be hard to internalize all that a type system buys for you because none of it is immediate; it is more about all the time you don't spend doing unproductive things. I find that in more dynamically typed languages I spend two to three times as much time debugging as in static languages.


First, this annoyance does not come from static typing, it comes from C#'s type system. There are statically typed languages where you just construct the value and assign it to a variable, and the compiler infers the variable's type for you (e.g. OCaml).

Then, typing the class name is a trivial matter. My editor offers me a generic text completion command, so I don't type the full "GameObject", I just type "Ga" and hit ^P. Maybe you should try using a good text editor?


> I find static strong typing speeds me up because I don't have to restart the app, click some buttons etc.

I'd blame this more on our current programming environments than anything. Smalltalk is exceptionally productive and doesn't require restarting to test things.


With a good type system, instead of writing algorithms first you would write data structures that convey the ideas behind the software's intent clearly. Then you start writing the algorithms that manipulate these data structures.

If what you're trying to do is viable, then you'll be able to express it in the domain model.
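
A small sketch of what that looks like in practice (Python with type hints; the domain here is invented):

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass(frozen=True)
    class LineItem:
        sku: str
        quantity: int
        unit_price_cents: int               # money as integer cents, not floats

    @dataclass(frozen=True)
    class Order:
        order_id: str
        items: List[LineItem]
        coupon_code: Optional[str] = None   # explicitly optional, not an implicit null

    def order_total_cents(order: Order) -> int:
        # Once the data is pinned down, the algorithm is almost mechanical.
        return sum(item.quantity * item.unit_price_cents for item in order.items)

Getting Order and LineItem right is most of the work; order_total_cents then more or less writes itself.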


Yes! I've found it quite effective to start new problems with some (mostly composition-based) types as a means of jotting down the core inputs and outputs, then work out towards the edges in both directions with deserializers/readers/parsers in one direction, and repositories and work-doing algorithms in the other direction. A little dependency injection to wire the pieces together in complex systems and you're in business.


I usually start design with a domain model, whatever the type system I'll be working with.

(Just pointing out you can go this route regardless of your type system.)


> It's not so great when I'm trying something new and am still trying to work out if what I want to do is even possible, since I find myself trying to identify the right type to use in a given situation rather than proving that I'm even able to do it.

Finding the right type is solving most of the problem. If you can find the right type, you can solve the problem. If you're having trouble finding the right type, then you don't yet understand the problem well enough, and modelling what you do know with types quickly points out what parts of the problem you haven't fully understood, without having to run broken, half-specified programs that cover only part of the problem space.

What makes you think being able to run such half-specified programs for part of the problem will actually help with answering this question?


> What makes you think being able to run such half-specified programs for part of the problem will actually help with answering this question?

Perhaps rocky1138 thinks this because they have experience answering this question by doing so?

I've found it can be hard to find the right type before I've written partial solutions to a problem. Just to be clear, I'm talking about recognizing the monads or functors or what have you. Even the general mathematician doesn't start by drawing the right diagram. (Though some do.) For me, "the right type" comes from abstracting partial solutions.

I think part of it is preference for certain thinking styles, like how some people like puzzles based on group theory and some on topology. If finding the right types ahead of time works for you, great! But don't be surprised that people manage to solve problems by other means.

(It could also be that the type systems in programming languages don't capture the intuitive types[1], and perhaps rocky1138 keeps them in his head while prototyping. Just because the types aren't written down doesn't mean they aren't there --- i.e., there is no need to obsess over making things real. But, it is nice to make them real so you don't have to keep them in your head anymore, or to communicate them to others more effectively.)

[1] I sort of mean intuitive in the sense of intuitionism.


> Perhaps rocky1138 thinks this because they have experience answering this question by doing so?

They have experience with their problem solving style, they don't have experience with every problem solving style, or even necessarily with my problem solving style. You can't infer much from that, and certainly not that typing is "not great when prototyping".

> Just to be clear, I'm talking about recognizing the monads or functors or what have you. Even the general mathematician doesn't start by drawing the right diagram.

Absolutely. But you define the type that seems to match part of the problem, then realize it doesn't fit another aspect, and you refactor it until it covers all of the properties you need. Sometimes this involves writing some code to ensure you've captured the right properties for the problem when there's an algorithmic part, certainly, but to think you've captured anything meaningful or coherent without type checking is bizarre.

Like you later say, "abstracting partial solutions" is probably a pretty common approach, but those partial solutions only come together in a coherent whole if they're typed. Otherwise they likely won't mesh well, and you're left with a mishmash of partial solutions, not a solution.


> They have experience with their problem solving style, they don't have experience with every problem solving style, or even necessarily with my problem solving style. You can't infer much from that, and certainly not that typing is "not great when prototyping".

I tend to grant the first person who states their opinion in absolutes the nicety of automatic insertion of "it is my opinion/experience that," because this is what they usually mean. Countering absolutes with absolutes is just confusing to me, so sorry for any misunderstanding.

> but to think you've captured anything meaningful or coherent without type checking is bizarre

I want to say "speak for yourself" here---this is rather dogmatic. Sometimes I use a formal type system, sometimes I don't, and yet in both cases I somehow manage to produce working programs. Sure, a computer-checkable type system is nice to have and gives me peace of mind when I use one, but I believe formal type systems are a posteriori descriptions of particular safe ways of manipulating data out of all the possible ways of manipulating data. Is it inconceivable that a programmer can check their types manually, using an ad hoc intuitive type system?

> but those partial solutions only come together in a coherent whole if they're typed. Otherwise they likely won't mesh well, and you're left with a mishmash of partial solutions, not a solution.

Check your types: "If they are not typed, they won't come together into a coherent whole because they likely won't mesh well."

Truthfully, it sounds to me like your argument is the Fred Brooks quote about how the data structures imply the code. This is not exactly the same as having machine-checkable type systems, though such systems do force the issue.


> Is it inconceivable that a programmer can check their types manually, using an ad hoc intuitive type system?

Sure, if you don't mind an ad-hoc, informally specified, bug-ridden, slow implementation of half of a type checker that no one but you knows (and so is "intuitive" only to you... until you read this same code 3 months later).


What are you addressing? My position is that automatic type checkers are nice to have but not essential, and it is possible to prototype software without one. I am not arguing against their use, just what seemed to be your assertion that it is impossible to prototype/develop software without an automatic type checker. Anyway, I understand the pitfalls of not using one, but I also understand how type systems tend not to capture everything. There is no reason to be dogmatic about it: use whatever works as a tool for thought.

As I said before, "intuitive" is in the sense of intuitionism; not "easy to understand" or "obvious." A point of view: formal type systems come from explaining what we see with mental perception (intuition). What do you do if your automatic type checker doesn't check every possible thing you want to verify is correct? Surely you aren't just blind to the types they "should" be. Seeing the invariants which must hold is seeing the intuitive type.

> if you don't mind an ad-hoc, informally specified, bug-ridden, slow implementation of half of a type checker that no one but you knows

I can't tell if I was understood or if you just wanted to quote Greenspun's tenth rule whether it completely worked or not: I was talking about a programmer with a type theory, not implementing an automatic type checker from scratch.


> My position is that automatic type checkers are nice to have but not essential, and it is possible to prototype software without one.

Sure, it's possible to prototype in the untyped lambda calculus too, or in Brainfuck. Why would you want to though?

> I am not arguing against their use, just what seemed to be your assertion that it is impossible to prototype/develop software without an automatic type checker.

I never said it was impossible, I merely implied that it was bizarre to even want to do so (among other things). Programming is hard enough as it is, why waste your time and mental energy checking properties that can be checked for you?

> I was talking about a programmer with a type theory, not implementing an automatic type checker from scratch.

A type theory in their head, in the intuitionistic sense, that they check as they're programming, not one that's actually checked by a tool is what I assumed.

And certainly you're working within limitations dictated by your language, but you can typically take it further than most think. See, for instance, the paper "Lightweight Static Capabilities".


> If you can find the right type, you can solve the problem.

Do you actually believe this?


What's the most advanced type system you've used?

It's quite difficult to explain how much work a very strong type system can do for you if you're used to something like C as your definition of "static typing". I mean this comment completely straight and polite, and I'm trying to help people answer your very reasonable question by asking you for some details that will help calibrate the answer.


Technically that would be something like Haskell's, but I only have surface-level fluency. More to the point would be the most complex stuff I've had reason to do with a type system, which caps at around this generic state machine:

https://play.rust-lang.org/?gist=3fdb20c34d589a2576c6bb137b9...

(TL;DR: A type that encodes a state machine (and is generic over the implementation) that allows you to guarantee reaching a terminating state.)

In practice I'd rather my types look much plainer, though:

https://github.com/llogiq/bytecount/blob/master/src/lib.rs


Absolutely. It's little different than asking whether you can solve a problem once you have the appropriate data structure. Solving the problem when the right data structure is available becomes almost trivial -- most of the difficulty of programming is recognizing what data structures are needed.

Very few of the problems we encounter in every day programming don't fall when the right data structure is available.


It's not the same at all; data structures frequently actually are half the problem, though that's not to say that insight buys you anything.

Let's say I want to stream logging data at a massive rate (~peak core-core bandwidth) to another core and perform complex queries and visualisations on it in sub-frame times. If I have the data structure for that, the rest is bookwork. If I have the type... what have I gained?

Let's say I want to find the best way to represent my data in a way that's amenable to GPU operations. If I find a data structure for that, I'm left with the other half of the job in mapping the transformations I need to GPU calls. If I write some types for it, it's not like GPUs start being any less buggy.

Let's say I'm writing an extension for a Java program with an API not designed for my use-case, and I want to work out how to access stuff that's not directly intended to be exposed. A data structure design isn't much good, since it's an exploratory problem, but neither is assigning types to random things. I still need to figure out how to abuse the API.

Let's say I'm trying to solve the Halting Problem. If I have a data structure for that, I've pretty much proven FALSE, and literally everything is provable. Conversely, I can already give you the type and I'm no closer to solving it.


> Let's say I want to stream logging data at a massive rate (~peak core-core bandwidth) to another core and perform complex queries and visualisations on it in sub-frame times. If I have the data structure for that, the rest is bookwork. If I have the type... what have I gained?

It depends on how you're using "type" vs how I'm using it.

If you mean the "publicly visible type", ie. the left hand side of "newtype Foo a = ...", my claim is weaker. Even so, the public interface for your type specifies the operations and implies their approximate time and space complexity (1), which points rather suggestively at the types of internal data structures that will be needed to satisfy these properties (2), which as you've said, is at least half the problem.

If by "type", the actual data type definitions needed to satisfy this interface, ie. the right hand side of "newtype Foo a = ...", which is what I originally meant, then you're already at (2) above, from which the conclusion follows almost trivially.

Hopefully that clarifies your follow up points.


> Strong typing

I think you mean 'static' typing, not strong? Python is already strongly typed.

> It's not so great when I'm trying something new and am still trying to work out if what I want to do is even possible

I completely agree here. I find it very useful when iterating to run the program and verify just the code path that gets executed, without worrying about whether the rest of the program is also correctly typed. Once I have settled on a set of types though, it would be nice if Python told me all the places that now need to be fixed up.


The conflation between 'strong' and 'static', and 'weak' and 'dynamic', is probably terminal at this point, but maybe I can do something to, at least, explain the position of the people who don't conflate those terms:

Strong typing is about creating, expressing, and enforcing a contract which determines which operations are valid on which values. Not variables, values. Having the semantics of the value in the compiler or the runtime ensures that errors are handled predictably, with explicit detection and possible reporting.

Weak typing is a lack of those semantics. In the most extreme case, you have languages such as B, where the only type is the machine word, which isn't a type at all because it doesn't imply anything about semantics: You can do anything to a machine word, so nothing can possibly be invalid, so there's nothing to enforce or detect or report. Similar "size specifications", such as int, or long, or float, are only loosely describable as types for the same reason: They specify how many bits a value has, not what's valid to do to it.

So a language such as Python is strongly typed because it can detect violations of the contract inherent in the types it knows about at runtime. C is less strongly typed, because, first, it focuses on its "size specification" types, and, second, you can subvert even that type system totally with nary a peep. Languages such as Ada, which inherited the "size specification" types from Algol, are only strongly typed to the extent you can augment their type system with types which are actually semantic, as opposed to size-based.
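
A tiny illustration of that contract enforcement, sketched in Python:

    # The operation is checked against the values themselves at runtime and
    # rejected with an explicit, reportable error.
    try:
        "3" + 4
    except TypeError as err:
        print(err)   # mixing str and int is not a valid operation on these values

    # In C, the analogous expression "3" + 4 compiles without complaint: it is
    # just pointer arithmetic on the address of the string literal, because the
    # size-based type says nothing about which operations are semantically valid.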


> The conflation between 'strong' and 'static', and 'weak' and 'dynamic', is probably terminal at this point,

Type system of C is weak and static.

Type system of JavaScript is weak and dynamic.

Type system of Lisp is strong and dynamic.

Type system of OCaml is strong and static.

Type system of Python is dynamic. It may be strong or weak, depending on your opinion about duck typing (does it weaken the type system or not?).


The "weak" terminology has a flaw because it sometimes means "leaves type errors undetected, with undefined behavior" and sometimes it refers to a situation whereby a character string like "3.14" can be treated as a number and such.

Awk and Perl are not weakly typed in the same sense that C is weakly typed.


There is also the related aspect of conversions. Ada is said to be stronger than C even if we ignore the "holes in the type system" of C because it doesn't allow implicit conversions (conversions without an explicit conversion operator or casting syntax). Though C won't treat a "3.14" character string as a number, it implicitly converts among numbers of different kinds, and converts between void * and object pointers. Narrowing conversions silently discard data ("implementation-defined result"), and so do conversions between same size but different signedness. Out-of range conversions between integer and floating-point values can trigger undefined behavior. (These issues are still there when the conversions are explicit, but when the conversion are implicit, the issues can sneak into the code more easily by accident).

Though Lisp was listed in the grandparent posting as "strongly typed", it also has something similar to C's implicit conversions:

  [1]> (sin 42)
  -0.91652155
  [2]> (sin 42.0)
  -0.91652155
  [3]> (+ 1 2.0)
  3.0

Unlike C, it won't go from floating-point to integer:

  [4]> (evenp 3.0)

  *** - EVENP: 3.0 is not an integer

Whereas if we had a

  bool evenp(int arg);

function in C and called it as

  evenp(3.0);

or even

  evenp(3.1);

it would work.


Honest question:

Is Python-with-type-hinting-in-function-definitions (and maybe type assertions) as "strong" as common-style Python?

It's not static typing -- it's not Haskell -- but it's already a step further.

Edit: I find myself doing a lot of "assert isinstance(x,foo)".


It seems like it's about as strong, in terms of what types it knows about, but it allows more error detection at "compile time" or, at least, the time at which you run the static type-checking code on the codebase. As Guido said, this is like a super-linter.

https://www.python.org/dev/peps/pep-0484/

(I remember C linters trying to expand the C type system by doing things like complaining if you used non-boolean expressions in a boolean context, well before C had an actual bool type.)
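
To make the difference concrete, a small sketch (hypothetical names, checked with mypy):

    from typing import List

    # Runtime style: the check only fires if this code path actually executes.
    def mean_checked_at_runtime(xs) -> float:
        assert isinstance(xs, list)
        return sum(xs) / len(xs)

    # Hinted style: the "super-linter" can flag a bad call site without running anything.
    def mean_hinted(xs: List[float]) -> float:
        return sum(xs) / len(xs)

    def caller() -> None:
        mean_hinted("not a list")   # mypy reports an incompatible argument type here

Both versions fail at runtime if handed the wrong thing; the hints additionally let the mistake be found at lint time rather than in a production traceback.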


Haskell types don't exist at runtime. If you use a static analyzer like mypy, Python types are “static” in the same sense as Haskell (statically verified as correct in advance), though mypy’s type system is less robust than Haskell’s.


I'm not knowledgeable of this stuff, but I'll keep taxing your patience until you tell me to stop.

Haskell types don't exist at runtime... because Haskell code (like with any other compiled language) gets translated to a lower-level machine language that works with untyped memory addresses?

Or are you saying that (for example) Fortran types are closer to the metal somehow?


Haskell types don't exist at runtime, because you can't check a type at runtime.

Any type-checking has to happen at compile time, because that information doesn't make it through.

So, something like Python's isinstance or type functions can't exist.


Sorry if this is a dumb question. If Haskell types don't exist at runtime, how do type class functions work? I'd think the type needs to get inspected to choose the implementation to call?


Personally, I never felt that way. Either types come out mechanically and just mean some more typing slowing me down because they don't say anything interesting, or creating the types is exactly "proving that I'm even able to do it".

I tried, but I can't recall any single time when thinking about types distracted me from thinking about the problem.

Moreover, writing down the types by creating stub functions is a great method of designing.


When I'm still prototyping I usually pick the part of the project I am the most uncertain about, then write a big function with lots of anonymous types. Once I feel more familiar with it I refactor into types.

But types can also be really useful when you are in the process of learning your domain. If I'm using JavaScript and I realize I could better represent my code if I did some refactorings, I usually don't touch it because I fear breaking something. On the other hand, if I'm using a statically typed language I feel free to refactor to my heart's content with the knowledge that the chance I break something is far smaller.


My dream is a static-first language (like immutable-first) with a way to go dynamic.

The thing that is not available (as far as I know) is to build a dynamic object and "close" it against further modification so the compiler can optimize well. Another option, which requires good metaprogramming, is to build a dynamic object AT COMPILE TIME (with a macro?) and close it, so the type system works after that (F# type providers are almost this).

A use case is reflecting a database/data store like JSON or a relational table. I wish, as in Python, to do:

    class Customer = @build(table("Customer"))
and after this line :

    c = Customer()
    print c.name
for 'c' to be a static type. If this happens at compile time, the type is checked, and if the field changes, say to c.fullname, it is marked as a type error.

If at runtime, as in an interpreter, the idea is to "close" Customer and be certain that it will never mutate.

AKA: immutable types/classes, but with the possibility to mutate them in some discrete places.

Make sense?


You may be interested in Crystal.

https://crystal-lang.org/


This sounds similar to F# type providers.


I'd be interested in hearing what the community has to say regarding static types and whether or not they're compatible with object-oriented programming. I've posted this link a couple times before, but it hasn't been actively discussed: https://essencesharp.wordpress.com/2014/07/17/static-typing-...

From that article:

"Static typing inhibits interoperability. It does so between class libraries and their clients, and also between programming languages.

The reason is simply because static typing violates encapsulation in general, and improperly propagates constraints that aren’t justifiable as architectural, design or implementation requirements in particular. It does so either by binding too early, or by binding (requiring) more than it should."


Many OOP languages don’t make it easy to achieve encapsulation and composability in the presence of their particular static type systems. It sounds like he’s just arguing that nominal typing is worse than structural typing, and early binding is worse than late binding; the comparison to static type systems is a false equivalence.


Counterpoint I wrote a while back that focuses on compilation speed: http://benhoyt.com/writings/language-speed/

I've used Scala a fair bit recently, and the compiler is so dog slow it's painful. Maybe it's just my relative fluency with Python, but I find I'm much more productive with Python's almost instant edit-compile-run sequence.

Edit: I do like many of the benefits of static typing, though. I think mypy is coming into its own in the Python community, especially now with Guido behind it. Maybe this kind of optional typing is the best of both worlds...


There's no reason why a statically typed language cannot also use an interpreter and avoid expensive optimisation and code generation upfront. For example, GHC Haskell has an interpreter, runhaskell, that will run a Haskell program straightaway with little delay (type checking is fast).


Check out D (https://dlang.org/). It has a reasonably powerful static type system and the dmd compiler is lighting fast.


I would prefer mandatory typing with a dev mode that makes it optional for speed. With optional typing, you end up with the third-party ecosystem not having typing, which makes it hard to enforce type safety even in a codebase that is fully type-safe itself.


That's one of the things Go got right - they had fast compilation speed as a first-class goal right from the beginning.


They also have a really, really simple type system that can't play very many of the games that type system experts would like to play, so it doesn't cost them a lot of runtime at compile time.

I'm sort of curious what the abstract minimum penalty is for the very advanced type systems. Are GHC (Haskell) and rustc already within, say, 5x of the optimal possible speed, or might we be able to have a very advanced type system and faster compiling? Time shall tell, I suppose.



