All that survived into Common Lisp but I am not up on the current state of lisp implementations and have no idea if people bother to take advantage of it any more.
Most of the time I'd recommend the new type-safe union std::variant, but std::any is there.
It is kind of a static type, but as a container type for anything in the type system, it may as well be a dynamic type.
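To make the contrast concrete, here's a minimal C++17 sketch (function names are made up):

```cpp
#include <any>
#include <cassert>
#include <string>
#include <variant>

// std::variant: the set of alternatives is closed at compile time, so a
// wrong access is caught statically or via a checked exception.
bool variant_holds_int(const std::variant<int, std::string>& v) {
    return std::holds_alternative<int>(v);
}

// std::any: holds any copyable type; the "type check" happens only at
// runtime, when any_cast either succeeds or fails.
bool any_holds_int(const std::any& a) {
    // Pointer form of any_cast: returns nullptr instead of throwing
    // std::bad_any_cast on a type mismatch.
    return std::any_cast<int>(&a) != nullptr;
}
```

With variant, asking for an alternative outside the declared set simply doesn't compile; with any, you only find out when the cast runs.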
> All that survived into Common Lisp but I am not up on the current state of lisp implementations and have no idea if people bother to take advantage of it any more.
Typed Racket uses the early guard theories from CL, and it does have an optimiser, though it has some quirks.
And though I can't find it now, there was a fairly recent research paper on using a macro system to statically type check at compile time and optimise for Scheme. But I should point out that Scheme doesn't need it for optimisation - compiling Scheme to C is easy, and easy to make the result fast. It's more about safety.
And there's always Shen, which uses sequent calculus for their optional static type system.
Neither g++ 7 nor Clang/LLVM 4.0.1 support std::variant yet :-(. We started a new (blank buffer) codebase earlier this year and decided to use C++17 as the implementation language, which has exposed us to the gaps...which are surprisingly few! But sadly this is one of them.
Eh? I've been using it a hell of a lot!
std::variant is listed on GCC 7's page, and I know it is there in at least 7.1.
I don't use clang much, but they list it as supported, and I believe the patch landed in 5.0.
So maybe some sort of mismatch got in your way.
Anyways, here's hoping you get to use C++17 in its full glory!
There are implementations of Common Lisp, most notably CMU CL and SBCL, that take advantage of the (optional) type declaration of Common Lisp to increase efficiency and provide type checking.
That is, if there's a type problem, I want to know at compile time, not at run time.
But I'm in embedded systems. My stuff looks like just a machine to the customer. They don't want a type error at runtime messing them up. That may not be your world. If you'd rather things go happily along until the circumstances actually occur in execution, and if that ever happens, you then find out about the type problem, well, that's what is reasonable to you in your environment and circumstances, and that's fine. Use dynamic typing, and don't feel guilty.
That's technically called "success typing" http://www.it.uu.se/research/group/hipe/papers/succ_types.pd...
Python has that too with MyPy. That came much later than Erlang's, and I haven't used it yet, so I'm not sure how well it works.
They seem to have copied success typing but don't actually mention it anywhere.
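For anyone who hasn't tried it, a minimal sketch of the kind of annotation mypy checks (the function is made up):

```python
def greet(name: str) -> str:
    return "Hello, " + name

# Running `mypy` on this file would flag a call like greet(42) as an
# incompatible argument type before the program ever runs; plain
# Python would only fail at runtime, inside the concatenation.
result = greet("world")
```

The annotations are ordinary Python syntax and are ignored at runtime; mypy is a separate static checker you run over the source.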
> Python has that too with MyPy.
I don't know much about Erlang's typing, but this claim is definitely untrue. I run a code base with five engineers and I'm pretty disciplined about asking for type annotations wherever appropriate, and I still miss static typing every damn day. The grafted-on approach is definitely beneficial, and I frankly don't understand how people ran Python engineering projects of any significant size without it, but it has just enough holes in it that the type inference chain breaks often (i.e., it just drops to type Any). It also gives false-positive errors just often enough that it lowers your sensitivity to real errors.
Don't get me wrong, I'm happy it exists. But Python with types is a poor imitation of a proper engineering programming language.
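To make the "drops to Any" complaint concrete, a hedged sketch (json.loads is annotated as returning Any in the standard stubs; the function is made up):

```python
import json
from typing import Any

def parse_port(raw: str) -> int:
    data: Any = json.loads(raw)  # json.loads is typed as returning Any
    # From this point the inference chain is broken: mypy would accept
    # data["port"], data.no_such_attr, or any other use without complaint,
    # so real type errors downstream go unreported.
    return data["port"]
```

Once a value is Any, everything derived from it is Any too, which is exactly the hole being described above.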
* there is no implicit 'null'. I.e., if your type signature doesn't explicitly include the value "none", then it's not allowed
* you have union and intersection types
* it's super easy to create new records (and types from those records)
* even if your type signature is structural, you can optionally give more meaningful names to arguments and return values in the signature itself
* success typing is still good enough to catch many "uninteresting" bugs (which I consider to be a significant advantage of static typing. When I fix bugs, I want it to be bugs with my understanding of the requirements, not bugs with my understanding about the values returned/expected by a function)
Pattern matching also helps clarify the expected values of a function.
Pretty much the first thing the Zope folks made was to port Java interfaces to Python.
Considering that Zope projects were both relatively early and big in Python's life time, that might answer your question. (Plus, projects not using anything like it usually either have very large test suites, or are broken more frequently than working).
It's a tradeoff, as everything else in computer science, and one that has worked for us. Can't recommend both erlang and dialyzer enough :)
Typed code may be slower and more cumbersome (to some) to write in the first place, but is usually much easier to maintain in my experience.
Now, I think that type inference for local variables can be nice (especially if you have good IDE that allows you to see the inferred type).
I once wrote a 600-700 line application in PowerShell. I found myself adding a few type annotations, and I expect that a program any bigger than that (especially if more than one person started working on it) would benefit from a policy of always adding type annotations.
That's how this post comes across: "Types are so awesome, let me show you how awesome types are!"
Come back in two years, show us how the types feel after the honeymoon is over.
They definitely have some tradeoffs. Depending on the type system sometimes something I think should be expressible in a particular way isn't and I have to do it differently. There is some extra cognitive load forced on me when reading the code.
But the upside is huge. Refactoring is easier and safer. The production launch of the application is less scary. I'm even faster at end to end development with a type system backing me up.
I totally agree with you, though. The ease of refactoring alone is a big enough win to get me on board.
I have been programming for more than 20 years, with a lot of different languages.
Until six years ago, most of my languages were dynamically typed (PHP, Bash, Perl 5, Python, Ruby, ...). I used C sometimes, but it was not the common case.
Then, six years ago, a friend showed me Scala. I was lucky because I could use it in my daily job. Two years ago I started to use Rust for some stuff.
That said, after years using languages with good type systems, I can say dealing with a dynamic type system in a big project is a huge PITA. Almost every time I have to deal with legacy code (mostly Ruby) I really miss both Scala and Rust.
The honeymoon is over for me, and now I'm in a very stable and wonderful relationship :P
The honeymoon part is also an unfair assumption, not to mention incorrect. I just clicked on "About me", clicked on the author's github link, and after some scrolling and clicks, I found a Rust commit the author made in August 2015 - thus, the author has been at this for two years, and is not some newbie.
You should provide a list of cons involved with type systems that aren't so glaringly obvious that the author has probably already run into them, and that will serve as a much stronger argument (e.g. you could bring up the fact that in large codebases with lots of generics, type systems drastically increase time spent typing, as well as cognitive overhead when there are <T>s and <T, E>s and so on all over the place).
Plus, arguments about typed languages rank up there with vi vs Emacs (or is it now vi/emacs vs. Atom/VSC?) in terms of my interest in joining them. I like statically typed languages. I like dynamically typed languages. I dislike weakly typed languages. What more to add?
What I can see: You chime in on a discussion with a strongly critical viewpoint, then refuse to back it up saying you don't want to be part of certain kinds of arguments. Implying religious level flamewars, while the rest of us are keeping a level head and discussing these things based on technical merit. It looks like you are making excuses because it looks like you have no point. Even if that wasn't what you intended and you have strong points, that isn't what it looks like from the outside.
Please remember that not everyone has exactly your experience so if you have examples you should share them because they will make your points stronger even if someone attempts to rebut them. It also artificially weakens your point to try to back out with what from the outside look like excuses instead of substance. Again, I must say you might be entirely correct, I just can't see that from here.
On the other hand, it leads to some people abusing the static type system: They just randomly change types until it somehow compiles, without thinking about what they're doing.
ArrayList<Map<String,Object>> myArrayList = new ArrayList<Map<String,Object>>();
Could have been just:
ArrayList<Map<String,Object>> myArrayList = new();
or even better:
ArrayList<Map> myArrayList = new(); <- if you don't care what the type of the map is.
> if you don't care what the type of map is
You can use raw types, it works perfectly fine (just generates a compiler warning). For explicitly being unspecific, you can also use a type wildcard (`?`).
ArrayList<Map> myArrayList = new ArrayList<>(); works already today.
ArrayList<Map<String,Object>> myArrayList = new ArrayList<>();
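For what it's worth, a runnable sketch of the three spellings discussed above (raw type, wildcard, and the diamond operator); the class and method names are made up:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;

public class RawTypesDemo {
    @SuppressWarnings({"rawtypes", "unchecked"})
    static int demo() {
        // Raw type: legal, but the compiler emits "rawtypes"/"unchecked"
        // warnings because element checking is effectively disabled.
        ArrayList<Map> raw = new ArrayList<>();
        raw.add(new HashMap<String, Object>());

        // Wildcard: explicitly "a list of maps of some unknown
        // parameterization" -- no warnings, still unspecific.
        ArrayList<Map<?, ?>> wild = new ArrayList<>();
        wild.add(new HashMap<String, Object>());

        // Diamond operator (Java 7+): the type arguments on the right are
        // inferred from the declaration on the left.
        ArrayList<Map<String, Object>> inferred = new ArrayList<>();
        inferred.add(new HashMap<>());

        return raw.size() + wild.size() + inferred.size();
    }
}
```

The diamond form is what removes the "specify the class twice" duplication complained about below, without giving up any checking.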
Java has a particularly verbose and clumsy type system.
You need to specify the class twice on the same line, because Java can't figure it out on its own.
That's what I mean—Java's type system is verbose and clumsy, because it doesn't bother trying to figure out things it could figure out, forcing the programmer to be redundant and repeat themselves all over the place. The fact that it's 2 separate operations to the compiler shouldn't dictate anything about the language.
This makes operations like this intuitive:
Animal results = new Dog("Lassie");
I did that, too, when I was inexperienced with C++. Especially with const, but also with * and &. But somehow it "clicked" at one point and I don't have that problem anymore.
In recent times, I've found static type systems to helpfully nudge me in the direction of correct code. I was pleasantly surprised by TypeScript. Also, in modern C++, if you use unique_ptr, shared_ptr, try to get rid of naked pointers, and use value types and RAII if possible, the code ends up a lot cleaner. I've found a couple of places where ownership was unclear, and I previously had circular references or dangling pointers as a result.
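A small sketch of what that ownership clarity looks like (struct names are made up; assumes C++14 for make_unique):

```cpp
#include <cassert>
#include <memory>
#include <string>

struct Texture {
    std::string name;
};

struct Sprite {
    // unique_ptr documents sole ownership in the type itself: a Sprite
    // owns its Texture, and nothing else is supposed to delete it.
    std::unique_ptr<Texture> texture;
};

std::string sprite_texture_name() {
    Sprite s{std::make_unique<Texture>(Texture{"hero.png"})};
    return s.texture->name;
    // No delete needed: RAII frees the Texture when 's' goes out of scope,
    // and the move-only unique_ptr makes dangling/double-free bugs
    // compile errors instead of runtime surprises.
}
```

With a naked `Texture*` member, the ownership question ("who frees this?") lives only in people's heads; here it lives in the type.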
And anecdotally, the way people program in Haskell is basically: write what you mean, and then fix it until it compiles; it will likely also be correct.
(I posted this on a similar story and it was received very well, so I thought I might post it again for those who haven't seen it.)
A poor man's version of the above schema can be achieved in C/C++ as well. Just declare your functions in a header file and include it in your program, then compile without linking.
OK, but that's the opposite of deferring type errors until runtime.
Using "undefined"/bottom as the implementation of a function is how we wrote Haskell before GHC added support for deferring type errors.
Also, typed holes. Typed holes are awesome for building implementations based on types.
It's annoying to have to type "as GameObject" when very clearly I am creating a GameObject type (the first thing in the entire statement). Programming languages should be smart enough to figure out that's what I'm trying to do rather than require me to type that out.
Rather than deflect away from C#, I will take the unpopular stance of defending verbosity.
Does typing that really represent a large waste of time? Do you spend more time physically typing than doing anything else while coding?
I spend most my time thinking or reading old code. I might type that once and read it 100 times and pass the debugger over it 6 or 7 times and adjust the line a similar amount in the lifespan of that of the code.
For reading it is entirely clear and leaves little ambiguity about what kinds of operations are allowed on GameObject (presuming I know something about the GameObject class and the Entity Component System in place). I know its location and its rotation and I know that those are unlikely to be the source of bugs.
I might need to reference `MyGameObject` to get the specifics of the behavior, but if I am troubleshooting location or rotation errors, I know what code I am not looking at. If I am troubleshooting any other behavior I again know about whole regions of code I don't need to look at.
It can be hard to internalize all that a type system buys for you because none of it is immediate; it is more about all the time you don't spend doing unproductive things. I find that in more dynamically typed languages I spend two to three times as much time debugging as in static languages.
Then, typing the class name is a trivial matter. My editor offers me a generic text completion command, so I don't type the full "GameObject", I just type "Ga" and hit ^P. Maybe you should try using a good text editor?
I'd blame this more on our current programming environments than anything. Smalltalk is exceptionally productive and doesn't require restarting to test things.
If what you're trying to do is viable, then you'll be able to express it in the domain model.
(Just pointing out you can go this route regardless of your type system.)
Finding the right type is solving most of the problem. If you can find the right type, you can solve the problem. If you're having trouble finding the right type, then you don't yet understand the problem well enough, and modelling what you do know with types quickly points out what parts of the problem you haven't fully understood, without having to run broken, half-specified programs that cover only part of the problem space.
What makes you think being able to run such half-specified programs for part of the problem will actually help with answering this question?
Perhaps rocky1138 thinks this because they have experience answering this question by doing so?
I've found it can be hard to find the right type before I've written partial solutions to a problem. Just to be clear, I'm talking about recognizing the monads or functors or what have you. Even the general mathematician doesn't start by drawing the right diagram. (Though some do.) For me, "the right type" comes from abstracting partial solutions.
I think part of it is preference for certain thinking styles, like how some people like puzzles based on group theory and some on topology. If finding the right types ahead of time works for you, great! But don't be surprised that people manage to solve problems by other means.
(It could also be that the type systems in programming languages don't capture the intuitive types, and perhaps rocky1138 keeps them in his head while prototyping. Just because the types aren't written down doesn't mean they aren't there --- i.e., there is no need to obsess over making things real. But, it is nice to make them real so you don't have to keep them in your head anymore, or to communicate them to others more effectively.)
I sort of mean intuitive in the sense of intuitionism.
They have experience with their problem solving style, they don't have experience with every problem solving style, or even necessarily with my problem solving style. You can't infer much from that, and certainly not that typing is "not great when prototyping".
> Just to be clear, I'm talking about recognizing the monads or functors or what have you. Even the general mathematician doesn't start by drawing the right diagram.
Absolutely. But you define the type that seems to match part of the problem, then realize it doesn't fit another aspect, and you refactor it until it covers all of the properties you need. Sometimes this involves writing some code to ensure you've captured the right properties for the problem when there's an algorithmic part, certainly, but to think you've captured anything meaningful or coherent without type checking is bizarre.
Like you later say, "abstracting partial solutions" is probably a pretty common approach, but those partial solutions only come together in a coherent whole if they're typed. Otherwise they likely won't mesh well, and you're left with a mishmash of partial solutions, not a solution.
I tend to grant the first person who states their opinion in absolutes the nicety of automatic insertion of "it is my opinion/experience that," because this is what they usually mean. Countering absolutes with absolutes is just confusing to me, so sorry for any misunderstanding.
> but to think you've captured anything meaningful or coherent without type checking is bizarre
I want to say "speak for yourself" here---this is rather dogmatic. Sometimes I use a formal type system, sometimes I don't, and yet in both cases I somehow manage to produce working programs. Sure, a computer-checkable type system is nice to have and gives me peace of mind when I use one, but I believe formal type systems are a posteriori descriptions of particular safe ways of manipulating data out of all the possible ways of manipulating data. Is it inconceivable that a programmer can check their types manually, using an ad hoc intuitive type system?
> but those partial solutions only come together in a coherent whole if they're typed. Otherwise they likely won't mesh well, and you're left with a mishmash of partial solutions, not a solution.
Check your types: "If they are not typed, they won't come together into a coherent whole because they likely won't mesh well."
Truthfully, it sounds to me like your argument is the Fred Brooks quote about how the data structures imply the code. This is not exactly the same as having machine-checkable type systems, though such systems do force the issue.
Sure, if you don't mind an ad-hoc, informally specified, bug-ridden, slow implementation of half of a type checker that no one but you knows (and so is "intuitive" only to you... until you read this same code 3 months later).
As I said before, "intuitive" is in the sense of intuitionism; not "easy to understand" or "obvious." A point of view: formal type systems comes from explaining what we see with mental perception (intuition). What do you do if your automatic type checker doesn't check every possible thing you want to verify is correct? Surely you aren't just blind to the types they "should" be. Seeing the invariants which must hold is seeing the intuitive type.
> if you don't mind an ad-hoc, informally specified, bug-ridden, slow implementation of half of a type checker that no one but you knows
I can't tell if I was understood or if you just wanted to quote Greenspun's tenth rule whether it completely worked or not: I was talking about a programmer with a type theory, not implementing an automatic type checker from scratch.
Sure, it's possible to prototype in the untyped lambda calculus too, or in Brainfuck. Why would you want to though?
> I am not arguing against their use, just what seemed to be your assertion that it is impossible to prototype/develop software without an automatic type checker.
I never said it was impossible, I merely implied that it was bizarre to even want to do so (among other things). Programming is hard enough as it is, why waste your time and mental energy checking properties that can be checked for you?
> I was talking about a programmer with a type theory, not implementing an automatic type checker from scratch.
A type theory in their head, in the intuitionistic sense, that they check as they're programming, not one that's actually checked by a tool is what I assumed.
And certainly you're working within limitations dictated by your language, but you can typically take it further than most think. See, for instance, the paper "Lightweight Static Capabilities".
Do you actually believe this?
It's quite difficult to explain how much work a very strong type system can do for you if you're used to something like C as your definition of "static typing". I mean this comment completely straight and polite, and I'm trying to help people answer your very reasonable question by asking you for some details that will help calibrate the answer.
(TL;DR: A type that encodes a state machine (and is generic over the implementation) that allows you to guarantee reaching a terminating state.)
In practice I rather my types looking much plainer, though:
Very few of the problems we encounter in everyday programming don't fall once the right data structure is available.
Let's say I want to stream logging data at a massive rate (~peak core-core bandwidth) to another core and perform complex queries and visualisations on it in sub-frame times. If I have the data structure for that, the rest is bookwork. If I have the type... what have I gained?
Let's say I want to find the best way to represent my data in a way that's amenable to GPU operations. If I find a data structure for that, I'm left with the other half of the job in mapping the transformations I need to GPU calls. If I write some types for it, it's not like GPUs start being any less buggy.
Let's say I'm writing an extension for a Java program with an API not designed for my use-case, and I want to work out how to access stuff that's not directly intended to be exposed. A data structure design isn't much good, since it's an exploratory problem, but neither is assigning types to random things. I still need to figure out how to abuse the API.
Let's say I'm trying to solve the Halting Problem. If I have a data structure for that, I've pretty much proven FALSE, and literally everything is provable. Conversely, I can already give you the type and I'm no closer to solving it.
It depends on how you're using "type" vs how I'm using it.
If you mean the "publicly visible type", ie. the left hand side of "newtype Foo a = ...", my claim is weaker. Even so, the public interface for your type specifies the operations and implies their approximate time and space complexity (1), which points rather suggestively at the types of internal data structures that will be needed to satisfy these properties (2), which as you've said, is at least half the problem.
If by "type", the actual data type definitions needed to satisfy this interface, ie. the right hand side of "newtype Foo a = ...", which is what I originally meant, then you're already at (2) above, from which the conclusion follows almost trivially.
Hopefully that clarifies your follow up points.
I think you mean 'static' typing, not strong? Python is already strongly typed.
> It's not so great when I'm trying something new and am still trying to work out if what I want to do is even possible
I completely agree here. I find it very useful when iterating to run the program and verify just the code path that gets executed, without worrying about whether the rest of the program is also correctly typed. Once I have settled on a set of types though, it would be nice if Python told me all the places that now need to be fixed up.
Strong typing is about creating, expressing, and enforcing a contract which determines which operations are valid on which values. Not variables, values. Having the semantics of the value in the compiler or the runtime ensures that errors are handled predictably, with explicit detection and possible reporting.
Weak typing is a lack of those semantics. In the most extreme case, you have languages such as B, where the only type is the machine word, which isn't a type at all because it doesn't imply anything about semantics: You can do anything to a machine word, so nothing can possibly be invalid, so there's nothing to enforce or detect or report. Similar "size specifications", such as int, or long, or float, are only loosely describable as types for the same reason: They specify how many bits a value has, not what's valid to do to it.
So a language such as Python is strongly typed because it can detect violations of the contract inherent in the types it knows about at runtime. C is less strongly typed, because, first, it focuses on its "size specification" types, and, second, you can subvert even that type system totally with nary a peep. Languages such as Ada, which inherited the "size specification" types from Algol, are only strongly typed to the extent you can augment their type system with types which are actually semantic, as opposed to size-based.
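A tiny illustration of how completely C's type system can be subverted - here via memcpy, which is the well-defined spelling (the function name is made up; it assumes float and unsigned int are both 32 bits, per IEEE 754 single precision):

```c
#include <string.h>

/* Reinterpret a float's bit pattern as an unsigned int. The compiler
   raises no objection: C's "size specification" types say nothing about
   what operations are semantically valid on the value, so nothing here
   is detectable as a violation. (A raw pointer cast would "work" too,
   aliasing rules aside.) */
unsigned int bits_of(float f) {
    unsigned int u;
    memcpy(&u, &f, sizeof u);
    return u;
}
```

Contrast with Python, where there is simply no spelling that reads a float's storage as an int without going through an explicit, checked conversion (e.g. the struct module).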
Type system of C is weak and static.
Type system of Lisp is strong and dynamic.
Type system of OCaml is strong and static.
Type system of Python is dynamic. It may be strong or weak, depending on your opinion about duck typing (does it weaken the type system or not?).
Awk and Perl are not weakly typed in the same sense that C is weakly typed.
Though Lisp was listed in the grandparent posting as "strongly typed", it also has something similar to C's implicit conversions:
> (sin 42)
> (sin 42.0)
> (+ 1 2.0)
> (evenp 3.0)
*** - EVENP: 3.0 is not an integer
bool evenp(int arg);
Is Python-with-type-hinting-in-function-definitions (and maybe type assertions) equally as "strong" as common style Python?
It's not static typing -- it's not Haskell -- but it's already a step further.
Edit: I find myself doing a lot of "assert isinstance(x, foo)".
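A small sketch of that pattern (class and function names are made up):

```python
class Foo:
    """A stand-in class for the example."""

def process(x: Foo) -> str:
    # Runtime backstop for the static hint: fail fast, right here, rather
    # than letting a wrongly-typed value propagate deeper into the program.
    # Hints alone are ignored at runtime; the assert actually enforces it.
    assert isinstance(x, Foo)
    return type(x).__name__
```

The annotation serves the static checker (and the reader); the assert serves the running program. Neither makes the other redundant.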
(I remember C linters trying to expand the C type system by doing things like complaining if you used non-boolean expressions in a boolean context, well before C had an actual bool type.)
Haskell types don't exist at runtime... because Haskell code (like that of any other compiled language) gets translated to a lower-level machine language that works with untyped memory addresses?
Or are you saying that (for example) Fortran types are closer to the metal somehow?
Any type-checking has to happen at compile time, because that information doesn't make it through.
So, something like Python's isinstance or type functions can't exist.
I tried, but I can't recall any single time when thinking about types distracted me from thinking about the problem.
Moreover, writing down the types by creating stub functions is a great method of designing.
The thing that is not available (as far as I know) is to build a dynamic object and "close" it to further modification, so the compiler can optimize well. Another, which requires good metaprogramming, is to build a dynamic object AT COMPILE TIME (a macro?) and close it, so the type system works after that (F#'s type providers are almost this).
A use case is reflecting a database/data storage like JSON or a relational table. I wish, like with Python, to do:
class Customer = @build(table("Customer"))
c = Customer()
And then, at runtime, like in an interpreter, to "close" Customer and be certain that it will never mutate.
AKA: Immutable types/classes, but with the possibility to mutate them in a few discrete places.
From that article:
"Static typing inhibits interoperability. It does so between class libraries and their clients, and also between programming languages.
The reason is simply because static typing violates encapsulation in general, and improperly propagates constraints that aren’t justifiable as architectural, design or implementation requirements in particular. It does so either by binding too early, or by binding (requiring) more than it should."
I've used Scala a fair bit recently, and the compiler is so dog slow it's painful. Maybe it's just my relative fluency with Python, but I find I'm much more productive with Python's almost instant edit-compile-run sequence.
Edit: I do like many of the benefits of static typing, though. I think mypy is coming into its own in the Python community, especially now with Guido behind it. Maybe this kind of optional typing is the best of both worlds...
I'm sort of curious what the abstract minimum penalty is for the very advanced type systems. Are GHC (Haskell) and rustc already within, say, 5x of the optimal possible speed, or might we be able to have a very advanced type system and faster compiling? Time shall tell, I suppose.