I started off in PHP. It was wild. Anything could be anything. Refactoring was a nightmare. Our codebase was littered with mystery variables like $hold, $hold1, and $holda, which were re-used all over the place. (Granted, those are two other problems entirely orthogonal to types.)
Then I got a job at a Java place. My god, it was beautiful. I could suddenly know what arguments functions were expecting, even if I hadn't seen them before. I could be instantly aware if I was trying to use the wrong variable somewhere, or pass in invalid data.
It was as if someone lifted me out of a dank cellar where I had previously subsisted on whatever mushrooms the rats didn't deign to eat, into a brightly lit dining room full of steak and pastries.
But there’s some truly awful stuff in the ecosystem; in the underlying language and the platform’s DNA; in the compromises TS (rightly) makes to be a productive real world tool; in the commonly used tooling; and just peppered throughout everything you can expect to encounter in common third party libraries.
Overcoming all that awfulness requires a lot of additional effort, is inherently limited, and isn’t common in the community (although that too is improving as TS becomes more popular, and as safer patterns become more idiomatic).
I think if I were building a new greenfield project with my choice of platform today, it would be a difficult choice whether to take all that I’ve learned in Node/TS and accept those trade-offs, or to invest in learning another platform.
What other platform do you have in mind?
But off the top of my head, languages that I'd look at as first contenders include F#, C#, Kotlin, and Swift. I'm sure there are other good choices for the kind of space I tend to work in that are equally productive and have enough of a community for me to be comfortable adopting them, but I would need to spend more time researching options to really say with confidence what else I would consider.
All tied in with one singular company each that calls all the shots, for good and for bad. You kind of get all their other stuff shoved down your throat along with the languages, and things can get bloated fast. But at least things are streamlined. Node.js had an uprising with io.js in 2014-15 and is now governed by a foundation. I like that a lot. I sleep soundly knowing that no company can pull the rug out from under me on a whim.
for (var i = 1; i != Math.pow(2, 16); i <<= 1)
(It's been a while since I coded PHP, but my recollection is that PHP tries to pull a strings-are-integers trick a few times.)
Yeah, the implicit casting in PHP is bonkers, especially when it comes to comparisons. The comparison table looks like the scribblings of a madman: https://www.php.net/manual/en/types.comparisons.php
That said, after ten years of use, I still see something like:
function_returning_false_or_int() > -1
Now, nobody should write that shit in the first place, but it turns out it’s equivalent to ‘!== false’.
‘(int)false’ is zero, but ‘false > -1’... is not true.
Then I tell them about strings and ints. You can represent 66 with both strings and ints, but one is usable for computation while the other is simply usable for writing Alice in Wonderland, or simpler forms of text. Sometimes I then get the question: why can't a string be both computational and a means of character display? And then I list the upsides and downsides of such a system, just like the upsides and downsides of mixing brick and wood to build the outer wall of a house.
I'm curious if people can think of other analogies that they use for teaching.
Personally, I avoid analogies because they usually mean I don’t really know how to teach the topic, and I’m “hand-waving” on the fly. I either find a way to build on the student’s existing knowledge or I “park” the topic for discussion when the student has enough knowledge to give a correct answer.
Run what you have so far. Print the last value you computed. Crash. Verify the data has the "shape" you expect it to. Write a bit more code. Repeat.
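That loop can be sketched in Python; the data and field names here are hypothetical, purely for illustration:

```python
# Hypothetical incremental workflow: run, print, crash early, repeat.
records = [{"name": "Ada", "age": 36}, {"name": "Grace", "age": 45}]

row = records[0]
print(row)  # print the last value you computed, to eyeball it

# Crash immediately if the data doesn't have the "shape" you expect.
assert isinstance(row, dict), f"expected dict, got {type(row)}"
assert {"name", "age"} <= row.keys(), f"missing fields in {row}"

# ...then write a bit more code, and repeat.
ages = [r["age"] for r in records]
print(sum(ages) / len(ages))
```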
On the other hand, Python’s repl was nice for a small handful of tasks (although I mostly used it for figuring out what the actual type of some variable was, which is obviously a non-problem in the statically typed world).
We get editor-time feedback on mistakes with optional type hints + light inference via clj-kondo, and a data modelling system that we can export as database schema, JSON schema, etc.
And a constraint system dynamic enough to express something like "all human names must be 'Tony', but only on Tuesdays".
The same constraint can be shared server side and client side without writing it twice and without learning more syntax.
Additionally, you can check what the expected inputs for a function are by checking the function specs. I think guardrails pro will make this more ergonomic when released.
And finally, you can ask the constraint system to generate valid examples of the constraints, which is great for mocking data.
I don't miss type systems, but I also understand that if you're not using these solutions, you're in trouble.
None of that is to say OCaml is "superior" to Clojure in some way. I disagree with a lot of the ways that the OCaml type system has evolved and I wouldn't be surprised to see people who have moved to Clojure from OCaml. However, having programmed professionally in Clojure (although it's been quite a few years so I'm not familiar with the latest advancements in e.g. spec) I still think a Clojure-like language could benefit from a static type system.
I don't think it'll work for Clojure itself because of a variety of patterns and choices in the standard library (which is in part why I think core.typed died, we also found core.typed painful to use in some of our experiments at my old job both in how it interacted with Clojure and the tooling around it). And philosophically Rich Hickey would probably kill Clojure before he ever considered designing Clojure around a static type system. However a programming language based off the same data driven ideas could maybe do it.
While a REPL and hot code reloading are absolutely huge productivity boosts, they are more or less orthogonal to the benefits provided by a good static type system (see e.g. hot code reloading with Elm or PureScript, which comes quite close).
The thing that static type systems provide over tests and runtime contracts is the ability to constrain users of the API of a library. We use regression tests to make sure regressions in code we write don't happen again. Likewise, types are effectively regression tests at the API level, making sure certain regressions in code that calls our code don't happen again. That is an extremely powerful capability that I consistently miss in dynamically typed languages.
That's what I find interesting about it. Not all languages benefit from a type system in the same ways; some, like Clojure, actually get crippled. Now, it may mean that you need to find the right kind of type checker that provides the correct ergonomics for Clojure, and maybe that would work. But it's still quite interesting.
For example, Erlang has a bit of a similar thing: Dialyzer made specific choices to work within Erlang's design. Had it not done so, it probably wouldn't have found adoption. Same with TypeScript.
So what's interesting here is that you have an apples to apples comparison where a language is found to be better without the constraints of static type checking.
When you look at other statically typed languages, they're often designed around the static type checker. That's the main focus, and the language itself revolves around that. So obviously in such a language, the type checker would be a necessity, as it's the main draw. So it's interesting to look at Clojure for a counterexample.
You can go crazy and spec everything, in fact that can help, but:
- In practice, nobody does it
- Specs come with no guarantees. They could even be wrong.
- The official implementation stubbornly insists on not checking return types, so half of your annotations may just be glorified documentation (although you can use third-party libs like orchestra)
Just imagine: Add a new required field to a spec, and get a convenient list of source code locations that you need to review. That's the promise of a statically checked system. It's not a silver bullet, but not having this leads to what I like calling "refactor anxiety" (i.e.: did I handle all cases?)
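A minimal sketch of that promise in Python, with a dataclass standing in for a spec (the names are hypothetical); a checker such as mypy flags every construction site that hasn't been updated for a new required field, and even the runtime fails loudly:

```python
# Hypothetical statically checked record type: adding a required field
# turns every out-of-date construction site into an error to review.
from dataclasses import dataclass

@dataclass
class Order:
    item: str
    quantity: int
    # Uncomment the new required field, and a type checker (or the
    # runtime, via TypeError) points at every call site needing review:
    # customer_id: str

def place(order: Order) -> str:
    return f"{order.quantity} x {order.item}"

print(place(Order("widget", 3)))  # breaks at type-check time once customer_id exists
```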
I still love Clojure no matter what. I think in practice you can express so much, so elegantly, and with far less code, that your project size is always sorta manageable.
Ultimately I think people just weren't given any credit for making good, reusable types, because then the next dev who submits a better feature faster using your work gets a raise, but you look like a kook ranting about best practices who doesn't do anything "business".
Bad type systems need you to use the escape hatches frequently - you can't write much C without using casts.
I haven't yet used a language with no need at all for an escape hatch - but some languages need them far less often than others.
Java's type system is better than it was. The main places it still has weaknesses are around exceptions (you will need to wrap checked exceptions in runtime exceptions in places you'd have preferred to specify an exception type via generics, eg) and the occasional cast after an instance type check.
You can phrase good development practices in business terms: what are the risks to the business due to sloppy code? Are they greater than the risk of being slow to market?
Sloppy code has accumulating costs. A good type system can help greatly with refactoring to address those costs, but it cannot help with the attitude that those costs aren't real and don't need to be paid.
Helps a lot to start with something SO wrong :)
My first dynamic language was Python, and the quality of the docs and the overall APIs of major libraries softened the landing.
But later I noticed it was parts of MY code that became a mess. The problem with delaying good design is that good design GETS delayed.
And then fixing it later is a problem. Major libraries and core APIs have the time frame and focus to get polished, but the rest of the code mostly doesn't, so it stays in the awkward phase of "it will be refactored later, maybe. Perhaps..."
Then later I moved to F# and Rust and couldn't delay bad design for long.
It's a chore to slow down at the start of the coding phase, but the later speed-up is huge: I don't need to fix my past mistakes by the truckload...
What it doesn't support is any sort of readability as a code base scales. It's often hard to know how the function you're calling will behave because the param can tweak some flag and totally change the behavior.
Using PhpStorm, I've worked with 100 kloc PHP codebases where almost every type was inspectable. And the language support and tooling are only getting better all the time.
At the moment, you have to put a $ in front of most variables, and no adornment for when you want to call something as a function. Reminds me of Common Lisp with its two name spaces for functions and other variables.
A few decades later, I really value them. My code now lives for decades, not for weeks. Lines are measured in the hundreds of thousands. Other people work with me. Types eliminate a set of errors. They're worth the effort.
I think time is where the tradeoff lies. How long are you writing for? If your code is only for this week, types maybe are a net loss. If you're writing for a month, maybe it's a toss-up. But if you're writing for a year, types are a net win.
TypeScript is great; it lets me express many things I want, but it's somewhat bloated and inconsistent. I always have to worry about whether the signatures are hard to read for my team members.
Interestingly, TypeScript reveals a lot of use cases to the masses: there's more to type systems. It's not possible to write a typesafe 'printf' in most statically typed languages. Maybe a full-sized dependently typed language like Idris will come into sight in the following decades.
Ruby's flexibility is amazing. I love the language, its syntax, and the frameworks built around it. Amazing.
Except for investigating and refactoring. Everything in Java, even in worse-written systems, was fairly easy to understand and find using IntelliJ and friends. Meanwhile, in Ruby, if someone gets messy, it might be impossible to figure that code out.
I like the concept of TypeScript best. Types when you want them. No types when you're just prototyping. It was actually my favorite aspect of Adobe Flex when I had a short stint coding in it 12 years ago.
The great thing of Java was garbage collection. No more malloc/free hell and programs still worked.
The great thing about Perl and the other scripting languages was no type declaration (plus garbage collection.) And programs still worked.
After 30 years I kind of agree with the author of the OP: the time lost annotating programs with types is not offset by the benefits. In almost all cases, type discovery is something the computer should do, not me. Same as for memory management: automatic, not manual.
I loved them because, well the things you mentioned.
But I hated them because in C++ it was required to open a separate header file to declare them.
But for the majority of the code we write, initial speed isn't that important. Understanding the code and maintaining it are orders of magnitude more important for any non-trivial code.
Types are not only a way for the compiler to understand your code and impose constraints. They're also your API to other programmers. When they see a sum type, they can understand its possible states. When they see a product type they can understand its possible values.
Understanding other people's code is at least half the job of a programmer, whether it's understanding a library or understanding code you have to maintain, or understanding your own code that you wrote 6 months ago.
Types help you do that.
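The "types as API" idea can be sketched in Python with a hypothetical Result type: the sum type enumerates the possible states, and each dataclass (a product type) enumerates the possible values of a state:

```python
# Hypothetical sketch: a sum type tells readers the only possible states.
from dataclasses import dataclass
from typing import Union

@dataclass
class Ok:
    value: int      # product type: the values carried by this state

@dataclass
class Err:
    message: str

Result = Union[Ok, Err]  # sum type: a Result is an Ok OR an Err, nothing else

def describe(r: Result) -> str:
    # A reader of the Result alias knows these two branches are exhaustive.
    if isinstance(r, Ok):
        return f"ok: {r.value}"
    return f"error: {r.message}"

print(describe(Ok(42)))
print(describe(Err("boom")))
```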
The cognitive load of a dynamically typed (or unityped, or "untyped", or whatever) language is massive, yet the common argument is that types "increase" the cognitive load??? How does offloading a large majority of the trivial reasoning about a program onto a type system "INCREASE" the cognitive load???
It's just so endlessly easier to program with types.
* People who are learning to code are writing lots of code but not reading very much code. I think types do the most work when trying to understand existing code.
* Lots of people's first experience with typed languages was something like C++ or Java back when they had much worse error messages.
* The kind of mistakes you make when first learning to code make the type checker feel like a pedantic nitpicker instead of a protective ally.
* Programming instruction tends not to teach technique very much. If you invent techniques that leverage the strengths of types, then great for you. If you don't, then you might program for years until you are exposed to the benefits types can provide.
These days I will often have to glue together some tiny part of two or three enormous APIs, some of which are "auto generated" from some other system. Think LINQ-to-SQL or WCF.
It's amazing when you can take a 100 MB chunk of code, and simply "navigate" to the thing that you want using tab-complete, in the sense that "somefactory.sometype.someproperty.subproperty.foo" is almost self-evident when you press tab and cycle through the options at each step.
And then when you finally get "foo", if it's the exact unique type you were looking for, then you can be certain that you did the right thing! There's practically no need to reach for the debugger and start inspecting live objects. Just tab, tab, tab, tab... yup, that's it, move on.
If you're working with a "blank slate" PHP app (or whatever), where you've personally written most of the lines of code involved, typing can feel unnecessary.
If you're glueing together Enterprise Bean Factory Proxies all day, then strong typing is practically mandatory.
I’ll easily admit that it’s not as easy to reach the same kind of benefit in those languages.
The magic sauce really is type inference (ideally as global as possible) combined with programming primarily via expressions instead of primarily via statements, with appropriate language support of course.
> If you're glueing together Enterprise Bean Factory Proxies all day, then strong typing is practically mandatory.
Might that not be a problem, though? Shouldn't more software systems be small, elegant and well-architected rather than a spaghetti nightmare navigable only through an IDE?
I like types, I think they are great. I like editor tooling, I think it is great. I even like a lot of the luxuries modern systems afford me. But … maybe we could stand a little simplification?
Office and Office 365 is also a behemoth that covers entire suites of business products, front-end and back-end.
If you start to seriously talk about integrating these things with a bunch of third-party components, you're talking tens of gigabytes of binaries.
> But … maybe we could stand a little simplification?
Always. Unfortunately, that runs up against the limitations of our squishy meat brains, especially when they number in their tens of thousands. Simplification, refactoring, and code reuse require coordination.
I too am horrified that a mere database engine no longer fits on a standard DVD disc... when compiled into a binary.
But I can download the ISO image in a matter of minutes, and use the system with a few button clicks in an IDE to produce functional software.
I know that the raw numbers should qualify as a nightmare, but at the end of the day, things get done anyway and it doesn't seem to matter that much.
I guess we're just horrified because we know how this particular sausage is made...
Five years ago, when switching jobs, I'd pursue "full stack" positions because I'd done a fair amount of front-end dev in the past. No more; me personally, I'm backend all the way. Dynamic typing (vanilla JS) is just harder: more time consuming, more cognitive load, IMO.
Which languages are missing unions? Off the top of my head, they exist in C/C++ and mypy python.
I love love love Python for data science, in part because it's dynamically typed. I can bang things out quickly without worrying about the engineering bits, and, since I'm working in an interactive coding environment, it's generally easy enough to just inspect the values of my variables to figure out what they are.
I hate hate hate Python for ML engineering, in part because it's dynamically typed. The same features that make it so easy to hack out a quick data analysis make it absolutely awful to build for durability. For example, since stuff in production runs hands-off, you need to feel pretty confident about the return types of every function in order to feel confident you won't throw a type error at run time. Actually pinning this down can get quite complicated, though, when you're working with a library like scikit-learn that relies heavily on duck typing. Sometimes you end up having to go on a journey down a rabbit hole in order to clearly identify and document all the types your code might accept or return.
(Disclaimer: Hate aside, it's still my preferred ML engineering language. You've got to take the bad with the good, and the language gets you access to an ecosystem that is so very good.)
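One way to pin down that duck typing is a typing.Protocol; the estimator interface and toy model below are hypothetical stand-ins for the scikit-learn-style contract, not its real API:

```python
# Hypothetical sketch: a Protocol writes down the duck type that
# estimator-style objects satisfy, instead of leaving it implicit.
from typing import Protocol, runtime_checkable

@runtime_checkable
class Estimator(Protocol):
    def fit(self, X, y): ...
    def predict(self, X): ...

class MeanModel:
    """Toy estimator: predicts the mean of the training targets."""
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, X):
        return [self.mean_ for _ in X]

model = MeanModel()
print(isinstance(model, Estimator))  # structural check: are the methods there?
print(model.fit([[1], [2]], [10, 20]).predict([[3]]))
```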
Obviously I exaggerate a bit, but we've all seen various incarnations of a lot of those issues.
Happily, with TS it’s possible to have DI and IInstantiationService’s and all that and still maintain good IDE support, in no small part because the IDE is built with all of those, in TS... if it were unusable, we’d fix it.
Disagree in the strongest possible terms, tbh.
It's the lack of static typing that gets you 3/4 of the way down your experimental pipeline only for your code to fail because column "trianing_batch" can't be found. Huge productivity loss, even with rapid iteration.
There are great type systems that provide enough of a ramp for the first runnable version. Rust, for example, has type inference and an unusually helpful compiler.
But most languages, especially the old ones, don't have that. Because of that, coders need full-blown IDEs like IntelliJ or VS to write comfortably in those languages. Without a full-blown IDE or an editor packed with plugins, it is quite a chore to navigate types: there's no hyperclicking, no type annotations, no docs preview. Dynamically typed languages, meanwhile, are usually runnable from the first character without having to wait for compilation, so even Notepad is acceptable to write with (though that would be really painful).
I just didn't when I was new to programming. I learned Python because that's what I saw pitched to me all the time. There was a local Python meetup, MIT's CS courses taught Python, and Udacity eventually taught Python. I loved the language and quickly learned how to do a bunch of basic stuff.
But I wanted to make Android apps and, ugh, Java confused the crap out of me. It wasn't so much the language, but rather "URI? Where in the world do I get one of those?! Oh, you instantiate a URI with the string. In Python this is just a String..."
All SUPER noob-y mistakes. All because I (or my team) didn't understand how to think in Types. Furthermore, people who can Think in Types often don't know how to articulate that thought process to others.
Typed languages are very different. The tooling is far more robust and more able to point out errors, but also tends to be more complex than just a simple text editor.
This is changing with VS Code and LSP being a thing, but it still influences those communities in fundamental ways.
You are already doing the hard work of describing your types anyway, but because your compiler doesn't know the types, it can't help you out.
There are times when the cognitive load of a particular static type system is less than that of a dynamic one, for certain classes of programs and audiences, and times when it’s not. This is fine and we should encourage development of both kinds of type systems and a shared understanding of how to pick the right ones for the job (which is a discussion nearly always missing in these debates).
I know that in many cases "correct" means a lot more than that, such as in proper software engineering contexts when building a program/system that needs to live and evolve for a long time among many people. I write that kind of software all the time, and I always use static type systems for it.
But I also write a bunch of ad-hoc, one-time-use programs, and I'm very glad that I don't need to reason about abstract type theory in order to parse a CSV file and add up some columns to send to a colleague.
In general I know what you mean, though. “What if this CSV parser turns out to be really important?” If you think there’s a high probability of that, do it in a statically typed language. 99.9% of the data munging I’ve done has been throwaway programs to answer a question to inform some decision or help refine a mental model.
I think this forum is written in a language that doesn't have types. My favourite, though, was lambda-the-ultimate.org, at some point the number one resource on the internet when it came to discussing programming languages (maybe it still is, I haven't followed it for a while), which was written in an untyped language, PHP (Drupal, to be more exact).
This was one of the pain points when I was working more with Node.js: the function signature told you nothing; even whether the function would return anything could sometimes be a mystery.
In very, very short scripts you can get away without types (like in a notebook for example), but once a project starts to get even medium size the tiny amount of time you put into writing a type name explicitly here and there is more than made up for by the degree it helps with the structure and correctness of your program.
Yes, it's bad, but it works. I hope there are better ways of doing it, but this is my way.
Also, python's type-hinting supports forward-references which are what you would use for recursive or "self-referencing" types.
Personally, I like the slow pace they're taking with the typing. It's touching the core usage of the language in a fundamental way and I don't think that can be rushed. We're seeing lots of community growth around the type-hinting, even using them at run-time, which is amazing to watch and marvel at.
Dict[str, JSON] | List[JSON] | str | float | int | bool | None
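That alias is exactly where the forward-reference support earns its keep: the string "JSON" lets the name refer to itself while it's being defined. A runnable sketch, with a small runtime checker mirroring the alias (the checker is my own illustration, not part of typing):

```python
# Recursive JSON alias via string forward references.
from typing import Dict, List, Union

JSON = Union[Dict[str, "JSON"], List["JSON"], str, float, int, bool, None]

def is_json(value) -> bool:
    """Runtime check mirroring the JSON alias above."""
    if value is None or isinstance(value, (str, float, int, bool)):
        return True
    if isinstance(value, list):
        return all(is_json(v) for v in value)
    if isinstance(value, dict):
        return all(isinstance(k, str) and is_json(v) for k, v in value.items())
    return False

print(is_json({"a": [1, 2.5, None, {"b": True}]}))  # True
print(is_json({1: "keys must be strings"}))         # False
```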
I've felt this way with Python and Ruby and Node projects (it is a major complaint in the RoR community), but have never felt that way with Scala, Java, C#, F#, Rust, or OCaml. Most of the time, when I need to make a change, I just change it, iteratively eliminate any type errors, and once the type errors are gone, the tests magically pass too.
Scheme programmers avoid type errors by writing programs which could be given static types by a sufficiently powerful type-checker. If a Scheme programmer needs to define two aggregates, both of which have a property "name," they're likely to define two different functions, foo-name and bar-name, to get at them. This, though noisy, makes type errors more obvious.
CLOS helps avoid type errors too, since it encourages thinking not about how code interacts with a single type, but instead how it interacts with a whole _protocol_ of methods. I think multimethods in general are a powerful design tool that help avoid type errors in a dynamic setting, but I haven't had a chance to try them out in a language other than CL.
Many CL implementations also have static type-checking. For example, when I try to define a function with a type error in SBCL:
* (defun f (x) (+ 1 x (car x)))
; in: DEFUN F
; (CAR X)
; caught WARNING:
; Derived type of X is
; (VALUES NUMBER &OPTIONAL),
; conflicting with its asserted type
; LIST.
; See also:
; The SBCL Manual, Node "Handling of Types"
; compilation unit finished
; caught 1 WARNING condition
One can also attach type annotations to functions, and SBCL (and probably other implementations) will add a runtime CHECK-TYPE unless you tell it to optimize for speed enough.
But that’s not because type errors are less likely. For dynamic functional languages they’re only less likely because the implicit contracts tend to be more general and the data structures tend to support a high degree of polymorphism.
The reason FP approaches tend to reduce mistakes is mostly that managing state is hard, and pure functions are easier to reason about.
What I'd like to see is a language that allows typeless programming to start but which is designed to allow the imposition of types at a later point.
Types are cool, but when I worked on Eclipse APIs (plugins), good grief, they were utterly useless. Design matters: with 32 intermediate classes to do just about anything, 12 of which are totally unrelated, suddenly types don't help you much.
Also it was full of Option types disguised as potentially empty arrays.
I guess today things are better.
The correct type system is actually way more expressive than not having strong static types. Sum types let you combine multiple return types elegantly and not be sloppy. Option types remind you to check for an absent value.
Static types let you refactor and jump to definition quickly and with confidence.
Your interfaces become concrete and don't erode with the shifting sands of change. As an added bonus, you don't need to precondition-check your functions for type.
Types are organizational. Records, transactional details, context. You can bundle things sensibly rather than put them in a mysterious grab bag untyped dictionary or map.
Types help literate programming. You'll find yourself writing fewer comments as the types naturally help document the code. They're way more concrete than comments, too.
With types, bad code often won't compile. Catching bugs early saves so much time.
Types are powerful. It's worth the 3% of extra cognitive load and pays dividends in the long haul. Before long you'll be writing types with minimal effort.
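The Option point above can be sketched in Python: Optional in the signature tells callers (and the type checker) that absence is a possible result. The config lookup and default below are hypothetical:

```python
# Hypothetical sketch: Optional makes "might be missing" part of the API.
from typing import Optional

def find_port(config: dict[str, int], service: str) -> Optional[int]:
    return config.get(service)  # None when the service is absent

config = {"web": 8080}
port = find_port(config, "db")
if port is None:          # the annotation nags you to handle this branch
    port = 5432           # hypothetical fallback default
print(port)
```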
I still code a lot of Common Lisp on the side, but my Lisp code now looks entirely different than it looked just 3 years ago. The language standard does support optional typing declarations, and there's an implementation (SBCL) that makes use of it to both optimize code and provide some static typechecking at compile time (with type inference). So my Lisp code now is exploiting this, and is littered with type declarations.
However, the CL type system is very much lacking compared to Rust or Haskell. I'm hoping one day someone will make a statically, strongly typed Lisp that still doesn't sacrifice its flexibility and expressive power. I'd jump to that in an instant.
Typed Racket was really a revelation to me in that regard. I'd be curious how developers with more strongly-typed language experience feel about it.
https://github.com/stylewarning/coalton looks promising, and stylewarning has recently said he's still working on it.
This part inspired me to look up the wiki page "Haskell Lisp", because I somehow remembered that some people were trying to make a Haskell that could be written in Lisp. But this page reveals even more interesting efforts:
> Shentong - The Shen programming language is a Lisp that offers pattern matching, lambda calculus consistency, macros, optional lazy evaluation, static type checking, one of the most powerful systems for typing in functional programming, portability over many languages, an integrated fully functional Prolog, and an inbuilt compiler-compiler. Shentong is an implementation of Shen written in Haskell.
> Liskell - From the ILC 2007 paper: "Liskell uses an extremely minimalistic parse tree and shifts syntactic classification of parse tree parts to a later compiler stage to give parse tree transformers the opportunity to rewrite the parse trees being compiled. These transformers can be user supplied and loaded dynamically into the compiler to extend the language." Has not received attention for a while, though the author has stated that he continues to think about it and has future plans for it.
But this page does not list everything; there is also Hackett, which introduces itself with "Hackett is an attempt to implement a Haskell-like language with support for Racket’s macro system, built using the techniques described in the paper Type Systems as Macros. It is currently extremely work-in-progress." Though it seems that it hasn't changed in two years.
And finally there is Axel, which introduces itself with "Haskell's semantics, plus Lisp's macros.
Meet Axel: a purely functional, extensible, and powerful programming language."
But I'll check out SML. That's Standard ML, right?
You can just enclose all your function calls in parens :D
Also you probably want to check out OCaml rather than SML, I don't know that SML has much of a presence… anywhere really.
ML syntax is very pleasant: roughly, sexprs without all the punctuation noise.
I don't believe it has a macro capability like the Lisps do, but you gain a sophisticated type helper.
Definitely worth looking into!
To me that was not the issue. It was, rather, discovering languages with powerful and expressive type systems.
My first job was in Java, most of my career afterwards was in Python. I've been type-curious for a while because of Haskell and OCaml and am very fond of Rust, I'd take a job in any of those happily.
Types in Java are still, today, largely verbose, hideous and extraneous. The cost/benefit is terrible: you pay a very high cost for limited benefit, and the cost generally increases faster than the benefits. You can leverage types heavily, but it creates a codebase which is ridiculously verbose, inefficient (because every type is boxed), opaque, and which simply doesn't look like any other Java codebase, so it will be very much disliked by most of the people you're working with. And the benefits from that will still, at the end of the day, be rather limited.
Out of nostalgia, and getting frustrated with dynamically typed code, I once tried to go back to some of that old code and make it use JSON instead of proprietary formats. That was a nightmare.
In the dynamic languages it would be utterly trivial. Just call json_encode($whatever), or $whatever = json_decode($some_string).
Modern languages with modern type systems, inference, generics, etc. that make things like that possible and relatively clean completely change the picture.
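As a rough TypeScript sketch of what "relatively clean" can look like today (the `User` shape and `parseUser` helper are my own invention for illustration):

```typescript
// A minimal runtime check that narrows `unknown` JSON into a typed value.
interface User {
  name: string;
  age: number;
}

function parseUser(json: string): User {
  const data: unknown = JSON.parse(json);
  if (
    typeof data === "object" && data !== null &&
    typeof (data as any).name === "string" &&
    typeof (data as any).age === "number"
  ) {
    return data as User;
  }
  throw new Error("not a valid User");
}

const user = parseUser('{"name": "Ada", "age": 36}');
// user.name and user.age are now statically typed; a typo like
// user.nmae is a compile-time error instead of a silent undefined.
```

Past the single checked boundary, the rest of the program gets the convenience of the dynamic version plus the guarantees of the typed one.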
Another thing that bugs me about dynamic languages is that, of course, you have to manually check everything all the time, because the compiler can't. We used to complain about the bloat of having to write all those type names and casts, but dynamic code, if it has good checks, can actually be more bloated in addition to being less expressive.
But I can relate to the pressure to deliver quick results. I found myself burnt out when working on a forecast model around three years ago. The constant "how's it goin'?" tore my attention away from the work, and I'm still convinced I could have delivered a better result.
So, in a way, I agree. In another, I understand the other side of the issue, and I think there are so many less time-intensive tasks going on around engineering that there's often little awareness that something like refactoring a class for better efficiency pays in smaller but compounding ways long-term, with most of the time cost and perceived opportunity cost being immediate and short-term. It's still worth it if you really do the math on the long-term benefit.
data Either a b = Left a | Right b
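In TypeScript, an Either-style sum type can be sketched as a discriminated union (the tag names here are my own choice):

```typescript
// A discriminated union playing the role of Haskell's Either.
type Either<A, B> =
  | { tag: "left"; value: A }
  | { tag: "right"; value: B };

function describe(e: Either<string, number>): string {
  // The `tag` check narrows the type in each branch.
  switch (e.tag) {
    case "left":
      return `error: ${e.value}`;
    case "right":
      return `ok: ${e.value}`;
  }
}

const r = describe({ tag: "right", value: 42 });
```

The compiler knows the switch covers every variant, so there is no untyped fall-through case.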
In earlier typed languages, the types weren't there for reasons of soundness or productivity at all. The types were there for the compiler alone, as the compiler needed them to know which machine instructions to emit for various operations. Types were just a cost imposed on programmers.
Once computers became powerful enough that we could afford to spend cycles and memory making these decisions at runtime, dynamic languages became viable and we saw industry shift over to them, except in domains where dynamic languages still weren't viable, or where existing codebases or ecosystems made it not economically viable.
Fast forward to the present and decades worth of type theory knowledge is finally filtering through to industry in the form of languages like Rust, TypeScript, Swift, Kotlin, and others. For the very first time we're embracing types for their soundness and productivity benefits. This is an exciting new era.
While it is true that strong typing is a requirement for the best performance (and this remains so), the productivity benefits of strong typing have been known for a long time.
I mean, just look at languages like C# and Java. These are well established, extremely popular languages, used mostly in business software, a domain where performance is rarely critical. Yet these languages are very popular, not least because they make it easier for programmers to understand and work with other people's code, and because they provide good tooling, both of which are hugely valuable in a business/enterprise context. Strong typing plays a major role in enabling these features.
Even when C# was still a brand new language, roughly 20 years ago, Visual Studio already provided features like "go to definition", "find references" and autocomplete out of the box. These were a major reason for people to adopt the language.
It's no surprise that people like Anders Hejlsberg, who created C#, later went on to create TypeScript. They already understood the productivity advantages of strong typing and wanted to bring those to the web.
That said, I definitely think you're right to point out that this is a new thing for industry, and not just a swing back to the kind of types that were previously mainstream. I'm excited too!
None of these features are related to loosening the type system.
Foo f = fooFromElsewhere; // explicit typing (old)
auto f = fooFromElsewhere; // type inference (new)
Bar f = fooFromElsewhere; // still a compile error if the types don't match
I'm sure someone will give me a better example, but consider std::sort in C++ before closures. If you want to sort one array by another (say, you have an array of indices and an array of values, and you want to sort the indices by the values), I'd argue this was fairly painful unless you resorted to global variables or copied all of the data into some intermediate format. You'd end up having to write or generate a class solely for the purpose of being able to pass a member function into sort that could access the values. Today it's trivial, because you can write a lambda that closes over the values and pass the indices into sort.
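The post-closures version is trivial in any language with lambdas; a sketch in TypeScript (array contents invented), where the comparator simply closes over the values:

```typescript
const values = [30, 10, 20];
const indices = [0, 1, 2];

// The comparator closes over `values`; before closures this required
// a helper class or global state in C++.
indices.sort((a, b) => values[a] - values[b]);
// indices is now [1, 2, 0]: positions ordered by ascending value.
```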
I also suspect that a lot of this has to do with people's personalities around the concept of borders. Some people like well defined borders in general in everything they do because they approach life from a more procedural perspective, and for procedures to work they need things to be in the right boxes and in the right places. While others prefer borders to be undefined and more free-flowing because more information can pass through concepts and that allows for a more unstructured design process. It only puzzles me that there's so much energy in indie game development for highly bordered programming environments (i.e. all the energy being put into gamedev Rust libraries) when indie developers tend to be people who value borders less, as do all creative types. But I guess people really like types...
If you can work reliably with dynamic typing, that means you are very disciplined about giving the right data to the right function, in exactly the right form. That you are very disciplined about tests, possibly including fairly stupid-looking unit tests (which aren't actually stupid, at least in a dynamic context). Adding static typing on top of that wouldn't help much of course.
When I write something from scratch however, I found that static typing actually speeds up my development. It's less work, not more. Because I don't have to write as many tests, or even worry about huge classes of errors — the compiler (and if I'm lucky, my editor/IDE) just checks them for me.
I don't know the work you do, but I bet that your style could benefit from some static checks. Perhaps not the mainstream ones, but your scripts work somehow, don't they? That means they respect a number of invariants, some of which could certainly be checked at compile time, saving you significant time on stupid bugs. The result won't be TypeScript or Rust, but I don't think it would be fully dynamically typed either.
It's a point that comes back often, and that I totally agree with so it's worth reiterating. In addition to the improved dev tooling (autocompletion, hinting, refactoring), being able to write large swathes of code without actually running it and being 100% confident that it's all _valid_ (not bug-free of course) just takes a huge load off my mind.
Of course, there are huge differences between languages like Java and languages like TypeScript. Talking about "typed languages" as a homogeneous concept often doesn't make a lot of sense.
I've heard similar things before, e.g. "static typing allows you to find bugs in your code without even running it".
Perhaps the reason I'm a fan of dynamically-typed languages is that I don't see the benefit of this. Maybe my workflow is unusual, but I don't write code without running it - I run tests every time I add a few lines.
Even if I already have a REPL. I believe the main reason is that the type checker is much closer to the source of my errors than runtime checks or tests are.
In gamedev, static types don't help when a constant that tweaks the gameplay is buried inside a compiled class and you want to balance it out. Changing that one constant means either putting it in a script, which is usually written in a dynamically typed language, or recompiling the whole program, testing, changing the value, and repeating.
The only real reason I choose dynamic languages is because I spent hours on that last cycle just recompiling the whole program and throwing away all the state for a single small change, then getting the engine back to the previous state I was debugging in. I still don't understand if it was a bad habit or just how my mind wants me to program. I expect to be able to interact with my program and see how changing things affects the behavior very quickly, and a compile cycle shuts down that mode of thinking entirely. I remember Steve Yegge's essay that mentioned this, that "rebooting is dying."
There were a lot of times I could write scripts, but most of the time the code I wanted to tweak slightly was compiled, and that required a full module recompile every time. If some of my code can be compiled, then I will probably end up changing the compiled code at some point, and that means a lot of waiting.
If C# had the ability to hot reload a class like a dynamic language, to cut down the recompile cycle, I would be happy, but it sounds like it isn't possible: the old code gets mixed with the new code, leading to instability.
So I've been spoiled by a dynamic language (Lua), while acknowledging I made a trade-off for one single feature. In my case, if I used a statically typed language I would lose out on certain things and gain others, but dynamic program rewriting seems to best coincide with how I think, and I'm not sure if I should change that.
On a trivial level, C and C++ can unload & reload DLLs at runtime. On a less trivial level, I believe the Yi editor, written in Haskell, can do hot code reloading. On a practical level, I use the XMonad window manager, whose configuration involves modifying a Haskell source file (the main one, actually) and hitting a shortcut. If my modifications are correct, the whole thing reloads without losing any state (my windows are still in the same places).
For quick hacks types are not useful though.
Maybe there's a place for both?
Wouldn't it be better to approach this by asking which problem we are trying to solve? A script that is run during development, where resources are unconstrained and stability is not an issue, should absolutely prioritize the time it takes to develop and maintain the script, so using a typed language may not be useful there. A situation where small optimizations make large improvements might benefit from a typed language; maybe a physics engine for a game intended for multiple platforms?
In the OP, the author approaches it from a "systems" perspective: when you need any of three things (type inference, sum/tagged union types, or soundness), you might consider using types, and I think those could easily apply to certain areas of game development. Ignoring the nuance around the issue and being dogmatic that no scenario in a given field needs types ignores the fact that what we're really doing is writing in languages that need to be interpreted by both humans and machines.
I mostly work with Python and JS, but last Christmas I learned Rust, and it strongly occurred to me that exhaustive matching, no nulls, borrow checking and strong type inference would be a real boost to development, given the initial time to build the codebase up. I'd put money on them saving hours of hunting subtle bugs and of missing the ramifications of refactors.
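Exhaustive matching in particular can be approximated even in TypeScript; a sketch with an invented Shape union, where the `never` trick turns a forgotten case into a compile error:

```typescript
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "square":
      return s.side ** 2;
    default: {
      // If a new variant is added to Shape and not handled above,
      // this assignment becomes a compile-time error.
      const unreachable: never = s;
      return unreachable;
    }
  }
}

const a = area({ kind: "square", side: 3 });
```

This is exactly the "missed ramifications of refactors" case: add a `triangle` variant and the compiler points at every match that needs updating.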
I built some small scale game stuff using SDL2 for Advent of Code and I enjoyed rust for doing that a lot.
I think also dynamic languages work best when developers actually are knowledgeable about the underlying types and effectively write code in a typed manner anyway. It's a much worse trade-off when function signatures actually avail of loose typing to do strange things.
Learning Haskell changed a lot about the way I think about programs, and that's even as someone who primarily writes Java.
I dislike types in 30 lines of python because they're unnecessary complexity. I like them in 10000 lines of c++ because they do some of the thinking on my behalf.
If you have the option between creating a function which accepts a string (e.g. an ID) as an argument or one that accepts an instance of type SomeType, it's better to pass a string, because simple types such as strings are pass-by-value, which protects your code from unpredictable mutations (probably the single biggest, hardest-to-identify and hardest-to-fix problem in software development). I think OOP gets a lot of the blame for this, and it's why a lot of people have been promoting functional programming, but the blame is misguided: the problem is complex function interfaces which encourage pass-by-reference and then hide mutations which occur inside the black box, not mutations themselves. Mutations within a white box (e.g. a for-loop) are perfectly fine, since they're easy to spot and happen in a single central place.
If you adopt a philosophy of passing the simplest types possible, then you will not run into these kinds of mutation problems which are the biggest source of pain for software developers. Also you will not run into argument type mismatch issues because you will be dealing with a very small range of possible types.
Note that this problem of trying to pass simple types requires an architectural solution and well thought-out state management within components; it cannot be solved through more advanced tooling. More advanced tooling (and types) just let you get away with making spaghetti code more manageable; but if what you have is spaghetti code then problems will rear their ugly heads again sooner or later.
For example, a lot of developers in the React community already kind of figured this out when they started cloning objects passed to and returned from function calls; returning copies of plain objects instead of instances by reference provided protection from such unexpected mutations. I'm sure that's why a lot of people in the React community are still kind of resistant to TypeScript; they've already figured out what the real culprit is. Some of them may have switched to TS out of peer pressure, but I'm sure many have had doubts, and their intuition was right.
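A sketch of the hidden-mutation problem and the copy-on-return fix in TypeScript (the function names here are made up):

```typescript
// A function that mutates its argument through the reference it receives.
function sneakyRename(user: { name: string }): void {
  user.name = "changed"; // invisible at the call site
}

const original = { name: "Ada" };
sneakyRename(original);
// original.name is now "changed": the caller's object was mutated.

// Returning a copy instead keeps the caller's object intact.
function rename(user: { name: string }): { name: string } {
  return { ...user, name: "changed" };
}

const before = { name: "Ada" };
const after = rename(before);
// before.name is still "Ada"; after.name is "changed".
```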
I once read a Haskell (I believe, may have been SML or OCaml, this was a while ago) tutorial (can't find it anymore) that did this. It was infuriating as it completely hid the benefit of the type system. Essentially, details fuzzy, it was creating a calculator program. Imagine parsing is already done and had something like this:
eval "add" a b = a + b
eval "sub" a b = a - b
Sadly, I've seen similar programs in the wild at work (not using these languages, but with C++, Java, C#) where information is encoded in integers and strings that would be much better encoded in Enums, classes, or other meaningful typed forms.
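A minimal sketch of the difference in TypeScript: the same string is still passed around, but its type is now a closed set, so the compiler rejects unknown operations (names are illustrative):

```typescript
// The string is still there, but its type is a closed set: passing
// "mul" is now a compile-time error rather than a silent fall-through.
type Op = "add" | "sub";

function evalOp(op: Op, a: number, b: number): number {
  return op === "add" ? a + b : a - b;
}

const sum = evalOp("add", 2, 3); // → 5
const diff = evalOp("sub", 2, 3); // → -1
```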
And yet, despite correctly assessing the problem, you insist on fighting objects instead of mutability.
To make things mutable you have to clone them as such, and I can't really think of a single API in Cocoa/Foundation that vends a mutable array or string...
I think it's a very good point...
Of course you don’t send around references to mutable objects, and of course you send a function just what it needs, but that’s true regardless of the type system.
In terms of modularity and testability, the ideal architecture is when components communicate with each other in the simplest language (interface) possible. Otherwise you become too reliant on mocks during testing (which add brittleness and require more frequent test updates). I think very often, static typing can cause developers to become distracted from what is truly important; architecture and design philosophy. I think this is the core idea that Alan Kay (one of the inventors of OOP) has been trying to get across.
'I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea. The big idea is "messaging"' - Alan Kay
It's very clear from Alan Kay's writings that when he was talking about 'messaging' he was talking about communication between components and he did not intend for objects to be used in the place of messages.
Sure, interfaces should be kept small. Let's do just that, then! Recognise that we want our classes/functions/modules to be deep (a small interface-to-implementation ratio), and frown upon shallow instances in code reviews.
No need to give up static typing.
The point of good state management is to ensure that each instance has a single home. As soon as you start passing instances between functions/modules/components, you're leaking abstractions between different components. Sometimes it is appropriate to do this, but most of the time it's dangerous. Components should aim to communicate as little information about their internal state to other components as possible.
Now you've got an immutable id string, which you can access as easily as the bare one, but you can't mix it with other types of IDs, so you won't pass it to something expecting a BarId by accident. As a result: no black boxes and a clearer design.
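In TypeScript this newtype pattern is often done with "branded" strings; a sketch with invented FooId/BarId names:

```typescript
// "Branded" string types: structurally still strings at runtime, but the
// phantom brand keeps FooId and BarId from being mixed up.
type FooId = string & { readonly __brand: "FooId" };
type BarId = string & { readonly __brand: "BarId" };

const fooId = (s: string): FooId => s as FooId;
const barId = (s: string): BarId => s as BarId;

function loadBar(id: BarId): string {
  return `bar:${id}`;
}

const f = fooId("123");
// loadBar(f);                   // compile-time error: FooId is not a BarId
const b = loadBar(barId("456")); // fine
```

The brand exists only in the type system; at runtime these are plain immutable strings, so there is zero overhead.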
A variant of this is the cause of many Linux kernel issues. They basically had to cram it into macros to prevent passing real/kernel pointers to userspace by accident, because pointer is a pointer is a pointer.
Using a struct is infinitely preferable to a string.
Type inference was a major revelation to me as well in Rust. I was reluctant to learn the language because of my experience in Java with its high ceremony everywhere, mostly due to lack of type inference.
The first thing I noticed with Rust was type inference. It gives the entire language a distinctly high-level, almost scripting language feel - modulo ownership.
Similar to Rust, you can just write `val me = Person()`. No need for the new keyword.
I.e. what you find in Haskell or OCaml is much more powerful than what e.g. Go gives you. (In Go, type inference only works 'forward'.)
(I don't know enough about Kotlin to know what kind of type inference it has.)
But I agree, it's a waste to type it all out. So I use an intelligent IDE that reduces the repetitive typing:
So typing `Person.var` gives me `Person person = new Person();` or typing `Person person = ` will suggest `new Person()`
Sure, it doesn't look as appealing when creating a new object; however, I get better information when you've written something like:
Person me = something.GetOwner();
let me = something.GetOwner();
Alan Kay made essentially this suggestion when the requirements for the language that became Ada were being debated, but it was not taken up at that time.
The original C++ 'throw' declaration is an example of an ill-conceived attempt to provide and use more context, and an example where tools provide a better solution than piling on the syntax.
let me: Person = something.getOwner();
The white text on a gray background is provided by my IDE.
Honestly though, this is a tooling problem. There's no reason at all that developers should need to spend time encoding information into the source code that's already known to the compiler, but devs also need to use tools that can feed the compiler's knowledge back to them. Jetbrains Rider (and their Resharper extension for Visual Studio) have a set of options called inlay hints  that do this exact thing. Personally I tend to keep most of the type hints turned off because they do add a lot of clutter, but the feature is absolutely invaluable for parameter name hints.
The newest C# has the reversed inference for ctors too, so you can actually do this and not have to repeat it and still get it first.
Person p = new();
Someone has started to try to introduce 'var' and is getting push back from others. "it doesn't match current style" and "could be confusing" were some 'concerns'.
When you're trying to do
InternalFormatParserCustomForClientABCDEF customerFormatParser = new InternalFormatParserCustomForClientABCDEF(param1, param2);
var customerFormatParser = new InternalFormatParserCustomForClientABCDEF(param1, param2);
With that said, it's important to note that type inference is not new. I was doing type inference with Scheme nearly 20 years ago. I believe the reason it never took off in a serious way is because it combines all the downsides of dynamic typing with the downsides of static typing. Types are meant to document code. Without the annotations, you can't look at code and know what is going on. Which, ironically, is the complaint against dynamic typing. In addition to that, you get the pain in the ass of having the compiler always complaining. And because it's inferring types, the error messages are towers of baffling nonsense. It's the worst of both worlds.
One thing that never comes up in these discussions is the idea that creating good types is a skill itself. Much like naming variables, if you don't design your types correctly, your entire code base suffers. It's much easier in a dynamically typed code base to "nudge" the data in a certain direction than in a static type system where all types are locked down at initial design time. In addition, certain type systems are much harder to master than others. I've worked with dozens of TypeScript developers and not a single one really knew what they were doing. They were appeasing the compiler and little more than that. There are also plenty of footguns in TypeScript that even TypeScript experts continually forget. I could continue on the weakening of the value of TS (via "any" and "ts-ignore", etc.) and how these completely muddy any sort of metrics one may have on deciding whether types are "worth it" or not (or the fact that such metrics do not, in fact, exist). But that's enough ranting for one day.
In Rust, type inference only applies inside function bodies, which I think gets you the best of both worlds.
> In addition to that, you get the pain in the ass of having the compiler always complaining.
This has stopped so many dumb errors of mine. My types aren't complex enough to guarantee that my program is correct, but they're complex enough to at least know that my program makes sense.
I take this one step further: Having a language with rich-ish types and good error messages like rust means I can very often rely on the compiler to tell me how to fix my dumb mistakes. In other words I know where I can be just as if not more sloppy than in an interpreted language and actually get away with it for little effort. I spend a little time getting the parameter and return types as I want them, quickly write an implementation without thinking too much about references and lifetimes then let the compiler work out the details pretty much automatically.
One notable thing: in C you usually don't have to type the type name twice, but in C++/Java/C# you do. When I think about it, that was a really bad grammar mistake. Java and C# shouldn't have propagated it.
As mentioned above, I agree that designing good types is non-trivial. I think that's part of the same problem: designing good APIs is non-trivial. I can see programmers getting their hate on when dealing with codebases with crappy types and crappy APIs.
With the rise of VSCode IntelliSense and JetBrains code inspection, do you believe this is still true today? The programmer now has easy ahead-of-time access to inferred types that used to become available only at compile time or runtime.
As someone that likes to keep his editor simple (to an extent--I'm using VIM after all), I always get frustrated when people try to introduce policies or procedures that work for them and their preferred setup, and who look at me as an obstacle because I prefer a different setup.
I'm of the opinion that code should be written independent of the tools used to understand and modify that code. If there's anything about the code that needs to be communicated, it should be communicated via the code itself, whether through naming patterns, comments, types, or any other methodology that can be encapsulated in a text file.
Other than letting each developer have their own preferred processes and coding environment, it also makes it easier to SSH into a remote box and know what's going on. A quick google shows that VS Code does allow for SSHing and browsing the remote files via VSCode. That's nice, but I don't know how well it works, and how much I like the idea of allowing another program to run commands on the remote box. I like that I can SSH into a box and use the tools natively available there to read and modify the code, and that the code is prepared in a way that makes it as easy as possible.
If they don't have access, then the tools ought to be standardized to the point where they can be integrated into any editor. The Language Server Protocol seems like a step toward this.
> I'm of the opinion that code should be written independent of the tools used to understand and modify that code.
This a widespread, "common sense" opinion that I've come to disagree with strongly. No one would argue that, e.g., illustrators, 3D modelers, music producers, etc. should be so tool-agnostic—and yet their situation is quite similar. One could produce a complex piece of music in Audacity instead of using Logic or Ableton, but musicians don't have the same mentality of picking the cheapest, most austere, or lowest-common-denominator tool. Instead, they invest in tools that enhance their productivity. And that's precisely what's at stake here. Pairing (a) a language that allows implicit "smart" features like type inference with (b) an equally smart editor to make what is implicit in the code explicit to the developer as needed, is more productive than forcing the developer to make everything explicit themselves.
Re: using VIM over ssh, your choice of scenario is revealing. Why would you limit your everyday development work based on the lowest common denominator tool you're forced to use in an emergency? Also, it's not necessary to run code inspection on the remote box. JetBrains IDEs, for example, will copy a folder from ssh or a similar environment, index and inspect them locally, and then sync them back as needed.
However, it's not that I'm not aware of features available to my editor of choice, it's that I specifically don't want an editor with those features. I don't want that functionality as part of my workflow. I prefer to reduce the noise and distraction so that I can keep concentrating on what's currently important to me.
Bringing this back to what the root parent was talking about, a significant part of code maintainability comes down to how we design our classes, services, etc. It's not so much about static or dynamic typing--both can experience their fair share of problems--it's about approaching our code in a way that makes it easiest for future readers and maintainers to pick up where we left off. That's a difficult task, but one that makes a huge difference. Saying that specific editors can alleviate some of those problems misses the point: that the underlying code itself is not well designed. What I meant to add is using these editor tools not only fails to fix the underlying problem, but that it also forces developers into tools they may not want to use.
Your distinction between the code itself and editor-based tools I think is a false one. The types are part of the code, and while one can use them tolerably well on the command line alone, they are most helpful with things like type hovers. The line is blurry between language features and code structure on the one hand and the editor tooling that gets the most out of it on the other.
For an experienced developer, the tools don't make a difference. VSCode, Visual Studio, Eclipse; I've tried many of them. This is from my experience; it may or may not be universal.
Exactly. If specific tools allow specific people to work better, great. I completely support them. But it's unfair to say that what works for some will work for all.
My goal when adapting practices is to adapt the practices that allow each developer on the team to work in the way that's best for them, and to avoid rules that limit developers in their choices.
Always willing to update when a clear case is made, though. Recently I stopped my rule of "80 characters per line max" because I don't think 80 character wide screens are common enough to warrant my consideration. Now I limit line length based on what makes that line of code most easily digestible--whether 30 characters, 80, 140.
Also keep in mind that you're missing out on many other arguably essential tools such as debuggers, smarter shortcuts, and other static analysis.
Therefore, yes, I think it should be expected of a programmer to pick the right tools for the job, in the same way that it can be expected of a designer to be able to work with Adobe files.
If classic VIM doesn't offer these features, then it isn't sufficient as a code editor anymore.
It's not just a matter of whether they're available, it's a matter of whether it's a fair expectation.
I've been developing for a decade and never found that I'm "missing out on many other arguably essential tools". Typically I'm as productive or more productive than my peers.
I get that you think usage of these features is a fair expectation. Can you provide your argument for why you think that's a fair expectation?
This seems like a case of assuming the conclusion, though. Whether a text-editor-based workflow is as productive as an IDE-based workflow when avoiding features that advantage the IDE doesn't bear on whether the IDE-favoring features are valuable enough to adopt and assume everyone has access to.
With Python, for example, I can't at first see what a function needs, but at least the body gives huge clues, because Python relies less on abstract type stuff (it has issues with monkey patching and the delayed building of objects, but "if it looks like a duck..." is most of the time enough to get things done).
I use Ocaml a lot, which is very similar to F# as you know... and I document the types of all toplevel values in my code. Not because the compiler needs it, but because it helps me navigate the code more easily.
The problem to solve now is to lower the costs more and raise the benefits, not to try to eliminate them.
Does it need to swing back? Python/JS/PHP/Ruby are still plenty popular. And no, JS isn't typescript.
I phrased that carefully; obviously they aren't going to stop being "dynamically typed" under the hood. But, slowly but surely, those languages' contribution to "dynamically typed" is decreasing, and I expect it will continue to decrease.
In another 20 years, I expect "dynamically typed" will be looked at as a complete mistake by the oversimplification process of history, as the number of people who were around when they were getting popular and understand why they were so attractive decreases. I do, myself; I experienced being liberated from some really, really crotchety languages by the freedom of Python 2.0, and I understand why the people of the time analyzed the programming landscape, and decided that the problem was "static typing" rather than "bad static typing", because there weren't any examples of good static typing. But now there are, and they aren't going to go away, and I expect that barring legacy languages, the choices in the future are going to be simple static typing like Go, complicated static typing like Rust, really complicated dependent typing like $LANGUAGE_YET_TO_BE_BUILT, and tiny languages like shell designed to never write programs in them large enough for typing to even matter.
It's hard to convince non-programmers to see the beauty of a type system when they only want to print some HTML.
So what I see is a bright future for languages that can do both, types or no types.
Python is getting Mypy.
PHP (and Ruby) are dying much more rapidly than many of their peers of similar ages.
Also the Laravel framework for PHP has the most Github stars of any server-side framework: https://twitter.com/denicmarko/status/1309714816290951168
It’s because the ergonomics of the previous generation of (mainstream) languages were cumbersome: no sum types, no type inference, nullability all over, terrible error messages, abhorrence of expressions, etc. Some of those things are only indirectly related to static typing, but many people mistakenly attribute them to static typing nonetheless. The new crop of mainstream statically typed languages weds these quality-of-life improvements with the rugged practicality of the previous generation of mainstream languages (none of these features are new, but no serious team is going to switch from C++ to Scheme for the type inference alone).
> With that said, it's important to note that type inference is not new. I was doing type inference with Scheme nearly 20 years ago. I believe the reason it never took off in a serious way is because it combines all the downsides of dynamic typing with the downsides of static typing. Types are meant to document code. Without the annotations, you can't look at code and know what is going on. Which, ironically, is the complaint against dynamic typing. In addition to that, you get the pain in the ass of having the compiler always complaining. And because it's inferring types, the error messages are towers of baffling nonsense. It's the worse of both worlds.
Type inference is very much a “sweet spot” thing. Like many things, Rust nails this: you have to annotate struct fields and function arguments, but within a function body you get inference. Changes outside of the function don’t result in type errors inside of the function. Locality is key.
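TypeScript sits near the same sweet spot: you annotate the function boundary, and everything inside the body is inferred. A sketch (function name is illustrative):

```typescript
// The boundary is annotated: callers and the body agree on a contract.
function wordLengths(text: string): number[] {
  // Everything below is inferred from the annotated parameter:
  // `words` is string[], `w` is string, `lengths` is number[].
  const words = text.split(/\s+/).filter((w) => w.length > 0);
  const lengths = words.map((w) => w.length);
  return lengths;
}

console.log(wordLengths("static types are back")); // [6, 5, 3, 4]
```

Because the signature is explicit, edits inside the body can never silently change the contract seen by callers, which is exactly the locality property described above.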
> One thing that never comes up in these discussions is the idea that creating good types is a skill itself.
This is an interesting point, and I agree; however, I think the issue is less that it’s hard to do and more that dynamic typists don’t see it as a worthwhile activity at all—“why should I try to create good types? I just need to get this happy path working so I can get on with life!” Of course there are abundant good reasons (we should care about quality and maintainability and not just superficially churning through feature tickets at the expense of all else), but I think this is a sort of fundamental disagreement between dynamic and static typists.
and also possibly because C++ has type inference
Not always. If you examine history, not everything is a cycle; depending on what you look at, human efficiency improves as well.
Society goes through natural selection. The cultures, methods and behaviors that help us survive live on while methodologies that aren't as good tend to get eliminated.
The cycles in the process occur in areas not under selection pressure. It's called genetic drift, and mutations in this area can occur willy-nilly, in random steps or even cycles, if they don't have an effect on survival. The pressure in this case is survival of a business.
There's not enough cultural data on dynamic types vs. static types, but I feel that in general the dynamic type thing was a mutation. Dynamic types were nature's trial at a baby born with one kidney instead of two, because one kidney is more energy efficient to maintain. Now it's being naturally selected out.
We'll never know for sure unless we live long enough to see what happens to programming in the far future. For something to be truly called cyclical it must be adopted by a huge number of businesses, eliminated, and recreated multiple times.
Type annotations aren't for me, they're for the compiler and the IDE.
If I want to know what's going on inside a variable, I cmd+hover over it.
Are you talking about Scheme? It didn't take off for the same reason all other functional languages didn't take off. Why functional languages aren't as popular, who knows.
> Types are meant to document code.
Not true. Comments/documentation are meant to document code. Types and type systems exist to constrain the program space to produce sound programs. Type systems limit the number of valid programs. Types and type systems do not exist to document code.
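To make "limit the number of valid programs" concrete, a sketch: the more precise the type, the fewer programs the compiler accepts (the union type and names here are illustrative):

```typescript
// With `string`, every string is a legal argument - a huge space of
// accepted programs, most of them wrong.
function setLogLevelLoose(level: string): string {
  return `log level set to ${level}`;
}

// With a union type, only three values are valid per call site;
// everything else is rejected before the program can run.
type LogLevel = "debug" | "info" | "error";

function setLogLevelStrict(level: LogLevel): string {
  return `log level set to ${level}`;
}

console.log(setLogLevelStrict("info")); // "log level set to info"
// setLogLevelStrict("verbose");  // compile error: not a LogLevel
```

Any documentation value the annotations have falls out as a side effect of that constraint.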
My complaint is the compiler doesn't know what is going on.
This cuts down on the time required for the programmer to decide whether to complain or not.
> I've worked with dozens of TypeScript developers and not a single one really knew what they were doing.
I'd hate for them to start committing code that can't at the least be checked by the compiler.