A lot of people in the comments are saying how they started off in Java, or C, and hated types, but eventually grew to love them.
I started off in PHP. It was wild. Anything could be anything. Refactoring was a nightmare. Our codebase was littered with mystery variables like $hold, $hold1, and $holda, which were re-used all over the place. (Granted, those are two other problems entirely orthogonal to types.)
Then I got a job at a Java place. My god, it was beautiful. I could suddenly know what arguments functions were expecting, even if I hadn't seen them before. I could be instantly aware if I was trying to use the wrong variable somewhere, or pass in invalid data.
It was as if someone lifted me out of a dank cellar where I had previously subsisted on whatever mushrooms the rats didn't deign to eat, into a brightly lit dining room full of steak and pastries.
I took a similar path, PHP to C#, and never looked back. When I had to switch to Node for one job, I was pulling my hair out constantly (and quite literally) because of stupid things that would never have happened in what I called a “real” language (that being a typed, compiled one). I mean, for $deity’s sake, there weren’t even any dependency injection options at the time, and many, many times it turned out a bug I introduced was because of a simple typo in a camelCase property name.
I absolutely love Node for the ease of writing something quick and dirty. No dependency injection, no coding standards, nothing but a tool to quickly churn through a ton of data or to perform a single task really well. I also think there are some frameworks (using Typescript, like NestJS) that do JavaScript apps really, really well. I will still, never, ever, ever voluntarily write any kind of “real” application in a language that is not type safe again. The benefits just aren’t worth the perceived time savings...
Node isn't so bad now with TypeScript. The JS folks learned, from endlessly fixing hot-mess code, that you can't really get by with anything less than a good dev experience (imagine a 5-minute turnaround from edit to test in JS, just to find a typo in a variable name). With TS you get all the nice dev experience from JS, but also some of the C# niceness.
Having spent the last several years working in Node/TypeScript environments (and being firmly in the “I was wrong about types” camp), I think “isn’t so bad” is an overstatement. It certainly isn’t as bad. And especially as TS improves, the possibility to move more and more toward the “not so bad” ideal is there.
But there’s some truly awful stuff in the ecosystem; in the underlying language and the platform’s DNA; in the compromises TS (rightly) makes to be a productive real world tool; in the commonly used tooling; and just peppered throughout everything you can expect to encounter in common third party libraries.
Overcoming all that awfulness requires a lot of additional effort, is inherently limited, and isn’t common in the community (although that too is improving as TS becomes more popular, and as safer patterns become more idiomatic).
I think if I were building a new greenfield project with my choice of platform today, it would be a difficult choice whether to take all that I’ve learned in Node/TS and accept those trade-offs, or to invest in learning another platform.
Honestly haven’t given it a lot of thought, as I’m not currently starting a greenfield project with my pick of platforms.
But I think off the top of my head, languages that I’d look at as first contenders include F#, C#, Kotlin, Swift. I’m sure there are other good choices, for the kind of space I tend to work, that are equally productive and have enough of a community for me to be comfortable adopting them, but I would need to spend some more time researching options to really say with confidence what else I would consider.
All tied in with one singular company each that calls all the shots, for good and for bad. You kinda get all their other stuff shoved down your throat along with the language, and things can get bloated fast. But at least things are streamlined. Node.js had an uprising with io.js in 2014-15 and is now governed by a foundation. I like that a lot. I sleep soundly knowing that no company can pull the rug from under me on a whim.
At least C# (and Swift too) is pivotal to that one big company's offering. Free software and openness are important, but they are not a silver bullet against bad management.
If most business-side people actually knew what weakly typed languages meant for their company, I can't imagine so many of them going for PHP/JS as often as they do. You're always one expression with a `$contact` instead of a `$contract` in it away from a cancelled weekend trip or a humiliating sales demo.
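To make that concrete, here is a toy TypeScript sketch (all names hypothetical) of how the same typo plays out with and without types:

```typescript
// With an untyped bag, a misspelled key silently yields undefined:
const row: any = { contract: { value: 50_000 } };
const total = row.contact?.value ?? 0; // typo: "contact" instead of "contract"
console.log(total); // 0 -- the bug ships silently

// With a declared type, the same typo is a compile-time error:
interface Row { contract: { value: number } }
const typed: Row = { contract: { value: 50_000 } };
// const bad = typed.contact; // error TS2339: Property 'contact' does not exist
console.log(typed.contract.value); // 50000
```

The dynamic version runs happily with the wrong number; the typed version refuses to build.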
If the business side is making these kinds of technology choices, the business itself and the engineering side both have significantly worse and more important problems. And those problems will inevitably create those cancelled weekends.
Honestly, I think there’s an equivalency there. Someone who is an “expert” in JavaScript often means someone with just a couple of years of experience. You can hire a high-end JS dev for a lot cheaper than you could in any other language or platform.
The other issue with dynamically-typed languages is that the language sometimes "helpfully" fixes the types for you. I came across one JS project that said this:
for (var i = 1; i != Math.pow(2, 16); i <<= 1)
Replacing Math.pow(2, 16) with 1 << 16 effected a 30× speedup--and this was in SpiderMonkey, which tends to be a little less finicky about optimizing mixed types than V8.
(It's been a while since I coded PHP, but my recollection is that PHP pulls similar strings-are-integers tricks in a few places.)
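A quick sketch showing the two loop variants are behavior-equivalent; only the representation of the arithmetic differs (the 30x figure above is specific to that SpiderMonkey measurement, so re-measure locally):

```typescript
// Original form: Math.pow returns a floating-point value, which can
// defeat the JIT's integer specialization of the loop variable.
function powersFloat(): number[] {
  const out: number[] = [];
  for (let i = 1; i !== Math.pow(2, 16); i <<= 1) out.push(i);
  return out;
}

// Rewritten form: 1 << 16 stays in the engine's integer fast path.
function powersInt(): number[] {
  const out: number[] = [];
  for (let i = 1; i !== 1 << 16; i <<= 1) out.push(i);
  return out;
}

// Both visit exactly 1, 2, 4, ..., 32768 -- sixteen values.
console.log(JSON.stringify(powersFloat()) === JSON.stringify(powersInt())); // true
```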
This doesn't look to me like a dynamic typing problem. All the types are Javascript numbers. The issue probably stems from Math.pow's flexibility; it can even accept fractional exponents. The more general algorithm is probably slower.
Even though I learned C/C++ in school, I started off my career with a typeless language, Perl. I loved it for its simplicity and power to quickly spool up working code, but realized it was problematic to use for large projects for many of the same reasons you state. I then switched to a Java project and was immediately frustrated with types because of how verbose it was, but after a while I came to appreciate just how beautiful and pragmatic it was, especially in its ability to help avoid so many runtime bugs that have been the bane of my existence in the Javascript, Clojure, and Ruby world.
I've often felt that some people dislike types because they expect to be able to write code in a certain way that they know will make some very narrow happy path work now, and they get really frustrated when the compiler tells them that there are other paths in the code that don't work. "Why is this stupid compiler slowing me down?!" This frustration betrays the programmer's indifference toward the broader quality of the project and their willingness to trade bugs in other paths for a feature that appears to be working. The cognitive mismatch is that a type system is intended to change the way you think and program so you can move quickly on many paths at once (not only your very narrow happy path)--with a well-crafted type system, we can move fast and have quality. Note also that 'quality' isn't just about bugs, but also about a code base that is maintainable, similar to the GP's and the parent's observations about the unmaintainability of their PHP and Perl code bases.
Yeah, I agree, and for this reason, when I teach type systems to new programming students, I tell them that a type is somewhat analogous to a building material. You have brick, wood, steel, and iron. You want to build a house on solid bedrock in an area with frequent forest fires. What material would you use? Most would say brick.
Then I tell them about strings and ints. You can represent 66 both with strings and with ints, but one is usable for computation, while the other is simply usable to write Alice in Wonderland with, or simpler forms of text. Sometimes I then get the question: why can't a string be both computational and a means of character display? And then I list the upsides and downsides of such a system, just like the upsides and downsides of mixing brick and wood to build the outer wall of a house.
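A concrete version of the 66 example, sketched in TypeScript:

```typescript
// "66" the string vs 66 the number: same glyphs, different operations.
const asNumber = 66;
const asString = "66";

console.log(asNumber + 1); // 67    -- arithmetic
console.log(asString + 1); // "661" -- concatenation

// A language where one value is both would have to pick a single
// meaning for "+" here, which is exactly the trade-off being taught.
```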
I'm curious if people can think of other analogies that they use for teaching.
Why do you need an analogy at all? Are your students deeply familiar with the construction characteristics of wood and brick already and ready to draw parallels to software? How do those things relate at all to the subject at hand, strings as “means of display” (data?) versus strings as “computational” (keywords, operators, etc?).
Personally, I avoid analogies because they usually mean I don’t really know how to teach the topic, and I’m “hand-waving” on the fly. I either find a way to build on the student’s existing knowledge or I “park” the topic for discussion when the student has enough knowledge to give a correct answer.
I appreciate analogies when I start learning something. Although it's not a perfect representation of what's being taught, it makes it easier for me to start thinking about the topic I'm learning. I can fill in the details later. I think it's useful for certain types of learners such as myself. Although I'm not deeply familiar with the characteristics of construction materials, I was more familiar with them compared to types when I was learning about them. So it would've been a helpful analogy for someone like me.
I don’t know if this is pedagogically useful, but I think of types as shapes, variables (including struct fields and function parameters) as a shaped hole, and values as a shaped thing that can fit (or not fit) into those holes. Kind of like the children’s toy. You don’t want to pass a circle in to a function that needs a square shaped thing, and the type system helps make sure you don’t do that by accident.
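In TypeScript the analogy is almost literal, since the type system is structural and checks whether a value's "shape" fits a parameter's "hole". A minimal sketch with hypothetical shapes:

```typescript
// The shape-sorter toy, expressed as structural types.
interface Square { side: number }
interface Circle { radius: number }

// This "hole" accepts only square-shaped things.
function squareArea(s: Square): number {
  return s.side * s.side;
}

console.log(squareArea({ side: 3 })); // 9 -- the square fits
// squareArea({ radius: 3 }); // compile error: a circle doesn't fit this hole
```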
If we think of variables as boxes and types as labels on those boxes, does it make sense to have labels on the boxes if the complexity of opening the box just to see what is inside goes up?
Probably a little. I might lack average cognitive facilities, but I find that types pay off for me very early. When I worked in a Python shop, I would prototype in Go because the types helped me move fast, and then I could port it to Python (often for a huge performance loss, not to speak of maintainability) to integrate with the code base.
On the other hand, Python’s repl was nice for a small handful of tasks (although I mostly used it for figuring out what the actual type of some variable was, which is obviously a non-problem in the statically typed world).
In the Clojure world we use clj-kondo, maps, and spec to solve these problems.
We get editor-time feedback on mistakes, with optional type hints plus light inference via clj-kondo, and a data modelling system that we can export as database schema, JSON Schema, etc.
And a constraint system dynamic enough to express something like “all human names must be ‘Tony’, but only on Tuesdays”.
The same constraint can be shared server side and client side without writing it twice and without learning more syntax.
Additionally, you can check what the expected inputs for a function are by checking the function specs. I think guardrails pro will make this more ergonomic when released.
And finally, you can ask the constraint system to generate valid examples of the constraints, which is great for mocking data.
I don't miss type systems, but I also understand that if you're not using the solutions here, then you're in trouble.
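For readers unfamiliar with spec: here is a rough TypeScript rendering (a hypothetical function, not the spec API) of the Tuesday rule as a plain predicate over data, which is essentially how a spec-style system treats constraints:

```typescript
// "All names must be 'Tony', but only on Tuesdays" -- a constraint that
// is just an ordinary function, so it can run server side or client side.
function validName(personName: string, date: Date): boolean {
  const isTuesday = date.getDay() === 2; // 0 = Sunday ... 2 = Tuesday
  return isTuesday ? personName === "Tony" : personName.length > 0;
}

console.log(validName("Tony", new Date(2024, 0, 2)));  // a Tuesday -> true
console.log(validName("Alice", new Date(2024, 0, 2))); // a Tuesday -> false
console.log(validName("Alice", new Date(2024, 0, 3))); // Wednesday -> true
```

The point is that the constraint is data-and-function, not type syntax, which is what makes it hard to express in most static type systems.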
Yeah, that's why I think each language benefits to a different degree from having a static type checker, and from its various features as well. You can't just blanket-say all type checkers are bad, or all languages without one are bad.
Clojure is a good example here. It actually can be used with a very powerful static type checker, core.typed, yet its users chose not to, for reasons that suggest a different choice might have been made in, say, JavaScript. The REPL, for example, catches lots of type errors as you code. Other languages don't have such a coding workflow, so a static type checker feels really great there, in that it too will catch type errors as you code, etc.
Does anyone still use core.typed? My impression was that Circle CI's article announcing that it was moving away from core.typed was effectively a death blow to its community. One of Circle CI's founders later went on to use OCaml instead of Clojure on his next project (Dark) precisely for its static type system.
None of that is to say OCaml is "superior" to Clojure in some way. I disagree with a lot of the ways that the OCaml type system has evolved and I wouldn't be surprised to see people who have moved to Clojure from OCaml. However, having programmed professionally in Clojure (although it's been quite a few years so I'm not familiar with the latest advancements in e.g. spec) I still think a Clojure-like language could benefit from a static type system.
I don't think it'll work for Clojure itself because of a variety of patterns and choices in the standard library (which is in part why I think core.typed died, we also found core.typed painful to use in some of our experiments at my old job both in how it interacted with Clojure and the tooling around it). And philosophically Rich Hickey would probably kill Clojure before he ever considered designing Clojure around a static type system. However a programming language based off the same data driven ideas could maybe do it.
While a REPL and hot code reloading are absolutely huge productivity boosts, they are more or less orthogonal to the benefits provided by a good static type system (see e.g. hot code reloading with Elm or Purescript which comes quite close).
The thing that static type systems provide over tests and runtime contracts is the ability to constrain users of the API of a library. We use regression tests to make sure regressions in code we write doesn't happen again. Likewise types are effectively regression tests at the API level to make sure certain regressions in code that calls our code doesn't happen again. That is an extremely powerful capability that I consistently miss in dynamically typed languages.
The thing is, core.typed is a really powerful type system, but the ergonomics with the way Clojure works didn't work out, so people prefer not to use it when developing with Clojure.
That's what I find interesting about it. Not all languages benefit from a type system in the same ways; some, like Clojure, actually get crippled. Now, it can mean that you need to find the right kind of type checker that provides the correct ergonomics for Clojure, and maybe that would work. But it's still quite interesting.
For example, Erlang has a bit of a similar thing: Dialyzer made specific choices to work within Erlang's design. Had it not done so, it probably wouldn't have found adoption. Same with TypeScript.
So what's interesting here is that you have an apples to apples comparison where a language is found to be better without the constraints of static type checking.
When you look at other statically typed languages, they're often designed around the static type checker. That's the main focus, and the language itself revolves around it. So obviously, in such a language, the type checker would be a necessity, as it's the main draw. So it's interesting to look at Clojure for a counterexample.
That said, JavaScript you could argue is also an apples to apples comparison, and people opted for types. It'll be interesting to see also where Ruby and Python go, now that they have type checking features as well.
I still can feel very lost when refactoring a complex clojure system as opposed to something like Rust, because you have very little information about all the places in your codebase where a certain assumption was made.
You can go crazy and spec everything, in fact that can help, but:
- In practise, nobody does it
- Specs come with no guarantees. They could even be wrong.
- The official implementation stubbornly insists on not checking return types, so half of your annotation may just be glorified documentation (although you can use third party libs like orchestra)
Just imagine: Add a new required field to a spec, and get a convenient list of source code locations that you need to review. That's the promise of a statically checked system. It's not a silver bullet, but not having this leads to what I like calling "refactor anxiety" (i.e.: did I handle all cases?)
I still love clojure no matter what. I think in practise you can express so much, so elegantly, and with far less code, that your project size is always sorta manageable.
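For contrast, here is a minimal TypeScript sketch (hypothetical `Order` type) of the "convenient list of locations" workflow:

```typescript
// Every object literal below satisfies Order today. Adding a new
// required field (say, `currency: string`) turns each literal into a
// compile error -- a ready-made review checklist for the refactor,
// which is exactly the antidote to "refactor anxiety".
interface Order {
  id: string;
  total: number;
}

const first: Order = { id: "a1", total: 10 };
const second: Order = { id: "b2", total: 25 };

console.log(first.total + second.total); // 35
```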
I did Java dev for a while, after doing C before that. I felt like everything needed a pile of typecasting in front of it to work, even though these were often objects of classes you'd think should all play nicely. I realized only after dealing with it for so long that Java wasn't supposed to work that way; but how things can work when well written, vs. old apps where half the code is written by a long line of 4-month co-op students, are two very different things.
Ultimately I think people just weren't given any credit for making good, reusable types, because then the next dev who submits a better feature faster using your work gets a raise, but you look like a kook ranting about best practices who doesn't do anything "business".
Type systems limit the set of programs accepted by the compiler. A sound type system will reject every bad program - but also may reject some good programs. Type systems therefore also will have escape hatches to let the programmer overrule the compiler.
Bad type systems need you to use the escape hatches frequently - you can't write much C without using casts.
I haven't yet used a language with no need at all for an escape hatch - but some languages need them far less often than others.
Java's type system is better than it was. The main places it still has weaknesses are around exceptions (you will need to wrap checked exceptions in runtime exceptions in places you'd have preferred to specify an exception type via generics, eg) and the occasional cast after an instance type check.
You can phrase good development practices in business terms: what are the risks to the business due to sloppy code? Are they greater than the risk of being slow to market?
Sloppy code has accumulating costs. A good type system can help greatly with refactoring to address those costs, but it cannot help with the attitude that those costs aren't real and don't need to be paid.
My first dynamic language was Python, and the quality of the docs and the overall APIs of the major libraries softened the landing.
But later I noticed it was parts of MY code that became a mess. The problem with delaying good design is that good design GETS delayed.
And fixing that later is a problem. Major libraries and core APIs have the time frame and focus needed to polish them, but the rest of the code mostly doesn't, so it stays in the awkward phase of "it will be refactored later, maybe. Perhaps..."
Then later I moved to F# and Rust, where I can't delay fixing bad design for long.
It's a chore to slow down at the start of the coding phase, but the later speed-up is huge: I don't need to fix my past mistakes by the truckload...
Early on at the company I'm at now they decided to exclusively use $params for param passing to functions as the one true way to do things. It supports easy default params, named params, an easy passing of config through to deeper layers without needing to know about it at intermediate layers.
What it doesn't support is any sort of readability as a code base scales. It's often hard to know how the function you're calling will behave because the param can tweak some flag and totally change the behavior.
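A hypothetical sketch of that convention (not the company's actual code), showing how a flag set at the top silently changes behavior in a deeper layer:

```typescript
// The "$params bag" pattern: one untyped object carries named params,
// defaults, and pass-through config for every layer below.
function renderReport(params: any): string {
  const { title = "Report", upperCase = false } = params;
  const body = formatBody(params); // same bag handed straight down
  return (upperCase ? title.toUpperCase() : title) + "\n" + body;
}

function formatBody(params: any): string {
  // A flag set by the caller several layers up changes behavior here,
  // with nothing in this function's signature to warn you.
  return params.compact ? "1 row" : "1 row\n(details...)";
}

console.log(renderReport({ title: "Q3", compact: true })); // "Q3\n1 row"
```

Nothing about `renderReport`'s signature tells you that `compact` exists, which is the readability problem at scale.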
Apropos PHP: I hate it with as much passion as the next guy, but I am quite impressed with what Facebook managed to do with Hack! (Including adding lots of types.)
Don't forget, much of that effort has now trickled down into the PHP language itself. We've long had scalar types for function parameters and return values, but with 7.4 and 8.0 we've added union types[1], nullable types[2], and typed properties[3]. While not a part of the language yet, generics can be annotated with the linting tools PHPStan and Psalm[4], and native support for these annotations is coming to the next release of PhpStorm[5].
Using PhpStorm, I've worked with 100 kloc PHP codebases where almost every type was inspectable. And the language support and tooling are only getting better all the time.
Coming from Haskell and Scheme, I'm now waiting for them to unify the syntactic handling of variables.
At the moment, you have to put a $ in front of most variables, and no adornment for when you want to call something as a function. Reminds me of Common Lisp with its two name spaces for functions and other variables.
Yeah, the $ sigil for variables is a historical curiosity, but I don’t think it’s going anywhere. It would be an unacceptable BC break for codebases that have implicitly relied on this “two namespaces” behavior.
I had almost the same experience, first learning Basic and Fortran, then coming to Pascal and C. The difference is, I just was a student. I didn't write anything big enough to really get the need for types. For a couple hundred lines, that only had to live for maybe a week, who needs types? So when C and Pascal demanded them, I was annoyed.
A few decades later, I really value them. My code now lives for decades, not for weeks. Lines are measured in the hundreds of thousands. Other people work with me. Types eliminate a set of errors. They're worth the effort.
I think time is where the tradeoff lies. How long are you writing for? If your code is only for this week, types maybe are a net loss. If you're writing for a month, maybe it's a toss-up. But if you're writing for a year, types are a net win.
I used JavaScript for years and then used C# for years. During all those years, I also used languages with advanced type systems, like TypeScript, Scala, Idris, etc.
I still hate typing in Java / C#. I feel intellectually insulted from time to time. There are too many times I can naturally express something in natural language, or in plain JavaScript, but cannot easily express it in Java or C#.
TypeScript is great; it lets me express many things I want, but it's more or less bloated and inconsistent. I always have to worry about whether the signatures are hard to read for my team members.
Interestingly, TypeScript reveals a lot of use cases to the masses -- there's more to type systems. It's not possible to write a type-safe 'printf' in most statically typed languages. Maybe a full-sized dependently typed language like Idris will come into sight in the following decades.
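TypeScript's template literal types (4.1+) actually make a toy type-safe 'printf' expressible. A hedged sketch, with arguments passed as a tuple for simplicity:

```typescript
// Parse the format string at the type level: each %d demands a number,
// each %s a string, in order.
type ArgsOf<S extends string> =
  S extends `${string}%${infer Rest}`
    ? Rest extends `d${infer Tail}` ? [number, ...ArgsOf<Tail>]
    : Rest extends `s${infer Tail}` ? [string, ...ArgsOf<Tail>]
    : ArgsOf<Rest>
    : [];

function printf<S extends string>(fmt: S, args: ArgsOf<S>): string {
  const list = args as unknown as unknown[];
  let i = 0;
  return fmt.replace(/%[ds]/g, () => String(list[i++]));
}

console.log(printf("%s scored %d", ["Ada", 99])); // "Ada scored 99"
// printf("%s scored %d", [99, "Ada"]); // compile error: tuple elements swapped
```

It's nowhere near Idris-style dependent types, but it shows the value of format strings flowing into argument types.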
I've been jumping between Java, Ruby, and JavaScript for some years now.
Ruby's flexibility is amazing. I love the language, its syntax, and the frameworks built around it. Amazing.
Except for investigating and refactoring. Everything in Java, even in worse-written systems, was fairly easy to understand and find using IntelliJ and friends. Meanwhile, in Ruby, if someone gets messy, it might be impossible to figure that code out.
I like the concept of TypeScript best. Types when you want them. No types when you're just prototyping. It was actually my favorite aspect of Adobe Flex when I had a short stint coding in it 12 years ago.
I started with C and some Pascal. Then Java and Perl. Then JavaScript, then Ruby, then Python, then Elixir.
The great thing of Java was garbage collection. No more malloc/free hell and programs still worked.
The great thing about Perl and the other scripting languages was no type declaration (plus garbage collection.) And programs still worked.
After 30 years I kind of agree with the author of the OP: the time lost annotating programs with types is not offset by the benefits. In almost all cases type discovery is something the computer should do, not me. Same as for memory management: automatic, not manual.
My second job had a lot of VERY old C code. They were viciously strict about enforcing Hungarian notation for variable names. This is a workaround for your concerns, but I prefer types any day.
I also started off with PHP, about 20 years ago. Soon after, I started working with C# and had the same feelings you had about Java - code was so much more readable, maintainable and refactorable, and so much less buggy. I've never looked back from static typing, and am put off from using languages I'd otherwise be interested in (like Elixir).
A lot of people are making comments about how they can code "faster" without types.
But for the majority of the code we write, initial speed isn't that important. Understanding the code and maintaining it are orders of magnitude more important for any non-trivial code.
Types are not only a way for the compiler to understand your code and impose constraints. They're also your API to other programmers. When they see a sum type, they can understand its possible states. When they see a product type they can understand its possible values.
Understanding other people's code is at least half the job of a programmer, whether it's understanding a library or understanding code you have to maintain, or understanding your own code that you wrote 6 months ago.
It’s honestly embarrassing to hear all the worthless arguments against types.
The cognitive load of a dynamically typed (or unityped, or “untyped”, or whatever) language is massive, yet the common argument is that types “increase” the cognitive load??? How does offloading a large majority of the trivial reasoning about a program onto a type system “INCREASE” the cognitive load???
It’s just so endlessly easier to program with types
I think there are a few reasons why people develop an impression that types are overhead:
* People who are learning to code are writing lots of code but not reading very much code. I think types do the most work when trying to understand existing code.
* Lots of people's first experience with typed languages was something like C++ or Java back when they had much worse error messages.
* The kind of mistakes you make when first learning to code make the type checker feel like a pedantic nitpicker instead of a protective ally.
* Programming instruction tends not to teach technique very much. If you invent techniques that leverage the strengths of types, then great for you. If you don't, then you might program for years until you are exposed to the benefits types can provide.
Also, many beginner programmers work on small code bases in every sense of the word.
These days I will often have to glue together some tiny part of two or three enormous APIs, some of which are "auto generated" from some other system. Think LINQ-to-SQL or WCF.
It's amazing when you can take a 100 MB chunk of code, and simply "navigate" to the thing that you want using tab-complete, in the sense that "somefactory.sometype.someproperty.subproperty.foo" is almost self-evident when you press tab and cycle through the options at each step.
And then when you finally get "foo", if it's the exact unique type you were looking for, then you can be certain that you did the right thing! There's practically no need to reach for the debugger and start inspecting live objects. Just tab, tab, tab, tab... yup, that's it, move on.
If you're working with a "blank slate" PHP app (or whatever), where you've personally written most of the lines of code involved, typing can feel unnecessary.
If you're glueing together Enterprise Bean Factory Proxies all day, then strong typing is practically mandatory.
To be honest, I disliked PHP’s dynamic typing just as much as I disliked Java’s static typing, and that’s after spending over 10 years professionally developing primarily in those two languages (and some python and javascript).
The last couple of years I’ve developed stuff in haskell and elm, but I’m currently getting back into python and javascript. And I’m sniffing around, trying to figure out how to utilize typescript and python type annotations in an ergonomic way, that doesn’t feel like too much overhead.
I’ll easily admit that it’s not as easy to reach the same kind of benefit in those languages.
The magic sauce really is (ideally as global as possible) type inference combined with programming primarily via expressions instead of primarily via statements, with appropriate language support of course.
> If you're working with a "blank slate" PHP app (or whatever), where you've personally written most of the lines of code involved, typing can feel unnecessary.
> If you're glueing together Enterprise Bean Factory Proxies all day, then strong typing is practically mandatory.
Might that not be a problem, though? Shouldn't more software systems be small, elegant and well-architected rather than a spaghetti nightmare navigable only through an IDE?
I like types, I think they are great, I like editor tool, I think it is great. I even like a lot of the luxuries modern systems afford me. But … maybe we could stand a little simplification?
Have you seen the enormous scope of something like Azure Resource Manager? Amazon Web Services releases several new products or major features per day. They all have APIs.
Office and Office 365 is also a behemoth that covers entire suites of business products, front-end and back-end.
If you start to seriously talk about integrating these things with a bunch of third-party components, you're talking tens of gigabytes of binaries.
> But … maybe we could stand a little simplification?
Always. Unfortunately, that runs up against the limitations of our squishy meat brains. Especially when they're numbered in their tens of thousands. Simplification, refactoring, and code reuse requires coordination.
I too am horrified that a mere database engine no longer fits on a standard DVD disc... when compiled into a binary.
But I can download the ISO image in a matter of minutes, and use the system with a few button clicks in an IDE to produce functional software.
I know that the raw numbers should qualify as a nightmare, but at the end of the day, things get done anyway and it doesn't seem to matter that much.
I guess we're just horrified because we know how this particular sausage is made...
I agree with this 1000%. Projects (note: not programs or individual source files!) are simply easier to grok (maintain, enhance, debug, refactor) with statically (and strongly) typed languages than with dynamic ones, IMO.
Five years ago, when switching jobs, I’d pursue “full stack” positions because I’d done a fair amount of front-end dev in the past. No more; me personally, I’m backend all the way. Dynamic typing (vanilla JS) is just harder: more time-consuming, more cognitive load, IMO.
It's not perfect, but TypeScript goes a long way toward making frontend feel as safe as backend. You still have the potential for weird bugs when interacting with external JavaScript, but your internal code is pretty safe. Also, TypeScript's type system is impressively flexible, and I often find myself missing features (like unions) when working with other languages.
TypeScript allows you to have literal type discriminators in unions, which, combined with control-flow-based type narrowing of unions, effectively gives you sum types. That is what he is talking about, because most mainstream languages don't have sum types, and you need to go to strongly typed functional languages (Haskell, Scala, etc...) or modern strongly typed languages such as Rust to get access to such features.
Yes, this. Sum types in most languages are approximated by something like Kotlin's sealed classes. Sometimes you don't want to declare a whole new class hierarchy; you just want to say "this function can either return this or this".
I love love love Python for data science, in part because it's dynamically typed. I can bang things out quickly without worrying about the engineering bits, and, since I'm working in an interactive coding environment, it's generally easy enough to just inspect the values of my variables to figure out what they are.
I hate hate hate Python for ML engineering, in part because it's dynamically typed. The same features that make it so easy to hack out a quick data analysis make it absolutely awful to build for durability. For example, since stuff in production runs hands-off, you need to feel pretty confident about the return types of every function in order to feel confident you won't throw a type error at run time. Actually pinning this down can get quite complicated, though, when you're working with a library like scikit-learn that relies heavily on duck typing. Sometimes you end up having to go on a journey down a rabbit hole in order to clearly identify and document all the types your code might accept or return.
(Disclaimer: Hate aside, it's still my preferred ML engineering language. You've got to take the bad with the good, and the language gets you access to an ecosystem that is so very good.)
This is absolutely it. Untyped languages are great for glue-scripts, for exploration (of an API, a dataset, whatever), for quick-and-dirty things. As soon as your logic grows beyond "what can be appropriately expressed in <5 files" and/or "this is going to have a second developer", types become helpful.
I see it completely the other way around coming from the python, dynamically-typed side. For me, statically-typed languages have a benefit on the smaller-side of the scale, but absolutely bomb when the code-base grows. At that point everything is a FooFactory or an IInterface with no help from the IDE anyways because of IOC/DI/attribute reflection magic. And when it's that big, everyone argues over folder, package and inheritance hierarchies and the "right way" to refactor & reuse code, with the inevitable slide into yet another level of inheritance or interfaces. All the while peppered with Singletons, overloads and new abstract virtual base methods with complicated method override rules.
Obviously I exaggerate a bit, but we've all seen various incarnations of a lot of those issues.
The second you’ve “engineered” yourself into losing good IDE support, half the benefit of using a strongly typed language goes out the window, in my opinion. Though maybe some bias, because I make an IDE! :)
Happily, with TS it’s possible to have DI and IInstantiationServices and all that and still maintain good IDE support, in no small part because the IDE is built with all of those, in TS... if it were unusable, we’d fix it.
IMO dataframes are the reason why dynamic typing fits data science so well. It's certainly possible to represent a single dataframe as a static type; but representing all the slicing, column removal, joins, etc. is actually pretty hard without dependent tricks. So bypassing types for data frames is preferable. On your ML engineering point, the other side of it is that once your dataframe's schema is finalized, it really should be statically typed, so that assumptions can safely be made about what is and isn't inside of it.
> representing all the slicing, column removal, joins, etc. is actually pretty hard without dependent tricks
Disagree in the strongest possible terms, tbh.
It's the lack of static typing that gets you 3/4 of the way down your experimental pipeline only for your code to fail because column "trianing_batch" can't be found. Huge productivity loss, even with rapid iteration.
We must work very differently. I couldn't fathom that happening to me, if only because I compulsively peek at samples of the data frame every step of the way, in order to make sure the data look reasonable all the way through.
As helpful as a static type system can be, most of them do cause increased cognitive load until the FIRST RUNNABLE VERSION. Having your code running for the first time is rewarding for devs, especially those who are handling many simple codebases rather than a few complex ones.
There are great type systems that provide enough of a ramp to that first runnable version. Rust, for example, has type inference and an unusually helpful compiler.
But most languages, especially the old ones, don't have that. Because of that, coders need full-blown IDEs like IntelliJ/VS to write comfortably in those languages. Without full-blown IDEs or editors packed with plugins, it is quite a chore to navigate types: no hyperclicking, no type annotations, no docs preview. A dynamically typed language, on the other hand, is usually runnable from the first character, without having to wait for compilation, so even Notepad is acceptable to write with (though that would be really painful).
Having been on both sides, I love types in certain languages (and disdain them in others, e.g. PHP). I would still pay the price of compilation and use TypeScript for scripts that live long rather than using JavaScript. But I think both sides need to understand where the cognitive load arguments come from.
I just didn't love types when I was new to programming. I learned Python because that's what I saw pitched to me all the time: there was a local Python meetup, MIT's CS courses taught Python, and Udacity eventually taught Python. I loved the language and quickly learned how to do a bunch of basic stuff.
But I wanted to make Android apps and, ugh, Java confused the crap out of me. It wasn't so much the language, but rather "URI? Where in the world do I get one of those?! Oh, you instantiate a URI with the string. In Python this is just a String..."
Or another time, once I had mastered types, I convinced my Javascript team that Typescript was Worth It. And the ensuing chaos when nobody understood how to use Types and nothing would ever compile for anyone.
All SUPER noob-y mistakes. All because I (or my team) didn't understand how to think in Types. Furthermore, people who can Think in Types often don't know how to articulate that thought process to others.
I'll also throw out there that scripting languages w/o types tend to have a lot less tooling around the language. People are used to being able to type whatever they want and things kinda just work.
Typed languages are very different. The tooling is far more robust and more able to point out errors, but also tends to be more complex than just a simple text editor.
This is changing with VS Code and LSP being a thing, but it still influences those communities in fundamental ways.
And you also see many dynamic languages working around the lack of types by describing the types anyway in the documentation, like JSDoc in JavaScript before TypeScript became popular.
You are already doing the hard work of describing your types anyway, but because your compiler doesn't know the types, it can't help you out.
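A tiny TypeScript sketch of the point: once the compiler knows the types, the property-name typo that documentation alone can't catch becomes a compile error (the `User` shape and `greet` function here are invented):

```typescript
interface User {
  userName: string;
}

function greet(u: User): string {
  // With plain JS + JSDoc, a typo like `u.userNmae` would surface at
  // runtime (or never). With TS it's a compile error:
  //   "Property 'userNmae' does not exist on type 'User'."
  return `Hello, ${u.userName}`;
}

console.log(greet({ userName: "Ada" })); // prints "Hello, Ada"
```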
There’s a continuum for sure. Type systems have their own kind of cognitive load, because you have to learn them, how to model your invariants in them, how to understand error messages, avoid common pitfalls, etc. Type systems are usually quite “general” and abstract, and that always comes with cognitive load. Quite often reading the code of a parametrically-polymorphic Python function is way easier than understanding the generalized polymorphism constructs of an advanced static type system.
There are times when the cognitive load of a particular static type system is less than that of a dynamic one, for certain classes of programs and audiences, and times when it’s not. This is fine and we should encourage development of both kinds of type systems and a shared understanding of how to pick the right ones for the job (which is a discussion nearly always missing in these debates).
For the most part this cognitive load already exists in order to write correct programs. Types don’t just appear when a type checker is present. Type systems only add on the additional burden of needing to understand the formalization of that intuition. It’s not an insignificant burden, but it’s a much smaller gap than learning to intuit about programming correctly in the first place.
It depends on what you mean by "correct". If it means "it does what I need it to do and I can move on with my life", then no, the cognitive load doesn't already exist, and formalized type theory in many cases adds an incredible amount of cognitive load.
I know that in many cases "correct" means a lot more than that, such as in proper software engineering contexts when building a program/system that needs to live and evolve for a long time among many people. I write that kind of software all the time, and I always use static type systems for it.
But I also write a bunch of ad-hoc, one-time-use programs, and I'm very glad that I don't need to reason about abstract type theory in order to parse a CSV file and add up some columns to send to a colleague.
I don’t care if my throwaway CSV munger is formally safe/sound or maintainable. I care if it gives me correct results, once, as quickly and effortlessly as possible. Which is why very few people fire up GHCi/rustc/javac for that, but instead use awk/shell/Python or something similar.
OTOH, sometimes the CSV munger one writes today is still in use five years later, when the input CSV has a UTF-8 character for the first time, and suddenly it crashes and no-one knows where the bug could even be.
UTF-8 support is completely orthogonal to static typing. There are dynamically typed languages that handle it great, and statically typed languages which have garbage support for it.
In general I know what you mean though. “What if this CSV parser turns out to be really important?” If you think there’s a high probability of that, do it in a statically typed language then. 99.9% of the data munging I’ve done has been throwaway programs to answer a question, inform some decision, or help refine a mental model.
I don't buy this "choose the right tool for the job" thing. I'd always advise you to choose the tool that your company has the most knowledge in. If there's a problem that cannot be solved in that programming language, then you can look for a better tool. But you shouldn't pick a language that nobody else understands, just to save a few lines or some time when doing the initial implementation - just to realize that you now have to constantly keep up the knowledge to be able to support it.
I consider that one of the most important criteria when picking the right tool for the job (if you're writing something to be maintained by other people in a company).
> It’s honestly embarrassing to hear all the worthless arguments against types.
I think this forum is written in a language that doesn't have types. My favourite, though, was lambda-the-ultimate.org, at some point the number one resource on the internet when it came to discussing programming languages (maybe it still is, I haven't followed it for a while), which was written in an untyped language, PHP (Drupal, to be more exact).
> Understanding other people's code is at least half the job of a programmer
This was one of the pain-points when I was working more with node.js: the function signature told you nothing - like whether the function would even return or not would sometimes be a mystery.
In very, very short scripts you can get away without types (like in a notebook for example), but once a project starts to get even medium size the tiny amount of time you put into writing a type name explicitly here and there is more than made up for by the degree it helps with the structure and correctness of your program.
This kills me about Python. So many times I cannot figure out what exactly a function expects and what it returns, sometimes even from reading the documentation! Matplotlib is especially bad.
Good grief, matplotlib and the whole scientific computing stack are abysmal. Every function takes dozens of arguments, but any given call only has to pass some subset, and depending on the subset and their runtime types, the function just tries to guess what the caller wanted to do. I get that this stuff was written by amateurs, but in a saner world it would have been addressed by now. Anyway, I moved on from Python to Go and I’m super happy (!!performance!!, !!package management!!, editor tooling, top-notch documentation generation, quality ecosystem, !!static binaries!!, and types; even if Go’s type system isn’t as robust as Rust’s, it still goes a long way).
This is the most ridiculous thing about dynamic languages; I don't understand how people can be productive in such an environment. Meanwhile, I just press a few keys and the IDE shows me everything I need to know about that function. Oh, and it also runs 100x faster.
And mypy is still very immature. You can’t denote a recursive type (e.g., a JSON type) or specify a callback that takes keyword arguments. Even getting it to load type annotations from third party packages is hard in many cases. Worse, it seems to be improving at a snail’s pace if at all.
A "JSON" type in Python is just an untyped dictionary. Is that not what you're looking for?
Also, Python's type hinting supports forward references, which are what you would use for recursive or "self-referencing" types.
Personally, I like the slow pace they're taking with the typing. It's touching the core usage of the language in a fundamental way and I don't think that can be rushed. We're seeing lots of community growth around the type-hinting, even using them at run-time, which is amazing to watch and marvel at.
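For comparison with the recursive-type complaint above, TypeScript can express a recursive JSON type directly; a minimal sketch (the alias name `Json` and the sample value are arbitrary):

```typescript
// A JSON document: a primitive, an array of JSON, or an object of JSON.
// The alias refers to itself in the array and object positions.
type Json =
  | string
  | number
  | boolean
  | null
  | Json[]
  | { [key: string]: Json };

// The annotation checks arbitrarily nested structures.
const doc: Json = { name: "ok", items: [1, 2, { nested: true }], gone: null };
```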
> the function signature told you nothing - like whether the function would even return or not would sometimes be a mystery
Yes! This drives me nuts about JavaScript! Untyped parameters are one thing, but having to read the entire function just to know if it returns anything is ridiculous.
To be fair, JS’s dynamic types aren’t solely to blame for this. There are dynamically typed languages that don’t require you to read an entire function body to know if it returns something. But those are languages without statements, where all functions return something.
I agree with this for really large codebases, but I think you can get a surprising amount done before your program becomes "non-trivial" using a good dynamic language. For example, the website you are writing this comment on has been perfectly maintainable in lisp without any static typing for the past 15 years.
It also hasn't changed much in 15 years. Now that could be (and probably is) a deliberate decision. But there are more than enough cases of projects that haven't changed because they can't...they've painted themselves into a corner, and every time they try to change something, something else breaks.
I've felt this way with Python and Ruby and Node projects (it is a major complaint in the RoR community), but have never felt that way with Scala, Java, C#, F#, Rust, or OCaml. Most of the time, when I need to make a change, I just change it, iteratively eliminate any type errors, and once the type errors are gone, the tests magically pass too.
I get the feeling that lisp doesn’t produce as many runtime type errors as Python or JS and I’m not really sure why that would be. Maybe there’s something to a functional style of programming that improves quality even apart from type checking?
Arc "feels" a lot more Scheme-like than Common Lisp-like, but pg is a CL guy afaik; my observations on the two:
Scheme avoids type errors by writing programs which could be given static types with a sufficiently powerful type-checker. If a Scheme programmer needs to define two aggregates, both of which have a property "name," they're likely to define two different functions, foo-name and bar-name, to get at them. This, though it's noisy, makes type errors more obvious.
CLOS helps avoid type errors too, since it encourages thinking not about how code interacts with a single type, but instead how it interacts with a whole _protocol_ of methods. I think multimethods in general are a powerful design tool that help avoid type errors in a dynamic setting, but I haven't had a chance to try them out in a language other than CL.
Many CL implementations also have static type-checking. For example, when I try to define a function with a type error in SBCL:
    * (defun f (x) (+ 1 x (car x)))
    ; in: DEFUN F
    ;     (CAR X)
    ;
    ; caught WARNING:
    ;   Derived type of X is
    ;     (VALUES NUMBER &OPTIONAL),
    ;   conflicting with its asserted type
    ;     LIST.
    ;   See also:
    ;     The SBCL Manual, Node "Handling of Types"
    ;
    ; compilation unit finished
    ;   caught 1 WARNING condition
CL's philosophy here differs a bit from most typed languages: SBCL will emit a warning (not an error!) for code it can statically show is impossible to run without getting a type error, while e.g. GHC gives an error for any code it can't statically show doesn't get a type error. However, in my experience, many type errors are obvious enough that SBCL warns about them (e.g. passing a vector where a list was expected, a string where a symbol was expected, nil where a non-nil value was expected, etc.), so this helps quite a bit, especially when combined with the interactive editing one gets through SLIME/slimv.
One can also attach type annotations to functions [0], and SBCL (and probably other implementations) will add a runtime CHECK-TYPE [1] unless you tell it to optimize for speed enough.
This is partly right. FP approaches do tend to reduce the volume of mistakes, and reduce the kinds of mistakes you can make.
But that’s not because type errors are less likely. For dynamic functional languages they’re only less likely because the implicit contracts tend to be more general and the data structures tend to support a high degree of polymorphism.
The reason FP approaches tend to reduce mistakes is mostly that managing state is hard, and pure functions are easier to reason about.
I have had similar experiences with Python. What endlessly frustrated me when looking up the documentation for Python’s standard library (I was using the classes for working with emails/imap) was that what kind of thing to pass in as arguments wasn’t specified clearly. It made using the API so much harder because I had to trial/error to figure out how to construct the proper object that the API would accept.
The reason people argue for being able to write and run code quickly is it helps with prototyping and iterating on a concept. For a lot of people, overly verbose typed languages kill that creative cycle. Type systems are great for data validation and self documenting code when you need those things but they essentially double the work when you’re first getting something off the ground.
For many enterprises, getting code out the door is a key win. So in an idealist conception, maintainability, correctness and good design are everything. In the practical reality of today, these often have to take a back seat to speed of implementation.
What I'd like to see is a language that allows typeless programming to start but which is designed to allow the imposition of types at a later point.
Also, TypeScript is designed to work the same way, as it can coexist with untyped JavaScript code as you introduce it to the code base (it still has to run through a compiler, of course).
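A minimal sketch of what that gradual adoption can look like in a `tsconfig.json` (these are real TypeScript compiler options; the values are just one reasonable starting point):

```json
{
  "compilerOptions": {
    "allowJs": true,
    "checkJs": false,
    "strict": false
  }
}
```

`allowJs` keeps existing untyped `.js` files compiling alongside new `.ts` files, `checkJs` can later opt the `.js` files into type-checking (or per file via a `// @ts-check` comment), and `strict` can be ratcheted up as annotations accumulate.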
Types are cool, but when I worked on Eclipse APIs (plugins), god forbid, they were utterly useless. Design matters: with 32 intermediate classes to do just about anything, 12 of which are totally unrelated, suddenly types don't help you much.
Also it was full of Option types disguised as potentially empty arrays.
I followed a similar trajectory. Types were the bane of my early career. Hideous, extraneous.
But really, they're the light at the end of the tunnel once you've worked your way through the dynamic / weak typing minefield. It took a lot of Python, JavaScript, and Ruby for me to get there, but now I'm way more comfortable on the other side.
The correct type system is actually way more expressive than not having strong static types. Sum types let you combine multiple return types elegantly and not be sloppy. Option types remind you to check for an absent value.
Static types let you refactor and jump to definition quickly and with confidence.
Your interfaces become concrete and don't erode with the shifting sands of change. As an added bonus, you don't need to precondition-check your functions for type.
Types are organizational. Records, transactional details, context. You can bundle things sensibly rather than put them in a mysterious grab bag untyped dictionary or map.
Types help literate programming. You'll find yourself writing fewer comments as the types naturally help document the code. They're way more concrete than comments, too.
With types, bad code often won't compile. Catching bugs early saves so much time.
Types are powerful. It's worth the 3% of extra cognitive load and pays dividends in the long haul. Before long you'll be writing types with minimal effort.
Same here! I started with C++ and Java, learned to hate excessive typing, went through a long period of dynamic typing, and now I'm at the point you and the author are.
I still code a lot of Common Lisp on the side, but my Lisp code now looks entirely different than it looked just 3 years ago. The language standard does support optional typing declarations, and there's an implementation (SBCL) that makes use of it to both optimize code and provide some static typechecking at compile time (with type inference). So my Lisp code now is exploiting this, and is littered with type declarations.
However, the CL type system is very much lacking compared to Rust or Haskell. I'm hoping one day someone will make a statically, strongly typed Lisp that still doesn't sacrifice its flexibility and expressive power. I'd jump to that in an instant.
My experience with strong types is limited: I'd done the thing of learning some C, doing professional stuff in Ruby for several years, and then discovering the ridiculous power strong types can have while doing some professional stuff in Go.
Typed Racket [1] was really a revelation to me in that regard. I'd be curious how developers with more strongly-typed language experience feel about it.
> However, the CL type system is very much lacking compared to Rust or Haskell. I'm hoping one day someone will make a statically, strongly typed Lisp that still doesn't sacrifice its flexibility and expressive power. I'd jump to that in an instant.
This part inspired me to look up the wiki page "Haskell Lisp" [1], because I somehow remembered that some people were trying to make a Haskell that could be written in Lisp. But this page reveals even more interesting efforts:
> Shentong - The Shen programming language is a Lisp that offers pattern matching, lambda calculus consistency, macros, optional lazy evaluation, static type checking, one of the most powerful systems for typing in functional programming, portability over many languages, an integrated fully functional Prolog, and an inbuilt compiler-compiler. Shentong is an implementation of Shen written in Haskell.
> Liskell - From the ILC 2007 paper: "Liskell uses an extremely minimalistic parse tree and shifts syntactic classification of parse tree parts to a later compiler stage to give parse tree transformers the opportunity to rewrite the parse trees being compiled. These transformers can be user supplied and loaded dynamically into the compiler to extend the language." Has not received attention for a while, though the author has stated that he continues to think about it and has future plans for it.
But this page does not list everything, and there is Hackett [2], which introduces itself with "Hackett is an attempt to implement a Haskell-like language with support for Racket’s macro system, built using the techniques described in the paper Type Systems as Macros. It is currently extremely work-in-progress." - though it seems that it hasn't changed in two years.
And finally there is Axel [3] - which introduces itself with "Haskell's semantics, plus Lisp's macros.
Meet Axel: a purely functional, extensible, and powerful programming language."
Disclaimer: I never learned any Lisp; I went from C/Java/JavaScript/Bash straight to Haskell and am a lifetime Haskell beginner. Though I love the language, and the fact that I will be learning it and surprised by it for the rest of my life.
> But really, they're the light at the end of the tunnel once you've worked your way though the dynamic / weak typing minefield. It took me a lot of Python, Javascript, and Ruby for me to get there, but now I'm way more comfortable on the other side.
To me that was not the issue. It was, rather, discovering languages with powerful and expressive type systems.
My first job was in Java, most of my career afterwards was in Python. I've been type-curious for a while because of Haskell and OCaml and am very fond of Rust, I'd take a job in any of those happily.
Types in Java are still, today, largely verbose, hideous and extraneous. The cost / benefit is extremely low (or rather extremely high, you pay a very high cost for limited benefit, and the cost generally increases faster than the benefits). You can leverage types heavily, but it creates a codebase which is ridiculously verbose, inefficient (because every type is boxed), opaque, and simply doesn't look like any other Java codebase so will be very much disliked by most of the people you're working with. And the benefits from that will still, at the end of the day, be rather limited.
Me too, but I learned in Turbo Pascal and early Java which had some real limitations to their type systems. Imagine - strings of different lengths being incompatible types, arrays of different lengths being incompatible types, no generics, so no standard collections, no serialization, tons of manual typecasting. Having to write separate methods for every possible set of parameters.
Out of nostalgia, getting so frustrated with dynamic-typed code, I once tried to go back to some of that old code and make it use JSON instead of proprietary formats. That was a nightmare.
In the dynamic languages it would be utterly trivial. Just call json_encode($whatever), or $whatever = json_decode($some_string).
Modern languages with modern type systems, inference, generics, etc. that make things like that possible and relatively clean completely change the picture.
Another thing that bugs me about dynamic languages is that, of course, you have to manually check everything all the time because the compiler can't. We used to complain about the bloat of having to write all those type names and casts, but dynamic code, if it has good checks, can actually be more bloated in addition to being less expressive.
Nothing good comes easy. The dozens of hours I'd spend staring at 26 lines in R just trying different ideas to shorten/optimize/improve clarity, and that wasn't something I needed to sell that someone else would depend on for business or personal use.
But I can relate to the pressure to deliver quick results. I found myself burnt out when working on a forecast model around three years ago. The constant "how's it goin'?" tore my attention away from the work, and I'm still convinced I could have delivered a better result.
So, in a way, I agree. In another, I understand the other side of the issue, and I think there are so many less time-intensive tasks going on around engineering that there's often little awareness that something like refactoring a class for better efficiency pays in smaller but compounding ways long-term, with most of the time cost and perceived opportunity cost being immediate and short-term. It's still worth it if you really do the math on the long-term benefit.
Sum types even work when you actually have multiples of the same return type. (I.e., in Haskell `Either String String` works just as well as `Either String Int`; the types don't have to be distinct.)
Also, `Maybe (Maybe a)` works correctly, contra possibly-null values in dynamically-typed languages. It's a surprisingly significant issue given how seemingly trivial it can sound.
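The nested-Maybe point can be sketched in TypeScript with a hand-rolled tagged type (the `Maybe` encoding and the cache scenario are invented for illustration):

```typescript
// A tagged Maybe keeps Just(Nothing) distinct from Nothing,
// unlike `A | null`, where null swallows the inner level.
type Maybe<A> = { tag: "just"; value: A } | { tag: "nothing" };

const just = <A>(value: A): Maybe<A> => ({ tag: "just", value });
const nothing = <A>(): Maybe<A> => ({ tag: "nothing" });

// E.g. a cache whose entries can themselves be absent values:
const hit: Maybe<Maybe<number>> = just(nothing<number>()); // cached "no value"
const miss: Maybe<Maybe<number>> = nothing();              // not cached at all
```

With `number | null` as the cached type, a cached null and a cache miss would collapse into the same value; the tagged encoding keeps the two cases apart, which is the "seemingly trivial" issue the comment describes.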
An interesting insight I came across a little while ago is that for mainstream, industrial languages, this way of thinking about types is relatively new. It's not that we're seeing the pendulum swing back to types, it's that we're discovering them for the first time!
In earlier typed languages, the types weren't there for reasons of soundness or productivity at all. The types were there for the compiler alone, as the compiler needed them to know which machine instructions to emit for various operations. Types were just a cost imposed on programmers.
Once computers became powerful enough that we could afford to spend cycles and memory making these decisions at runtime, dynamic languages became viable and we saw industry shift over to them, except in domains where dynamic languages still weren't viable, or where existing codebases or ecosystems made it not economically viable.
Fast forward to the present and decades worth of type theory knowledge is finally filtering through to industry in the form of languages like Rust, TypeScript, Swift, Kotlin, and others. For the very first time we're embracing types for their soundness and productivity benefits. This is an exciting new era.
You're not giving the older generations of programmers enough credit here.
While it is true that strong typing is a requirement for the best performance (and this remains so), the productivity benefits of strong typing have been known for a long time.
I mean, just look at languages like C# and Java. These are well established, extremely popular languages, used mostly in business software. A domain where performance is rarely critical. Yet, these languages are very popular. Not in the least because they make it easier for programmers to understand and work with other people's code, and because they provide good tooling, both of which are hugely valuable in a business/enterprise context. Strong typing plays a major role in enabling these features.
Even when C# was still a brand new language, roughly 20 years ago, Visual Studio already provided features like "go to definition", "find references" and autocomplete out of the box. These were a major reason for people to adopt the language.
It's no surprise that people like Anders Hejlsberg, who created C#, later went on to create TypeScript. They already understood the productivity advantages of strong typing and wanted to bring those to the web.
As a bit of a nit-pick, it's not _that_ new - see languages like ML, SML, OCaml, Miranda, Haskell, Coq, etc. that combined the notion of types from programming languages and types from mathematics. It's more that it's only recently that _industry_ has been learning about it.
That said, I definitely think you're right to point that this is a new thing for industry, and not just a swing back to the idea of types that were previously mainstream in industry. I'm excited too!
It seems pretty evident to me that most successful and popular dynamically and statically typed languages are converging from different directions on a similar set of solutions. Very much reflecting the phenomenon you describe. Some simple examples: C# has moved from very strong typing of the exact sort OP criticizes (`Person person = new Person();`) to increasingly permitting looser/more expressive typing with `var`, anonymous types, pattern matching, etc. From the dynamic side, optional, loosely enforced typing is starting to grow more common (e.g., type hinting in Python, TypeScript in JavaScript) and provides a static but still flexible form of typing. So there's some happy medium where the language balances the permissiveness of dynamic typing and the expressiveness of static typing.
It can still feel that way. Take C++ for instance:
    Foo f = fooFromElsewhere;  // explicit typing (old)
    auto f = fooFromElsewhere; // type inference (new)
Now what happens if we change the type of `fooFromElsewhere` from `Foo` to `Bar`? With the old way, we need to change the code to:
    Bar f = fooFromElsewhere;
With type inference however, you won't need to change that line at all. And if the new type has enough in common with the old type, you may not change the code at all. It's just as strict as explicit typing, but it's arguably more flexible, and thus feels looser.
In cases where Foo and Bar are strictly incompatible, auto is fine. In cases where Foo and Bar are similar and might cause trouble, auto shouldn't be used.
Yeah, I was omitting a bias I have: I tend not to write class hierarchies, so my types are generally strictly incompatible. Things would be different if the code involved some big class hierarchy (as is sometimes the case, for instance in widget toolkits).
I feel like new features of popular typed languages have also helped them catch up to dynamic languages in terms of ease of use. Go back to before C++11: without `auto`, writing generic code sucked! Callbacks without lambda closures also sucked. I'm sure someone will point to some 40-year-old language that had all this, but those languages weren't popular, for whatever reason. var got added to C# in 2007, and it took more releases to let it be used in more places. Apparently it was added to Java much later.
I'm sure someone will give me a counter-example, but take std::sort in C++ before closures: say you have an array of indices and an array of values, and you want to sort the indices by the values. I'd argue this was fairly painful unless you resorted to global variables or copied all of the data into some intermediate format. You'd end up writing or generating a class with a sort function solely so you could pass in a member function that could access the values. Today it's trivial, because you can write a lambda that closes over the values and pass the indices into sort.
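For comparison, here is the "sort one array by another" case sketched in Rust (the data and names are made up for illustration). The point is the same one C++11 lambdas enabled: the closure captures the values, so no helper class, globals, or intermediate copies are needed.

```rust
fn main() {
    let values = [30, 10, 20];
    // An array of indices into `values`.
    let mut indices: Vec<usize> = (0..values.len()).collect();

    // The closure captures `values` from the enclosing scope,
    // so the comparison key can look up the value for each index.
    indices.sort_by_key(|&i| values[i]);

    // values[1] = 10 < values[2] = 20 < values[0] = 30
    assert_eq!(indices, vec![1, 2, 0]);
    println!("{:?}", indices);
}
```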
I don't really see most of my code being improved that much by types, and I also see a lot of extra complexity that people in my sphere (indie games) have to deal with when using typed languages. It just doesn't seem worth it. When you have to spend a lot of time fighting your language, and handling numerous extra concepts that don't exist in an untyped language because they don't need to, it feels like the people who swear by typed languages simply like creating extra work for themselves as a way to avoid doing what actually needs to be done, just because they want to feel productive.
I also suspect that a lot of this has to do with people's personalities around the concept of borders. Some people like well defined borders in general in everything they do because they approach life from a more procedural perspective, and for procedures to work they need things to be in the right boxes and in the right places. While others prefer borders to be undefined and more free-flowing because more information can pass through concepts and that allows for a more unstructured design process. It only puzzles me that there's so much energy in indie game development for highly bordered programming environments (i.e. all the energy being put into gamedev Rust libraries) when indie developers tend to be people who value borders less, as do all creative types. But I guess people really like types...
Statically typed languages (or at least the good ones) are disciplined so the programmer doesn't have to be.
If you can work reliably with dynamic typing, that means you are very disciplined about giving the right data to the right function, in exactly the right form. That you are very disciplined about tests, possibly including fairly stupid-looking unit tests (which aren't actually stupid, at least in a dynamic context). Adding static typing on top of that wouldn't help much of course.
When I write something from scratch however, I found that static typing actually speeds up my development. It's less work, not more. Because I don't have to write as many tests, or even worry about huge classes of errors — the compiler (and if I'm lucky, my editor/IDE) just checks them for me.
I don't know the work you do, but I bet that your style could benefit from some static checks. Perhaps not the mainstream ones, but your scripts work somehow, don't they? That means they respect a number of invariants, some of which could certainly be checked at compile time, saving you significant time on stupid bugs. The result won't be TypeScript or Rust, but I don't think it would be fully dynamically typed either.
> I found that static typing actually speeds up my development. It's less work, not more.
It's a point that comes back often, and that I totally agree with so it's worth reiterating. In addition to the improved dev tooling (autocompletion, hinting, refactoring), being able to write large swathes of code without actually running it and being 100% confident that it's all _valid_ (not bug-free of course) just takes a huge load off my mind.
Of course, there are huge differences between languages like Java and languages like TypeScript. Talking about "typed languages" as a homogeneous concept often doesn't make a lot of sense.
> being able to write large swathes of code without actually running it and being 100% confident that it's all _valid_ (not bug-free of course) just takes a huge load off my mind.
I've heard similar things before, e.g. "static typing allows you to find bugs in your code without even running it".
Perhaps the reason I'm a fan of dynamically-typed languages is that I don't see the benefit of this. Maybe my workflow is unusual, but I don't write code without running it - I run tests every time I add a few lines.
OCaml has a REPL. I use it all the time to check that a new function I just wrote is correct. Yet I still get huge benefits from the static typing: many of my errors are stupid type errors, and having the type checker tell me about them, rather than a runtime error (or worse, a wrong result), makes early prototyping much faster.
Even with a REPL already at hand, I believe the main reason is that the type checker is much closer to the source of my errors than runtime checks or tests are.
When moving from a dynamically typed language to a statically typed one, about the only thing I end up missing is hot reloading.
In gamedev, static types don't help when some constant that tweaks the gameplay is buried inside a compiled class and you need to change it to balance things out. Changing that one constant means either putting it in a script, which is usually written in a dynamically typed language, or recompiling the whole program, testing, changing the value, and repeating.
The only real reason I choose dynamic languages is that I spent hours on that last cycle just recompiling the whole program and throwing away all the state for a single small change, then getting the engine back to the previous state I was debugging in. I still don't understand if it was a bad habit or just how my mind wants me to program. I expect to be able to interact with my program and see how changing things affects the behavior very quickly, and a compile cycle shuts down that mode of thinking entirely. I remember Steve Yegge's essay that mentioned this, that "rebooting is dying." [1]
There were a lot of times I could write scripts, but most of the time the code I wanted to tweak slightly was compiled, and that required a full module recompile every time. And if some of my code can be compiled, then I will probably end up changing the compiled code at some point, and that means a lot of waiting.
If C# had the ability to hot reload a class like a dynamic language can, to cut down the recompile cycle, I would be happy, but it sounds like it isn't possible: the old code would be mixed with the new code, leading to instability.
So I've been spoiled by a dynamic language (Lua) while acknowledging I made a trade-off for one single feature. In my case, if I used a statically typed language I would lose out on certain things and gain others, but dynamic program rewriting seems to best match how I think, and I'm not sure if I should change that.
Hot code reloading and static typing are not incompatible.
On a trivial level, C and C++ can unload & reload DLLs at runtime. On a less trivial level, I believe the Yi editor, written in Haskell, can do hot code reloading. On a practical level, I use the XMonad window manager, whose configuration involves modifying a Haskell source file (the main one, actually), and hitting some shortcut. If my modifications are correct, the whole thing reloads without losing any state (my windows are still in the same places).
I think the same. This is especially true for games where you're absolutely running the game again for everything you change, and in case anything is wrong it's generally very obvious visually.
Yes. Try prototyping a quick POC/casual demo with JavaScript, then try with TypeScript. If you get back to your demo two months later (or have one other person to explain your code to), TypeScript is infinitely superior.
I can second the experience. I write a lot of Common Lisp, and these days it's typed Common Lisp for me. It adds very little overhead in terms of code writing speed, but continuously stops me from making stupid mistakes (like e.g. forgetting a function I'm calling returns a sequence and treating it as a scalar value). My comfort of writing is much better, because I spend less time in interactive debugger hunting my own typos.
I can say what I do use Bash for: create files & directories, and simple string replacements in files. Anything more involved goes into a proper program. Usually OCaml, though I can fall back to more mainstream languages (Python, C) if I need a wide audience to be able to read it, and the program is simple enough that types aren't really a problem to begin with.
Aren't you falling into the same trap that the post explains? That there are nuances around when types are useful and not useful? An indie game developer might spend a lot of time designing methods of gameplay and artwork for the game, but that doesn't mean that types should go out the window for all levels of programming.
Wouldn't it be better to approach this by which problem we are trying to solve? A script that is run during development, where resources are unconstrained, and stability is not an issue, should absolutely value the time it takes to develop and maintain the script. So using a typed language may not be useful here. A situation where small optimizations make large improvements might benefit from a typed language, maybe a physics engine of a game intended for multiple platforms?
In the OP, the author approaches it from a "systems" perspective: when you need any of the three things he lists (type inference, sum/tagged union types, and soundness), you might consider using types. I think these could easily apply to certain areas of game development. Ignoring the nuance around the issue and being dogmatic that no scenario in a given field needs types ignores that what we're really doing is writing in languages that need to be interpreted by both humans and machines.
This seems likely. It took me embarrassingly long to realize the fact that you can't understand the benefit of a feature you don't understand. Seems like an obvious tautology, but it's one I fell for over and over, and one I think grandparent is guilty of here.
I think it depends on the size of the codebase, how many people are working on things, and how long you can afford to develop before releasing.
I mostly work with Python and JS, but last Christmas I learned Rust, and it strongly occurred to me that exhaustive matching, no nulls, borrow checking, and strong type inference would be a real boost to development, once past the initial time to build the codebase up. I'd put money on them removing hours spent hunting subtle bugs and chasing the ramifications of refactors.
I built some small scale game stuff using SDL2 for Advent of Code and I enjoyed rust for doing that a lot.
I think also dynamic languages work best when developers actually are knowledgeable about the underlying types and effectively write code in a typed manner anyway. It's a much worse trade-off when function signatures actually avail of loose typing to do strange things.
What languages have you worked with? As per the OP, there are certain languages where types are much more expressive and add something to the programming experience.
Learning Haskell changed a lot about the way I think about programs, and that's even as someone who primarily writes Java.
Maybe it's a case of problem complexity. I think of types as a tool to help cope with certain things. If you're building a garden wall you probably don't need CAD. For a 747, you probably do. Though aircraft predate CAD so it's clearly not impossible to do without.
I dislike types in 30 lines of python because they're unnecessary complexity. I like them in 10000 lines of c++ because they do some of the thinking on my behalf.
> I dislike types in 30 lines of python
They are already there; you just don't want to acknowledge them. You can build the same prototype in a strictly typed language just by sticking to some primitive types like int/string and type inference, and then progress toward something more complex as your prototype grows. I personally prefer to use types right away, so the type system can guide me further and show me when I'm assuming something the wrong way.
I agree. I think the downside of static typing is that it encourages developers to pass around complex types between functions instead of simple types and I think this is a mistake.
If you have the option between a function which accepts a string (e.g. an ID) as an argument and one which accepts an instance of SomeType, it's better to pass a string, because simple types such as strings are pass-by-value, which protects your code from unpredictable mutations (probably the single biggest, hardest-to-identify, and hardest-to-fix problem in software development). I think OOP gets a lot of the blame for this, and it's why a lot of people have been promoting functional programming, but that blame is misguided; the problem is complex function interfaces which encourage pass-by-reference and then hide the mutations that occur inside the blackbox, not mutations themselves. Mutations within a whitebox (e.g. a for-loop) are perfectly fine since they're easy to spot and happen in a single central place.
If you adopt a philosophy of passing the simplest types possible, then you will not run into these kinds of mutation problems which are the biggest source of pain for software developers. Also you will not run into argument type mismatch issues because you will be dealing with a very small range of possible types.
Note that this problem of trying to pass simple types requires an architectural solution and well thought-out state management within components; it cannot be solved through more advanced tooling. More advanced tooling (and types) just let you get away with making spaghetti code more manageable; but if what you have is spaghetti code then problems will rear their ugly heads again sooner or later.
For example, a lot of developers in the React community already kind of figured this out when they started cloning objects passed to and returned from any function call; returning copies of plain objects instead of instances by-reference provided protection from such unexpected mutations. I'm sure that's why a lot of people in the React community are still kind of resistant to TypeScript; they've already figured out what the real culprit is. Some of them may have switched to TS out of peer pressure, but I'm sure many have had doubts, and their intuition was right.
If you use String for all your data types, then you are no better off than in a dynamic language. There are many string-like things that benefit from their own types, e.g. currency, identifiers, post codes. Such types should only be created by parsing valid strings, i.e. no empty strings, whitespace, illegal values, etc. They do not have to be Alan Kay "objects", despite what your language or thought leadership is telling you. They should be values with value-based equality. A modern statically typed language should let you define such a type in a few lines. This is all done in order to make illegal states unrepresentable, which is what type systems are for.
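As a sketch of what "a few lines" can look like, here is a hypothetical `PostCode` value type in Rust that can only be constructed by parsing a valid string (the validation rule is deliberately simplistic for illustration):

```rust
// A value type with value-based equality; the only way to get one
// is through `parse`, so illegal states (empty/whitespace post codes)
// are unrepresentable in the rest of the program.
#[derive(Debug, Clone, PartialEq, Eq)]
struct PostCode(String);

impl PostCode {
    // Parse once at the boundary, use the typed value everywhere else.
    fn parse(s: &str) -> Result<PostCode, String> {
        let trimmed = s.trim();
        if trimmed.is_empty() {
            return Err(format!("invalid post code: {:?}", s));
        }
        Ok(PostCode(trimmed.to_string()))
    }
}

fn main() {
    assert!(PostCode::parse("SW1A 1AA").is_ok());
    assert!(PostCode::parse("   ").is_err());
}
```

Keeping the inner field private to the defining module ensures `parse` stays the only constructor.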
> If you use String for all your data types then you are no better off than a dynamic language.
I once read a Haskell (I believe, may have been SML or OCaml, this was a while ago) tutorial (can't find it anymore) that did this. It was infuriating as it completely hid the benefit of the type system. Essentially, details fuzzy, it was creating a calculator program. Imagine parsing is already done and had something like this:
eval "add" a b = a + b
eval "sub" a b = a - b
...
Where the parsing should've at least turned those strings into something like an Operation type.
Sadly, I've seen similar programs in the wild at work (not using these languages, but with C++, Java, C#) where information is encoded in integers and strings that would be much better encoded in Enums, classes, or other meaningful typed forms.
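To make the contrast concrete, here is the string-dispatch eval above reworked around an enum. It's sketched in Rust rather than Haskell, with made-up names, but the same idea applies to the C++/Java/C# cases mentioned: once parsing produces an `Operation`, the match is exhaustive and a typo'd operation simply cannot exist.

```rust
// What parsing should produce instead of raw strings like "add"/"sub".
#[derive(Debug)]
enum Operation {
    Add,
    Sub,
}

// The match is checked for exhaustiveness: adding a new variant
// makes the compiler flag every place that must handle it.
fn eval(op: Operation, a: i64, b: i64) -> i64 {
    match op {
        Operation::Add => a + b,
        Operation::Sub => a - b,
    }
}

fn main() {
    assert_eq!(eval(Operation::Add, 2, 3), 5);
    assert_eq!(eval(Operation::Sub, 2, 3), -1);
}
```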
Yes, every statically-typed language has a dynamic language as a subset. It is up to the author to use and apply types. One can certainly write Haskell where everything is in IO and everything uses Strings.
One of the interesting things in Cocoa/Foundation is that the types are all objects, but they make a big distinction between NSArray and NSMutableArray, and the same with strings, dictionaries, and many other objects.
To make things mutable you have to clone them as such, and I can't really think of a single API in Cocoa/Foundation that vends a mutable array or string...
The correct thing is imho to pass an immutable SomeType or an interface that only exposes the parts of SomeType necessary for the calculation and doesn’t allow mutation of the object.
Of course you don’t send around references to mutable objects and of course you only send to a function just what it needs - but that’s regardless of type system.
Sometimes this will be the best approach possible, but adhering to this principle too strongly can overcomplicate the general design/architecture. It can give developers a green light to start passing around complex types all over the place, which harms the separation-of-concerns principle.
In terms of modularity and testability, the ideal architecture is when components communicate with each other in the simplest language (interface) possible. Otherwise you become too reliant on mocks during testing (which add brittleness and require more frequent test updates). I think very often, static typing can cause developers to become distracted from what is truly important; architecture and design philosophy. I think this is the core idea that Alan Kay (one of the inventors of OOP) has been trying to get across.
'I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea. The big idea is "messaging"' - Alan Kay
It's very clear from Alan Kay's writings that when he was talking about 'messaging' he was talking about communication between components and he did not intend for objects to be used in the place of messages.
Abandoning a feature just because it enables a misuse is the wrong way to do it in my opinion. Yes, some inexperienced, stubborn, stupid, or hurried developers will pass around complex types when they really shouldn't. But no, this drawback does not nullify the massive advantages of (good) static typing.
Sure, interfaces should be kept small. Let's do just that, then! Recognise that we want our classes/functions/modules to be deep (small interface-to-implementation ratio), and frown upon shallow instances in code reviews.
I think messaging is orthogonal to strong or weak typing (hence Smalltalk/Strongtalk, ObjC, Self, etc. don't let you automatically coerce objects to different types than you expected), and those systems all use messaging.
In many languages it is possible to have complex types that are pass-by-value. Rust also completely solves the mutation issues with pass-by-reference by putting the mutability of references in function signatures and only allowing one mutable reference at a time.
But still, I think it does not fully solve the architectural issue or encourage good architecture (though it can certainly help reduce bugs)... In this case you may end up with lots of duplicate instances in different blackboxes which may not be a good thing either.
The point of good state management is to ensure that each instance has a single home. As soon as you start passing instances between functions/modules/components, you're leaking abstractions between different components. Sometimes it is appropriate to do this, but most of the time it's dangerous. Components should aim to communicate as little information about their internal state to other components as possible.
It really depends on the language. Some still help you with this case in an amazing way. For example, Rust will allow you to create a wrapper type like FooId(&str). (Or add an extra 4 lines to get an owned String that's immutable.)
Now you've got an immutable ID string that you can access almost as easily as the bare one, but you can't mix it with other types of IDs, so you won't pass it to something expecting a BarId by accident. As a result: no black boxes and a clearer design.
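A minimal sketch of that newtype pattern, using owned Strings and hypothetical names:

```rust
// Two distinct ID types wrapping the same underlying representation.
#[derive(Debug, Clone, PartialEq, Eq)]
struct FooId(String);
#[derive(Debug, Clone, PartialEq, Eq)]
struct BarId(String);

// Only accepts a FooId; a BarId won't compile here.
fn lookup_foo(id: &FooId) -> String {
    format!("foo:{}", id.0)
}

fn main() {
    let foo = FooId("abc".to_string());
    let bar = BarId("abc".to_string());
    assert_eq!(lookup_foo(&foo), "foo:abc");
    // lookup_foo(&bar); // compile error: expected `&FooId`, found `&BarId`
    let _ = bar;
}
```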
A variant of this is the cause of many Linux kernel issues. They basically had to cram it into macros to prevent passing real/kernel pointers to userspace by accident, because pointer is a pointer is a pointer.
> Type inference: because having to write out every type, however obvious, is an incredible waste of time. Person me = new Person(); is ridiculous. let me = new Person(); may seem like a small improvement, but spread over the body of an entire program and generalized to all sorts of contexts means that type annotations become a tool you employ because they’re useful — for communicating to others, or for constraining the program in particular ways — rather than merely because the compiler yells at you about something it should know perfectly well.
Type inference was a major revelation to me as well in Rust. I was reluctant to learn the language because of my experience in Java with its high ceremony everywhere, mostly due to lack of type inference.
The first thing I noticed with Rust was type inference. It gives the entire language a distinctly high-level, almost scripting language feel - modulo ownership.
I love Kotlin. I also love Typescript. I'm yet to try Rust, but I'm positive that I'll love it as well. They are all born to solve the same problem I think, only from a different perspective. I use Kotlin every day and the more I learn the happier I am. I'd never go back to Java if I can help it.
Personally, I prefer the type names to be at the start of the line, so I don't need to scan to the end of the line (or guess based on a function name).
But I agree, it's a waste to type it all out. So I use an intelligent IDE that reduces the repetitive typing:
So typing `Person.var` gives me `Person person = new Person();` or typing `Person person = ` will suggest `new Person()`
Sure, it doesn't look as appealing when creating a new object; however, I get better information when the right-hand side is something less obvious than a constructor call.
I think you are making a good case for a somewhat different point: we have long reached the point where the tools we use for writing and reading programs can give us all sorts of semantic information on demand, and language syntax design should take advantage of the fact that everything the programmer needs to know need not be forced into just one flat page-of-text view of the code.
Alan Kay made essentially this suggestion when the requirements for the language that became Ada were being debated, but it was not taken up at that time.
But that doesn't mean hide everything from view just because it can be accessed via a keyboard short-cut or mouse movement - my eyes are faster and easier to move. Also, code isn't read exclusively in an IDE, nor the same IDE that I use.
A lot of tools can show the inferred type for all variables when you are reading code (PyCharm does it at runtime, but IntelliJ + Rust did it dynamically as I typed, if I recall correctly).
This has always been the case. Emacs could do all this decades ago. Genera/Lisp, Smalltalk, etc. to various degrees did this. And they all failed to make their case. Time after time. Hiding context has always been bad for understanding code. The same reason dynamic scoping went out of style the moment lexical scoping was invented. Or that GOTO is considered harmful. Or that we all prefer simple, small functions rather than multi-page monsters that do ten different things at once.
Showing all the context has never been feasible, and abstraction is about how to deal effectively with that constraint. Types are abstractions, and consequently explicitly declaring a variable's type does not provide all the context.
The original C++ 'throw' declaration is an example of an ill-conceived attempt to provide and use more context, and an example where tools provide a better solution than piling on the syntax.
Sure, type inference isn't always better, but it's usually clearer — I almost always already know the type of the thing, so explicit types are just noise — and languages with inference still let you write
let me: Person = something.getOwner();
so you still have the freedom to write the type explicitly.
All the major Rust IDEs (VS Code, IntelliJ, and probably Vim + others) show the variable type inline when you don't provide it explicitly: https://i.imgur.com/96Wfukl.png
The white text on a gray background is provided by my IDE.
My personal experience with C# and TypeScript has been that explicit typing like that rarely provides much value. Explicit typing tells me one thing and one thing only: GetOwner() returns a Person. But when I have questions, they almost always fall into the realm of "details about the GetOwner() method" or "details about the Person type". I honestly am struggling to think of any examples when seeing a variable's type answered all of my questions.
Honestly though, this is a tooling problem. There's no reason at all that developers should need to spend time encoding information into the source code that's already known to the compiler, but devs also need to use tools that can feed the compiler's knowledge back to them. Jetbrains Rider (and their Resharper extension for Visual Studio) have a set of options called inlay hints [1] that do this exact thing. Personally I tend to keep most of the type hints turned off because they do add a lot of clutter, but the feature is absolutely invaluable for parameter name hints.
In the case of Java you have the option of using "Person me" or "var me". Simply stick to the convention of explicitly stating the type at least once when declaring a variable. I have also noticed that Python allows developers to annotate the type (at least when defining functions). It's not enforced and I don't think the language uses it at runtime, but at least it offers a standard mechanism to note it in the code.
Was recently on a project that just migrated to java 11 (mid august of this year) from java 8.
Someone has started to try to introduce 'var' and is getting push back from others. "it doesn't match current style" and "could be confusing" were some 'concerns'.
When you're trying to do
InternalFormatParserCustomForClientABCDEF customerFormatParser = new InternalFormatParserCustomForClientABCDEF(param1, param2);
you really start to hit readability limits and formatting issues, like going over nominal line lengths.
var customerFormatParser = new InternalFormatParserCustomForClientABCDEF(param1, param2);
is certainly easier on the eyes, and the compiler can still infer what type 'customerFormatParser' is.
Yeah, the same debates happened in C# back when the 'var' keyword was introduced. People just fear change, but once they got hands-on experience they realized the value.
I see a lot of comments on HN the last few years that just assume the dynamic vs static debate was settled at some time. It never ended. Static typing merely became trendy because dynamic typing was the trend for many years (Ruby, PHP, Python, Perl, JavaScript). Like all fashion, things that are trendy will once again become untrendy, and the cycle repeats.
With that said, it's important to note that type inference is not new. I was doing type inference with Scheme nearly 20 years ago. I believe the reason it never took off in a serious way is because it combines all the downsides of dynamic typing with the downsides of static typing. Types are meant to document code. Without the annotations, you can't look at code and know what is going on. Which, ironically, is the complaint against dynamic typing. In addition to that, you get the pain in the ass of having the compiler always complaining. And because it's inferring types, the error messages are towers of baffling nonsense. It's the worst of both worlds.
One thing that never comes up in these discussions is the idea that creating good types is a skill itself. Much like naming variables, if you don't design your types correctly, your entire code base suffers. It's much easier in a dynamically typed code base to "nudge" the data in a certain direction than in a static type system where all types are locked down at initial design time. In addition, certain type systems are much harder to master than others. I've worked with dozens of TypeScript developers and not a single one really knew what they were doing. They were appeasing the compiler and little more than that. There are also plenty of footguns in TypeScript that even TypeScript experts continually forget. I could continue on the weakening of the value of TS (via "any" and "ts-ignore", etc.) and how these completely muddy any sort of metrics one may have on deciding whether types are "worth it" or not (or the fact that such metrics do not, in fact, exist). But that's enough ranting for one day.
> Without the annotations, you can't look at code and know what is going on.
In Rust, type inference is only inside the function, which I think gets you the best of both worlds.
> In addition to that, you get the pain in the ass of having the compiler always complaining.
This has stopped so many dumb errors of mine. My types aren't complex enough to guarantee that my program is correct, but they're complex enough to at least know that my program makes sense.
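A small illustration of that split: in Rust, function signatures must spell out their types, so every boundary stays documented, while everything inside the body is inferred. The `word_counts` helper below is a made-up example:

```rust
use std::collections::HashMap;

// The signature is fully annotated: readers (and callers) see the types.
fn word_counts(text: &str) -> HashMap<String, usize> {
    // Inside the body, inference takes over: `counts` gets its type
    // from the return type, with no annotation needed.
    let mut counts = HashMap::new();
    for word in text.split_whitespace() {
        *counts.entry(word.to_string()).or_insert(0) += 1;
    }
    counts
}

fn main() {
    let counts = word_counts("a b a");
    assert_eq!(counts["a"], 2);
    assert_eq!(counts["b"], 1);
}
```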
I take this one step further: Having a language with rich-ish types and good error messages like rust means I can very often rely on the compiler to tell me how to fix my dumb mistakes. In other words I know where I can be just as if not more sloppy than in an interpreted language and actually get away with it for little effort. I spend a little time getting the parameter and return types as I want them, quickly write an implementation without thinking too much about references and lifetimes then let the compiler work out the details pretty much automatically.
My experience learning Perl and Python after C/C++ and C# was: okay, now I really get the obsession with unit tests.
One notable thing: in C you usually don't have to type the type name twice, but in C++/Java/C# you do. When I think about it, that was a really bad grammar mistake. Java and C# shouldn't have propagated it.
As mentioned above, and I agree: designing good types is non-trivial. I think that's part of the same problem as designing good APIs being non-trivial. I can see programmers getting their hate on when dealing with codebases with crappy types and crappy APIs.
> Types are meant to document code. Without the annotations, you can't look at code and know what is going on
With the rise of VSCode IntelliSense/JetBrains code inspection, do you believe this is still true today? The programmer now has easy ahead-of-time access to inferred types that used to become available only at compile time or runtime
Should it be expected that all programmers use these text editors and have access to these tools?
As someone that likes to keep his editor simple (to an extent--I'm using VIM after all), I always get frustrated when people try to introduce policies or procedures that work for them and their preferred setup, and who look at me as an obstacle because I prefer a different setup.
I'm of the opinion that code should be written independent of the tools used to understand and modify that code. If there's anything about the code that needs to be communicated, it should be communicated via the code itself, whether through naming patterns, comments, types, or any other methodology that can be encapsulated in a text file.
Other than letting each developer have their own preferred processes and coding environment, it also makes it easier to SSH into a remote box and know what's going on. A quick google shows that VS Code does allow for SSHing and browsing the remote files via VSCode. That's nice, but I don't know how well it works, and how much I like the idea of allowing another program to run commands on the remote box. I like that I can SSH into a box and use the tools natively available there to read and modify the code, and that the code is prepared in a way that makes it as easy as possible.
> Should it be expected that all programmers use these text editors and have access to these tools?
If they don't have access, then the tools ought to be standardized to the point where they can be integrated into any editor. The Language Server Protocol[1] seems like a step toward this.
> I'm of the opinion that code should be written independent of the tools used to understand and modify that code.
This a widespread, "common sense" opinion that I've come to disagree with strongly. No one would argue that, e.g., illustrators, 3D modelers, music producers, etc. should be so tool-agnostic—and yet their situation is quite similar. One could produce a complex piece of music in Audacity instead of using Logic or Ableton, but musicians don't have the same mentality of picking the cheapest, most austere, or lowest-common-denominator tool. Instead, they invest in tools that enhance their productivity. And that's precisely what's at stake here. Pairing (a) a language that allows implicit "smart" features like type inference with (b) an equally smart editor to make what is implicit in the code explicit to the developer as needed, is more productive than forcing the developer to make everything explicit themselves.
Re: using VIM over ssh, your choice of scenario is revealing. Why would you limit your everyday development work based on the lowest-common-denominator tool you're forced to use in an emergency? Also, it's not necessary to run code inspection on the remote box. JetBrains IDEs, for example, will copy a folder over ssh or a similar channel, index and inspect it locally, and then sync it back as needed.
Well, to turn around your question: should languages enforce verbosity to satisfy a vocal minority using ancient tools? I'm not going to tell you what tools to use, but if your tools can't understand let/var/auto, then that's not my problem.
Upvoting so that people who need this may see it and benefit from it.
However, it's not that I'm not aware of features available to my editor of choice, it's that I specifically don't want an editor with those features. I don't want that functionality as part of my workflow. I prefer to reduce the noise and distraction so that I can keep concentrating on what's currently important to me.
Bringing this back to what the root parent was talking about, a significant part of code maintainability comes down to how we design our classes, services, etc. It's not so much about static or dynamic typing--both can experience their fair share of problems--it's about approaching our code in a way that makes it easiest for future readers and maintainers to pick up where we left off. That's a difficult task, but one that makes a huge difference. Saying that specific editors can alleviate some of those problems misses the point: that the underlying code itself is not well designed. What I meant to add is using these editor tools not only fails to fix the underlying problem, but that it also forces developers into tools they may not want to use.
I think you point to an important tradeoff. I also think it's clear the industry is leaning toward the decision that it is not worth giving up the advantages of static typing for the 0.1% of users who really don't want type info in their editor.
Your distinction between the code itself and editor-based tools I think is a false one. The types are part of the code, and while one can use them tolerably well on the command line alone, they are most helpful with things like type hovers. The line is blurry between language features and code structure on the one hand and the editor tooling that gets the most out of it on the other.
I tend to agree with you. Maybe more than just agree.
If any of these "IDEs", or "tools", or whatever they are called actually provided a universal improvement in software quality and development time, then I would change my view.
For an experienced developer, the tools don't make a difference. VSCode, Visual Studio, Eclipse; I've tried many of them. This is from my experience, it may or may not be universal.
It's weird to make this argument in a thread about how type inference is only problematic if you aren't using a modern tool that understands the code at a level higher than raw text. Clearly the tool is making a difference.
> VSCode, VisualStudio, Eclipse, tried many of them. This is from my experience, it may or may not be universal.
Exactly. If specific tools allow specific people to work better, great. I completely support them. But it's unfair to say that what works for some will work for all.
My goal when adapting practices is to adapt the practices that allow each developer on the team to work in the way that's best for them, and to avoid rules that limit developers in their choices.
Always willing to update when a clear case is made, though. Recently I stopped my rule of "80 characters per line max" because I don't think 80 character wide screens are common enough to warrant my consideration. Now I limit line length based on what makes that line of code most easily digestible--whether 30 characters, 80, 140.
These features are now available in all common text editors.
Also keep in mind that you're missing out on many other arguably essential tools such as debuggers, smarter shortcuts, and other static analysis.
Therefore, yes, I think it should be expected of a programmer to pick the right tools for the job, in the same way that it can be expected of a designer to be able to work with Adobe files.
If classic VIM doesn't offer these features, then it isn't sufficient as a code editor anymore.
> These features are now available in all common text editors.
It's not just a matter of whether they're available, it's a matter of whether it's a fair expectation.
I've been developing for a decade and never found that I'm "missing out on many other arguably essential tools". Typically I'm as productive or more productive than my peers.
I get that you think usage of these features is a fair expectation. Can you provide your argument for why you think that's a fair expectation?
Working with code as if it is raw text is strictly inferior to working with code as raw text + AST. If that’s how you want to work, that’s fine, but it’s probably not good to choose your team's technologies because you want to work at that lower level of abstraction.
> Typically I’m as productive or more productive than my peers.
This seems like a case of assuming the conclusion, though. Whether a text-editor-based workflow is as productive as an IDE-based workflow when avoiding the features that advantage the IDE doesn’t bear on whether the IDE-favoring features are valuable enough to adopt and assume everyone has access to.
I think yes. I've done F# for a long time, and looking at past code I just can't figure some of it out. Not only does its type system feel almost dynamic, it lets you build things that are IMPOSSIBLE to understand without a serious look at the types. And the types are invisible, hidden behind abstract constructs.
With Python, for example, I can't see at first glance what a function needs, but at least the body gives huge clues, because Python relies less on abstract type stuff. (It has issues with monkey patching and delayed construction of objects, but "if it looks like a duck..." is enough to get things most of the time.)
F# lets you specify the types of function arguments and return values. If it will make your code easier to read and reason about, why not just do that?
I use Ocaml a lot, which is very similar to F# as you know... and I document the types of all toplevel values in my code. Not because the compiler needs it, but because it helps me navigate the code more easily.
Yeah, but it's not very idiomatic, and if I need to put types everywhere, what's the point of it in the end? It's like people who add type annotations to Python: it's screaming that it's the wrong tool for the job :)
I don't think dynamic typing will ever swing back. They came out of an era when types were expensive and had moderate benefit. Now they are cheap (in modern languages) and have significant benefits. There will always be things like shell scripts and stuff, but I don't expect to ever see a big language without a good typing story ever again.
The problem to solve now is to lower the costs more and raise the benefits, not to try to eliminate them.
Python/JS/PHP/Ruby are still plenty popular... but the weight they are putting on "dynamically typed" is decreasing over time. Basically, all the dynamic languages are creeping towards the statically-typed side with various partial annotations and optional features. But there are no statically-typed languages I know of rushing to add more dynamically-typed support to their languages. If you draw all those language trends out, they don't meet in the middle; they meet solidly in the "statically-typed" area.
I phrased that carefully; obviously they aren't going to stop being "dynamically typed" under the hood. But, slowly but surely, those languages' contribution to "dynamically typed" is decreasing, and I expect it will continue to decrease.
In another 20 years, I expect "dynamically typed" will be looked at as a complete mistake by the oversimplification process of history, as the number of people who were around when they were getting popular and understand why they were so attractive decreases. I do, myself; I experienced being liberated from some really, really crotchety languages by the freedom of Python 2.0, and I understand why the people of the time analyzed the programming landscape, and decided that the problem was "static typing" rather than "bad static typing", because there weren't any examples of good static typing. But now there are, and they aren't going to go away, and I expect that barring legacy languages, the choices in the future are going to be simple static typing like Go, complicated static typing like Rust, really complicated dependent typing like $LANGUAGE_YET_TO_BE_BUILT, and tiny languages like shell designed to never write programs in them large enough for typing to even matter.
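The "creeping toward the statically-typed side" described above is visible in Python's gradual typing (a small sketch; `mean` is a made-up example): the annotations exist for a static checker such as mypy, while the interpreter itself ignores them.

```python
def mean(xs: list[float]) -> float:
    # The annotations document and constrain for a static checker (e.g. mypy),
    # but CPython does not enforce them at runtime.
    return sum(xs) / len(xs)

print(mean([1.0, 2.0, 3.0]))  # 2.0

# A checker would reject mean(["a", "b"]) before the program ever runs;
# without one, the mistake only surfaces as a runtime TypeError.
```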
I’m not so sure about that. The reason is that the total number of programmers in the world is only increasing and, more importantly, more non-programmers write code today than ever before.
It's hard to convince non-programmers to see the beauty of a type system when they only want to print some HTML.
So what I see is a bright future for languages that can do both, types or no types.
PHP is quite weak in the U.S., which I can only assume is why you got the impression it's dying. In Europe it couldn't be more popular. I'm not here to "defend" PHP, I use Ruby, but living in the Netherlands you'd be amazed how many companies use it. Ruby is a very small niche here in comparison.
> I see a lot of comments on HN the last few years that just assume the dynamic vs static debate was settled at some time. It never ended. Static typing merely became trendy because dynamic typing was the trend for many years (Ruby, PHP, Python, Perl, JavaScript). Like all fashion, things that are trendy will once again become untrendy, and the cycle repeats.
It’s because the ergonomics of the previous generation of (mainstream) statically typed languages were cumbersome. No sum types, no type inference, nullability all over, terrible error messages, abhorrence of expressions, etc. Some of those things are only indirectly related to static typing, but many people mistakenly attribute them to static typing nonetheless. The new crop of mainstream statically typed languages weds these quality-of-life improvements with the rugged practicality of the previous generation of mainstream languages (none of these features are new, but no serious team is going to switch from C++ to Scheme for the type inference alone).
> With that said, it's important to note that type inference is not new. I was doing type inference with Scheme nearly 20 years ago. I believe the reason it never took off in a serious way is because it combines all the downsides of dynamic typing with the downsides of static typing. Types are meant to document code. Without the annotations, you can't look at code and know what is going on. Which, ironically, is the complaint against dynamic typing. In addition to that, you get the pain in the ass of having the compiler always complaining. And because it's inferring types, the error messages are towers of baffling nonsense. It's the worse of both worlds.
Type inference is very much a “sweet spot” thing. Like many things, Rust nails this: you have to annotate struct fields and function arguments, but within a function body you get inference. Changes outside of the function don’t result in type errors inside of the function. Locality is key.
> One thing that never comes up in these discussions is the idea that creating good types is a skill itself.
This is an interesting point, and I agree; however, I think the issue is less that it’s hard to do and more that dynamic typists don’t see it as a worthwhile activity at all—“why should I try to create good types? I just need to get this happy path working so I can get on with life!” Of course there are abundant good reasons (we should care about quality and maintainability and not just superficially churning through feature tickets at the expense of all else), but I think this is a sort of fundamental disagreement between dynamic and static typists.
>I see a lot of comments on HN the last few years that just assume the dynamic vs static debate was settled at some time. It never ended. Static typing merely became trendy because dynamic typing was the trend for many years.
Not always. If you examine history, not everything is a cycle; depending on what you look at, human efficiency improves as well.
Society goes through natural selection. The cultures, methods and behaviors that help us survive live on while methodologies that aren't as good tend to get eliminated.
The cycles in the process occur in areas not under selection pressure. It's called genetic drift and mutations in this area can occur willy nilly in random steps or even cycles if it doesn't have an effect on survival. The pressure in this case is survival of a business.
There's not enough cultural data on dynamic types vs. static types, but I feel that in general the dynamic-type thing was a mutation. Dynamic types were nature's trial at a baby born with one kidney instead of two, because one kidney is more energy-efficient to maintain. Now it's being naturally selected out.
We'll never know for sure unless you live long enough to see what happens to programming in the far future. For something to be truly called cyclical, it must be adopted by a huge number of businesses, eliminated, and recreated multiple times.
Both classic static and dynamic typing are trending over time toward being replaced with the option of static typing with gaps: languages with static typing are more frequently implementing dynamic escape hatches, and languages with dynamic typing are getting optional static type-checking tools. Both static-first languages and optional typecheckers for dynamic-first languages often have fairly robust type inference, so the firm bright lines that used to exist between the experience of using static and dynamic languages are quite a bit blurred.
Yes, but overall static typing is trending again in the sense that more and more static typing is being used... just a little less strict because of the trapdoors as you say.
> reason it never took off in a serious way is because it combines all the downsides of dynamic typing with the downsides of static typing. Types are meant to document code. Without the annotations, you can't look at code and know what is going on.
Type annotations aren't for me, they're for the compiler and the IDE.
If I want to know what's going on inside a variable, I cmd+hover over it.
I actually wish you'd written more about this, this is important and doesn't get a proper discussion.
If you could come up with some code examples of the problems you hinted at it would make a beautiful blog post.
> I believe the reason it never took off in a serious way is because it combines all the downsides of dynamic typing with the downsides of static typing.
Are you talking about scheme? It didn't take off for the same reason all other functional languages didn't take off. Why functional languages aren't as popular, who knows.
> Types are meant to document code.
Not true. Comments/documentation are meant to document code. Types and type systems exist to constrain the program space to produce sound programs. Type systems limit the number of valid programs. Types and type systems do not exist to document code.
I don't have to look at my static code to know that it is doing the wrong thing. The compiler tells me. This is impossible with dynamic languages. It is obvious that you've never used types, otherwise you wouldn't have come up with this nonsense.
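One way to see "type systems constrain the program space" concretely (a made-up example, in Rust): a sum type makes the invalid state, connected-but-sessionless, impossible to even write down, and `match` exhaustiveness forces every remaining case to be handled.

```rust
// A connection can only ever be one of these shapes; a "connected but
// sessionless" value is unrepresentable, so no code path has to handle it.
enum Connection {
    Disconnected,
    Connected { session_id: u32 },
}

fn describe(c: &Connection) -> String {
    // Omitting either arm is a compile error, not a latent runtime bug.
    match c {
        Connection::Disconnected => "offline".to_string(),
        Connection::Connected { session_id } => format!("session {}", session_id),
    }
}

fn main() {
    println!("{}", describe(&Connection::Connected { session_id: 7 }));
}
```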
I was a statically typed fanboy for most of my early career. C# and later F# were my daily drivers and still hold a fond place in my heart. More recently, I learned TypeScript and a bit of Haskell and Rust.
However, I think dynamic languages have their place. Most of my server side code involves parsing one string and transforming it into another (JSON to SQL or the like). Something like Clojure spec is really, really useful for this, and beyond that the remaining code doesn’t really materially benefit from static types.
At any rate for my typical web application, I now prefer dynamic languages, which is something I never thought I’d say.
One last thought: soundness is just one variable to optimize for, and it’s not as important as I once thought. For most parts of my application, rough edges are not a big deal. For the really important stuff (like payments), I always write tests and also do a fair amount of manual testing. And for that stuff, the bugs are almost always logic bugs that types wouldn’t have caught.
> Most of my server side code involves parsing one string and transforming it into another
But this is the real problem. It's a huge failure that so much programmer effort goes into writing the same broken marshaling code over and over again. It's unproductive, boring, and if you believe in http://langsec.org/ also the major source of security issues.
How about a language that supports both strongly typed and dynamic data types? Not sure about other languages, but Free Pascal and Delphi support both.
I once told a JavaScript guru colleague of mine that I was spending my free time dabbling in Haskell. His response was 'lol, why would you do that?'. His point was that spending time learning things that you're not going to be using directly any time soon is a waste of time. My point was (and still is), that learning such things opens up a completely new way of thinking about problems and potential solutions.
Anti-intellectualism in programming is fascinating to me.
The variance in skill between individual programmers is immense, and languages act as force multipliers on that. There are huge opportunities that emerge from pushing yourself to explore new paradigms and domains.
> There are huge opportunities that emerge from pushing yourself to explore new paradigms and domains.
What are those opportunities? As a Rubyist, to me it makes more sense to become really good in Ruby. As I get older and more expensive, I need to be better than the 3 year experience 26 year old colleague. And if not better at least not noticeably worse.
Learning Elixir, Go, Haskell or you name it isn't gonna help me. Not in intellectual terms and not in practical terms. Also there are so many other worthy things a person can do with his time other than learning a new programming language (both within programming and outside programming), we shouldn't judge people by their passion or lack of passion for learning new languages.
> As I get older and more expensive, I need to be better than the 3 year experience 26 year old colleague. And if not better at least not noticeably worse.
Yeah, I had this line of thinking. I don't recommend it. I eventually realized I don't want to play the same game as people who are willing to throw away more than I am. Specifically, I don't want to be in positions that companies would be asking those types of questions. Younger devs will work longer, not have families, work for cheaper, and be more compliant.
So I decided I'd focus my career in the following way: no more web dev. I chose compilers and systems programming to self-study. It's been fantastic. Along the way, I learned Haskell and Rust, contributing a bit to both communities, and having a lot of fun. Eventually, I started finding my way towards seeing more jobs that were in my wheelhouse, and landing some.
> there are so many other worthy things a person can do with his time other than learning a new programming language (both within programming and outside programming), we shouldn't judge people by their passion or lack of passion for learning new languages.
Agree. But if you want to pivot your career, don't wait for your employer to sponsor you in doing so.
> Younger devs will work longer, not have families
I work in a country where work hours are 8.5 hours (the half hour is for lunch). Period. If you consistently do more than that you'd be seen as someone who can't get his work done in time. So it's just not culturally encouraged to do that. I come from a much more capitalistic society originally, so I know first hand these companies/societies you speak of exist, and I'm hesitant to ever go back also because of this issue.
The sad thing is I know for sure I write shit code after 8 hours, and I'm sure it's true for most people. So while some startups can churn out young devs like that and get away with it, I'm sure there are businesses that actually care about code quality. Even in the most capitalistic of societies.
But we stray: within web dev, the question is how can I be more valuable to a Ruby team - as someone who did 3 years python, 3 years php and 3 years elixir or as someone who did 9 years Ruby?
That's a bit of a strawman though. My initial premise was that learning on the side something orthogonal to what you use daily is beneficial to one's capabilities in general. Orthogonal being key here.
> within web dev, the question is how can I be more valuable to a Ruby team - as someone who did 3 years python, 3 years php and 3 years elixir or as someone who did 9 years Ruby?
As your career progresses, your value is less the volume of code produced and more how you improve those around you, be it via processes, mentoring, abstractions, and guidance.
I won't speak to opportunities; I agree with you that there's a lot to be said for specialization.
But, to the point about getting older: There's a fair amount of evidence that learning new, challenging (as in, outside your comfort zone) things is a big part of maintaining your mental acuity as you age. Doing crosswords is good for your brain, but only if you're relatively new to them. If you've been an avid crossword puzzler for decades, not so much. Learning a new natural language is another example of something that is supposedly good for keeping the creative juices flowing.
I wouldn't be surprised if challenging oneself with a new programming paradigm behaves similarly. Which would mean that learning Haskell might make you a better Rubyist, not because (as people often like to say) Haskell teaches you specific things that you can't learn while using Ruby, but because the mental challenge of learning to work with such a drastically different programming language just generally helps keep you sharp.
That said, I would not go so far as to speculate that this effect is any greater for learning a programming language than it is for learning Esperanto. Just pick the one that sounds more fun.
My experience matches yours. Every time I learned a new language, I gained a new understanding of a different way of doing things, and I was able to bring some of that understanding to my daily job's language. It's absolutely made me a better programmer.
I didn't just study those languages for nothing, though. I always had a purpose in mind for them, with the possible exception of Ruby which just seemed neat.
Limiting what you learn is, by definition, self-limiting.
On the one hand, that's a useless truism - we can't learn everything, so you do have to pick and choose. On the other hand, I still find the framing useful, because it reminds me that, in a sense, the decisions about what not to learn are more impactful.
To the example of programming languages: Only focusing on perfecting my skills at the tools I currently use would leave me less able to understand the limitations of the tools I currently use. And would limit my ability to take my career in new paths where I might have a need to use other tools. On the other hand, learning every single new thing might distract me from properly mastering my current tools, or might eat up so much of my free time that I'm effectively spending all my time thinking about work and never recharging my batteries.
"Never learn anything you don't see immediate use of".
Such short term thinking could explain why dynamic typing is so appealing: simpler implementations, more possibilities than most mainstream static type systems, while requiring basically no learning at all.
> I was spending my free time dabbling in Haskell. His response was 'lol, why would you do that?'. His point was that spending time learning things that you're not going to be using directly any time soon is a waste of time.
Just as an anecdote, if you are persistent enough you can learn Haskell and get a job writing it :)
Types are controversial because we can't measure engineer productivity. Full stop. I see people on here arguing that they are faster one way or the other, yet we have absolutely no empirical evidence for such a claim. What makes it more complex is that "fighting with types" often takes one out of the flow in a different way than usual programming challenges. Since it's unmeasurable, we cannot see the effect on productivity relative to team size, feature churn, product timeline, unit test discipline, etc.
For example, I'm completely convinced types add significant productivity increase to any team of more than two programmers. Likewise types are a net productivity improvement to software that has to be added to multiple times in a year. If it's a one-off write and throw away, or tiny team, it probably doesn't help as much. Also if it's a big team, but with highly siloed developers, likely doesn't add as much value. I think fighting with types FEELS slower than it is, like how dealing with a car that doesn't start immediately or a street with a lot of stop signs feels slower than just getting out and walking, despite the empirical evidence to the contrary.
However, these beliefs of mine are BACKED BY ZERO EVIDENCE. Just let that sink in. We all are arguing about a topic that inherently cannot be measured. We should be trying to tackle the measurement issue first, rather than just keep yelling about chocolate vs vanilla forever.
When all we have to go on is feelings, we get people arguing which is better: cars or bicycles, without any discussion or facts about top speed or total distance. The cyclist is always going to talk about wind in their hair feeling like they are going so fast, or how it slows them down to look for gas stations and just stand there pumping gas when they could be making progress pedaling. And to stretch the metaphor even further, there are plenty of environments when a sedan is slower than a mountain bike, and plenty where they are the same. I suspect all this is true with types, yet we have absolutely no way to know. For all we know, the "collective wisdom" of types might be exactly backwards, because basing engineering decisions on feelings rarely correlates to empirical results.
I personally believe that productivity differences depend mostly on the programmer's personality. Certain types of personalities will be more productive with statically typed systems, others with more dynamically typed ones. If this is the case then the measurement problem becomes one of personality assessment rather than anything else, and even when we manage to get this measured correctly, because it's a personality issue, we'll still have people arguing cars or bicycles without facts being able to help at all (except to point that it's mostly a matter of preference so the discussion is pointless).
Exactly. There is a lot of post-hoc rationalisation happening, along the lines of:
> I like language/technology X. Why? Well I'm obviously a super smart and rational person, so my preferences could only be based on facts and logic. Therefore everything I like must be objectively correct. QED.
Anecdotally, the more someone prides themselves on being intelligent and logical, the less skeptical they are of their unsubstantiated emotional judgements.
Despite primarily working in dynlangs, I would guess that type systems affect developer productivity somewhere in the range of ±50%. While I have pulled that number directly from my ass, I think that if the effect size was larger then there wouldn't be much of a debate. If a company could swap languages for any given team and get 3x more functionality, or get the same work done for 1/3rd of the cost, we would know about it already. And to be clear, 3x productivity is achievable, but not by simply switching to Haskell.
Keep in mind though, the statement that "and get 3x the functionality" <- right there, that's what is unmeasurable. We can't know. We have no unit of measurement, so no one could possibly know. A practice could be 100x faster, but because software is rarely as important as marketing, sales, and luck, we'd never know. A dev team that's 100x slower could still win, and it probably happens regularly. It's the essential issue with all tech discussions. We assume that "hand waving" someone would notice. But without any unit of measurement, I posit that no one ever will. There are so many network effects that change the success of a software project that is unlikely that even in another 50 years we will have consensus.
I think from an empirical mindset we have to grant that the lack of evidence probably means there is no significant advantage. Because there have been lots of studies, we can't simply say we don't know if there's an effect, or we don't know how to measure it.
Don't conflate the inability to measure productivity with the conclusion that for this one issue the differences can't be that large. That's like saying, "just because we can't measure distance or velocity, rockets and cars probably are similar speeds, or it depends on the driver's personality." It's empirically unsolved how productive an engineer even is, therefore all other discussions are 100% subjective.
We don't know if there's an effect, because we have no way at all to measure it without costs so staggeringly large no one will ever try. We'd need similar engineers, doing all the same practices, with the same stories, and just one difference. But then the engineers would need to have been selected to be similar skill and speed previously, which would require doing other projects with all fixed practices and languages to measure developer skill relative to peers. This also doesn't take into account team cooperation and morale, creative thinking around problem solving, etc. It's a bummer, but without some breakthrough in AI research allowing developers to be measured based on the code they write, it's seeming more and more like an unsolvable problem.
I wish there was a little more elaboration on _why_ he originally disliked types so much. This perspective is still very strange to me, as the value of types seems self-evident for systems more complex than a script, and I'd like to understand it better. As it stands now, my only assumption is the same one the author arrives at: that people who don't think types are useful are generally coming from a place of ignorance.
This feels facile and self-serving though ("people who disagree with me don't know what they're talking about"), so it's more likely that I have a blind spot. Are there any current or former type-haters that can help shed light on their perspective?
EDIT: I should note that my production experience is primarily with (modern) C++ and Python, so I don't even have the benefit of more modern static typing systems like Rust's.
I can't speak for Chris, but I can speak for myself.
I did the bulk of my early programming in statically typed languages. Specifically, late 90s C, C++, and Java. Then I found Perl, and pretty much went into dynamically typed languages only for the next near-decade.
At the time, I felt like the types didn't pull their weight. There was a lot of extra writing, and you got very little benefit for it. Plus, dynamically typed languages had these rich features that just weren't really accessible in mainstream statically typed languages, and so they kind of became associated with each other in my brain even if that wasn't specifically true. For example, seeing stuff like https://www.cs.ait.ac.th/~on/O/oreilly/perl/cookbook/ch11_08... absolutely blew my mind, and I had no idea how you could do something like this in the statically typed languages I was exposed to at the time.
It wasn't until I discovered languages that had significantly more powerful features, and had a significant amount of type inference, that I felt the equation changed. The former to give me something better than purely "don't make some kinds of simple mistakes", and the latter to make it ergonomic enough.
Steve says he can’t speak for me, but in this case… he basically can. The timelines are a little different (late ’00s for me instead of late ’90s), but the trajectory was the same, and the feature gaps he alludes to here were definitely a part of it.
I recall arguing in 2012 or so that types were useless… with a colleague who had a background with OCaml. The problem was that when we said “types” I meant something completely different than he did. (Jerrad, if you happen to read this, you were right.)
> Plus, dynamically typed languages had these rich features that just weren't really accessible in mainstream statically typed languages, and so they kind of became associated with each other in my brain even if that wasn't specifically true.
This resonates a lot with me (though as you say, isn't really a typing thing). I work on ML systems for autonomous vehicles, so I switch between Python and C++, and I definitely occasionally start writing some C++ logic in an elegant functional way and realize that C++'s boilerplate makes it less readable than the usually-less-readable naive construct like a loop (e.g., I'm surprised at how often std::transform comes out looking terrible).
Do you mind if I ask about the size of the systems/teams you've worked on throughout this process? My experience with C++ has been in large systems worked on asynchronously by lots of engineers (with strong code review policies in place), and my experience with Python has been everything from moderately-sized systems down to smaller ones down to scripting. The Python case also included work with an inexperienced (and frankly, partially unintelligent) team of engineers, which made the discipline imposed by typing all the more valuable, but I have found the documentation and structure that typing imposes on code to be invaluable in communicating intent.
As I said in my other comment, I'm still concerned this is a little facile, but having started my career on C++ systems at Google, I wonder whether I've just set a standard so high for the health of a system that many of those who dismiss typing's value don't understand that those benefits are possible (I'm certainly an order of magnitude better at reading code than anyone on my current team, but this is confounded by the fact that everyone else has a weaker engineering background and stronger robotics or ML domain expertise).
To be clear, I don't think this applies to you necessarily; your approach of trading off the benefits of ergonomics vs structure resonates strongly with me, and I'd imagine it's even more the case on pre-modern statically-typed languages. What I'm really curious about is the perspective expressed in the OP, which, in the 2010s, doesn't even see _value_ in types beyond perhaps negligible benefit from some compiler checks.
For what it’s worth, a significant part of my not seeing the value of types back in the early 2010s was the combination of my own immaturity with the pretty poor quality of the systems I was working with. You can make C++ do a lot of this work for you (albeit not quite as far as a language influenced by Standard ML). But none of the C++ I ever encountered in the wild in that era did. It was all the effort with none of the benefit. I spent many, many months of my life dealing with the exact same kinds of bugs in Fortran, C, and C++ systems that I was dealing with in PHP and JS and Python. So the feeling that resulted was: why do the extra work I have to do in Fortran and C and C++ if it’s not actually buying me anything? No doubt if I’d been working on teams which were effectively leveraging C++’s type system, I might have felt differently. But I wasn’t, so I didn’t.
For what it’s worth, these days I tend to talk almost exclusively in terms of the tradeoffs involved with adopting any given type system vs. the alternatives in the space. Even as an admittedly huge Rust fanboy,[1] I don’t think it’s a panacea or a thing you should pick for every project. To the contrary. And I’d go so far as to say that there are some places where something like Erlang is going to help you solve your specific problems more robustly than Rust will, which likely sounds like heresy to many fellow fans of type systems.
> Do you mind if I ask about the size of the systems/teams you've worked on throughout this process?
The "oh wow I'm never doing static typing again" bit happened entirely as a hobbyist, maybe working on a project with a friend or two. My early professional life was dominated by Ruby, mostly in smallish teams, that is, like, under 30 people. Then I got into working in Ruby open source, where the teams are much bigger, and aren't all working for the same org...
> occasionally start writing some C++ logic in an elegant functional way and realize that C++'s boilerplate makes it less readable
Not to be That Guy, but you should give Rust a try if you haven't yet. You may not like it, but it will make writing that style of code feel more like Python, often.
> I wonder whether I've just set a standard so high for the health of a system that many of those who dismiss typing's value don't understand that those benefits are possible
I think the issue isn't about possibility, but about realization. That is, I also did know that more robust static typing existed, but in 2008 I wasn't going to be writing a web app in Haskell, but in PHP or Rails. So it felt pretty academic. And even knowing that it's possible in theory isn't the same as experiencing it in practice. One of the things that frustrates me about this particular Discourse (static vs dynamic types) is that many dynamic language proponents (myself included historically) will understand that testing is a skill that they need to practice with in order to succeed, but don't perceive types the same way. It takes practice! You won't just magically see benefits immediately. They're a tool. But in order to justify doing that work, you have to be able to see a real benefit. I don't begrudge people that don't see the connections here and so don't want to put in the time or effort to realize the benefits.
> Not to be That Guy, but you should give Rust a try if you haven't yet. You may not like it, but it will make writing that style of code feel more like Python, often.
I've never used Rust but I was a (low-confidence) big fan the moment I saw it, and everything I've seen since has only strengthened this impression. I don't exactly _love_ C++ and all its janky weirdness, and Rust seems like it improves ergonomics while allowing even more powerful control over memory and type safety.
Unfortunately, there are a couple of computer science subspecialties that I prioritize when looking for roles, so I lose some degrees of freedom in the ability to pick jobs or side projects that would expose me to Rust.
Oh yeah, types need to express everything people actually want to do or they become a straitjacket. In the late 1990s we had to find cumbersome workarounds for seemingly-valid templates that would crash Borland C++. Today Java’s type system is … not great, but it was a lot worse before generics in 2004.
I originally worked with C++. I then moved to Python and loved it, partially because of the dynamic type system. I'm still in the dynamic-types camp, though I haven't really learned a language with a good static type system yet, and the little I've learned of Haskell has made me super interested in a good type system.
To answer your question, the reason I prefer Python to C++, in terms of types at least, is that I feel the type system makes you get the order of building a program wrong. I think the right way, in most cases, is an agile-style, prototype-first approach. You should usually start with the simplest prototype of something that does what you want, then slowly expand and generalize as you learn more about the problem domain and make product and design decisions.
With C++, you usually start off with lots of decisions on how your data looks, and it's usually really hard to make changes afterwards. This forces you to make a lot of the architectural decisions when you know the least, and you are fairly locked into those decisions.
The main proposed benefit of a type system, or at least of C++'s type system, is that the compiler can help you catch type errors. However, these errors represent a small fraction of possible bugs, so you're going to have to write tests anyway, and it's not hard to add type tests as well (where relevant; the idea is that it's often less relevant than you think!).
Caveats to my opinion:
1. I haven't worked in C++ for many years, and I know that there's been a lot of changes. Not sure how my opinion stacks up to current cpp.
2. If the type system gives you more than just catching some class of errors, I can see that it might be worth it. Again, no real experience with anything other than Python and js in many years.
3. I'm pretty sure I'm right about the correct way to build software in terms of steps. The older waterfall approach where everything is designed up front makes little sense in most projects I've seen, or at least in their early phases. Possibly something like Excel, in which there's very little new product development and the product domain is super well understood, has different trade-offs.
Having said that, I might be wrong about how much cpp forces you to do things the other way. That was my experience and that of most people I know, but that was also just the common way to do things in the past. Possibly, there are good ways of doing that kind of development in cpp today.
In my experience, types (at least in the kinds of languages I like) and tests are orthogonal, though people enamored of either often try to use it to do the work the other is best at. I both TDD and TDD: that is, I do both type-driven and test-driven development, and the combo is incredible.
I strongly suspect that if and as I’m able to add tools like formal methods, logic programming, etc. to my tool belt they will similarly become orthogonal tools for correctness that I can employ where appropriate.
I'll add a 4th thing to his list based on my experience moving from Java to Typescript:
Nominally-based type systems like Java's (where you can only write Foo f = new Bar() if Bar has Foo somewhere up its static type chain or interface hierarchy) are way more of a pain in the ass than structurally-based type systems like TypeScript's (where you can say f: Foo = new Bar() as long as TypeScript determines Bar has all the required fields of Foo).
Primarily this makes refactoring much easier because you don't have to necessarily change things up a (potentially brittle) type hierarchy. Instead, you can just add new properties to your type to fulfill the set of properties required on the target type.
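To make the contrast concrete, here's a small TypeScript sketch (Foo and Bar are made-up illustrative types): no declared relationship between the two is needed, only a matching shape.

```typescript
// Structural typing: Bar never mentions Foo, yet a Bar is assignable to
// Foo because it has all of Foo's required fields. Extra fields are fine.
interface Foo {
  id: number;
  name: string;
}

class Bar {
  constructor(
    public id: number,
    public name: string,
    public extra: string, // not part of Foo; doesn't block assignability
  ) {}
}

const f: Foo = new Bar(1, "widget", "anything"); // OK: shapes match

function describe(foo: Foo): string {
  return `${foo.id}: ${foo.name}`;
}
```

In a nominal system like Java's, the equivalent assignment would require Bar to explicitly implement a Foo interface.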
That isn't always a benefit, though. When interfacing with a database schema that uses small integers for ids, I'm all the time defining types like `struct CustomerId(u32)` and `struct InvoiceId(u32)`. It's incredibly valuable that the compiler will not let me mix up a customer and an invoice, even if both of them are 42.
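For what it's worth, you can recover that nominal-style safety inside TypeScript's structural system using the common "branding" idiom. A sketch, with hypothetical id types mirroring the Rust example above:

```typescript
// The phantom __brand member exists only at the type level, so a
// CustomerId is not assignable where an InvoiceId is expected, even
// though both are plain numbers at runtime.
type CustomerId = number & { readonly __brand: "CustomerId" };
type InvoiceId = number & { readonly __brand: "InvoiceId" };

const customerId = (n: number) => n as CustomerId;
const invoiceId = (n: number) => n as InvoiceId;

function lookupInvoice(id: InvoiceId): string {
  return `invoice ${id}`;
}

const c = customerId(42);
const i = invoiceId(42);

lookupInvoice(i);    // OK
// lookupInvoice(c); // compile error: CustomerId is not an InvoiceId
```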
In the end when properties are missing I usually use an interface and click "Implement all missing fields and methods" anyway. Modern Java would be good enough for solving this problem.
I can agree that an error message saying which properties are missing would be helpful in some circumstances.
While I agree, I feel that third-party library consumption is a bit of an edge case (i.e. the majority of your interface implementations won't be wrappers around third-party stuff).
Whenever I do work closely with a third-party library, the wrapping pays for itself quite quickly because it never takes long for you to find a 'quirkiness' or unsuitability in how the library implements something and you end up with code resembling this:
    public void Do() {
        _instance.SpecialOptionForTheBehaviourYouRequire = true;
        try
        {
            _instance.Do();
        }
        catch
        {
            // Workaround for bug that has not been fixed in ThirdParty Library yet. Remove when fixed!
            _instance.ResetBrokenStateCausedByBug();
            _instance.Do();
        }
    }
Industrial development has long been focused on the processes, or the verbs, not the things, or the nouns.
Focusing on the Nouns and the types of those Nouns lets you understand what states they can be in and more importantly, the ones they cannot be in. By using types as much as possible to express those invariants and constraints, you can make sure that the processes don't do something that they shouldn't.
Proving software correct is a very complicated and, as yet, unsolved in most practical cases. Strong typing and strong type systems are a step in that direction.
If a compiler ensures that a sum type's possible values are always exhaustively matched, then you can be sure that the processing at least considers all of the possible values.
That leads to positive results when programming.
If you use Option types and then ensure that the None type is handled, the possibility of nulls is dramatically reduced, if not eliminated.
If immutability is enforced, that removes the possibility of inadvertent modification, especially somewhere deep in a call stack. That leads to safer concurrency, which is an absolute requirement for using the compute capacity to its fullest extent.
Strong typing like this isn't "overwrought", it's making sure that your code doesn't do something to a thing that it shouldn't do.
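The exhaustive-matching point can be sketched in TypeScript with a discriminated union (the Shape type here is just an illustration): if someone adds a variant and forgets to handle it somewhere, the never check fails at compile time.

```typescript
// A sum type of shapes; `kind` is the discriminant.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "square":
      return s.side ** 2;
    default: {
      // If a new variant is added to Shape and not handled above, `s` is
      // no longer narrowed to `never` here and this assignment stops
      // compiling: the match is forced to consider every possible value.
      const unreachable: never = s;
      return unreachable;
    }
  }
}
```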
As long as we can both agree that, at an elementary level, it's faster to write code in a dynamic language than it is with a strongly typed one, then there will always be a trade-off to consider. Like the trade-off between using glue and nails.
Being able to change things quickly, or describe things creatively is occasionally going to be more helpful than immutability guarantees in some product domains.
That practical focus is the very reason why dynamic languages are so popular. They facilitate rapid iteration, which is a quality which also certainly can lead to positive results when programming.
A focus on 'process' is really an understanding that the 'noun' is subject to change, as requirements so often do in engineering and product development.
I suspect the modern dynamic languages are a response to this focus on practicality, with Ruby probably being one of the bests illustrations of this idiom (I don't write much Ruby at all, but respect it for what it is).
However, this purely practical focus is not always the most desirable quality in a programming language, and in those domains where you know your requirements are written in stone, then an effort should be made to describe requirements as formally as possible.
But a myopic preference for only one approach to writing software, will probably introduce some flaw into your program, no matter what language you are using.
Every single language is an abstraction after all.
We moved away from Assembler in an effort to focus attention on expressing what tasks computers should perform. Dynamic languages are a logical result of that focus.
Type hype is real. It almost seems like job-creation propaganda at this stage.
When I judge things, I look at practical outcomes; and the fact is that I produce better software with more features within the same timeframe if I use JavaScript rather than TypeScript and the product in both cases is equally robust. This has been true for me both independently and as part of a team.
With JS, I can write more code and more tests within the same amount of time and there is no drop in quality.
I'm very surprised that nobody else seems to be experiencing the same thing. I've been back and forth many times between the two paradigms and for me it's clear as day.
Sorry, anecdata isn't data. You might be able to write more code in terms of LOC, but you're also writing tests that a strongly typed system wouldn't need.
You don't know that your code is "equally robust". You don't know what sort of "drop in quality" you have because you're not using strong types.
You are making a judgement that isn't backed by anything other than intuition.
I switched my Javascript code base to Typescript a few months ago. There is a productivity cost that is declining over time as I get more used to Typescript. But the conversion also flushed out some significant bugs in the JavaScript base.
And I only get to work on this code once a week. So when I come back to it I’ve found it’s much clearer how it works and I get productive much faster.
I will bet any coworkers who have to work on your code wish it was in Typescript.
This doesn't conform with any of my experiences in non-trivial JS codebases. Migrating to TS tends to reveal previously overlooked implicit typing issues. Also, I find the "upfront productivity loss" of TS to be overstated: adding in annotations here and there doesn't take much time at all, and pays dividends quickly. Many hours have been lost tracking down some elusive runtime bug stemming from a typo in a vanilla JS property access.
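As a tiny illustration of the property-typo class of bug (the User type here is made up):

```typescript
interface User {
  userName: string;
}

function greet(user: User): string {
  // In plain JS, a typo like `user.userNmae` silently evaluates to
  // undefined and the bug surfaces at runtime, possibly much later.
  // With the annotation above, TypeScript rejects it at compile time:
  //   Property 'userNmae' does not exist on type 'User'.
  return `hello, ${user.userName}`;
}
```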
I code without types. My "proof", that types do not pay for themselves goes like this:
Every time I encounter a bug (during coding, testing or in production) I make a note of what type of bug it was and how it could have been prevented.
Types are way down on the list of what could have prevented the bug. Especially for production bugs, which are the most important of course. It is so rare that a bug could have been prevented by types that I can say with confidence that they would have been a net negative.
Talking about which tool can prevent the most bugs, integration tests win by a large margin.
The reason is that most bugs are conceptual. Like "Oh shit! We have allowed people to tag items as duplicates of other items. And we have a function that traverses the list up to the original item. But now this new feature over there had a bug where it marks the last original as a duplicate of another duplicate and then when the traversal function in that other module is used, it ends up in an infinite loop".
Another example of a popular bug category: the code contains assumptions about the environment that do not hold true. For example, PHP's mb_strtolower() will not always create the same string as MySQL's LOWER(). The difference is very rare and only shows up for a tiny fraction of UTF-8 characters, so you might expect them to behave the same until you one day trip over one of those few chars.
I think it takes people who do not start out 'believing in types' (I was taught by pupils of Dijkstra in NL, so my belief in strict-as-possible has always been quite firm) the same kind of timespan / experience as the OP: experience (very) large weak/stringy typed projects, experience them for at least a few years full time, and then, when the frustration sets in, try something which is the complete opposite, like Haskell/Rust.
I think a certain frustration needs to be there to try something else anyway, when you come against the billionth cannot call hello() on undefined error in a critical (for the company), 100k+ LoC, multi-team project, you might wonder if there is something else.
You are probably not going to post your 'list' here, but proper types prevent many trivial and non-trivial bugs. Many of the points on your list will fit there, but you wouldn't know it yet.
> Talking about which tool can prevent the most bugs, integration tests win by a large margin.
Obviously there are ways of catching them without types, but proper type systems catch them at compile time and also; how are these mutually exclusive; we use both. We just need less integration tests.
> Like "Ooooooh shit! We have allowed people to tag items as duplicates of other items. And we have a function that traverses the list up to the original item. But now this new feature over there had a bug where it marks the last original as a duplicate of another duplicate and then when the traversal function in that other module is used, it ends up in an infinite loop".
In some of my favorite languages you can catch this in types at compile time.
Obviously; do what works best for you and your team; I just don't buy your overarching statements and 'proofs'. If it works it works, but it would probably work better with types.
Agreed, and we have to continue improving in every way: dynamic/static/hybrid. I'm just saying that I have not seen this dynamic enlightenment in larger projects. I have only seen the pain of runtime errors that other (static language) teams never had. Sure, if you had written a test for it, you wouldn't have had it either, but types force you to think about it while writing. So sure, you are 'done faster', but the fall-out (and again, of course, YMMV) of statically preventable bugs popping up in Sentry at 3 am is not great. Not all of them would have been prevented directly by the types, but you would have thought about them more because you had to define the types, which I think is what the parent poster misses too; without types I just tend to think less and try more, which, again for me, is a bad state, but YMMV.
But sure, I am biased as my experience with statically typed langs has been good since I moved from asm/basic in the 80s to pascal/c (they were an improvement over asm/basic and my first experience with types, not saying you should use them now, or not).
The view I've heard expressed is that deep thought on a piece of code reduces bugs. Whether that takes the shape of religious TDD, rigorous proofs or detailed type design doesn't make such a lot of difference.
I used to be fully bought into types, but I've since realised that they have a number of downsides that in many cases more than offset their benefits:
1. Ergonomic type systems require a lot of work to happen at compile time and slow down iteration time (one of the more important things for programming in my view). Saving the source and seeing the result almost immediately in a browser is one of the big advantages of web development.
2. Types are almost always written in a second, much less powerful DSL and then sprinkled distractingly through the code that actually does the work. I prefer the way Haskell does this- separate the type signature out onto at least a separate line rather than mashing the two different languages together.
3. Higher levels of abstraction tend to become very hairy in many type systems (although not all). This ends up just meaning that people who like types often restrict themselves (unconsciously) to less abstract programming. They spot the time they're saving by avoiding some kinds of bugs, but they don't see the time they're wasting by being unable to talk at a higher level of abstraction. Another way this shows itself is that types are very rarely first-class objects in strongly typed languages, making it very difficult to create code that operates on types, or understands types.
4. Type systems open up opportunities for type-driven architecture astronauting, which is just yet another way you can go down an unproductive rabbit hole. There was an interesting study done on different teams solving problems with different languages. The differences between teams within the same language were much bigger than between languages, but the team that made the slowest progress (without having a particularly low number of bugs) was the team that leant the hardest into encoding everything in the type system.
5. Type systems encourage code generation build pipelines, which again slows iteration time and makes everyones life miserable.
6. Type systems reflect an incorrect model of the world: user input, network input, and file system data are not typed. I've had such misery with some web server frameworks that refuse to acknowledge that they don't know every possible thing the web client might send them and can't always slot it into a predefined type. I think this is the same kind of error we made with OO systems: thinking that we could fit the world into a predefined inheritance hierarchy.
7. Type systems encourage a static view of the world. The types of things can change under you, dynamically (e.g. the structure of a table in a database), but in most typed languages you can't cope with that correctly without shutting down and deploying entirely new code.
8. Related to that, it's hard to imagine using a strongly typed language with the live-image approach of Smalltalk, or the one sometimes used by Lisp systems. This means that the popularity of strongly typed languages is killing valuable and interesting approaches to building complex systems that emphasise observability, interaction and iteration as a way of understanding them.
There are genuine advantages to typed languages, but many of the advantages touted as being unique to typed languages can be provided by advanced linting and IDEs (intellij was surprisingly capable on plain JS + jsdoc). You can also ameliorate some of the disadvantages of untyped languages while keeping the benefits by deliberately programming in a fail-fast way.
I'm sure that type systems have their place. The research I've come across on empirical studies suggests that while there may be positive effects they are small, which does not at all mesh with the extreme partisanship I generally observe. Yes, type systems gain you something, but there seems much less awareness of what you lose.
I am positive about a lot of these points for the future. Especially the performance points; that's going forward fast. But yes, that's often pretty slow; not that bothered by it for my work though. Also, linters work well for statically typed languages too; I usually don't have to compile for 100s of lines of code. If the editor does not complain, it'll probably all work fine. Like I said; do what works for you , but I think at least a good mix will get more benefits.
6. People mention this more often, but I just don't see how that works; you cannot program without knowing what data you are getting. Sure the world is not typed, but at the moment you are going to use the data, it is typed; be it in your logic, head or actual types. Any webserver can go lower level and give you a ByteStream, but when you finish parsing that, you still have types. You might not know them upfront, so you use ByteStream for a bit, but once you know, you bake types and the world is nicer. Imho :) Not sure why that's a difference?
7. This is an issue where? I know it's Erlang domain, but microservices/docker/k8s/ci/cd/lambda/functions/.../all modern crap do this (redeploy, killing the previous instance(s)) with any code, always, including dynamically typed code. So sounds like a niche?
8. Agree with this; we should experiment and research these things and continue building them. I work with Lisp/Clojure as well and like it, I just miss types often. I never suggested it's all crap; I'm just looking where benefit comes from.
> you cannot program without knowing what data you are getting
Types almost always overconstrain. Each part of your code relies on some very specific properties of your data, yet most type systems end up restricting your function to only work with data that meets a whole bunch of other properties that your code doesn't actually care about.
Not in my experience; so many, basically, stringy types.
> Each part of your code relies on some very specific properties of your data,
So then you either have a type that exposes those you need or you have different types for different functions.
> yet most type systems end up restricting your function to only work
Again, I don't understand this statement; someone implemented the types to fit the data for some functions they needed. How does the 'type system restrict' anything?
There's a lot of things, some about old-school types (Java, C), other about modern ones. I don't think most are fundamental, even though some are common experiences today.
#1 is fundamental. (Yet people somehow live with the JS ecosystem that's slower than GHCi.) It should keep becoming a smaller problem, since computers are always getting faster; but I don't think we've put everything we can into types already, so I expect it to get worse in the near future.
#2 and #3 are about old-school types.
#4 Oh, yeah, they can. But they can also help a lot in team coordination. Powerful stuff enable you either way, if you harm yourself or take advantage of it is your choice.
#5 Failures in type systems encourage code generation. Expect that to always improve, but always slowly.
#6 That's why there's always a parsing stage between input and processing. You deal with input errors at the parsing stage, and processing errors at the processing stage. Most communities of dynamic and old-school languages make a large disservice to the industry by mixing those; they explode error handling into something intractably complex.
#7 Hum... You are holding it wrong. Do not bake the variants into your types. Instead, use the type system to get every invariant out of the way, so the variants stand clear. (And yeah, there are plenty of libraries and frameworks out there that try to encode the environment into types. That deeply annoys me... But anyway, if you do that, take the types as requirements upon the environment, not as a description of it. Those are different in very subtle ways.)
#8 This shouldn't be fundamental. AFAIK there are not many people trying this, and the few who do face a Sisyphean task of keeping their code up to date with mainstream changes. I do hope people make progress here, but I'm not optimistic.
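Point #6 above (a parsing stage between input and processing) could be sketched in Python like this; all the names here (Email, parse_email, send_welcome) are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Email:
    """A value that can only be obtained via parse_email(), so holding
    an Email means validation has already happened."""
    address: str

def parse_email(raw: str) -> Email:
    # The parsing stage: all input errors are raised here, once.
    if "@" not in raw:
        raise ValueError(f"not an email: {raw!r}")
    return Email(raw.strip().lower())

def send_welcome(to: Email) -> str:
    # The processing stage: no input-error handling needed; the type
    # guarantees we were handed something already validated.
    return f"Sending welcome mail to {to.address}"

print(send_welcome(parse_email("Alice@Example.com")))
```

The point being that send_welcome never has to handle malformed input; only the parsing boundary does.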
> Yet people somehow live with the JS ecosystem that's slower than GHCi.
I think a lot of people, including the parent, seem to equate speed of ecosystem and iteration with web development and instant reload of web pages. When other systems allow fast iteration, it goes unnoticed unless it's for web dev. Luckily, a bunch of those 'impossible' systems have it now too, like [0].
Web development is an example. Fast iteration is the thing that I like. I have so far associated fast iteration with dynamic languages, and it is certainly the case that my experience is that most fast iteration systems are dynamic.
But maybe that simply reflects a concern of the relevant communities. If strongly typed language systems start adding fast iteration approaches to things and are able to achieve a similar level of quick iteration then that will definitely address one of the things I dislike about them. I haven't coded significant amounts of haskell since 2000, but back then what you could do interactively was very restricted.
At the end of the day, the compiler is doing a bunch more stuff in strongly typed languages. It's like taking a bunch of your verification infrastructure and saying 'these must run before you're allowed to see the result of what you wrote'. It will necessarily be slower, although with work maybe it won't be so much slower that it matters.
Thanks for noting down something about your age; I have always been a bit ageist about 'fast iteration', as I never met someone close to my age (I've been devving professionally for 30 years this year) who cares too much about it. I am not a very good programmer, but a very experienced one, and I'm consistently faster at delivering than my 'fast iterating' younger peers, as I simply know what I'm going to type beforehand. I don't need many iterations to get it right, and I have enough experience to know that I'm close to what we need once it compiles. The people who type/run 1000 times a minute get stuff done, but it's not the way I would ever like (or liked) to work.
> It will necessarily be slower,
GHCi is fast, but other avenues can be explored as well, like creating a real interpreter just for development, as Miri does for Rust. You forgo some of the type benefits while iterating on logic, but when you're done iterating, you compile and voila. I guess the merging of incremental compilation, JITs, interpreters etc. will evolve into something that might not run optimally during development but gives blazingly fast iteration, up to perfect performance after deployment. And anything in between.
There absolutely are approaches to this that don't fall foul of my complaints, but when you say 'old-school types' I think you're talking about Java and non-inferred types.
I was including other more modern languages in my criticism. Scala for example ends up with pretty hairy types very quickly for higher level code. So much so that they made the documentation system lie about the types to make it easier to understand.
And most currently popular languages don't give you runtime access to types and allow you to treat them as first class.
The languages that allow you to deal with types with the same language you write code in are not remotely mainstream. So unless by 'old school' you include all mainstream languages then I disagree.
I was thinking about unusable code, caused by needing to write way more down in brittle types than anything you save on coding. There really are problems with complex types.
It's interesting that you talk about bugs, when the OP doesn't make that argument. The OP is very clear, actually:
> It didn't mean I was free from logic bugs. (Nothing can do that in the general case!) It did mean that a program which type-checked wouldn't blow up in ways the type-checker said it shouldn't, though.
Other than soundness (which is not the same as avoiding logic bugs, anyway), the points the OP raises are about expressiveness. Types help you organize more of your knowledge in the codebase. That's never going to be the most obvious preventative measure against bugs, because the goal is higher-order: structuring your codebase so you can reason more clearly about the program.
That might be a matter of personal taste, of how one's brain is wired. For me, types make code less expressive. The less metadata there is on top of the algorithmic structure, the more easily I can grasp the structure of a piece of code.
I can grasp the latter much better. It immediately forms a structure in my head that I will remember while I read other parts of the code. To do the same with the former, I think my brain uses up twice the energy or more. And even then, I will not have as good a grasp on it as with the latter.
You're showing an example of a function signature in a specific language, not an example of typing though. These are related but not exactly the same.
For example here is a Ruby version, very dynamic:
    def request(method, url, **options)
    end

    request("get", "http...", foo: "bar")
And here is a completely statically typed Crystal version:
    def request(method, url, **options)
    end

    request("get", "http...", foo: "bar")
The argument types are inferred automatically from usage, and mismatched usage will be caught at compile time. And in the second case the IDE can still tell you the types in the signature if you want to know them.
And a third, completely dynamic Python version:

    def request(method, url, **options):
        pass

    request("get", "http...", foo="bar")
Counterpoint: with the former, I know exactly how to use it: for item in request("get", "google.com", []) { .... }
Whereas if there's no example for the latter, I'd have to read the code of that function (or sometimes the code of the functions it calls) to know how to use it correctly.
Any example where the type is defined as something more specific than an array (especially in PHP, where the `array` type doesn't even differentiate between numerically indexed arrays and dictionaries).
Regardless, I still think the type hints in Symfony's version are better than nothing. It's also worth noting that `private static` is concerned with scope, not types.
Heck, not only do I not have to look at the code, with a proper IDE setup I don't even have to go to the documentation page: just type `item.` and my IDE/editor will present me with the list of things that are applicable to my item.
And that's only the response part of the usage. With dynamic typing there's absolutely no way for me to know whether the method is an enum/constant or a string without looking into the code, and again, sometimes N levels deep, to get enough information about what to pass to this function so it will work.
That is just another piece of metadata, but it does not help either. Now you know you get an array of strings, but you still don't know what is inside the strings.
It might be ['headers'=>'...','body'=>'...' ] or ['status'=>'...','response'=>'...'] or god knows what.
Is that like a hashed array you’re implying? In that case, the type wouldn’t be string anyway, but something like HashMap, Object, Dict, etc. More commonly though, something like HTTP options would be its own type, with defined properties, which you could browse via autocomplete or your IDE’s hover/peek functionality. That type would then specify the types of its values, so you can be certain that options.status is an integer, for example.
You’d generally only use an otherwise untyped array of strings when you can’t know beforehand what their values can be.
A better example for a typed function would be something like:
function Request(string method, string url, RequestOptions options) : Result {...}
or
Result Request(string method, string url, RequestOptions options) {...}
Here, we know `url` is a string (rather than the parsed object), we know all the supported options (they're members of the RequestOptions object/enum), and every value in the return value. Some might suggest even making `method` an enumeration.
If the function name was better, you could call that function right now without needing any more information.
Also, especially if you're a fan of defensive programming, most of the basic argument checking is performed at compile time, leaving the function body cleaner.
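A hedged Python rendering of the same signature (RequestOptions and Result come from the example above; the fields, the Method enum values, and the stub body are my own invention):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Method(Enum):          # making `method` an enumeration, as suggested
    GET = "GET"
    POST = "POST"

@dataclass
class RequestOptions:        # every supported option is discoverable here
    timeout_seconds: float = 30.0
    follow_redirects: bool = True

@dataclass
class Result:
    status: int              # options like Result.status are certainly ints
    body: str

def request(method: Method, url: str,
            options: Optional[RequestOptions] = None) -> Result:
    opts = options or RequestOptions()
    # Stub body: a real implementation would perform the HTTP call.
    return Result(status=200,
                  body=f"{method.value} {url} (timeout={opts.timeout_seconds})")

r = request(Method.GET, "https://example.com")
print(r.status, r.body)
```

With this shape, passing a misspelled option or a non-Method value is a static error, not a runtime surprise.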
Based on your example alone, perhaps you don't have enough experience with typed languages to appreciate the benefits. I don't mean that in a belittling way, but more of an invitation to learn more about the craft.
Well, for me it's easier - perhaps because I'm used to reading parameters in type+name pairs.
The data type is right next to the parameter name - and it will be in the pop-up provided by my IDE when I use the function... I don't need to look at the docs or comment (which may not exist).
In this case I know what I am getting, and I know the users are already deserialized and validated by the time they reach the webserver (or the type would have rejected them and produced an error), and I know I am getting an actual Group back.
In your case that would be:
function AddGroup(GroupName, Users)
I have no idea how that is better or clearer or less work to write. No idea what it returns; Users probably is an array of users, but is it? Or is it a typo? And I need to validate whatever comes in.
You can add docs, but I still do that with the typed version (although that's mostly automated unless I need to explain something).
Worse still is when I have this:
function AddGroup(GroupName, Users)
function AddGroups(GroupNames, Users)
Now I'm completely lost. I'm not even sure these users as input are the same things. Both arrays? Both the same User 'things'? Output?
But yes, in PHP I mostly write typeless too (although I luckily do not have to touch it much anymore), as the types mostly suck. But in more advanced type systems, the types will tell you a lot (or all) about the input/output, so you can do without a lot of docs, and trying things out can be automated: since we know the precise input, some generator can generate example input that will immediately work, like Swagger on speed.
Etc. But agreed, your example does not benefit too much from typing; however, I would define Option[] as something more precise than just a void* and hope for the best. Also Method and URL, so that invalid input for those very well-defined things becomes impossible.
So your example:
private static function request(Method $method, URL $url, Option[]? $options): WebResult
would clear up a lot for me. Now, for instance, I can see, and not guess, that you are answering a web request instead of some request with confusingly similar names to webrequests.
But how about some more 'concrete' examples; 'request' is rather a low level / abstract thing (I hope...). But your business logic would contain more concrete functions; any examples from those that are similarly badly geared to types?
Beauty is in the eye of the beholder; however, we were not talking about verboseness vs. terseness.
You can make anything terse with stringly typed code, and you indeed seem to favour that; however, this was about how types change readability and brittleness. All of this is easier to read for people who do not know what request() does, so request is not a good example, but the point still stands. In your case, you have no clue what $method or $url are supposed to be; you can pass in anything.
I assume something in Symfony catches that, but I cannot see it from the signature at all, and the validation error is, in my opinion (obviously), in the wrong place; it should be at the call site, because that's where you are creating the erroneous input, so that's where I want to be led to.
You could possibly write an IDE plugin that hides the type information and only shows it on hover, etc. I don't think this is that strong a point, IMHO.
I moved (very roughly) from Java to Python to Haskell. When I was coding in Python, if you had asked me what percentage of the bugs I encountered could have been caught by types, I would've probably said about 10%, because I would've been thinking about types in Java and errors like calling "length" on a string and then accidentally calling "length" again on the int value from the previous call.
Now that I am working primarily in Haskell, if you ask me the same question I would say probably about 90%. Not only does it catch the "calling length on an int" kinds of errors at compile time, but it gives you whole new tools (newtypes, sum types, mtl-style constraints) for expressing complex concepts in a simple manner in the type system.
And all of these tools mean I spend much less time thinking about things I had to think about in Python, leaving me more time to think about the business logic, which reduces the number of logical errors (which obviously can still occur).
You can see on line 57 "unless (status == Fresh) $ do" -- I originally wrote checkStatus as a function that returned a boolean and the expression was "unless fresh $ do" but I replaced the boolean with a tri-valued sum type and rewrote the check against the value "Fresh" -- this avoids the boolean blindness problem and makes it much harder to get the condition wrong -- and while it's still possible to get the condition wrong, it's much easier to find it when you do.
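The same boolean-blindness fix reads naturally in any language with sum types; a Python sketch with an Enum (Fresh comes from the comment; the other values and the thresholds are invented):

```python
from enum import Enum, auto

class Status(Enum):
    FRESH = auto()
    STALE = auto()
    EXPIRED = auto()

def check_status(age_seconds: int) -> Status:
    # Returning a named value instead of a bare bool: the call site can
    # no longer silently invert or misread the condition.
    if age_seconds < 60:
        return Status.FRESH
    if age_seconds < 3600:
        return Status.STALE
    return Status.EXPIRED

def maybe_refresh(age_seconds: int) -> str:
    status = check_status(age_seconds)
    if status is not Status.FRESH:   # mirrors `unless (status == Fresh)`
        return "refreshing"
    return "cache hit"

print(maybe_refresh(10), maybe_refresh(5000))
```

If the condition is accidentally written against the wrong value, it's at least spelled out in the code rather than hidden behind a nameless True/False.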
I find the value of static typing to be more about increasing the ease of exploration and understanding of a codebase than about preventing bugs directly. The constraints on how the code you're reading could be used make building a mental model faster. Not to mention the tooling built on top of the types that can help with exploration.
So I recently fixed a bug where a URL validator was failing because the code was old and what is allowed in a URL had recently changed. Imagine a language where URL/email/file path was a type that could validate itself when you create it, so you don't always need to write a regex to validate things. Even cooler would be if string were not something we use daily (just as we don't use bytes every day), so we would always have a type for customer name, file name or file path, and you would need to explicitly ask for a conversion from a customer name to a file name, etc. The way I think future languages and types could help is to make it almost impossible to create invalid data structures or state.
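Some of that is already expressible today. A hedged Python sketch with NewType (all names invented); a static checker such as mypy would reject passing a CustomerName where a FileName is expected, and the conversion has to be asked for explicitly:

```python
import re
from typing import NewType

CustomerName = NewType("CustomerName", str)
FileName = NewType("FileName", str)

def to_file_name(name: CustomerName) -> FileName:
    # The explicit conversion: replace anything a filesystem might choke on.
    safe = re.sub(r"[^A-Za-z0-9_-]", "_", name)
    return FileName(safe)

def save_report(target: FileName) -> str:
    return f"writing report to {target}.txt"

customer = CustomerName("Ada Lovelace")
print(save_report(to_file_name(customer)))
# save_report(customer) would be flagged by a static checker: wrong type.
```

It's only a compile-time distinction (NewType is the identity at runtime), but it already makes the customer-name-used-as-path bug a type error.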
It's not just about preventing bugs. Types restrict what you can do with each variable, such as what methods/members exist on the variable and what valid operations you can do with it. It makes development easier because you don't have to hold it all in your head. Every member/method access is validated at compile time, without needing to write any tests.
You may not make such mistakes when writing dynamically typed code, but if you ever need to change something (like the name or type of a variable/method/member) it inevitably means trawling through all references in your codebase manually and changing them by hand. Please correct me if there's a better way to do it in dynamically typed languages.
A good static type system is better than dynamic typing. But, dynamic typing is better than a bad static type system.
A bad static type system will slow you down, forcing you to appease its irrelevant complaints, and pervert your code into a form that no sane programmer would choose to write it in, if it weren't for the type system looking over their shoulder. I won't mention any names, but suffice to say such languages exist.
On the other hand, a good type system recognises that its goal is to allow the programmer to write code first in the form that makes sense to them, and then be expressive enough to describe it.
Interestingly quite similar to my own progression. I think working in modern typed languages ( Rust, Swift, Typescript ) is what primarily changed my viewpoint in a way that C and Java just couldn't.
In the last year I've transitioned from full-time JS development to full-time TS development and have been pleasantly surprised by how easy the transition was for my personal projects. Previously I had been quite adamantly against TypeScript, seeing the type system as an unnecessary level of complexity given that I already structured my code in quite strict ways. In most projects it only helped me find 1 or 2 small errors, but it certainly reduces the amount of time I spend verifying the behaviour of code. It definitely has its flaws, but most of them are linked to its compatibility with JS, and I don't think they can be resolved, unfortunately.
I hated types in college, I thought they were a huge waste of time, and on their way out.
But over the years that opinion shifted. I got experience with the actual kinds of problems we would see: again and again, runtime errors in production, usually from type issues. This really pushed me to invest my time in static analysis tools, and static analysis tools work best when types are at the very least annotated. This was a lead-in to compiled, typed languages, where these kinds of runtime errors are rare if not impossible.
As someone in a similar boat, I feel still like it's more fun to knock out a really quick prototype in an untyped language.
For something I actually have to maintain and build tests for, a well typed language is absolutely preferable. I used to quip that no one could build maintainable JavaScript, and I enjoyed writing JavaScript, but now with TypeScript I think it's largely doable.
I think what really changed in me is my desire to knock something out quickly was replaced with the desire to have stable software where components could be built well from the get-go and not need modifications for years.
Having started out with Pascal and C, I thought I hated statically typed languages as well until I got exposed to SML and Miranda. I guess Rust or Swift have the same enlightening impact to people coming from Java or Go and I'm glad such approach is finally hitting the spotlight, even if a bit hampered.
That being said, and as much as I see the appeal of those and hoped that stuff like ATS or SPARK were more prevalent, for me they lead to dull, boring code bases. Which is great! But when I look back on my career, the most fun I had, the craziest abstractions, the cool hacks, the code I'm most proud of: it's all written in dynamically typed or untyped languages. And here I'm talking about some flavor of Assembly, APL, Lisp, Forth or Smalltalk. PHP, Python, JavaScript and similar scripting languages just don't cut it for me.
Yeah the article is just "hey, I discovered SML!". I've been using SML, Haskell, then OCaml since the early 90s and the benefits of (proper) types and type inference have always been obvious.
Hate on types seems to stem from three perspectives I can see:
1. 2000 era Java-likes cause superlinear class complexity explosion
2. 2010-era typed functional programming - Scala stdlib type sigs that didn't fit in a tweet, 7 scala stdlib rewrites and churn related to really figuring out how to even do functional programming in applied domains like crud apps
3. All along, cutting edge comp sci research being presented as the "correct" way to program and you're a normie drooler if you haven't read yesterday's paper and rewritten your stuff onto it
For me it is somewhat different: I always used typed languages and occasionally have to read and improve untyped code. I must admit, I feel half blind in the latter. When debugging a python web service, trying to figure out the control or data flow, I very often scratch my head and wonder "what is this thing"? This applies both to library functions and code from colleagues. Sometimes it feels like untyped languages are write-only languages.
This is my biggest gripe, too. Discipline about commenting helps, but only so much.
In my case, I have Clojure spec or the JavaScript equivalents at the important boundaries in my code. That makes this issue much less painful and also solves a few other issues to boot.
To be honest, in more advanced type systems, I'd ask "what is this?" only to be shown a convoluted type signature. I'd then put a println or breakpoint in there, just like I would with a dynamic language, and poke around.
I would even assert that the amount of typing you get in a language like Java or even C is very useful. I think a big reason why the python 2 to python 3 transition was so painful was a lack of static typing.
In other languages, when you update to the new version you get a whole bunch of build errors about methods not existing for a type, or a function now returning a different type. In Python, you had to run an 80%-of-the-way-there migrator whose effects you weren't completely sure about, then run your app through your unit test suite. Almost nobody has 100% test coverage to approximate what a statically typed language's compiler does; some functions 'worked' anyway due to duck typing but behaved differently than you expected, and a bunch of methods returned a slightly different string type that produced more errors, only at runtime, one exception at a time.
In a static language you can just build it, and you would have seen all the places where a function now returned a bytestring, and dealt with them all at once, properly. I've dealt with several breaking-change migrations in Swift, and although annoying, it was tractable and only took a day or two, without worries about hidden gotchas.
Nowadays my minimum requirements for static types are:
- Nullable types (does this type container accept nil or not)
- Very basic generics for containers
- Typical static typing features
Bonus:
- Enums with variables (also known as ADTs)
The reason I chose these features specifically is that they give you a lot of benefit, but are not actually slow to compile and are not complicated for language maintainers to implement when first writing a language. I hope golang has all of them one day.
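For what it's worth, all three minimum requirements can be sketched with Python's typing module (a checker like mypy does the enforcement; the names below are mine):

```python
from dataclasses import dataclass
from typing import Generic, Optional, TypeVar, Union

T = TypeVar("T")

@dataclass
class Stack(Generic[T]):           # very basic generics for containers
    items: list[T]

    def top(self) -> Optional[T]:  # nullable: this may return None
        return self.items[-1] if self.items else None

# Enums with variables (ADTs), encoded here as a tagged union of dataclasses:
@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err:
    message: str

Result = Union[Ok[int], Err]

def describe(r: Result) -> str:
    if isinstance(r, Ok):
        return f"ok: {r.value}"
    return f"error: {r.message}"

print(describe(Ok(42)), describe(Err("boom")))
```

Python's version of ADTs is admittedly a workaround rather than a language feature, which is roughly the comment's point about wanting these built in.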
> I think a big reason why the python 2 to python 3 transition was so painful was a lack of static typing.
And I'll add compilation vs interpretation to this. I have, on a couple of occasions, had to update Java or C# code from one version to a much newer version. This meant that deprecated features were often used (more an issue with the Java cases). Thanks to actually compiling the code and the static typing, most of the things I needed to update/change were apparent from the start. This didn't mean they were good "modern" versions of the program, but they were functional and operational and could interact with newer Java and C# code bases properly with only a week or two of effort.
The same was true when a library was changed in a breaking way (at the interface level). Of course, you still need tests when semantics of the library change (like it returns the same type, but with different meanings), but that's usually well-documented.
The heuristic I use for choosing a statically typed language vs a dynamically typed language for a task is whether or not the code will sanely fit in a single file.
If that is the case, then that means I’ll probably be able to keep the structure of the code in my head and therefore I’ll be able to get by with a dynamically typed language. However, once the code starts to span multiple files, typed method signatures in a statically typed language are invaluable. It sucks having to navigate a codebase with tens of thousands of lines of ruby or python code trying to work out exactly what structure of object can be passed to the method you’re working on.
For my use cases, the vast majority of errors with dynamic languages boil down to being able to run scripts that have undefined variables; I'm prone to (mental) typos, so if I had a dialect of Python that failed before runtime when undeclared references exist, I could probably cut the number of iterations I need to arrive at a working script by at least half.
I've never found a valid use case for allowing undefined references, though I suppose it does make the language implementation significantly easier.
Writing tests is not really a solution either. Tests are also code and suffer from the same problem.
I most often choose Python because it's what's available, and it's good enough for the things I need to use it for. Other people can generally understand my code, too.
F# probably is a good language, but I haven't had a good enough reason to use it.
PyCharm warns you when that happens. I'm guessing they are using a combination of the ast module (to parse the abstract syntax tree) and some linter that can be found on PyPI.
Would a validator that runs before your main python executable solve your problem?
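As a toy illustration of such a validator, here's a sketch using the stdlib ast module; it only handles the simplest cases (no real scoping rules), so treat it as a curiosity, not a replacement for real tools like pyflakes:

```python
import ast
import builtins

def undefined_names(source: str) -> set[str]:
    """Toy check: flag names that are read but never assigned, imported,
    or defined in the module (ignores scoping subtleties entirely)."""
    tree = ast.parse(source)
    defined = set(dir(builtins))
    used = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                defined.add(node.id)
            else:
                used.add(node.id)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            defined.add(node.name)
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            for alias in node.names:
                defined.add(alias.asname or alias.name.split(".")[0])
    return used - defined

print(undefined_names("total = cont + 1"))   # `cont` is a typo for `count`
```

Running something like this before the main executable would catch the "mental typo" class of errors without executing the script.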
I work on the big data side of Automattic, so I tend to work with PHP, Typescript/JavaScript, Python, and Scala, but I also work with C# for fun. When you work with and without types often, you realize that it doesn't matter. Types are great for expressing constraints on the code, but you can do the same thing with conventions in non-typed code.
IMHO, type-less provides some resiliency. If you write code for a "string" anything that satisfies "stringiness" will still run just fine. This lets you build types that "trick" the code you're calling, which can be useful sometimes. I think I've abused this capability less than 3 times, but more than once. Mostly it allowed me to significantly change behavior without rewriting huge chunks of core code. However, these were all proof of concepts. Don't get it twisted, working with no types is pretty annoying sometimes.
Types, on the other hand, are great for most things, but I do find they get in the way sometimes. It's annoying when you look at the code you're calling and see that it only uses `.x` on a type you pass it, but you have to completely build a type just to create a `.x` on the type you pass in. I once found a NullPointer bug in HDFS with a static type (very strange) and had to work around it doing exactly this. Anyway, I think I'd like Go's approach to interfaces.
Those who argue strongly for one approach over the other likely don't understand the one they're against. Which sucks, because if you know both, there's rarely a reason to have strong feelings one way or the other. So, once again, the naive but vocal voices get all the decision power.
Too many arguments at work are over dogma, and I have to spend a lot of my time reminding people that there are options. That's all. There are options. Your kneejerk response to a problem might be fine, might even be right, but there are always options.
I was never sure what to think of dynamic typing, until I tried to implement a fairly simple Earley parsing algorithm in Lua. The thing was 50 lines, and I was lost. All I managed to get were runtime errors such as "you can't add functions", "you can't apply a number", "this reference is nil"…
That's when I understood where TDD came from: dynamically typed languages require so many tests to work reliably that we'd better write those tests first so we're not tempted to omit them.
Anyway, I redid the whole thing in OCaml, and this time got lots of compile time errors. Which I could correct, and once the compiler was happy, well… my program was basically correct.
---
I still don't rule out that other people's brains are wired fundamentally differently. I don't expect it, it would surprise me, but I honestly don't know. What I do know is that dynamic typing is not for me. I'll suffer through it to get other advantages (Python's comprehensive library for instance), but that's about it. Dynamic typing and I are otherwise done.
The key to writing type-less code is making it so simple that you could cry. Avoid being clever at all costs. However, it's almost the exact opposite when you have a nice compiler checking everything, you can be mighty clever and know that it will work.
I'm lucky in that we can solve a problem in a myriad of languages, depending on if it needs to be "realtime", or can be precomputed. There are some types of problems I'd rather solve in Scala, and others, Python or PHP. It just depends.
> The key to writing type-less code is making it so simple that you could cry.
I can't help but interpret this as "the key to writing type-less code is choosing problems so simple that you could cry". In other words, dynamic typing doesn't scale.
When I have a very simple problem to solve, sure, I won't mind dynamic typing. For anything worth more than an hour of coding however, I'll definitely reach for more reliable tools.
You can look at it that way, or you could look at it like comparing RISC vs. CISC architectures. RISC would seemingly be too simple to accomplish everything CISC does but that's the beauty of being Turing complete.
Using interfaces/traits/protocols don't preclude strong typing.
If you write code for a type that "has" stringiness, as expressed by a trait, then anyone can build a variant of their types that expresses that trait.
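Python's typing.Protocol is one mainstream example of exactly this: structural "stringiness" that any type can satisfy without inheriting from anything (the Stringy protocol below is invented for illustration):

```python
from typing import Protocol

class Stringy(Protocol):
    # Anything with these methods "has stringiness"; no inheritance needed.
    def upper(self) -> str: ...
    def __len__(self) -> int: ...

def shout(s: Stringy) -> str:
    return s.upper() + "!"

class Tag:
    """A custom type that satisfies Stringy purely structurally."""
    def __init__(self, name: str) -> None:
        self.name = name

    def upper(self) -> str:
        return self.name.upper()

    def __len__(self) -> int:
        return len(self.name)

print(shout("hello"), shout(Tag("urgent")))
```

You keep the "trick the callee" flexibility of duck typing, but a static checker can still verify the trait is actually satisfied.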
> IMHO, type-less provides some resiliency. If you write code for a "string" anything that satisfies "stringiness" will still run just fine.
Things like that are actually possible to do in a lightweight way in FP languages, and many of them are part of the stdlib. (PureScript happens to be my favorite example.) Many times it brings back the same feelings I had working in Ruby (which I also happen to love), but with the added confidence of a strong type system.
I went through an intermediate phase: Typing is good for complex projects, but not simple ones. I've found even that isn't true: In simple programs, typing pays for itself when the compiler, IDE etc catches mistakes I've made.
Maybe I just make lots of mistakes. Maybe people who prefer dynamic typing are more careful/better programmers than I. Personally, I screw things up so often, it's always easier to use typing!
One note of caution: there's a pleasure to solving problems that arise from type systems, akin to solving Sudoku puzzles. It's intellectually satisfying and feels like productive work.
I sometimes worry that a lot of the overhead and baggage that type systems sneak into coding is hidden because it's fun to resolve it.
I'm still on the fence about types. I love them when they help me but I find myself generally writing more boilerplate and less elegant code than I can sometimes write in Python.
My typed language is C# so I do have type inference (although not the most advanced). I don't have Union types. My IDE does warn me about potential nulls.
So I've got 50-75% of the things listed in this article. And I'm still not 100% sure there's not a hidden cost that's quite hard to define.
> There's a pleasure to solving problems that arise from type systems akin to solving Sudoku puzzles.
IMO, that's not the type system but OOP. Where I have problems with types, it's usually due to class hierarchies, and developers trying to invent "beautiful" taxonomies and abstractions for the sake of abstraction.
I guess generics sometimes come in as a kludge too, particularly when they are combined with the above OOP issues.
In typical business/app code, if you don't use inheritance (or keep it to a minimum) and use generics where they belong, static typing is rather mindless: "this needs something that implements interface ServiceA, and a string", plus a bit of familiarity with handling collections and Options.
From my experience, it's only Sudoku puzzles if you (the code author) make it so. Not unlike the opposite situation, when people go very wild with the freedoms of lax/dynamic typing.
I'm one of those programmers who doesn't care about types. Am I coding in a dynamic language without compile-time type checking? Great, I lean into it: hand me vim and a good testing framework. Am I put into a big Java codebase? OK, time to fire up IntelliJ and use automated refactoring to change the code. The type system dictates the style and trade-offs, so it's just one more dimension that informs how you deal with the code and write new code.
I’m more worried about what I’m working on and why.
Using types to check that things are correct is really important, which is what this article talks about.
For me though they really come into their own when those types then help to eliminate boilerplate. Haskell's foldMap function is my classic example of handling the accumulation of a result where it does the hard work based on the types. Idris goes even further by supporting the ability to infer obvious code for your code editor like handling each case in a sum type.
TypeScript, Rust, and working on huge projects impacted me the same way.
Rather than "using types so that things run", I have been encouraging people to ”describe business entities/constraints with types” in some languages. That's a way to get compilers to check half of your business constraints. But, I agree that some types are not worth it, particularly those not separating nullish from its non-nullish counterpart.
I'm having the distinct impression that the top comments in this discussion didn't read the article all the way through.
This is the article's conclusion:
> And, critically, this taught me to be far less dogmatic about the value of ideas in programming languages and software development in general
And yet all the top comments are completely, absolutely dogmatic (or at the very least, far more strongly worded than they merit; presented as fact when they're anything but) repetitions of "arguments" said over and over in that sixty-year-old flamewar. A flamewar that has no ending in sight and probably no ending possible.
There have been studies trying to answer this particular question: whether static or dynamic typing leads to more or fewer errors. Not only have those studies always been inconclusive; in one case the discussion among the researchers themselves ended up turning into a flamewar.
As for me, I'm just as happy working with Python or with Typescript, or with Kotlin or Clojure; somewhat less so with Go or Ruby; and I'm absolutely miserable when working with Java or PHP. Because these are just personal preferences, nothing more.
Great article. Chris's Rust podcast often sounds like this article reads, and it's worth the time.
I happen to be a Python guy. I don’t get to program every day anymore, and python is about all my brain has room for now. However, I’d love to have optional types and am envious of the Typescript crowd. I wish they would have based it on Python instead. Oh well, one can hope. Maybe one day we’ll have something like it.
I think a point not often brought up is that Java takes a stance against type inference. It argues that type inference can lead to bugs, because sometimes it doesn't infer the type you meant and makes code harder to read. So explicit types it is. Also inference slows down the compiler and similarly the auto-completion.
Now, I'm in the camp where I don't really think static type checkers are all that great. I judge languages as the sum of their parts, and in that way, you can easily rank one statically typed lang above a dynamic lang and yet rank another dynamic lang above both of them.
I just bring up Java's stance because I find it interesting. I tend to find most static-type-checker enthusiasts have an internal conflict between "static types make for programs with fewer defects", "static types help me navigate my code", and "static types are too verbose".
Java's stance is flat wrong as a matter of practice. I don't believe I've once had the compiler infer a wrong type in a way that led to runtime bugs. If it did, we would all be rightfully annoyed with that compiler because it would be broken.
Once more, for the back of the room: Java does not count as an example of a worthwhile type system. Good types do not increase verbosity.
I can see you disagree as a matter of principle and personal experience. And that's alright. I just think it's as valid an opinion as all the others. There are some who like explicitly defined types at various levels; those who want it all inferred, or partially inferred; and those who don't want them at all, or want them optional or gradual, etc.
And none has been able to make a claim over the others; even now, years later, the debate still rages on. The data doesn't settle it. Different people have different experiences that lead them down different paths.
So I like to think of it more as a personal choice. Like various master chef all have their preferred brand and type of knife, so do developers with programming language and type checkers.
Any language that supports type inference will also support annotations if your taste/linter/boss demands it. That's fine. But hobbling the compiler by not including type inference at all, because you think it causes bugs, is quite definitely stupid. I literally don't understand how anyone could dispute this except by inexperience with type inference.
Based on the sentence with the tangent about "optional or gradual" etc., I suspect you are conflating two debates. Of course there is debate about to what degree static type systems are helpful. That is quite separate from debating whether to include the very convenient and theoretically rock-solid feature of type inference. I have hardly heard anyone discuss the latter, much less attempt to gather empirical data on it.
Returning to the original point, there is no conflict between static typing and concision, except in C++/Java style type systems. I especially don't understand how a "static type checker enthusiast" would think there is, but if you meet one tell them Someone On The Internet told them to learn OCaml already.
Like I said, I'm definitely pro-inference; in fact, I don't even mind a dynamic language (depending which one): give me a Lisp or Smalltalk and I don't even need static type checking. Now, OCaml is actually next on my list, but I've done Rust, Haskell and Elm, and used Kotlin, Scala, Fantom, and C#. And I've used core.typed on Clojure. For now, Clojure is my language of choice, but I definitely prefer a functional language with a fully inferred type system over the imperative flavours.
But there is a segment of the static-checking world dedicated to the non-Hindley-Milner, mostly imperative, mostly OO scene, and denying Java's influence on static typing would be reductive, as it is one of the most widely used statically typed languages.
Anyways, I just wanted to bring their voice to the conversation, even if I don't agree, and I am a big fan of the local type inference that Java 10 added. Maybe the fact that they finally added it shows they reckon they were wrong about it.
Now, for causes of error. One of the ones they describe is "action at a distance", like this:
var result;
// many lines of code
result = new ArrayList<String>();
Now, because the assignment could come much later in the code, someone might mistake the type of the variable, since it isn't obvious what it would be when the assignment is so far from the declaration.
Another issue they talked about was multiple assignment. For example:
var x = "Hello"
// multiple lines later
x = new CustomerName();
Now what should the type of x be? Java could infer that it must be the closest common ancestor, which might be Object in this case. That's most likely not correct though, and the rest of the type checking now will probably lead to weird errors. Like why doesn't x work with String methods or CustomerName methods?
Another issue is ABI compatibility. If the return type of a method is inferred and compiled, and a dependent class uses it, a programmer could change the implementation and have the inference produce a compatible but different type all of a sudden; say it now infers ArrayList instead of the interface type List. Now the client code is broken, because it assumed the List type, and this is probably not the intent of the programmer but a side effect of the inference having changed without their notice.
I think they talk about more stuff as well. You've got to dig into the mailing lists and JEPs to find most of it.
I'm not trying to deny Java's influence, just saying that it's bad. I do think it was worth bringing up (I upvoted your top-level comment), but mostly because we need to recognize the damage it has caused to the discussion around typing. People try to talk about "dynamic vs static", or worse run studies, when their only static language is Java. Their arguments are nonsense and their studies are wasted as far as I'm concerned, since they're dealing with a straw man. Java is treated as the poster child when it should be the black sheep. FFS, it still has null pointers.
Your examples are mostly not problems in reasonable languages. Just don't declare variables before initialization. Assigning a variable two incompatible types is an error, just like it would be without inference.
I'll grant the ABI one is interesting. In situations where separate compilation is on the table, maybe you want to annotate function types. But obviously the myriad of inferred languages don't suffer much from this. You could just as reasonably argue that the class author made a breaking change, and it should be treated exactly as if they changed the annotation.
What I wonder though is what about Java is bad? Like, is the lack of inference the key differentiator between good static type systems that help, and those that get in your way and slow you down?
Or are we actually claiming something more: that the paradigm of Java prevents its type system from being useful? In that case we're as much having a conversation about type systems as about functional vs imperative/OO programming and other such paradigms.
And in fact, if you think of it that way, maybe it isn't a type system you've been looking for after all, but a better language/paradigm. You can see people moving away from Java to Erlang, Clojure, or Elixir all have this "OMG this is so much better" feeling. Similarly, someone who moves from Java to Haskell or OCaml also has this "OMG this is so much better" feeling.
Whereas someone who moves between Python/Ruby and Java, like the OP, thinks: I don't see the big deal with static types?
Now someone could say, well, it's because you moved to a bad static type system. Try Haskell or OCaml instead. But that's not just Java with a different type system, now you've also fundamentally changed the paradigm and so much more.
It becomes difficult then to distinguish the type system from other aspects of the language.
Now, I'll show some bias here, but I've used Haskell and Clojure. The languages have a lot in common; one of the big differences between the two is the type system (ignoring purity). When moving between them, there's no clear feeling of one being better; it's much more "hard to tell". The Haskell type system can be neat, can help me make sense of what is what in the code, what fields are available, what functions are supported, and lets me rename things with more confidence. But it also gets in my way sometimes, annoyingly, distracts me from my problem domain, and can feel limiting at times.
Amusingly, this is the same thing someone who's debating the pros/cons of Java's type system over Ruby's or Python's would say.
I guess where I'm going with this is to say that a type system is only part of the story. And with Java, is it truly the type system that is the problem? I don't think so. In fact, I think that within the context of Java's semantics, its type system may be the best it can be.
By the way, I think the issue with Java's inference of mutable variables is that most of the time you want to infer the most generic type. Say:
var items = new ArrayList<>();
It'd be nice for items to be a List and not an ArrayList; that's what a programmer would have annotated.
If a method returned items, you'd want the method's return type to be List, so that clients don't break if you later decide to use a different kind of List.
Another challenge now though becomes what is the type of elements in the List?
You can go by the type of the first added value, but if that is behind conditional branches like:
var items = new ArrayList<>();
if (thing == 1)
    items.add(100);
else
    items.add("hello");
Now what?
I feel that a lot of Java's mutable semantics, combined with its inheritance-based subtyping, make type inference much harder, because of all these weird edge cases.
> What I wonder though is what about Java is bad? Like, is the lack of inference the key differentiator between good static type systems that help, and those that get in your way and slow you down?
Lack of type inference is part of it - Java's type system gets in your way more than better type systems due to it. But the bigger part is that Java's type system doesn't give you the tools to model things properly. One big issue is nulls - no matter what you do with types, you have the possibility of null constantly in your way. The other is the lack of sum types, without which many things that you want to model become difficult, verbose, and error-prone.
IMO Java's crippling flaw is that it is very verbose (largely but not entirely due to lack of type inference), but its type system nevertheless permits a lot of errors (notably NPEs) that other languages easily catch. In a lot of ways, it gets the worst properties of both static and dynamic languages, both the inflexibility and the unsafety, all the weight and precious little of the power. It would be a lot more tolerable if it was null-safe with proper algebraic Option types.
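As a minimal sketch of the null-free modeling this comment describes, here is how a tagged union (a poor man's algebraic sum type) looks even in Python's gradual typing; the `Ok`/`Err`/`describe` names are hypothetical, not from any comment or library above:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Ok:
    value: int

@dataclass
class Err:
    message: str

# A sum type: a Result is exactly one of Ok or Err -- never null.
Result = Union[Ok, Err]

def describe(r: Result) -> str:
    # A checker such as mypy can flag callers that pass anything else;
    # no None slips through as it would with a nullable return.
    if isinstance(r, Ok):
        return f"ok: {r.value}"
    return f"error: {r.message}"
```

With this shape, the error case is an ordinary value the type checker forces you to handle, rather than an NPE waiting at runtime.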
So yeah, a smart programmer who moves from Java to either Haskell or Clojure will likely find it a breath of fresh air, for very different reasons. Incidentally, the reason I recommend OCaml is because it's a HM-inferred language but with a much more pragmatic type system than Haskell's. You can just println wherever you please.
It's true that subtyping makes inference a lot harder. My reflex is that subtyping is overrated and you should just use interfaces and composition, but you would have a decent claim that I'm moving the goalposts or something. I'm sure OCaml has a semi-decent solution, given the O part of the name, but I don't know off hand what it is. Worst case, if the compiler chokes on your subtype-heavy code I would say to just annotate it. Annotations are certainly valid for when type inference fails or for generally forcing it to do your will in weird cases. Relatedly, I don't totally hate forcing top-level function signatures to be explicit, which would also solve the most-general-interface thing in most cases.
AFAICT your last example is just ill-typed and should not compile. This is another of the places where Java's type system just fails entirely. What is the caller who gets that thing supposed to do with it, except reflect on the type of the contents like we're writing Python?
2. When you want to implement a method which applies to multiple libraries, you end up writing `fooString`, `fooInteger`, `fooDatetime` etc (or foo(String), foo(Integer)) etc. DRYing is too hard. So you end up writing 10x more methods
3. No you don't avoid `null` checks at all.
4. Generalizations are harder to implement.
5. Interfaces are uglier compared to dynamic languages.
6. The code becomes less readable, not concise and verbose
7. Every new and old typed language is less elegant compared to the dynamic ones. Look at TypeScript vs JavaScript: TypeScript code is ugly, verbose, not readable at all.
> 2. When you want to implement a method which applies to multiple libraries, you end up writing `fooString`, `fooInteger`, `fooDatetime` etc (or foo(String), foo(Integer)) etc. DRYing is too hard. So you end up writing 10x more methods
This is only a problem in typed languages without generics (like go) or without sum types (like go). With generics, the compiler takes care of writing fooInteger, fooDatetime, etc. With sum types, the compiler verifies that you’ve handled all of the cases in your foo function.
>2. When you want to implement a method which applies to multiple libraries, you end up writing `fooString`, `fooInteger`, `fooDatetime` etc. DRYing is too hard. So you end up writing 10x more methods
As discussed in the article, you want a sum type here.
>3. No you don't avoid `null` checks at all.
Entirely language dependent.
>4. Generalizations are harder to implement.
For some definition of harder. Harder to implement, causing you to think harder about what should be allowed under these generalizations, leading to fewer bugs, leading to things being... easier to actually implement in the long run.
>6. The code becomes less readable, not concise and verbose.
Since you only recently got into dynamic languages, just wait till you try to get into a large codebase without types. Where everything basically devolves into containers of arbitrary things that you don't know what they contain until you've gone through the thing with a debugger.
Maybe your day job becomes more interesting that way. Making CRUD apps is admittedly boring, but this isn't in my opinion the way to keep one's self on their feet.
How many languages have true sum or union types though? For example, in Haskell I can’t declare a function as (foo: (String | Int) -> Int) and then call (foo “bar”) or (foo 123), I need to create some kind of wrapper type (like Either) to contain the possibility of a String or Int.
If this were possible, there would not be such a proliferation of useless types throughout code, and code could be updated incrementally much more easily, since if I want to add the possibility of another type to a function's inputs, I don't have to go and update every call site.
Haskell's Either is a 'true' sum type, it corresponds exactly to the way sums have been defined in the literature for decades, and also is the Curry-Howard representation of logical or. It necessarily must be inside a Either-like wrapper in order to be type safe, String and Int are ultimately different and so at some point we must discriminate between them. foo could call further functions with its argument inside itself accepting Either String Int, but eventually a destructor will be hit somewhere in the call stack.
If String and Int do implement the same behaviour under some circumstance, then a typeclass should be used to define that behaviour. Then foo can have type (Bar baz) => baz -> Int, where String and Int have Bar instances.
Right, so I must have used the wrong words “sum type” when I should have said “union type”, but I thought the intended meaning was clear.
It doesn’t necessarily need to be inside an Either wrapper, that is an implementation detail, neither does there need to be a typeclass specifically defined for it. For example the new Dotty dialect of Scala has support for union types as I described them [1]. This is nice because it makes union of types an actual union operator, satisfying actual commutativity (A | B) = (B | A) and idempotence (A | A) = A, which Either does not without some extra explicit isomorphisms.
I’m curious: is the ‘true’ in quotes based on the premise that a lazy language cannot have logically true sum types? Or is it in quotes for some other reason?
I agree with my sibling comment regarding Haskell, but your example essentially as written is possible in typescript. Python’s type system also allows it via the Union type, e.g. Union[str, int]
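A minimal sketch of that Python version, assuming nothing beyond the standard `typing` module (the `foo` body is invented for illustration):

```python
from typing import Union

# foo accepts a plain union of str and int -- no Either-style wrapper needed.
def foo(value: Union[str, int]) -> int:
    # Discrimination still happens somewhere, here via isinstance.
    if isinstance(value, str):
        return len(value)
    return value

print(foo("bar"))  # 3
print(foo(123))    # 123
```

The union is commutative and idempotent as the parent comment wants (`Union[str, int]` equals `Union[int, str]`), though a checker like mypy still requires you to narrow the type before using str- or int-specific operations.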
Dynamic languages don't do away with such types, especially uint/float, etc. They just hide them away from you. The only thing that gives you is false confidence, where one day you end up doing operations on two floats because your untyped function is supposed to get numbers in, and you're left with $36.99999999999994 in your bank account because it wasn't explicit.
So you write unit tests to make sure you only pass the right types when calling it, doing a bad job of being a compiler.
Now, I'll agree, the languages you mentioned are absolutely horrible with types. Types in C are basically nothing more than suggestions because you end up casting const away while laughing, C++ has std::types<template typed<std::frobnicate<T, A, R>>>, C# and Java in the past were excessively verbose and required you to type every single thing. Java/C#/C++ finally got a bit better with that with auto/var.
Kotlin types and type inference are absolutely fantastic and make your code clearer.
2/ No, you don't write fooString, fooInteger, etc. You write foo(value: Int), foo(value: String) and let the type system resolve and tell you which ones are available, rather than relying on documentation and runtime type checks that are plain bad. And if it makes sense, you can even declare them as extension functions, Int.foo()
3/ Yes you do. Unless you explicitly opt in to null values or you're using platform types, like calling code from Java with @Nullable/@NotNull annotations
4/ Nobody calls for generalizations all the time. You're not supposed to write generics for all your methods.
5/ Explicit definitions are uglier compared to duck typing and just going "yolo, it has a .frobnicate() method, that means I can call it"? I agree that over time they might become unwieldy, but once again, with a proper type checker and type inference, they're a blessing. Using Kotlin and writing
when(this) {
    is Frobnicator -> this::frobnicate
    is Zobtrinatex -> this::zobritnex
    else -> this@caller::defaultBehavior
}.invoke()
gives you safety, checks that you're not mixing return types.
6, 7/ Depends. I've written some true abominations, yes. TypeScript gets ugly when the libraries you're using abuse types horribly (React used to be absolute trash for that; declaring props was terrible). But then, you pay the cost once (at declaration) and get the benefits through your entire app.
Don't write generics for a component, unless it makes sense. If you're exposing a Container<T> and you can access the data inside and you require its type, okay, it makes sense. But don't do a Page<T>, that's dumb.
Generics are a tool. Like C tells you not to abuse (void*), don't abuse generics. Use them with parsimony and your code will be better, give you more guarantees, and literally write itself if you've got a competent IDE.
Thanks @ohgodplsno for the detailed and thoughtful response.
One final point is that with statically typed languages you sometimes fight with the language to deliver. The language and libraries become a maze of problems. Developer productivity and the sense of accomplishment are low.
I sometimes look at the code from when I wrote a MATLAB clone in C++; it seems like half of it is things I wrote to overcome or augment the language and the libraries.
In Python you'd usually use a `Decimal` for money and it would give you an error if you tried to mix it with floats, not silently throw away information (which I certainly agree would be a dire consequence).
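A small sketch of that behavior, standard library only:

```python
from decimal import Decimal

# Decimal arithmetic is exact for money-style values.
total = Decimal("36.99") + Decimal("0.01")
assert total == Decimal("37.00")

# Mixing Decimal with a float raises immediately instead of
# silently producing something like 36.99999999999994.
try:
    Decimal("36.99") + 0.01
    raised = False
except TypeError:
    raised = True
assert raised
```

The loud failure is the point: the dynamic language still enforces the distinction, just at runtime instead of compile time.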
Say you'd like to have a foo method that returns... a number, whatever.
You do not need to have multiple symbols "fooString", "fooDecimal", etc. Simply overloading the parameters gives you type safety and keeps a single foo symbol.
fun foo(value: String): Int = value.length()
fun foo(value: Int) = value
fun foo(value: RemoteDatabase) = value.servers.map { it.connect().executeSql("SELECT 1")[0].toInt() }.sum()
All these define a foo method, that returns an Int. The untyped alternative (what javascript, python, etc do in a naive way, without trying to duck type) is to do this:
fun foo(value: Any) = when (value) {
    is Int -> value
    is String -> value.length()
    is RemoteDatabase -> value.servers.map { ... }.sum()
    else -> error("Welp, our type system couldn't help us there.")
}
Additionally, if it really makes sense, you can define foo as an extension function on the type:
fun Int.foo() = this
fun String.foo() = length()
fun RemoteDatabase.foo() = servers.map { ... }.sum()
You can then use it directly on the type, rather than call foo(1):
val fooResult = 1.foo()
val fooStringResult = "Well hello there".foo()
So basically, the author likes type systems that are aimed at writing correct programs (Haskell and Rust), and dislikes types when they are an implementation detail so the compiler knows how many bytes to reserve to store a variable (e.g. C).
The author didn't really change his mind; he just encountered a totally different kind of type system with different goals.
I've been using Kotlin (which is very similar to TypeScript, the language the author also mentions) to great effect, mostly because of 2 of those points (type inference and sum types) and something else that's also very powerful and not mentioned in the article: extension functions.
Without types, the complexity is shoved elsewhere (in someone’s head). Types make the inputs and outputs explicit.
He is right that a good type system is able to infer and be smart, while at the same time allowing you to place enough guards around things to truly express the author's intentions of how things ought to be used.
That's one of the reasons why I am not a fan of Go. Its type system is not that great; you have to do a lot of hacks and copy-pasting.
Rust, C# and TypeScript get it. Python with its new py3 typing syntax is great, but it still needs a lot of work to be able to express things like union types, nullable types, partial types, recursive types, conditional types, type transforms, generics, etc.
There’s a lot that goes on in a good type system. The balance of letting the user express their problem at the same time keeping errors meaningful and preserving the code guarantees.
For me, using C++, types have always been expected to do the heavy lifting. A language without sturdy types is a language that doesn't help you; you are left to do everything by hand.
C++, like Haskell and, to a lesser degree, Rust, makes types power tools, engines of coding automation. Java types, by contrast, do not do much for you, and C types are little better than none.
If you are new to programming with strong types, you will not reflexively unburden yourself onto your types, and you will continue doing things the hard way. That will feel comfortable, but it will limit your reach. To level up, you will need to consciously, and frequently, ask yourself whether a type could shoulder another burden. You will know you are succeeding when features you didn't notice you were coding pop up, unbidden.
I was wrong about types too. I used to think after I had mastered half a dozen languages, I should start spending time on typed languages, maybe eventually learning type theory, Haskell and what not.
Now I focus on things that matter: deep learning, AI, and robotics.
Yeah I agree most of the pain of types comes from the ways they're implemented (especially in early C++/Java etc)
Hence my beef with them. PyLint (as an example) can deduce and type check your program for you. Why do I need to annotate the types for every single thing? Python is not exactly a weakly typed language if you go down the details.
So if the compiler knows (or even worse in the case of Java: the IDE knows), why do I need to tell it that? Then you end up with the cases of several prototypes for the same function for things that are essentially the same.
I can understand type annotations for documenting interfaces. Those make sense.
People often think about coding, but that's generally a small piece of it.
Typing is a form of structure, and especially a kind of documentation.
When encountering APIs of various kinds 'typing' is part of how they are expressed.
It's hard to build large monoliths without typing.
When working with 'wide scope' problems, when the typing is more oriented towards the data, that's another thing altogether, which is why I think for many systems untyped JS and Python works well enough there.
What most typed systems lack is a fluent way of mapping to 'data types' which are often external to the system, or at least externally defined.
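One way modern Python narrows that gap is `TypedDict`, which documents the shape of externally defined, dict-shaped data without wrapping it in a class; this is a hedged sketch, and the `User`/`greet` names are invented:

```python
from typing import TypedDict

# Documents the shape of an externally defined JSON payload,
# while the value stays a plain dict at runtime.
class User(TypedDict):
    id: int
    name: str

def greet(user: User) -> str:
    return f"Hello, {user['name']}"

print(greet({"id": 1, "name": "Ada"}))  # Hello, Ada
```

The annotation is pure documentation and checker fodder; nothing about the wire format or the runtime value changes.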
> It's hard to build large monoliths without typing.
It's hard to build anything large. In a microservice architecture, one of the best things you can do is define a machine-readable schema (which is just a way to "type" RPC calls and messages) for all APIs/messages/events, and then auto-generate client code from it.
Most of the people who oppose typing must have been working on tiny projects, in small teams where they were co-authors of most of the code.
One good reason for not caring about types: prototype code. I do a lot of prototyping (trying ideas quickly, not for production). I typically do that in python. In that case, I don't think types would be helping me at all, I can get run-time errors, but I don't really mind, it's an easy fix.
I used to have to prototype in C++, definitely not the type of language that's fit for that (if only because it's compiled). Prototyping is best done with an interpreted, untyped language in my humble opinion...
I think the usefulness of type annotations is directly related to whether you are using classes.
In JavaScript or PHP, it’s usually easy to figure out if something is meant to be an int, string, array or object (yes we’ve probably all done “1”+1=>”11” before, but it’s very rare).
But if you are using classes, it’s very useful to distinguish at coding time which exact class an object is. Often you’ll end up with multiple classes with similar sets of properties, and it’s imperative to know which one is being used so you call the right methods.
I'm working on a system where the users supply equations as rules and in a roundabout way we came across static vs dynamic typing.
Having users define each equation and its type (boolean, numerical, etc.) would make it simple and foolproof, but it adds extra work and requires defining each equation separately, when a lot of them are just partitioning the space (imagine 30 combinations of x, y and z). So we ended up deriving the type from how the result is used, but that puts the work of getting it right on the user.
I've been wondering where to fit Rust in my mental taxonomy of typed languages.
Normally, types are used to statically type check a program, but with borrow checking and lifetimes, types are used to validate concurrent writes to memory locations. That's a whole different use case. One I find much more useful to be honest.
It makes me wonder, actually: could you have a language whose types are dynamic as far as type errors go, but whose write accesses are still statically checked?
Can you skip them in a typed language? Yes, just use any, Object or whatever the equivalent.
Can you add them to an untyped language? No.
They are not needed anywhere. But I argue that especially JavaScript module-systems would have benefited greatly from them.
A million lost hours in fixing obscure "undefined is not a function"-errors from output of highly dynamic pluggable build/transpiler-systems like webpack, requirejs, babel, buck etc. could have been avoided.
> If you have the option between creating a function which accepts a string (e.g. ID) as argument or accepts an instance of type SomeType, it's better to pass a string because simple types such as strings are pass-by-value so it protects your code from unpredictable mutations
What if you work in a language where SomeType is pass-by-value?
Rewriting our app from Node to Go reduced cloud spend by 20x, did not increase dev time, made it much easier to read and grow the project past 5k lines. Literally no downside.
If your database is typed on the bottom, and your documentation is typed on the top, you have a type sandwich in every non-script application. Might as well be consistent.
Static types are great for the Fungible Programmer at Big Co., where you can be dropped into a project and they sort of give you a way to figure out what's present in the code.
However, if you need to get somewhere fast, be it a side project where you've only got so much time due to other constraints in life, or it's a product you're trying to get to market first with a small team, then you're not going to benefit from ossifying statically typed languages - you're going to reach for something that's interactive, powerful, and not bogged down in edit/compile/test cycles. You're going to instead reach for something like Common Lisp, Ruby, Python etc.
Once you're making $COMFY_DOLLARS_PER_MONTH you can then pass it off to some re-write team, though.
I think this is a false dichotomy. When I’m knocking out something as quickly as possible, I actually prefer statically typed languages. I have some quirks b/c of my particular background (my formative years were a constant mix of C and C++ and PHP and JS), so I often build even “script”-like tools in Rust instead of Python or Ruby or whatever. I find the ability to refactor rapidly is increased for me as I work in languages like Rust, ReasonML, etc.
But I also work with and know folks who are absolutely brilliant, who build large systems carefully and slowly… living almost 100% in the REPL of a dynamic programming language. This approach is not the one I prefer, and in fact when I’ve tried it it has kind of driven me batty. But this doesn’t seem to be a function of either competence or context whatsoever (though certainly types do help somewhat in the Working In Any Sufficiently Large Codebase scenario); it seems to be a function of cognitive style.
Is 'Maybe Haskell', mentioned in the article, worth reading? Or can anyone recommend a similar book of the 'just enough to be dangerous' kind on Haskell or maybe Clojure?
'Maybe Haskell' is a fantastic intro to using types and why you might care about them. It's focused in scope and really meant to give you a taste of some functional ideas and a bit of Haskell, and in that effort it does a great job.
Also, you can't beat the price ($0). If nothing else, probably worth browsing through to see if something resonates with you.
Obviously I’m partial because of my history with the book, but having read a bunch of other introductory Haskell materials, this one is still my favorite. It’s my favorite in part because it doesn’t try to teach you the whole language, it just gives you a taste of some of its powerful abstractions, walking through them with just the `Maybe` type. I read the whole thing on a plane flight, and reread it on the flight back.
I found it a pretty good introduction. Unfortunately at some point it devolves into typical Haskell "how can we turn this clear, readable three-line function into a two-line function by peppering it with obscure operators".
I learned Clojure by reading Clojure for the Brave and True and can recommend it as a good intro. It helps that the language itself is simple and consistent. You can get remarkably dangerous with only a few hours of study.
Everyone has their opinions, that's fine, but who cares. It's a waste of time to write out your opinion or discuss it, because it's just an opinion.
I think you misread the post! I very much like TypeScript, am a core contributor to the TS work in the Ember.js ecosystem, and am actively involved in efforts to make TS a first-class supported language in my day job!
> If you talked to me about types in programming languages six or seven years ago, you would quickly have learned that I was not a fan.
> I was working with Python and JavaScript and simply didn’t miss types at. all
> I understand why I thought that types were worthless from 2012 – 2014. In fact, for the specific languages I had used up to that point, I continue to think that the types don’t really pay for themselves.
Ah, I can see how you got there. I was not working with TS then; like most people I didn’t yet even know it existed. At the time, I preferred untyped JS and Python to typed C, Fortran, Java, C++.
Extremely fair. I go through phases where I over-emphasize all the things, and usually a second/edit pass helps clean it up. I actually went through and removed a ton of them from the post itself, just because of this very comment. ;)
For me it was the opposite. I started out with dynamically typed, then switched to statically typed for many years, then I realized that types were almost worthless (and even harmful in some cases in terms of how they influence design/architecture) and I switched back to dynamically typed and never looked back.
Nothing beats good testing and good architecture design.
The benefits of types are not something to be discovered, but something that's taught in school with very convincing arguments, if not a bit too much zeal. It's interesting to see this kind of post. Not to be sarcastic about the late discovery of typed goodness, but it's interesting that this is not already a consensus in the engineering world.
It’s not a consensus because many of us have found that types are not the panacea we were taught. Erlang is a fantastic language, and it’s arguable that types would not improve it, but rather hinder some of its better features. Same is true for Smalltalk and various lisps. I think there’s a place for various approaches to types.
I realized these lessons about types in the first semester of compsci at Uni - no one "taught me" but the curriculum meant that I learned this. It's great that people come to these insights on their own - but it's really clear that this is a very inefficient way of learning.
Kind of! But there is also, in many areas of life (and perhaps including this one) a reality that you can't actually learn things truly without going through some of these bumps along the way.
The funny thing is, I started out passionately affirmative of the value of types, became disillusioned as discussed in the post, and then came around when I found types that actually did the things that I wanted.
Honestly I think people who object to typing are lazy and inconsiderate to other people who will need to read, use, and maintain their code. They often haven't had much experience and haven't worked on large code bases, in my experience.
Author here, with a few comments on things that I see coming up through the discussion—
1. This is not a post bashing dynamic types. When I write that I now value type systems, that doesn’t imply or entail thinking that dynamically-typed languages are bad, wrong, etc. I have a preference for (a certain variety of) static types these days, but some of the best and smartest engineers I know have a preference for dynamically-typed languages—and as my conclusion notes, I take that much more seriously than I did earlier in my career. I’m particularly interested in spending time with Erlang/Elixir and Clojure in the future, as both of them take very different approaches to robustness from the dynamically-typed languages I’ve worked with in the past.
2. Folks have asked about what specifically I found lacking in the type systems I encountered earlier (Fortran, C, Java). I alluded to this in the post, but I’ll elaborate a bit. For my part, I found that all of those—really, anything in that line without influences from e.g. Standard ML—required a great deal of the work of a type system without the degree of benefit I wanted from it. There has been some progress here in languages like Java, C++, and C with `var`/`let`/`auto` with inference, and more by adding in closures and the like. But even today, the gap in what I can express and have the compiler check for me between Java and even TypeScript (much less Rust, Elm, Haskell, etc.) is large. That gap is where a ton of the value is, at least for me.
3. Other folks allude to coming from e.g. PHP to C and Java and finding even C and Java’s relatively limited type systems to be a blessing. I can see how that would be the case, especially given Java’s really great tooling for refactoring. I actually worked with C, C++, Fortran, PHP, JS, and Python all in parallel for the first four years of my career (and semi-regularly poked at Java; I had some early-2010s interest in Android dev that never went anywhere), so it was certainly not from lack of exposure that I didn’t find the tradeoffs all that valuable in the type systems of C, Fortran, Java, etc.
4. I strongly suspect that “cognitive style” (for lack of a better way of putting it) plays an enormous role in how one feels about and approaches types. The literature on types is not very conclusive or robust, and while it finds (e.g in studies of TS) that it does eliminate certain categories of bugs, advocates like me would do well to remember that the effects are relatively small and relatively limited even so. And as I alluded to in (1), I know a bunch of brilliant developers who recognize the value of types in principle—who have worked with Haskell or other similarly robust languages!—but don’t prefer them, and from discussions with them it’s very clear to me that we just think differently, and therefore approach building systems differently!
5. I think there are places where not using the best and most robust type system imaginable is negligent: a TLS implementation, for example. But in those areas, I also think that you’d better be doing an enormous degree of testing, and putting formal methods to use, and doing multiple audits, and basically throwing every possible solution at the problem. Same thing for aircraft or spaceships or medical hardware. Your average CRUD app… probably doesn’t need that level of effort, though. One of the problems in any discussions or debates about these kinds of things is failing to distinguish appropriately between the things which are necessary for different domains and indeed which may be better suited for those domains.
6. Finally, a meta-point: I find it entirely predictable, but a little bit sad, that this post blew up here on HN, when I tossed it off in under 15m… while a deep dive on a cutting-edge JS reactivity system I spent 3 weeks writing and revising got far less attention (and I’m sure that goes for many such careful write-ups). I get it: we all have opinions about things like types. But it’s still too bad!
Comparing languages with types to languages without types is like comparing conservatism to liberalism. Fewer rules = more change. How much of your newfound liking for TypeScript can be attributed to 'realizing' it's better, and how much of it is simply because you're getting older?
This is the opposite of my progression. I worked professionally with Haskell for 6 years and then Scala for 2 years at the early stages of my career.
Static typing is a waste of time that does not facilitate better design, better clarity or safer code. The classes of errors detectable and preventable with static typing are almost never very important, and in dynamic languages you can achieve the same results with lightweight unit tests that are no more effort, and often much less effort, than creating or maintaining type system designs.
Type annotations are code, and more code is just more liability and more complexity. It really is just that simple. You should strive to write code with the fewest cognitive concepts necessary. Cognitive concepts are the biggest form of tech debt you can have in code, much, much worse than repeated code, lack of modularity, lack of test coverage, and so forth. Type system design invites running amok not just with all kinds of new cognitive concepts to model every domain problem, but the code authors are also encouraged to use “creativity” when developing these concepts, which leads to poorly factored messes that express the limited view of a set of specific people - often with huge premature abstraction that makes the code excessively brittle (even when it’s claimed to be extensible) and poorly suited to accommodate use case or requirements changes.
For the past 5 years I’ve moved on to exclusively using the Python ecosystem for system designs. It’s nice since I work in machine learning and no ecosystem even comes close to Python in terms of ML tooling, but even for backend systems and web service designs around the ML product my team creates, using Python has been pure joy in terms of easy development cycles, easy ability to write less code, easy ability to achieve high safety and reliability, easy ability to trade off safety as a resource to accommodate sudden business requirement changes.
I can sincerely say across the board programming large backend systems in Python has been more pleasant, easier to design, read, understand and maintain, easier to extend, and led to safer, more reliable delivered solutions than any of the projects I worked on using Haskell or Scala (even when working with those languages in mature organizations that had very seasoned, veteran functional programming experts establishing in-house development practices).
Now that I’ve transitioned through senior & staff engineer and became an engineering manager, my perspective over the years has come to endorse dynamically typed languages as vastly superior tools.
My experiences with Haskell and Scala just lead me to believe Python is strictly better for large system design.
No, the Haskell job was in a large financial service firm with a huge investment in Haskell. Many people on the team were even experienced with GHC compiler engineering. The code base was very large and cross-org.
All the other jobs have been in large ecommerce firms, again with large codebases cutting across the org, and separate pockets of “small project” development here and there.
I also have many years of experience in C and C++, both of which are solid. I especially like writing systems in C because it’s easy to make it module oriented, and static typing doesn’t need to have anything to do with concept or domain modeling, yet you still can occasionally sprinkle it in with simple structs. I like C++ without classes or exceptions, but the language outgrew that way of using it.
The best approach I ever saw to using Haskell at scale was to severely disallow any complex type system feature, like even disallowing lensing or custom type classes. Just ruthlessly stick only to basic language features and module oriented design. When you make your own data structures, never generalize them to inherit functionality through type classing, rather always rotely splay out a new module of all the functions and behaviors for operating on that data structure. Do not seek to endow it with any type of compositional behavior besides very basic plain function composition (eg no monadic behaviors).
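For the sake of discussion, the same discipline translated loosely into TypeScript terms (not the Haskell original, and the names are made up): one plain data shape, one flat module of functions for it, and only ordinary function application to combine them — no class hierarchies, no generic abstraction layer.

```typescript
// Module-oriented style: a plain data shape plus a flat set of functions
// that operate on it. No inheritance, no monadic composition; data in, data out.
interface Inventory {
  readonly counts: Readonly<Record<string, number>>;
}

function emptyInventory(): Inventory {
  return { counts: {} };
}

function addItem(inv: Inventory, sku: string, qty: number): Inventory {
  return { counts: { ...inv.counts, [sku]: (inv.counts[sku] ?? 0) + qty } };
}

function totalItems(inv: Inventory): number {
  return Object.values(inv.counts).reduce((a, b) => a + b, 0);
}

// Plain function composition, nothing fancier:
const inv = addItem(addItem(emptyInventory(), "apple", 3), "pear", 2);
console.log(totalItems(inv)); // 5
```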
> The best approach I ever saw to using Haskell at scale was to severely disallow any complex type system feature, like even disallowing lensing or custom type classes. Just ruthlessly stick only to basic language features and module oriented design.
I agree. It's not unlike what you can do with reflection and other dynamic features of dynamically-typed languages. So it's not a problem of static typing itself. It's a problem of taking a feature of the language and strangling the code with it.
I had to do some refactors of JS code where everything was a shapeless object with random fields, as the language doesn't even nudge the developer into considerations like keeping consistent "shapes" of data. Had similar experiences with some Python code, where I had to fight data-type fuzziness, reflective-feature inventions (hello __getattr__!), and OOP over-abstracted nonsense all at the same time.
One way or another, in larger teams/projects dynamic typing just doesn't scale. A type system works as the most basic documentation / developer aid / consistency encouragement, even if it gets pushed into a complexity nightmare.
I don't think I've ever even seen a reasonably well working large project in a dynamically-typed language...
IMO, static typing gets blamed for problems that are usually caused by OOP: complexity caused by needless abstractions (mostly inheritance taxonomies).
> “ IMO, static typing gets blamed for problems that are usually caused by OOP: complexity caused by needless abstractions (mostly inheritance taxonomies).”
That’s fair, since most of the issues are caused by poor attempts to create “concepts” that match a problem domain and encode them in the type system, which is a very OO mentality. But it’s not exclusive to OO, you can get the same problems in functional languages or plain module languages too if you go too far with type system designs.
But functional languages offer plenty of ways to successfully avoid it too.
Nonetheless, I still argue that safety & correctness should be treated like resources, not absolutes. Safety and correctness are things you want more or less of, depending on costs (in terms of extensibility, extra code, rigidity, difficulty to onboard new people to conceptual complexity, etc.) no different than making memory trade offs for better runtime or vice versa.
Languages should facilitate thinking of enforced correctness as a gradual concept, like gradual typing in Python, so the programmer has freedom to make these choices. Languages should not dictate one way (eg always enforcing type safety), and especially not at the cost of extra code or complexity.
> “ One way or another, in larger teams/projects dynamic typing just doesn't scale. Type system works like a most basic documentation / developer aid / consistency encouragment, even if it is pushed into complexity nightmare.”
> “I don't think I've ever even seen a reasonably well working large project in a dynamically-typed language...”
My experience is the exact opposite. I’ve never seen a large codebase that is statically typed which doesn’t turn into a complete ball of mud with overlapping type system concepts that bring extensibility and innovation to a grinding halt and cause months of delays to adjust for even the simplest changes to business requirements.
Meanwhile the two largest code bases I’ve ever worked in (one was the entire search engine implementation of a giant tech company, the other was a document editing CMS system also at a large tech company) were both 100% Python on the backend, both on the order of 500k to 1MM lines of code, and working with them was very enjoyable. Plenty of tech debt and headaches like any big enterprise codebases, but none of the “grinding halt“ - “we can’t do anything like this because our earlier brittle commitments to certain type system designs would have to be fundamentally refactored to allow it” - sorts of existential, death of velocity kind of problems.
Interesting. Seems we have very opposite experiences.
I definitely agree that one can paint themselves into a static-typing corner where refactoring becomes painful (but as I said, usually because of OOP). Also in big projects care needs to be taken to keep compilation times manageable.
> Languages should facilitate thinking of enforced correctness as a gradual concept, like gradual typing in Python, so the programmer has freedom to make these choices.
Depends on the domain. In web-based non-critical applications, maybe. If there's a bug, a user gets an error, reports it, you push a fix to prod, and things are usually OK. In things I mostly worked with (embedded, automotive, on-premise appliances, systems, p2p, finance), the cost of debugging and fixing small bugs is far too high.
I am a supporter of gradual typing though, if someone wants to go the dynamic typing route.
> “ Depends on the domain. In web-based non-critical applications, maybe. If there's a bug, users gets and error etc. reports it, you push a fix to prod and things are usually OK. In things I mostly worked with (embedded, automotive, on-premise appliances, systems, p2p, finance), the cost of debugging and fixing small bugs is far too high.”
I disagree here. The main systems I worked on using dynamic typing and treating safety and correctness like resource trade-offs have been high-frequency trading, banking, ecommerce fraud detection, and real time image processing - all cases where failures are costly and stakes were high.
In fact when the stakes are high, it’s even more critical to use gradual notions of safety & correctness, because you even more urgently need to put the trade off decisions in the hands of engineers and business / product leaders.