38% of bugs at Airbnb could have been prevented by using types (reddit.com)
358 points by deterministic on Feb 11, 2019 | 390 comments



Relatively recently I've adopted the stance that JS (via V8+NodeJS) has the best feature set of the "scripting language" tier of interpreted programming languages, assuming the ecosystems for your problem are comparable. The combination of compile-time type-checking support (via Typescript), parallelism support (recently, in v11, with worker threads), and built-in async IO paradigms gives Node a huge leg up over ruby, python, and perl.
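
To make that concrete, here's a rough sketch of the worker-threads half of the claim (assuming Node >= 11.7 and that this TypeScript file is compiled to plain JS before running; fib is just stand-in CPU-bound work):

    // worker-demo.ts -- compile with tsc, then run the emitted .js with node
    import { Worker, isMainThread, parentPort, workerData } from 'worker_threads';

    // compile-time checked types via TypeScript
    const fib = (k: number): number => (k < 2 ? k : fib(k - 1) + fib(k - 2));

    if (isMainThread) {
      // a real thread in the same process, not a forked child process
      const worker = new Worker(__filename, { workerData: 32 });
      worker.on('message', (n: number) => console.log('fib(32) =', n));
      worker.on('error', (err) => console.error(err));
      // the event loop stays free for async IO while the worker crunches numbers
    } else {
      parentPort!.postMessage(fib(workerData as number));
    }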

Weirdly enough, JS is the best in my opinion precisely because it does so much to not be itself. The crazy iteration speed of the JS ecosystem means that people have done crazy things, but the good stuff has remained. Python relatively recently added typing[0], and Matz has mentioned that ruby has no plans to include types in the language proper (though people are working on it). Perl might be the closest in terms of features since it has thread support but...

If you enjoyed this hot take, here are a few others you might enjoy:

- "extensive" unit testing is a poor man's type checker

- developers that don't think specifying types is helpful are still early in their careers (those who suffered in the dungeons of Java get some slack)

[0]: https://docs.python.org/3/library/typing.html


I believe the most amazing feature of JS is its ability to make its devs think they've got it all figured out.

> Python relatively recently added typing[0]

The typescript compiler and the mypy type checker both appeared in 2012.

Function annotations have also been officially part of the Python syntax since 2006, although they could be used for more than typing.

Browsers still do not support the typescript syntax as of today; you need a transpiler.

> parallelism support (recently in v11 w/ worker threads), and built in async IO paradigms give Node a huge leg up over ruby, python, and perl.

Python has supported non-blocking processing as far back as 21 years ago with threads. It's actually the language of choice for pentesting networks because it's so easy to implement a custom service scan.

Python then, in 2002, saw the rise of Twisted for pure asynchronous I/O + callbacks, much as JS did with AJAX at the time. Asyncio came way later, yet Python implemented async/await and yield before they were part of JS.

Python implemented multiprocessing as well, which makes it possible to leverage multiple cores, more than 10 years ago. I remember that producing the equivalent result is hard to do consistently across all major browsers and nodejs. The Python multiprocessing pool has not even changed its API when we migrated from Python 2 to 3 and works with both (I just tried it to be sure).

> Weirdly enough, JS is the best in my opinion precisely because it does so much to not be itself.

I think you should try to put 4 or 5 other languages in production before stating such a strong opinion. Your post feels more like overenthusiasm due to lack of experience than a real analysis of the state of dynamic programming languages.

There are strong qualities to Perl, PHP, Python and Ruby. The fact that they are so popular without having an accidental, absolute monopoly on the most extraordinary platform in the world (the Web) should at least give you a hint of that.


> The typescript compiler and the mypy type checker both appeared in 2012.

> Function annotations have also been officially part of the Python syntax since 2006, although they could be used for more than typing.

> Browsers still do not support the typescript syntax as of today, you need a transpiler.

I did not make a statement about the typescript compiler/transpiler or mypy, I made a statement about the typing module, which was brought about by PEP 484[0] (proposed in 2014) and accepted in 2015, a full 3 years after the typescript compiler came on the scene. JS got gradual typing first, even if it was through transpilation -- and there are even earlier examples of syntax enhancement (though mostly sugar), such as CoffeeScript.
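
To make the "gradual" part concrete, here is a tiny, hypothetical sketch of the incremental adoption TypeScript allows (assuming a default tsconfig without noImplicitAny):

    // gradual adoption: plain JS is already valid TypeScript
    function area(width, height) {          // parameters silently fall back to 'any'
      return width * height;
    }

    // annotations get added where they pay off, one function at a time
    function areaTyped(width: number, height: number): number {
      return width * height;
    }

    // areaTyped(3, '4');                   // rejected by tsc: a string is not assignable to 'number'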

JS is undeniably more innovative in this respect -- it's also made more mistakes.

It doesn't matter what browsers support, because the code is transpiled. The point is that you have access to better tools as a developer while writing code.

Is your point that Python developers have been specifying types in their code in any sort of numbers since mypy was invented? Because I sure don't see that in slides at PyCons over the last ~5 years.

> Python has supported non blocking processing as far as 21 years ago with threads. It's actually the language of choice for pentesting a networks because it's so easy to implement a custom service scan.

> Python then, in 2002, saw the rise of Twisted for pure asynchronous I/O + callbacks, as it was for JS and AJAX at the time. Ansyncio came way later, yet Python implemented async/await and yield before it was part of JS.

There's a difference between what's possible and what's easy/facilitated well by the standard library of a language. I did not imply that python could not do non-blocking processing. My point was that these patterns were not easy and were not standard -- and the community floundered for years between alternate solutions (gevent/twisted/tornado), eventually creating asyncio while NodeJS has had support for this feature built in from the start.
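
As a rough illustration of "built in from the start": non-blocking IO is the default in Node's standard library, with no framework choice to make first (a minimal sketch; fs.promises assumes a reasonably recent Node):

    import { createServer } from 'http';
    import { promises as fs } from 'fs';

    // every request handler is non-blocking by default; no Twisted/gevent-style
    // framework decision has to be made up front
    createServer(async (_req, res) => {
      const body = await fs.readFile('./index.html');   // does not block the event loop
      res.end(body);
    }).listen(8080);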

Also, I'm not sure that pentesting claim is correct -- ruby is extremely popular and so is/was perl. Scripting languages with decent standard libraries and easy semantics are popular, not necessarily ones with non-blocking processing support.

> Python implemented multiprocessing as well, which allows to leverage multiple cores, more than 10 years ago. I believe producing the equivalent result is still hard to use across all major browsers, and nodejs, consistently. The Python multiprocessing pool has even not changed API when we migrated from Python 2 to 3 and works with both (I just tried it).

Please see the other comments. No one is saying multiprocessing is impossible in Python; I specifically called out that it's done by starting other processes. There are subtle differences between how Node worker threads and multi-process setups work, in particular the shared memory space and communication mechanisms. The GIL is also an issue.
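
To illustrate the shared-memory difference, a minimal sketch (assuming Node >= 11.7 and a TS compile target of ES2017 or later; SharedArrayBuffer + Atomics gives worker threads a view onto the same memory, which spawned child processes don't get by default):

    import { Worker, isMainThread, workerData } from 'worker_threads';

    if (isMainThread) {
      const shared = new SharedArrayBuffer(4);           // one 32-bit slot, shared across threads
      const view = new Int32Array(shared);
      const worker = new Worker(__filename, { workerData: shared });
      worker.on('exit', () => console.log('parent sees', Atomics.load(view, 0)));   // prints 42, no copy
    } else {
      const view = new Int32Array(workerData as SharedArrayBuffer);
      Atomics.store(view, 0, 42);                        // visible to the parent without serialization
    }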

> I think you should try to put in production 4 or 5 other languages before stating such a strong opinion. Your post feels more like over enthousiasm due to lack of experience than a real analysis of the state of dynamic programming languages.

> There are strong qualities to Perl, PHP, Python and Ruby. The fact they are so popular without having an accidental absolute monopoly on the most extraordinary plateform of the world (the Web) should at least give you a hint of that.

So your point is that everything is different and everything has its own benefits? Well, that's impossible to refute, so I'm just going to restate my point; maybe you can try and refute it directly.

Javascript has accessible (and thus widely adopted) and easy-to-use implementations of the following features, today in 2019:

- Rigorous and powerful compile time type checking

- Parallel computation (without spawning another process or the effects of a GIL)

- Asynchronous IO

The combination of these three features being accessible is what makes Node stand out in front of Perl, PHP, Python, and Ruby.

Python may have "had" the capability to do the things above at various times in the past, but it's technically been capable of doing just about anything, like any other programming language -- you can launch sub-processes from bash, but that doesn't mean you should use it for large applications. What makes a good language is accessible and easy-to-use features that make you both correct and productive.

Node has more of these tools available, and they're more accessible in Node than they are in Python, and more people are using them. Typescript offers more robust type support than typing+mypy. Node worker threads, though very recent, provide parallel computation without child processes, in a shared memory context with no GIL. Asynchronous IO is second nature to JS developers, while that's definitely not the case for python developers.
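
For a small, deliberately toy example of the kind of checking I mean (not a comparison with mypy, just what a discriminated union with an exhaustiveness check looks like in TypeScript):

    type Shape =
      | { kind: 'circle'; radius: number }
      | { kind: 'rect'; width: number; height: number };

    function area(shape: Shape): number {
      switch (shape.kind) {
        case 'circle':
          return Math.PI * shape.radius * shape.radius;   // narrowed to the circle variant here
        case 'rect':
          return shape.width * shape.height;
        default: {
          // adding a new variant to Shape without handling it above makes this fail to compile
          const unreachable: never = shape;
          return unreachable;
        }
      }
    }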

[0]: https://www.python.org/dev/peps/pep-0484/


> I did not make a statement about the typescript compiler/transpiler nor mypy, I made a statement about the typing module which was brought about by PEP 484 (proposed in 2014) and accepted in 2015 a full 3 years after the typescript compiler came on the scene. JS got gradual typing first -- there are even earlier examples of syntax enhancement (though it was mostly sugar) with coffee script from even earlier.

> JS is undeniably more innovative in this aspect -- it's also made more mistakes as well.

JS has none of that. Typescript has it. They are not the same thing.

I'm comparing an external tool with an external tool. Mypy could type check Python before typing was officially in the stdlib, just like the typescript type checker does for JS. Type annotations are not part of JS.

> It doesn't matter what browsers support, because the code is transpiled. The point is that you have access to better tools as a developer while writing code.

It does matter very much. Transpiling is not a free operation, and comes with a lot of pros and cons.

It's also not a universal operation. Most JS projects do not use typescript. Many projects still don't even use a transpiler at all.

> There's a difference between what's possible and what's easy/facilitated well by the standard library of a language. I did not imply that python could not do non-blocking processing. My point was that these patterns were not easy and were not standard

JS async was not easy either at the beginning. We had a decade of callback hell, then another war about the promise implementation, all of it wrapped up in code to make browser differences manageable. Not to mention the two nodejs forks in its short lifetime, with JXcore rewriting the entire async model.
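
A quick sketch of that progression, for anyone who missed the callback era (Node's fs module; the file name is just a placeholder):

    import { readFile } from 'fs';
    import { promisify } from 'util';

    // circa 2010: error-first callbacks, which nest quickly in real code
    readFile('data.txt', 'utf8', (err, text) => {
      if (err) { console.error(err); return; }
      console.log(text.length);
    });

    // after the promise wars and async/await: the same read, flattened out
    const readFileAsync = promisify(readFile);
    async function show(): Promise<void> {
      const text = await readFileAsync('data.txt', 'utf8');
      console.log(text.length);
    }
    show().catch(console.error);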

Let's also not forget we managed to scale web services way before NodeJS was a thing, with dozens of languages.

Yes, making an async HTTP request is more natural in JS than in Python. It would be a shame if it weren't: JS is a Web-oriented language while Python is a general-purpose language. However, I have yet to witness a service where this would have made such a huge difference all in all. Even Facebook uses long polling if I recall, something you can do easily with threads on the client side, and support with a proxy such as nginx as a front end, even with nodejs as a backend.

> Javascript has the accessible (and thus widely adopted) and easy to use implementation of the following features, today in 2019:

> - Rigorous and powerful compile time type checking

Not JS. Typescript. And typescript and mypy came out together.

> - Parallel computation (without spawning another process or the effects of a GIL)

Web workers consume a similar order of magnitude of resources to a Python instance, and communicate data by message passing in the same way. And no GIL.

The difference? Python came first, and has more consistent support (you need a fallback in JS if you use it on a website). I don't think we should be proud of it, but it's not the Middle Ages you are talking about.

> - Asynchronous IO

> The combination of these three features being accessible is what makes Node stand out in front of Perl, PHP, Python, and Ruby.

The advantage of JS is historically a better async API. Let's not overstate this though.

And even if JS were vastly superior in all those areas (which is, again, overstated), it is hardly enough to draw the conclusion that:

> as the best feature set of the "scripting language" tier of interpreted programming languages

Readability, ease of automation, rich stdlib, ecosystem, ability to deal with formatting/parsing/date handling, community culture and so much more are as important, if not more important, for a scripting language. Scripting, after all, goes far beyond the Web.

> Python may have "had" the capability to do the things above at various times in the past, but it's technically been capable of doing just about anything, like any other programming language -- you can launch sub-processes from bash, doesn't mean you should use it for large applications. What makes a good language is accessible and easy to use features that make you both correct and productive.

YouTube having been rewritten in Python proves that it was more than merely capable. Python also powers Instagram and Reddit.

When Google rewrites their Python services for perf, they don't go for JS, they choose Java, C or Go. That's because, again, Python is within the same order of magnitude of performance as JS on average.

> Node has more of these tools available, and they're more accessible in Node than they are in Python, and more people are using them.

Yes, JS is a Web language. That doesn't make it the ultimate scripting language.

> Typescript offers more robust type support than typing+mypy.

This is a gratuitous claim. Especially since typing is in the stdlib, while typescript annotations are a syntax error in ECMAScript.

> Node worker threads, though very recent, provide parallel computation without child subprocesses, in a shared memory context with no GIL.

Objects passed to web workers are serialized and deserialized, just like with python multiprocessing.

> Asynchronous IO is second nature to JS developers, while that's definitely not the case for python developers.

Ajax calls being the basis of today's web, the contrary would be alarming.


> Transpiling is not a free operation, and comes with a lot of pros and cons.

You could say the same for languages that require compilation; it's not exactly an uncommon pattern. JS will almost always have some kind of operation applied to it when running on the web (minification etc), so adding TypeScript transpilation is often a very small step.

Is that ideal? No. But Python doesn't even run on the web, so of course it doesn't have these issues.


> developers that don't think specifying types is helpful are still early in their careers (those who suffered in the dungeons of Java get some slack)

I'm not high-confidence about this heuristic because 1) the "class label" is a bit subjective and 2) I don't have a massive sample size, but attitudes toward typing are one of the best filters for engineer quality I've come across.

(No, I haven't put this in practice in interviews, and I don't ask any related question. It's just a correlation whose strength I've found really striking)

It's part of a broader class of heuristic I've been noticing recently (again, still a very preliminary, low-confidence model): things like "lack of reliable, quality devtools" and "inability to customize interfaces" bother highly-productive engineers more than less-productive ones, which is what you'd expect given that inefficient interfaces to their work have a far greater cost to the former. Along these same lines, if (eg) you're already pretty bad at understanding complex systems by reading code, you're much less likely to notice or care about how much more difficult dynamic typing makes this.

I should be clear: this isn't a general-purpose argument against dynamic typing, and I'm not suggesting that the situation is as simple as "dynamic typing bad, static good". They both have their strengths and weaknesses, which make each appropriate for different situations; I ended up picking Python for a company whose tech org I was responsible for building, and as much as I personally find engineering in Python to be horribly unpleasant, it was the right choice for the company given the environment it was in, and I would make the same choice in the same situation.

I don't think my theory above is particularly novel, and I understand the self-serving bias that it aligns with, which is part of why it's still such a low-confidence model. But I've been unable to wrap my head around the number of nonsensical defenses of dynamic typing I've seen over the last few years, and this model is the first to offer an explanation: if you can't tap into the benefits of static typing, of course the accessibility and flexibility of dynamic typing is going to seem like it far outweighs the foofy, theoretical benefits that static typing brings to stability and comprehensibility.


> It's part of a broader class of heuristic I've been noticing recently (again, still a very preliminary, low-confidence model): things like "lack of reliable, quality devtools" and "inability to customize interfaces" bother highly-productive engineers more than less-productive ones, which is what you'd expect given that inefficient interfaces to their work has a far greater cost to the former. Along these same lines, if (eg) you're already pretty bad at understanding complex systems by reading code, you're much less likely to notice or care about how much more difficult dynamic typing makes this.

I think for me it is the other way around actually: I believe I'm worse than the average programmer at keeping complex states in my head, thus I push heavily for high-quality code and tools to reduce the times I have to fight through particularly nasty debugging / refactoring sessions.


This is the exact rationale I expect experienced developers (at least the ones I'm capable of recognizing) to have -- I look for architecture, solutions, and tools that make it impossible for me to make mistakes, instead of trying to rely on having tons of discipline. Pay the cost once, then never worry about it again.

No one ever talks about it, but one of the most valuable things I've learned to start doing is writing pre-push (and sometimes pre-commit) hooks into my projects, adding the corresponding `dev-setup` or whatever command to put them in place, and running linting & unit/integration tests before every commit/push. You could solve the problem of committing/pushing buggy code by just "writing better code", but with this methodology that problem completely evaporates.


> I think for me it is the other way around actually: I believe I'm worse than the average programmer at keeping complex states in my head

If you think you are actually good at keeping complex states in your head (not just better than the average human/programmer) and therefore don't need high-quality code/tools, you are setting yourself up to be destroyed by your own hubris. There are no humans who are good enough at keeping complex states in their head to actually understand the gigantic programs that we work on these days. Back when a few hundred kb of memory was a lot, some people might have managed it, but not anymore.


Reliance on high quality tooling to take care of the minutiae for you in a principled way isn't something that goes away for highly-productive engineers. If anything, IME (and in line with my GP comment), it grows. If you can hold X units of complexity in your head, and you can map Y units of functionality per unit of complexity, then any improvements in tooling that increase Y will be more effective for those for whom X is higher.

You're moving more quickly and at higher levels of abstraction, so getting dragged down into the mud of reading every line of some shitty code to figure out an ambiguous type or side effect has a far greater cost (the lack of const or anything similar makes it even worse, since you can't write off a function that takes an object even if your sphere of concern is just that object). By contrast, if the functions you're writing are trivial and you read and understand code line by line, dropping into the stack just blends into the rest of the code-reading (or so I assume). (I still remember inlining an absolutely grotesque type-hint in order to get some ammunition to convince someone to refactor his sewage spill of a function. I wish I had written it down, but the type hint had something like twelve tokens and three levels of nesting at its deepest point. Even that guy wouldn't have written that code if he had had to write out the type during development, or at least I like to think that. Shudder)

The rabbit hole goes very, very deep when it comes to engineering talent (or lack thereof...). You're probably a much higher-percentile quality engineer than you think, as I've met people working in industry whom[1] I had to fight tooth and nail to institute the absolute basics of common-sense engineering practices. The fact that you're aware of the value of static typing is a pretty good signal IME, and as I said, hoping that typing et al will "save you from yourself" just means you don't have the brain of a computer, and isn't a habit that talented engineers lack.

[1] the long and short of it is that I took an ill-advised shot at being the first employee of a company with two non-technical co-founders; I wanted something very specific out of the opportunity so it was fine for me (or so I thought...combining founder psychology with two non-technical founders is a freaking nightmare), but I wasn't willing/able to tap people in my network, all of whom were at much better organizations doing much more interesting things and getting paid tons more. So I got a nice look at the underbelly of the engineering labor market, enough to make me stay far, far away from small startups unless I have a very, very good reason to want to work with them or very good assurances that they'll be able to hire at at least the baseline level of a big company.


I've never really understood the advantage of dynamic typing. Not having to specify type to me seems like it's barely an advantage and immediately gets outweighed by the fact that you don't have compile time type checking (or have to jump through hoops to get it).

Typing out code tends to be a minor part of writing software, so you don't gain much there. Furthermore, you don't want to mix types in the same variables either, because this comes at a cost of performance and complexity.

So, what kind of flexibility do you get from dynamic typing?


As someone who is very pro-static typing, the big annoyance for me is that sometimes you have to write a load of boilerplate purely because of the type system.

It's got a lot better in the last decade, but with the more advanced features of a language you can find yourself slowed down in a few ways. Maybe by having to explicitly declare types where you'd normally implicitly declare them. Or you might have a load of methods that actually throw 'Not Implemented' errors because you are forced to match a massive interface but it only has to do one thing (yes, C# has an actual NotImplemented exception because this is so common).

It's annoying and can interrupt your flow, meaning you have to spend a bit of time searching for the right voodoo code to make it work; you sometimes feel like you're doing a rain dance just because of the type system.

The other thing is that you're sometimes forced to cast a variable to a type that you know is already that type, but a method has returned it as an 'object'. It feels really clunky, e.g. event listeners.
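
The event-listener case looks roughly like this in TypeScript (a browser-side sketch; the element id is hypothetical):

    const input = document.getElementById('email');   // typed as HTMLElement | null

    if (input) {
      input.addEventListener('input', (event) => {
        // event.target is only typed as EventTarget | null, even though we know it's our input,
        // so a cast is needed before reading .value
        const value = (event.target as HTMLInputElement).value;
        console.log(value);
      });
    }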


I'm playing a bit of devil's advocate here, but:

> Or you might have a load of methods that actually throw 'Not Implemented' errors because you are forced to match a massive interface but it only has to do one thing (yes, C# has an actual NotImplemented exception because this is so common).

I would heavily argue that if you encounter such a scenario you are probably Doing It Wrong® or you were unfortunate enough to inherit/have to work with some badly designed code.


In an ideal world maybe, but there's no point fully fleshing out an interface where you know you're not using those methods. I'm old enough and ugly enough to comprehensively know what I'm doing.

It's simply because interfaces in a framework are occasionally quite big and unwieldy; I'm sure their developer thought it was for good reasons. The other option is to split the interface up and have a class inherit from a ton of interfaces, which is just as annoying.

There's a few other replies here I'm rolling my eyes at because real-world programming is messy and being too dogmatic with ideals simply isn't practical.


Then you run into the problem of the massive global framework having Done It Wrong and not being able to deprecate.

(note: this is not unique to static type systems)


Yes, but even Java standard libraries throw UnsupportedOperationExceptions. It is so sad.

Edit: This is exactly the point where a type system betrays its promises.


So you try to avoid it, like Scala did -- and now you find yourself trapped in an incomprehensible maze of GenTraversable and the like.


> Or you might have a load of methods that actually throw 'Not Implemented' errors because you are forced to match a massive interface but it only has to do one thing (yes, C# has an actual NotImplemented exception because this is so common).

That’s where the “I” in SOLID comes in.


Hum... I think it was further up this thread that some GP was preemptively excusing the people hurt by Java. Well, C# is the same.


> Or you might have a load of methods that actually throw 'Not Implemented' errors because you are forced to match a massive interface but it only has to do one thing ... The other thing is that you're sometimes forced to cast a variable to a type that you know is already that type, but a method has returned it as an 'object'. It feels really clunky

These are not "problems with static typing", though. Dynamically-typed languages have you do these things all the time, it's just that you don't even notice it anymore because it's so common and you don't have static types as an alternative there.


I’ve hated dynamic typed scripting languages for over two decades and could never understand why anyone could possibly advocate for them. But, I was “forced” to learn Python within the last year and it was also my first time doing any type of automation, mostly with AWS. Before then, I only did “real programming”.

I have to admit for short programs without a lot of business logic where you are just providing a little glue - ie every time a csv file hits S3 run this lambda that executes a sql script to import it into a Redshift (OLAP) database, transform it with some queries, export the results back into S3, import it into Aurora (Mysql) and then load it into the appropriate tables - Python is my go-to language out of the three "approved" languages (c# and JS being the other two).

I still don’t know why I like Python but hated Perl and PHP. But, I do know why and when I reach for Python over C#.

- when I know that more than one developer won't be working on the same code and you don't need the benefit of strong typing to help ensure that changing the signature in one place won't affect code somewhere else.

- Little conditional business logic. When you can easily gain complete code coverage just by running the script.

- small program that I can easily reason about.

- in the case of AWS and lambda, you can edit and test code right in the browser for both Python and JS. If you don't need anything but the AWS SDK (Boto3), there is nothing to package. Once you get your code right, you copy and paste your code from the AWS console, export your configuration as a CloudFormation snippet and create your CI/CD pipeline.

It’s also easier to debug directly on the server without having to either have a compiler on the server or trying to set up remote debugging.

Until about a month ago, I had only written a REST API in C#. I wrote my first service in Node/Express just because I wanted to be one of the cool kids. I hated every second of not having a strongly typed language especially with my JSON objects. The tooling by definition can’t be as good especially for refactoring. Yeah I know about Typescript and I’ve played with it. I didn’t want to go down that rabbit hole and try to get buy in from the company.

On a side note. I’ve used Mongo before but only with C#. With the C# Mongo LINQ driver, you are working with strongly typed collections and strongly typed LINQ queries. I can’t imagine the hellish landscape of using a weakly typed language and a weakly typed data store.


So I probably agree with you more than you think, but I'm trying to tilt the scales towards dynamic typing a little to counteract any bias I may have from the fact that all my work with static typing has been with talented engineers and all my work with dynamic typing has been with garbage ones.

I agree with you about there being no concrete advantage to not being forced to specify the type of something; honestly, I don't even like type inference in eg C++ all that much, except as syntactic sugar over really ugly template types for which the type just pollutes the code without adding any signal. But for example, I don't even really see the advantage of using auto in:

for (Widget widget : widgets) {

That being said, I do believe there is a concrete flexibility advantage, that I pray you never need to take advantage of but that sometimes can't be avoided in seed-stage startups. There were a couple of times where I monkey-patched the definition of a function (including an external library once! Oh the shame...) in order to get something working for a sales demo, and we just wouldn't have had the velocity to take shortcuts like that in a language like Java or C++, though I would personally make the fixing of the hack a P0 or at least a P1. These kinds of "intentional bugs" allow you a bit more flexibility with using tech debt, and in the hands of skilled enough engineers, can provide you with boosts of velocity when you need it in the short term (at the cost of overall velocity).

Similarly, some pretty nifty stuff in libraries is possible due to the flexibility of these languages. I'm not a huge fan of frameworks in general, but my understanding of Django's ORM is that it compares quite favorably to ORMs in other languages. Given the number of times I needed to do database surgery on prod (did I mention how much this company sucked?), it was a lifesaver to 1) have such an expressive interface to the DB and 2) have both an interpreter in the same language as our code and the ability to import and run production code. I'd imagine that pretty much every part of this would be more difficult in a less permissive language.


I’m just starting to get into Python and I generally don’t see the point of ORMs since they really don’t add much in my opinion. I’m really curious about why you like the ORM for Python.

The only ORMs I’ve ever found useful are those based on LINQ, because LINQ makes any ORM (and whatever you call the Mongo LINQ driver) a first-class citizen and it integrates into the language.


I have the same rough opinion of ORMs in general and I've never had the occasion to examine it: it was the right choice for our org because there's no way in hell the engineers were going to learn to write good SQL when they could barely write good Python, especially when I'm far from a SQL expert. I was at the time capable of writing good SQL (presumably after a ramp up period), but I doubt my capability to teach it well without significant time investment. The ORM was a path of least resistance thing, providing a simple, easily-comprehensible interface in the language they were already writing.

In the specific context of stuff-is-on-fire, production database changes (I'll reiterate, this place sucked), the fluency of the ORM and its seamlessness with our code was helpful. I don't know much about LINQ, but I gather that the syntax is declarative, which is a perhaps-minor speedbump that would nevertheless have made me less fluent in the specific above-described scenario.

In the general case of engineering, LINQ looks like something I would far prefer.

This isn't a criticism of your comment or anyone else's, but I have to say I'm amused by how all of my responses downthread of my comment are in defense of me being charitable towards more permissive languages (as I said, my bias probably swings solidly the other way, and if I never engineer in a permissive language again, I'll die happy). I kind of assumed it would go the other way, with HN jumping on my comment as elitist or something.


Every other ORM is bolted on.

For instance if you write the following using LINQ:

var x = from c in customers where c.Age > 65 && c.Sex == "male" select c;

var results = x.ToList();

If customer represents an in memory list, it gets translated as regular old compiled code.

If customer represents a sql database table - it gets translated to SQL and run on the database server.

If customer represents a Mongo collection, it gets translated to a Mongo query.

Of course you can do stuff like switch out what customer represents at runtime and it will be translated appropriately. You also get autocomplete help and compile time type checking.

You can pass the where clause around:

Expression<Func<Customer, bool>> expression = c => c.Age > 65;

And pass it into a method.


> Similarly, some pretty nifty stuff in libraries is possible due to the flexibility of these languages.

You pay for that "flexibility" quite directly in lack of verifiability and of adequate performance. It's not a free lunch - if it was, we would all be using LISP anyway.


If those were the reasons for LISP's lack of popularity, then LISP would have been more popular than Python or PHP.

What happened is that 1980s hardware was not ready for languages slower than C. C became the dominant language, and C-style syntax became the dominant syntax, with C++, Java and JavaScript following its lead. Eventually people adopted languages slower than C, but they didn't adopt paren-prefix notation.


People used slower-than-C languages throughout the 1980s and onward: for instance BASICs (eventually Microsoft's Visual Basic starting in 1991), Ashton-Tate's dBase and its derivatives ("xBase").


Yes, I didn't say it was a free lunch. The entirety of my comments here are about how there are pros and cons to every language that manifest in different situations, and things like "ability to go into ugly technical debt for short-term speed boost" being a pro has nothing to do with the claim that it's an unalloyed good, but there are situations that play to its strengths and weaknesses. I hate to be rude, but I'm really not sure how you thought I hadn't already made this point if you had read the thread we're on. If anything, my original comment comes close to saying that static typing is superior, period (though the same caveats apply about different situations requiring different sets of strength and weaknesses).

The obvious example here is that I unquestionably do my personal scripting in Python instead of C++, despite very much preferring engineering jobs in the latter to those in the former. Different situations, different requirements.


> I've never really understood the advantage of dynamic typing.

It's easier to write code in dynamic-typed languages because (generally speaking, I'm pretty sure there are exceptions to this rule) dynamically-typed languages are not as verbose as static-typed languages.

The focus is now on writing bug-free programs as best we can, but I remember that when I started programming about 15 years ago there was still a vibe of "let's make it as easy as possible for people to program so that everyone can write code, bugs don't matter (at least not that much)". That was the general vibe behind HTML and browsers (the way that you can see the actual HTML "code" that displays this very page, even if it may be "broken" HTML), behind early JavaScript (does anyone remember how one was able to include different JS widgets on one's HTML page?) and behind very cool projects like One Laptop per Child [1] (which got killed by Intel and MS among others and which, presumably, allowed its users to see the Python source-code of each program they were using). Those were the dying times (even though I didn't know that at the time) of "worse is better".

But in the meantime the general feeling around writing computer programs has changed, correctness and "bug-free-ness" are now seen as one of the main mantras of being a programmer, and the programming profession itself has built quite a few walls around it (TDD was the first one, "use static types or you're not a real programmer" seems to be the latest one) so that the general populace can be kept at the gates of the "programming profession". In a way that suits me, because I'm a programmer and fewer people practicing this profession means more potential guaranteed earnings for me, but the idealist in me that at one moment thought that programming could (and should) become as widespread as reading stuff like books and newspapers feels let down by these industry trends.

[1] https://en.wikipedia.org/wiki/One_Laptop_per_Child


> It's easier to write code in dynamic-typed languages because (generally speaking, I'm pretty sure there are exceptions to this rule) dynamically-typed languages are not as verbose as static-typed languages.

I really don't understand this argument. What typed languages have you used?

Decent type systems are able to infer most or all of your types, meaning you get the benefits of a type system without having to explicitly annotate everything.
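
A quick sketch of what inference buys you in TypeScript (nothing here is annotated, yet everything is statically checked):

    const prices = [9.99, 4.5, 12];                        // inferred as number[]
    const total = prices.reduce((sum, p) => sum + p, 0);   // sum, p and total all inferred as number

    const order = { id: 'a1', total };                     // inferred as { id: string; total: number }
    // order.total.toUpperCase();                          // rejected: 'toUpperCase' does not exist on 'number'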

Also, when you code, is your bottleneck seriously how fast you can type? If it is, I would suggest you need a different approach.

> But in the meantime the general feeling around writing computer programs has changed, correctness and "bug-free-ness" are now seen as one of the main mantras of being a programmer

We are engineers, or at least we should be. Correctness has to be a goal, although I feel like even saying that is laughable right now. The amount of effort to prove a program is correct is extremely high for anything nontrivial. Robust type systems at least help cover our bases and help us reason about the structure of things, and more powerful type systems allow you to push even more invariants to the type system.


> Also, when you code, is your bottleneck seriously how fast you can type? If it is, I would suggest you need a different approach.

My brain can process only so much information; the less it has to process (like fewer type declarations), the better.

> We are engineers, or at least we should be. Correctness has to be a goal, although I feel like even saying that is laughable right now.

That’s what I was trying to explain, in a very convoluted way: back in the day programmers were not intrinsically viewed as engineers, and that viewpoint was seen as a valid one. Engineers were seen as building cathedrals while programmers (or hackers, if you will) were seen as building bazaars and other random stuff (like forums written in Lisp-like languages like Arc).


> My brain can process only so much information, the less thing it has to process (like the less type declarations) the better.

Ignoring that I've already mentioned inference, which makes this a non-issue: my other problem with this is that just because you don't have a type system or don't have to write types doesn't mean that the constraints behind that 'information' go away.

Your function still has a set of invariants that need to be maintained; you've just opted to make them invisible and have your program crash when they are violated, instead of knowing statically when you've made a mistake.


>It's easier to write code in dynamic-typed languages because (generally speaking, I'm pretty sure there are exceptions to this rule) dynamically-typed languages are not as verbose as static-typed languages.

I don't understand this reasoning though. Actually typing out the code takes a fraction of the time that you spend on writing software. Furthermore, we have autocomplete, and have had it for a long time, in statically typed languages.

I was formally taught programming about a decade ago in Python, but I naturally gravitated towards Java (and C) because of static typing. The same arguments were made back then and I just didn't get them.


As my comment near the top of this comment tree hinted, I pretty much do find statically-typed languages to be overall quite superior, but the reason that I find myself defending dynamically-typed languages all over this thread is because of the hard-line being taken by many that they couldn't possibly have any advantages in any situation. I do plenty of scripting, to improve both my personal and professional productivity, and I don't even think twice about using Python instead of Kotlin or C++ or what-have-you. I iterate rapidly on my scripts; they're not part of large engineering systems that a reader may be bouncing around, the rare bug that may pop up isn't production-critical, and the ability to be both fluent and concise just makes the code faster to read and write, given the small scope of the program. This is sort of a lazy example, because it's not far from the truth to say that scripting languages are suited to scripting and that's why they suck for engineering, but it illustrates the point that different situations can highlight or downplay the importance of the relative strengths and weaknesses of various languages and paradigms, and there are few common programming-language characteristics that are strictly good or bad across the board[1].

The problem here is that half the people on this thread are looking for a simple, low-dimensional model to cram the issue into, requiring dynamic languages to have _zero_ advantages in order to sustain their view that statically-typed languages are superior overall.

I'm not really sure how to fix this tendency, and it's by no means limited to just disputes within the field of programming. Simpler, black-and-white models are much easier for the brain to handle, so people gravitate towards them even when they don't match reality as accurately.

[1] though it is fair to say that a big chunk of the advantage of dynamically-typed languages is "accessibility to those who barely know how to code", which doesn't affect the calculus for someone who's capable of writing in either type of language


The problem is not that it takes a while, but that it takes a while when I'm in the middle of a complex task. Because of language verbosity, I might need to context-switch away from solving a hard problem.

For instance, if I am writing a complex function `frobnicate(x)` and I notice that I need to pass a configuration object to the function, I can just write `frobnicate(x, opts)` in the function declaration, and `opts.foo` to access the relevant option, and go back to the complexity that is still fresh in my mind.

With a verbose type system, I need to pick a type for `opts` NOW, which means I need to define a new type NOW, which will typically involve a new file with at least a dozen lines. By the time I'm done generating those stubs, the hard bits of the original function will have faded from my mind.

The code I write, in either case, isn't the final code. Once I have an initial working version where the complexity has been embedded in actual code instead of floating around in my mind, I can start iterating, refactoring, annotating, and so on. In TypeScript, at this point I will define a new options type and annotate `opts` with it.
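
In TypeScript that workflow might look something like this (frobnicate and opts.foo are the hypothetical names from above; the retries field is just a made-up placeholder):

    // first pass, mid-flow: no new type yet, keep moving
    function frobnicate(x: number, opts: any) {
      if (opts.foo) {
        // ...the hard part lives here...
      }
      return x;
    }

    // later, once the shape has settled, pin it down
    interface FrobnicateOptions {
      foo?: boolean;
      retries?: number;   // made-up field, just to show the shape
    }

    function frobnicateTyped(x: number, opts: FrobnicateOptions) {
      if (opts.foo) {
        // same logic, now with compile-time checks on the options
      }
      return x;
    }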


Or you could just not. I mean, you could just write out your code as if you already had the types specified and then fill in the boilerplate and fix errors afterwards. That's essentially what happens with dynamic typing anyway, so the only major difference is that your IDE/editor might notice.


In a statically typed language, the code will not run until I have convinced the compiler that it should. If I wanted to explore the behaviour of `frobnicate` before committing to a design, I would still have to define all those satellite types that may well become unnecessary.


TypeScript lets you add the "any" type to say you don't care now. Solved?


I mentioned Typescript as an example of language where I could do this, so... Yes?

Typescript is not a language I would say has a verbose type system.


I think the main reason some people say they prefer dynamically-typed languages is because of a feature that historically correlated quite well with it: they provide almost instant gratification.

There are two parts to that: no required boilerplate (“Hello, world!” shouldn’t require more than twice the number of characters, and requiring a manual compile-link step is a no-no) and low time to first output (who cares that each value carries a hundred or more bits of type info and takes an indirection to access, as long as that “Hello, world” makes it to the screen in <100ms)

Historical counterexamples to the claim people prefer dynamic typing are the ROM-based Basics of the 8-bit computer era. Statically-typed, but popular, IMO because they ticked both boxes.


Forget autocomplete -- what about type inference? We've had that for decades, yet new languages still come out without good support for it.


> Typing out code tends to be a minor part of writing software, so you don't gain much there. Furthermore, you don't want to mix types in the same variables either, because this comes at a cost of performance and complexity.

> So, what kind of flexibility do you get from dynamic typing?

I'm the same, I don't get it at all. If you're going to voluntarily opt out of automatic safety checks, you'd better have a mind-blowing list of good reasons for it.


Serendipitously, I actually had the occasion to do a code review of some Tensorflow code in C++ today and man was it nasty compared to what it would have been in Python. This is pretty much the kind of thing I was thinking of when I mentioned libraries being cleaner in ways where the trade-off is worth the loss of information.


> attitudes toward typing is one of the best filters for engineer quality I've come across

I tend to agree, and I classify this as part of a larger attention-to-detail / conscientiousness / desire-to-get-things-right trait which I associate with good programmers.

> things like [...] "inability to customize interfaces" bother highly-productive engineers more than less-productive ones

This I'm skeptical about. Enthusiasm for tricked-out vimrcs, heterodox keyboard layouts, and the like seems evenly distributed across the talent spectrum. If anything, there's a weak negative correlation.

But, as with wutbrodo's, it's likely these theories are partly self-serving.


> This i'm skeptical about. Enthusiasm for tricked-out vimrcs, heterodox keyboard layouts, and the like seem evenly distributed across the talent spectrum. If anything, there's a weak negative correlation.

Thanks for your perspective. We may be running in different crowds but tricked-out vimrcs (or even using vim/emacs) haven't ever been that popular around me. Even at Google, there were MacBooks everywhere. There just didn't seem to be any interest in using vim or Linux to signal that you're hardcore, as opposed to actually getting value out of it. At the company I'm currently at, the person generally understood to be the most talented guy on the team I'm joining just happens to be the only one on emacs instead of VScode. To reiterate, all of this is pretty low-confidence for the reasons in my orig comment, but within my small dataset, the correlation has been quite strong.

In my personal experience: I spent an hour writing the base of a customized vimrc one morning at Google at the start of my career. In the years since then, the process has been 1) notice annoyance or repeatable process, 2) spend 5 minutes scripting it, 3) go on with work and life and see a positive ROI usually within weeks (ignoring the much more important benefit of not interrupting my thinking with manual tasks). I do something similar but much more ambitious for my desktop (currently an i3/Debian system that's almost entirely operated by keyboard). This incremental approach has worked out _really_ well for me, and the accumulated effect over the years is a system that's very, very customized to my use-cases and needs. It's to the point that, at the job I just started at, I'm a little in shock to see people stretching a single chrome window across one massive monitor and an IDE across the other, with a dozen floating windows behind each. By contrast, I have 5-7 workspaces in my working set at any time, each one of which has multiple windows in use. Again, I'm pretty sensitive to the risk of feeling productive instead of being productive by building tools that aren't worth it, but the incremental approach generally constrains me to the stuff that I'm sure will be helpful.

(There are of course, places where the value/effort graph has a big discontinuity, and I take the time probably once every ~six months to handle stuff like that, in the form of a mini-project that I do after thinking about whether the ROI would be worth it. I also often get the chance to try my hand at new things in the course of doing so, which helps shift the balance. Eg, right now I'm working on breaking the frustrating interface barrier that Chrome presents: as much as I like the web, shifting so much into the browser and away from the OS/WM has been a giant leap backwards for usability, particularly given how obsessed website designers are with mouse-heavy interfaces)


> things like "lack of reliable, quality devtools" and "inability to customize interfaces" bother highly-productive engineers

I've seen people go down a rabbit hole of optimizing tooling to the point where it crowds out using those tools to actually accomplish anything. At some point worse is better.


Of course; if you ignore the possibility of optimization, then the claim becomes almost a tautology. In practice however, I notice this a lot less often than I notice the reverse.


Nitpick: If you are using another language that compiles to JS, I think an improved formulation would be "JS has the best feature set of the runtimes / ecosystems used for scripting".

(I like ClojureScript and Elm more than TypeScript for web development, and prefer Python for non-web scripting, YMMV)


I think that the language tooling is really powerful.

I work in both Python and JavaScript, and I often miss Typescript’s power. I think Python still has a big advantage in the libraries.

The standard library is full featured and well documented. But perhaps just as important is that libraries are also really often high quality with clear documentation.

I think there’s a lot of interesting JS libraries and experimentation. But I love not feeling a need for better Python libs, because they’re already so mature.

I think that a “tidyverse”-style project in JS to offer a “proper standard library” across the ecosystem would do wonders for the bit of a mess server-side libraries are in at the moment. Something that could be as effective as jQuery was at corralling the DOM API. The Node.js APIs are good baselines but not super fun to use directly...


For me personally, the huge turnoff for JS/TS is the library ecosystem, and specifically this notion that everything is better as a myriad of tiny 10-line libraries with a spaghetti dependency graph.

Python is much more traditional in that it has a rich standard library to begin with, so most third-party libraries are also bigger, and it's easier to keep track of what does what.


> a myriad of tiny 10-line libraries with a spaghetti dependency graph.

I'm curious why people dislike this. I'd rather have lots of little decoupled libraries than huge monolithic dependencies that attempt to do everything in one big folder. The big libraries often duplicate functionality found in smaller tools, and I would rather they simply consume a bunch of small tools that can be shared across the ecosystem.

It reminds me of the oldschool pre-systemd UNIX philosophy of "do one thing, and do it well". Did everybody change their minds on this?


The JS libraries usually fail at the "do it well" part.

* UNIX tools don't get updated every week.

* UNIX tools don't get deprecated and replaced every month.

* When UNIX tools do update, the updates don't carry a risk of adding shady code from new maintainers because the original author "got bored lol" and handed the repo over to the first person who asked.

* When UNIX tools do update, the updates don't introduce unvetted and undocumented code that will, for example, start displaying Christmas decorations in your UI on certain dates.

How many UNIX tools actually come in "writing this from scratch is faster than searching for a tool that already does this, so I'll just write my own" size (except for `yes` and such)? Trick question: is trying to install a new UNIX tool similar to this [0]?

[0]: https://what.thedailywtf.com/post/1441806


In counterpoint, just for fairness' sake, we have:

The jokes in the Linux "man" command that indeed did trigger at a certain magic time.

* https://unix.stackexchange.com/questions/405783/

The failure to take on board the improvements to Linux "ifconfig" from 2009.

* https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=359676#41

Changes to GNU "ls" that did not go unnoticed.

* https://bugzilla.redhat.com/show_bug.cgi?id=1361694


That ls change is absolutely bizarre!

One of the aspects of, ahem, bazaar-style development that we don't talk about much is that decisions get made by whoever can be bothered to turn up. If it so happens that the only people maintaining ls are the three people crazy enough to care about maintaining ls, then ls gets crazy decisions.


Another difference is that the unix tools are a bounded and known set of utilities that work independently and compose well together and try not to overlap in functionality. Meanwhile, js modules are an unbounded set of unknown modules that are intertwined and aren't designed to work together and overlap in functionality.


There are some downsides to it. The most critical issue is the left-pad threat: A tiny, unknown dependency with 3 stars on Github ends up being part of a huge project. Nobody in JS audits every single dependency up the tree, because there are too many of them. So situations like what happened with left-pad absolutely can and will happen again.

But there are other misc issues. For example, each package is a separate download with a bunch of metadata and overhead, so on download it spends a lot more time on those two lines than it would if they were just part of the library.

I like code splitting and there's a lot of benefits to the approach JS is taking, but it's not without its faults.


To talk about "monolithic" systems and "UNIX philosophy" is to not really understand. It's an oft-made mistake, even just considering systemd alone.

* https://news.ycombinator.com/item?id=19024256

* https://news.ycombinator.com/item?id=13390245

The actual ideas in computing when it comes to modular systems are cohesion and coupling. Modular systems should have high cohesion and low coupling.

The objection to leftpad levels of granularity is that it is high coupling, lots of little modules inevitably requiring one another in large numbers and complex ways. And a lack of any grouping at all is actually a lack of any sort of cohesion, coincidental or otherwise.


The definition of "one thing" can vary a lot. For example, libpng does one thing: it reads PNG images. But that's a much bigger thing than something like left-pad.


Interfaces should be simple but functionality behind that interface should be non-trivial, otherwise it is not a net value add. John Ousterhout speaks about this much better than I could.


I've worked on lots of nodejs projects with 1-2k transitive dependencies. I've seen plenty of simple projects end up with close to 1 gig of files in node_modules. And almost all of those files are never used at runtime.

I'd be a lot happier with npm if tiny libraries took up less space on disk. Storing a local copy of each dependency's readme file, license file, package.json and test suite is overkill when the module itself is only a few lines long. It's also very common for sloppy package maintainers to accidentally ship binary files in their npm module (image banners for their readme.md, test data, etc.)

Npm should only download the javascript file for small single-file modules. Nodejs itself already supports this - if you have a javascript file in your node_modules directory, require() will pull it in like normal. The license might need to be prepended to the file in a comment block, but this could easily be done automatically.


The problems come in when one of your little dependencies itself depends on something else. You want a single level of dependencies. Monoliths are good for this because they try to be self-contained.


People don't know how to handle libraries that update frequently because it requires more buy-in on a specific aspect of CI/CD that folks don't often invest in (admittedly because it's not immediately obvious).


I think if we look at it another way, you're going to find that this spaghetti dependency graph is the natural state of every codebase.

When people need to do a thing that has already been done a 1000 times there are at least 3 ways:

(1) write it yourself

(2) copy/paste someone else's code

(3) import/vendor someone else's code

(1) is definitely not a sustainable choice, especially for something that is frequently done.

The real interesting bit here is that I think (2) is just a shitty version of (3). You inherit the benefits and the bugs of the code you copy/pasted, and likely understand that code less than (1), which is increasingly true the more nuanced (and probably performant/good) the code is.

I think the question is whether you can see your spaghetti dependency graph or not, and how big the strands of spaghetti are. It's reasonable but likely pointless to argue about the size of spaghetti.

JS's library ecosystem lays bare the actual nature of bazaar-style development. The thing about cathedrals is that they take a while to build, and even then without the right people you can build bad ones -- JS's alignment to the bazaar style of doing things is key to its iteration speed, and is its best tether to the UNIX model.

All that said, not having packages be immutable from the get-go, and willy-nilly updating of dependencies without manual review are indeed problems with the ecosystem, with immutable packaging being the only one anyone can actually fix with code.


Don Knuth, for one, disagrees with your assessment of (1). (http://www.informit.com/articles/article.aspx?p=1193856)


Of course he would. He's likely to write it better than anyone wrote it before him. That's not something that works for other people, however.


Ruby and Python have had built in support for threads since the beginning, and they both have 'community standard' libraries for doing async IO:

https://twistedmatrix.com/trac/

https://github.com/eventmachine/eventmachine

(I guess Perl does too, but I haven't used Perl for 10 years)

I admit the promise pattern may be simpler to understand, but if you need to build a heavily evented system Node doesn't provide anything unique that can't be achieved in other ecosystems.


JRuby has Real, Proper, Actual Threads too. If you want parallel computation threads in Ruby, that's the way to go.


This is right, but only with a very limited interpretation of "support for threads", which does not include actual parallel computation. Not only is there a problem with the GIL, the only "threading" that ruby and python support natively is the spawn-another-process-and-find-a-way-to-coordinate kind (a la multiprocessing[0] in python).

Also, the async IO support via libraries are there for ruby and python, but that brought along with it significant fragmentation (at the very least in the python case, I know less about ruby) -- no one was 100% sure whether to use tornado, twisted, gevent/eventlet for a very long time.

A quote from another comment I wrote[1]:

>> If we really want to get in the weeds, we could talk about python 2.x's floundering around async IO -- IIRC Twisted/Tornado/Gevent/Eventlet were all close to equally good options (and thus mindshare was split) while node had it built in, with a virtualenv-by-default packaging system. Also there's WSGI -- the interoperability it provides is cool, but also means more moving parts than a simple node+express setup.

[EDIT] - It's a little unfair of me to decry the fact that multiple good solutions existed as a bad thing when I'd say it was a strength of the ecosystem. I do think it's valid to discuss the issues with packages that weren't compatible across interface boundaries between these async io libraries though.

> (I guess Perl does too, but I haven't used Perl for 10 years)

Same here, it's been a similar duration since I've used Perl for anything but even back then, the major difference between perl and the others was extremely good native regex support and the ability to spawn system threads that actually ran in parallel without coordinating another process.

> I admit the promise pattern may be simpler to understand, but if you need to build a heavily evented system Node doesn't provide anything unique that can't be achieved in other ecosystems.

My point was about the ease of achieving it... In the last ~3 years I've seen a wave of articles in the python blogosphere about how to properly use asyncio, and all these new async-enabled libraries pop up (the old ones that relied on twisted/eventlet/whatever else are still around of course and perfectly good) -- what I'm saying is that JS has had this since the get-go, and it's been thrashing python/ruby on benchmarks that favor the use case for the same amount of time, especially if someone doesn't use one of the appropriate libraries.

NodeJS's async io story is "batteries included", and that's huge in this day and age. Whether node has the right interface or if they made it ergonomic enough is a whole 'nother thing though...
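To make the "batteries included" point concrete, here's a minimal sketch using nothing but built-in modules (the promise-flavored fs API was still marked experimental around the Node 10/11 era, so check the docs for your version):

    const fs = require('fs').promises;

    // stat a set of files concurrently; no Twisted/Tornado-style framework required
    async function totalSize(paths) {
      const stats = await Promise.all(paths.map(p => fs.stat(p)));
      return stats.reduce((sum, s) => sum + s.size, 0);
    }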

[0]: https://docs.python.org/3/library/multiprocessing.html

[1]: https://news.ycombinator.com/item?id=18756083


> the only "threading" that ruby and python support natively is the spawn-another-process-and-find-a-way-to-coordinate kind

Are Javascript workers something different?


According to the Node 11 docs, worker threads are more lightweight than processes, but still heavy.

That makes me think they run in the same address space, but use separate runtimes. So it's more scalable than ruby and python threads or multiprocessing.
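A rough sketch of the API as the docs describe it (in Node 11 it still sits behind the --experimental-worker flag):

    const { Worker, isMainThread, parentPort } = require('worker_threads');

    if (isMainThread) {
      // spawns a worker running this same file, with its own V8 isolate and event loop
      const worker = new Worker(__filename);
      worker.on('message', (msg) => console.log('from worker:', msg));
    } else {
      // stand-in for CPU-bound work that would otherwise block the main thread
      parentPort.postMessage(42);
    }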


JRuby doesn't have a GIL to begin with so there's no requirement to use multiprocessing to host Rails apps etc.

They're an improvement over a single GIL but not by much. IIUC, they're like Racket's Places: a complete duplication of the VM but sharing an address space. The basic implementation doesn't really save much memory but it's a bit more convenient.

There's been a couple of attempts to implement subinterpreters for CRuby too.


JRuby is an incredibly underrated piece of kit. That startup time, though...



The GIL still allows for parallel IO so typical applications never need async. Modern Ruby just runs one process per core much like NodeJS in clustered mode or on JRuby with no GIL.


We stopped solely using strong-types after decades of only doing JAVA and C#, and it hasn’t decreased our code-quality.

I think most people who’ve spent a significant amount of time with strong-types do them by heart.

We didn’t leave strong-typing to leave strong-typing btw, we just started doing more and more with Python.


Lots of things work when your people and processes are good enough. We don't need seatbelts if we never crash. That can lead to problems with scaling your team if needed, however. At some point you might have to let a moron in, and stronger typing just might be the thing that prevents a catastrophic bug.

I did Python for embedded systems for a year. Python is so expressive and well equipped, but the lack of pre-execution type analysis leaves you vulnerable to severe defects unless your testing coverage is 100%. I would prefer Java over Python just for the safety the compilation step provides (and I don't care for Java). I love programming in Python, but system reliability trumps my personal desires in this case.


I'm sure you know this already but you might want to check out:

- https://docs.python.org/3/library/typing.html

- http://mypy-lang.org

Also flake8 has also helped -- it's just a linter but using it in conjunction with the typing module and running that in a watch loop in another terminal window has done wonders for me.


And that's awesome, but if you're dealing with a large legacy codebase it will take a while to annotate your code.

I love flake8. Properly tuned, it can be a great help. It saved my butt more than once.


You mean static-typing.

Both Java & Python are strongly typed..

e.g. in python, `3 + "2"` gives `TypeError: unsupported operand type(s) for +: 'int' and 'str'`


Strong typing is static, but usually entails more strictness (e.g. not allowing implicit nulls) and more features at typelevel (e.g. sum types, higher level types).


This really depends on definitions of strong and weak, and this might be a separate point, but C could be considered a static and weak language. Not sure of any other language like this.


> We stopped solely using strong-types after decades of only doing JAVA and C#, and it hasn’t decreased our code-quality.

How do you know this? It's exceedingly rare that anyone actually keeps track and monitors rate of defects in any application, but there have been studies done[0][1] on the ability of stronger typing in languages to reduce errors.

My condolences for your time using Java and C# -- they are often what people first consider when they think of languages with "good" type systems, but I dislike them both because what they introduce as "good" type systems are often very much not, in comparison to languages like Haskell.

It's a bit disingenuous to act like Haskell has been a performant/production-ready option as long as those languages but still, almost every time I talk to someone and we get to the topic of typing I almost always assume that their bad experiences came from those languages.

> We didn’t leave strong-typing to leave strong-typing btw, we just started doing more and more with Python.

What are your opinions on https://docs.python.org/3/library/typing.html ?

[0]: http://ttendency.cs.ucl.ac.uk/projects/type_study/documents/...

[1]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2935596/


I work in public service, we track and benchmark everything. Often way more than is needed, but that’s the consequence of a governing body that is very interested but doesn’t necessarily understand what you do.

I think we’ll see more and more use of the typing library, especially on big projects. Similarly we’re having debates on typescript for front-ends, not because we need it right now, but because it’ll probably really suck if it turns out we were wrong about not needing it.


I have years of experience with typed ECMAScript (3 years of ActionScript 3, 2 years of Java and 1 year of TypeScript).

What you're saying here and in your previous comment makes sense. From my experience, engineers who have a LOT of diverse experiences with different paradigms tend to agree that types don't reduce bug density in the code.

Types are a learning tool like training wheels on a bike; with comparable costs in terms of efficiency. Don't get me wrong, training wheels are great but you won't find them at the Olympics.


> Types are a learning tool like training wheels on a bike; with comparable costs in terms of efficiency. Don't get me wrong, training wheels are great but you won't find them at the Olympics.

That is such an idiotic comparison, I don't really know what to say here.

1.) Proper type inference means you don't have to explicitly write out most/any types, so there is no "cost in terms of efficiency". Some annotations might be required from time to time. If the type checker throws an error, that means your types don't align and that you need to further reason about your program - which is something you need to do in dynamic languages as well, just without any assurance in the first place. (There's a small sketch of inference at the end of this comment.)

2.) Types allow performance optimisations not available in dynamically typed languages, so again - no "cost in efficiency".

3.) In dynamic languages you don't get as much help from your tools as in static languages, so you can't exclude whole classes of bugs that your compiler finds for you.

So, where does this "cost in efficiency" occur? How is static code analysis a "training wheel"?
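To make point 1 concrete, a small TypeScript sketch; none of these bindings carries a written annotation, yet all of them are fully checked:

    const nums = [1, 2, 3];                    // inferred: number[]
    const doubled = nums.map(n => n * 2);      // inferred: number[]
    const label = `total: ${doubled.length}`;  // inferred: string
    // doubled.push("oops");                   // compile-time error: string is not number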


>and it hasn’t decreased our code-quality.

But it has.


> "extensive" unit testing is a poor man's type checker

Wow! No!

If this is what you feel, you might have been doing unit testing wrong the whole time! Unit testing is a powerful technique that is useful regardless of your language. The first unit testing libraries and frameworks started with statically typed languages.


I don't think that the point is that unit testing is a poor man's type checker in general, but that "extensive" unit testing in a dynamic language may include a lot of testing that could have been weeded out by automatic analysis based on a static type system.


Just chiming in to say you're correct -- that's exactly what I meant.

For example, if you write a function that only makes sense on numbers greater than zero, then make it take a `GreaterThanZero<Int>`. It's a contrived example but people often pride themselves on writing unit tests that check for "out of bounds" cases like that, and defensive coding in dynamic languages usually starts with "make sure they gave you the right things and nothing is null".

There are cracks to slip through with Typescript (it's not as good as using a language that enforces this stuff down to the runtime -- i.e. haskell), but in general you're going to avoid these kinds of errors in the first place if you write sufficiently useful types.

Another hot take:

The value of different types of testing is as follows (most valuable first):

- E2E tests that test customer flows

- E2E tests that prevent regressions (made after a bug was discovered)

- Smoke tests

- Integration tests

- Unit tests


> - developers that don't think specifying types is helpful are still early in their careers (those who suffered in the dungeons of Java get some slack)

Developers who believe specifying types doesn't have any drawbacks are still early in their careers :D


TypeScript is a great middle ground in that regard. Doing something that is made tricky by types? Just cast it to "any" and do whatever you want.
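Something like this, to sketch it (the global name here is made up):

    // a value with awkward or missing typings: opt out locally with `any`
    const legacyWidget = (window as any).__legacyWidget; // nothing on this binding is checked
    legacyWidget.render({ fast: true });                 // compiles regardless of the real shape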


My experience is that developers should really realise quickly that strictly adhering to defining all your variables is a very good thing.

But with the rise of JIRA and the pressures to close tickets with the least amount of work a lot of newer developers don't realise this or are not allowed to spend the time to do it right the first time.


Python relatively recently added typing[0]

Python always had "types" of course. What Python 3 added was type hints (an approximation of static typing, if you will).


> via Typescript

null/undefined is an inhabitant of every type in typescript, I would suggest purescript instead.

> Python relatively recently added typing

I think you mean the ability to add explicit types to variable names. It always had types.

> "extensive" unit testing is a poor man's type checker

Certainly.. iff you use dependent types.

> developers that don't think specifying types is helpful

Why specify types if they can be inferred? Why do work that our computers can do for us?


> null/undefined is an inhabitant of every type in typescript, I would suggest purescript instead.

If you want absolute certainty in your types with TS, you should be using the `--strict` flag, which makes it so this is not the case.
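A minimal illustration of what --strict (via strictNullChecks) changes:

    let title: string = undefined;              // error under --strict: 'undefined' is not a 'string'
    let maybe: string | undefined = undefined;  // fine: nullability is opted into explicitly

    function shout(s: string) { return s.toUpperCase(); }
    shout(maybe);  // error under --strict; without it this compiles and can blow up at runtime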


"absolute" is a bit of a funny qualifier to use here, given the many soundness holes in TS.

For example (included with --strict):

https://www.typescriptlang.org/play/#src=class%20Base%20%7B%...

(I like TS, I just don't think it fully solves this problem!)


> null/undefined is an inhabitant of every type in typescript, I would suggest purescript instead.

You're right -- but I've found in practice that strict null checking[0] is more than enough. Also I assume that Purescript has a similar bottom value as haskell[1] (please correct me if I'm wrong).

> I think you mean the ability to add explicit types to variable names. It always had types.

Yep -- I actually meant literally the `typing` module[2].

> Certainly.. iff you use dependent types.

You can get some good guarantees with container types (`newtype` in haskell terminology) that are constructed in a very specific way, assuming you have immutable values. For example, defining some type `NonZero<Int>` in typescript w/ `private readonly` interior value.
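Roughly what I mean, sketched in TypeScript (all names illustrative):

    class NonZero {
      private constructor(private readonly value: number) {}
      // the only way to obtain a NonZero is through this validated factory
      static of(n: number): NonZero | undefined {
        return n === 0 ? undefined : new NonZero(n);
      }
      get(): number { return this.value; }
    }

    function divide(a: number, b: NonZero): number {
      return a / b.get(); // no zero-check needed; the type carries the guarantee
    }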

> Why specify types if they can be inferred? Why do work that our computers can do for us?

You're right -- I misspoke (misstyped?). By all means if inference is possible that is also fine.

Somewhat separately, the statement stands as far as type signatures (i.e. `doThing :: Input -> Output` or `function doThing(in: Input): Output`) go -- they're a great benefit to a codebase in my opinion.

[0]: https://basarat.gitbooks.io/typescript/docs/options/strictNu...

[1]: https://wiki.haskell.org/Bottom

[2]: https://docs.python.org/3/library/typing.html


> null/undefined is an inhabitant of every type in typescript

This isn't true, at least not the way you stated it. TypeScript types are not nullable by default.

How can you use existing JS libraries without supporting null anyway?


Wanna do typechecking in Python? Use pylint. No type hints needed


That's a linter, not a type checker.


If I have to run a transpiler it's not a scripting language


Is python not a scripting language because some versions of it are compiled?


implementations are not languages


scripting language to me has a definition. Maybe I made it up but my definition includes "runs immediately". So, basically, any language whose most common usage is either via "#!/bin/path-to-interpreter" or "interpreter script" or registering some extension ".bat" in DOS/Windows

This fits python, perl, sh, node

It does not fit C/C++/C#/F#/Java?/Go/Swift/Typescript. Those are 2 steps at least. One to compile which generates an executable. Another to run the executable.

But I'll acknowledge I made up that definition. No idea if others agree. I get that it's arbitrary. I could certainly make something that given "#!/bin/execute-c-as-script" takes the .c file, compiles it to some temporary place and runs it. But that's not "most common usage" so I'll stick to my point as part of the definition of scripting languages. For me, if it's most common to have a manual compile step separate from a run step then it's not a scripting language.


It's basically meaningless when you try to line languages up with that sort of criterion.

Go compiles fast enough that you can use it with a wrapper "interpreter": https://github.com/erning/gorun

So does C: https://github.com/ryanmjacobs/c

So does Haskell: https://stackoverflow.com/a/8676377

So does F#, via `#!/usr/bin/env fsharpi --exec`, I think.

You can compile Python: https://github.com/dropbox/pyston

You can compile JS: http://enclosejs.com/

I wouldn't be surprised if someone found a way to AOT-compile bash, but I'm not going to bother looking for that.

The only criterion I've found which makes some sort of sense is whether the language designers consider "scripting" a use case to actively support and use to drive language features, or if it's a second-class citizen.


You seem to have ignored "most common usage". None of your examples are "most common usage"


I think there's a bigger issue with the narrative here. Sure, types (and tests) help avoiding bugs, and they are definitely useful in large projects like Airbnb. However, they also add a cost that is always ignored, and it's even taboo to talk about it. You don't want to be labeled as an "anti-good-practice" person.

But for some cases, this cost is tremendously important IMHO:

- Startups: being X% faster at launching your product might mean the difference between life or death for the company. This includes Airbnb back in the day!

- Early libraries: not being forced to use a specific type makes it easier to change your mind midway and experiment. Later when the library API is more stable, sure, feel free to add types. Thinking too structured is dangerous when trying to innovate.

- Personal projects: where it just doesn't matter if it breaks in some situations and having to build a whole babel tower might discourage you from getting started for fun.

As an example, doing "horrible" things I could launch a fullstack prototype in a couple of days, or a decently working prototype in a week. Granted, I would not want to work with that code after 1 month and there would be some bugs, but the ability to get something quick-and-dirty is also very important and something categorically ignored. As an added benefit, this also allows you to iterate quickly, and throw whole pieces of code altogether because you avoid the "sunk cost fallacy" both from yourself and 3rd parties.

Please use types, and tests, and other good practices in principle. But also learn when it's good to break those! I get the chills when I think on someone who blindly believes on only using X for everything and anything.


The sweet spot would be to not have an all or nothing proposition, but to be able to work with fluidity in the beginning, and then to harden the result later on by adding in various layers of sanity checks, of which typing could be one.

It's unsurprising that Airbnb—after having succeeded—find that they could have eliminated some of their current problems by making a trade-off earlier on.

A parable: a baker has a shelf of many different flours that she uses to make different kinds of bread. Being the only person in her small bakery, she relies purely on muscle memory and habit to know which flour goes where. She experiments a lot with different mixtures and kinds of bread.

Later, some breads have turned out to be more popular than others, and she's doing well. She hires a couple more bakers, and now finds that she needs to label the flours, write down recipes for the successful breads, and has to establish procedures and guidelines for restocking. Different people have different habits, and muscle memory transfers poorly from person to person.

When she was starting out, she benefited from short feedback loops. The shorter, the better. Later on, the benefits of short feedback loops wouldn't be any diminished, but the need for reproducibility has become much larger than it was at first.


Can you provide an example where static typing slows you down compared to dynamic typing?

I've been using static typing for decades and the only problem I can imagine is having to add an object property definition instead of just using it. And I can't imagine this extra operation being a significant drag. Is it for you?


This is my question as well. I think about types even when I don't explicitly declare them.

"Function x will return a list of a"

"Function y will return an instance of b or null"

"Function z will return an instance of c or throw an error"

How much time am I losing by putting that information in the function declaration? I doubt it's more than the time I lose when a function returns or throws something it wasn't supposed to.


Right, you’re treating the code as statically typed even when it’s not. In that case you never gain the advantage of dynamic languages.

Thinking so strictly about what types a function will receive rules out making use of things like decorators in python that operate on function arguments regardless of type.

Or generic validation utils that take many types and return the passed in type after performing internal validation logic relevant to the type.


"Takes many types" ≠ "cannot be typed". TypeScript supports decorators, union types, and `any`/`unknown` types.

Those generic use cases are good examples of flexible types, but I have a hard time seeing how

* "I don't know whether the `getMyReplies()` function returns a list of items, an iterator, a non-rewindable generator whose results I need to cache, or a closure with/without memoization" and

* "I don't know whether the singular Reply entity has a field called pid, parent_id, parentId, ParentID, ParentGUID, or parent_message_identifier"

are advantageous to rendering a list of replies.
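For what it's worth, even the "generic validation util" case from the parent types out fine with a plain union; a rough sketch (the Reply shape is made up):

    type Reply = { parentId: string; body: string };

    // accepts one reply or many, checks them at runtime, and the union documents exactly that
    function validateReplies(input: Reply | Reply[]): Reply[] {
      const items = Array.isArray(input) ? input : [input];
      for (const r of items) {
        if (!r.parentId) throw new Error('reply is missing parentId');
      }
      return items;
    }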


For code written in JavaScript you check the source code to see what arguments it takes and what it returns. If it's not written in JavaScript, for example a browser API or a NodeJS built-in module, you read the documentation and use console.log's. Then you rely on good naming, for example (nrA, nrB) => sum vs (a:Number, b:Number) => c:Number, where the one without static types is clearer about what the function does and what it returns.


Having to search through source for the argument types of a function is usually an example given in favour of strong typing, not against it.


There are many reasons to read the code of your dependencies; you will learn a lot doing so. One issue when you work with the compiled code is that the type annotations have been removed, and type annotations tend to lead to terser names because the code would otherwise become too verbose, e.g. `number: number` tends to become `n: number`, and when the type is removed it just becomes `n`. So you kinda become dependent on your tooling and can't easily just stop using it in favor of something better.


Compiled code doesn't have to be unreadable.

Languages like C and C++ made it readable with debug symbols.

The web stack does the same with source maps. The TypeScript compiler also offers to emit .d.ts files with type information for reuse in other projects.


> In that case you never gain the advantage of dynamic languages.

Treating the code as statically typed makes the code predictable, less prone to bugs, and easier to maintain. If a function's return type is determined by dynamic factors at run-time, it becomes a maintenance and debugging nightmare; the effect is compounded as more and more unpredictable dynamic functions are chained together.

> operate on function arguments regardless of type

parameterized types and other techniques allow you to describe this type of behavior in a type safe way.

> Or generic validation utils that take many types and return the passed in type after performing internal validation logic relevant to the type.

All these things are possible in typed languages.


> Right, you’re treating the code as statically typed even when it’s not. In that case you never gain the advantage of dynamic languages.

Both the things you mention can be done in statically typed languages. The only difference is in dynamic languages you don't have to declare interfaces, subclasses, etc. I just wonder whether that really is as much of an advantage as people think.

I've spent years using both types of languages and I think the safety of statically typed languages is usually worth the slight amount of friction it adds.


You do this because that's how you have trained yourself to think about code. It's a little self-centered to assume this for everyone else.


Thinking about data contracts for inputs and outputs is pretty basic stuff. If you don't do that, how do you know what your code actually does?


They don't, but they're fast and fail often.


There's a gap between things it might make sense to consider as data contracts, and things that can be expressed in practical type systems in common use.

One example: a function which, when given a single element, returns a single transform of that element. When passed a list of elements, returns a map of that transform over the list. You can argue as to whether that's a good idea or not, but I've seen JS functions that make sense in context which do that.

This might just be my lack of type system expressivity knowledge, but how would one annotate the type of that function? I get that you could use a union type on the param and the return value, but not how you link the two.


> how would one annotate the type of that function

Some languages support function overloads like Swift and Kotlin.

    fn foo(input: A, xform: A -> B) -> B
    fn foo(input: [A], xform: A -> B) -> [B]
Hmm, maybe we can generalize that into a Functor.


Yes, this is very common in JS, and somewhat common in other popular dynamic languages. In TypeScript, for example, you'd implement this by defining several overloaded signatures. Note that this isn't C++ or Java, so you still have a single function implementing all of them - the various overloads just specify what happens depending on the arguments you pass.

https://www.typescriptlang.org/docs/handbook/functions.html#...

This is a limited approach, because the overloads aren't a part of a function type - i.e. you can't have an anonymous function value that is overload-typed, it must be a named function. In principle, a similar mechanism could be used for types as well, it just doesn't usually come up in that context (e.g. callbacks typically don't return different things), which is probably why they didn't do it.
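A small sketch of the overload pattern from the handbook link, applied to the single-element-or-list case above:

    // two declared call shapes, one implementation
    function pluralize(word: string): string;
    function pluralize(word: string[]): string[];
    function pluralize(word: string | string[]): string | string[] {
      return Array.isArray(word) ? word.map(w => w + 's') : word + 's';
    }

    const one = pluralize('cat');           // typed as string
    const many = pluralize(['cat', 'dog']); // typed as string[]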


How could one think about functions differently than about what their input and output is?


Your statement is just a tiny bit too general, and that's where you're missing the point.

"What about their input and output is" => "what type their input/output is"

The next question is what does the type of the input/output actually tell you about the input/output? In the most popular languages (using interfaces), it basically tells you that the input/output might have a, b and c set (or they might be null), and it could also have anything else set.

So the actual information you have is that you might or might not have (a,b,c) and also might or might not have (anything). Not as foolproof as you think.

The natural objection is: aha! I defined this class to require (a,b,c) in the constructor and do a nullity check, so I know much more than this! But this is back to having "types in your head": that constructor signature and nullity check isn't compiler enforced (as in, if you change it, nothing will fail in the type system at the point of the function we're checking, as long as those fields remain on the class). So the useful part of the static typing is actually living in your head either way.


>Can you provide an example where static typing slows you down compared to dynamic typing?

I spend an annoying amount of time trying to compare/add/multiply... different types in C++ like time, durations, dates, ints, floats in my day job.

> And I can't imagine this extra operation being a significant drag.

If you have an open source C++ web framework that you think is comparable in features to python django or ruby on rails, I would be very interested as I do want the performance improvements but all the ones I have looked at look like way more work to implement what I want than using django/RoR.


> I spend an annoying amount of time trying to compare/add/multiply... different types in C++ like time, durations, dates, ints, floats in my day job.

Multiplying ints and floats "just works" in C (although you have to mind overflow, but that's not a type issue).

Other examples don't make a lot of sense. E.g. why would you need to multiply time by duration, or compare one with the other? They're different quantities, reflected as such in the type system. That it won't let you do something like that is the whole point - it would be a bug if it did (and if not, then the data types were chosen wrongly).

OTOH, adding a duration to a date is semantically meaningful - which is why std::chrono has operator+ overloaded for various combinations of time_point and duration.


> and if not, the data types were chosen wrongly

That is kind of my point. When I use C++ in my day job, there are highly optimized libraries/classes which makes sense considering the amount of work that needs to be done.

However, if I am just trying to do some quick data analysis and the data set is small, I just want to do some quick math on some basic data and not have to write code to convert between every single type under the sun.

With dynamic types, I can quickly convert data between dictionaries, sets, lists and what not and run it extremely quickly. That is much quicker than trying to compile and run code to transform data into the results I am trying to create.


Isn't this more due to automatic type coercion in the language and not from the lack of types?


It's conversion, not coercion. Coercion is what C does to crash programs.


C++ no, C# yes: ASP.NET MVC is pretty good.


For example, having to set up and wait on a compile step. That is a drag for everyone, but especially for newcomers and people getting started.

Edit: can you really not imagine any situation where typescript slows down some team in some situations? :)


These people reap the greatest benefit, because they are least familiar with the codebase and get the most out of having the computer check their changes make sense.


Totally agreed. I'm not saying that they would not get benefits! I'm just saying there's also a cost and that might be the difference from someone continuing learning or giving up.


I was assuming professional programmers with at least some experience - the article is about the toolchain in use by Airbnb.


I’m not sure. A newbie experiencing a truly interactive REPL-style environment might be way faster than being stuck in an edit-compile-debug loop.


Depends on compile/deploy times really. 5 minutes per cycle is painful. 30 seconds not so much. You often just compile the part you need and that can be very fast. Depends on the language too. I've found .NET turnaround times faster than Java for example.


There is a cost to breaking up your logic into various modules/packages on the file system when you could simply write all your code in one gigantic file and not worry about any of that. Just because it saves time doesn't mean it's the best choice.


Is there any situation where it'd be better to keep all your JS to a single file?


Premature extraction can solidify premature abstractions and create pressure against reabstraction.

The creator of Elm has a talk that mentions this: https://www.youtube.com/watch?v=XpDsk374LDE


Depends on the problem.

As a user of both Haskell and various LISPs since 1990, I would say I personally am as productive in both strongly typed and dynamically typed languages, assuming I have a match between my problem and my platform, and avoid overengineering my types (heh).

I am also equally productive in what I consider weakly typed languages (hello, Java), when I can avoid boilerplate hell.

Bottom line is I think the productivity of the developer is more a function of the familiarity with the language, platform, tooling and problem (and proposed solution) than how strongly typed the language is.

I do find that having a REPL improves productivity in essentially all cases.


Exactly. In fact, I find dynamic types to be much slower to work with in the long-run.

For example, with a statically typed language, just one look at the method signature is sufficient to know exactly what it has to be passed. In the worst example of dynamic typing, in Javascript I don't even know how many arguments have to be passed. I probably have to look at the API documentation, and hope the author gave details about what the function is expecting.

And compile-time errors are much cheaper than runtime errors.


It could just be that the devs on a team don't know how to use those tools yet. Having to learn how to use TypeScript effectively while establishing the patterns that underpin a new project would definitely slow you down. Maybe that's a cost you can bear early on, but maybe it isn't.


probably when not using a JetBrains IDE


- Slow debug cycle due to increased compile time. If using TDD, it takes longer to run tests. The delays add up, especially on large projects and especially if you're not used to having a build step during development.

- Sporadic source mapping issues/bugs which makes it hard to fix things (I had an issue with Mocha tests last week where it was always telling me that the issue was on line 1). I've worked with TypeScript on multiple projects across different companies but every single time, there has been source mapping or configuration issues of some kind.

- Type definitions from DefinitelyTyped are often out of date or have missing properties/methods. It means that I can't use the latest version of a library.

- Third party libraries are difficult to integrate into my project's code due to conflicts in type names or structural philosophy (e.g. they leverage types to impose constraints which are inconsistent with my project requirements).

- Doesn't guarantee type safety at runtime; e.g. if parsing an object from JSON at runtime, you still need to do type validation explicitly. The 'unknown' type helps (there's a sketch of that flow after this list), but it's not clear that this whole flow of validating unknown data and then casting to a known type adds any value over regular schema validation done in plain JavaScript.

- Whenever I write any class/method, I have to spend a considerable amount of time and energy thinking about how to impose constraints on the user of the library/module instead of assuming that the user is an intelligent person who knows what they're doing.

- Compiler warnings make it hard to test code quickly using console.log(...); I'm more reliant on clunky debuggers which slows me down a lot and breaks my train of thought.

- The 'rename symbol' feature supported by some IDEs is nice, but if a class or property is mentioned inside a string (e.g. in a test case definition) then it will not rename that; so I still have to do text search after doing a symbol rename and manually clean up.

- It takes a lot of time to think of good names for interfaces, and they are renamed often as project requirements evolve. It's not always clear whether similar concepts should be different types or merged into one. It often comes down to what constraints you want to impose on users; this adds a whole layer of unnecessary mental work which could have been better spent on thinking about logic and coming up with meaningful abstractions which don't require arbitrary constraints to be imposed.

- Setting up TypeScript is a pain.

- Adds a lot of dependencies and complexity. It's a miracle that TypeScript works at all given how much complexity it sweeps under the carpet. Check the output JavaScript. It's ugly.
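On the runtime-safety point, this is roughly the validate-then-narrow flow I mean (shapes made up); whether it beats plain-JavaScript schema validation is exactly the question:

    type User = { id: number; name: string };

    // a user-defined type guard: the runtime check is what justifies the narrowing
    function isUser(value: unknown): value is User {
      return typeof value === 'object' && value !== null
        && typeof (value as any).id === 'number'
        && typeof (value as any).name === 'string';
    }

    const parsed: unknown = JSON.parse('{"id": 1, "name": "Ada"}');
    if (isUser(parsed)) {
      console.log(parsed.name); // narrowed to User here, no unchecked cast
    }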


" not being forced to use a specific type makes it easier to change your mind midway and experiment."

Is that really true? Whether the original declaration is strongly typed or not, it's likely that changes to the type will result in code changes needed wherever the value is being used.

The only thing that no typings does is make it impossible for the compiler to help you find where it is being used.


Yes, through duck typing. At some parts of my code near the entry point I might not care about what the input "file" is, but later on (or in the underlying library) it might accept either a path or an object, for instance.

This is specially useful for options that are not used near the entry point, but instead are passed down. If the later implementation changes, you'd have to change the middle implementation as well to accept the new/different types.


Duck typing is completely orthogonal to dynamic typing - you can have static duck typing. C++ templates are one well-known example, but OCaml's intersection of its OO system with type inference also makes it kinda duck typed - e.g. if you define a function that calls a method on an argument, the type of argument is inferred as "some object that has a method with such-and-such name and types". If you call another method, that also gets added etc. But it's still statically typed - at call site, the actual type of the argument must have those methods, or it won't compile.

Or, getting back to C++, we have polymorphic lambdas there now (which are implemented in terms of templates, but with syntactic sugar that makes it all much cleaner):

   auto f = [&](auto x, auto y) {
       return x + y;
   };

   cout << f(1, 2); // 3
   cout << f("a"s, "b"s); // "ab"
   cout << f("abcd", 2); // "cd"
   cout << f(main, true); // compile-time error
Again, all statically typed, although the type checking basically consists of substituting the types in the body and checking if the result makes sense. Which is to say, the error messages from such code are very similar to the ones you get in e.g. Python if you have the types wrong, except here the errors are compile-time, not runtime.


The Agg C++ library defines every class as a template, taking duck-typed parameters:

- type of Agg object passed into object

- type of "algorithm" used (or similar)

I read matplotlib's Agg C++ backend. Guessing which Class A could be passed into various template parameters in Class B was miserable because of duck typing. And it also broke "jump to definition".

    template <typename A> class B { A a; };
    // this->a.foo() somewhere inside B, but you don't know what type `a` is


All languages with structural typing support duck typing out of the box.


> Startups: being X% faster at launching your product might mean the difference between life or death for the company. This includes Airbnb back in the day!

Developing software since the mid-80's, worked in a couple of startups, never saw a case where using strong typing would be the cause of business failure.

> - Personal projects: where it just doesn't matter if it breaks in some situations and having to build a whole babel tower might discourage you from getting started for fun.

It is no fun to go back to something that was left hanging 6 months ago and reverse engineer what the code actually does.


I've seen too many projects fail because they are way over-engineered. A complex pipeline, build system, developers not familiar with the practices, etc. all adds up. Now I do not know how much typescript might add in cost to all of this, but it's definitely part of what I'd consider an over-engineered MVP.


You can equally over-engineer in dynamic languages, don't see how a type system is to blame.


There is a cost to not using types -

Dealing with bugs at runtime that would have easily been caught by a compiler.

Difficulty in understanding your own code base - functions available, parameters on those functions, the properties on the objects passed into those functions.

Refactoring - much of the heaviest refactoring you'll do happens when quickly iterating on a small application. Refactoring is super error prone, and typing minimizes those errors.

As always the typing is optional. So if it is slowing you down or some library doesn't have types, or you just don't need it in a particular case - then just use 'any'.

I've experienced typings increase my productivity in even the smallest projects. The compiler is your friend, the typings you need are often minimal and mostly inferred.


Agreed, I never said the opposite. But you'll agree that the costs of using types are mainly upfront, while the costs of not using them are mainly in the form of debt.


Except the costs of using types are usually in the form of a learning curve: i.e. when you are learning the type system it's relatively expensive, but once you've mastered it it's just part of your coding practice. When you do run into the rare issue, it's usually easy to solve it.

By contrast, dynamic typing problems never go away. They're usually because of a faulty assumption, or due to a typo, or they're the cumulative hours you spend trying to figure out what exactly has to be passed into a function since it's not explicitly part of the signature, and the maintainer of the library you're using didn't document it well.

In the long term, static typing is much less expensive, and it's no accident that people tend to prefer more types over the course of their career rather than fewer.


I'm sure for many people this cost is reversed and they are wasting time in dynamic language for looking up methods or fixing bugs that would not exist in typed code.


If I could give this comment a year's worth of up-votes I would.

Dynamic languages slow me down.

I have a terrible memory when it comes to method syntax, for example I would still have to look up Javascript's substring syntax every single time even though I have worked with JS for like 12 years.

Static typing's ability to supercharge autocomplete is a massive time boost for me.


Yes JS is probably the most egregious. After having transitioned mostly to statically-typed languages, it seems positively backwards not to have the types, or even number of arguments specified as part of the signature to a function. I have even had to dig through the source of a library when the author has neglected to document an API well.


Bizarre attitude. Types help you work faster. They eliminate the entire class of “undefined <something>” errors that you usually have wade through when writing js. I’m guessing that you don’t have a lot of experience with modern typed languages.


As you say I think that perception is largely due to lack of experience with statically typed languages in general, and also maybe lack of experience with projects of non-trivial complexity and/or managing production code.

Compile-time errors feel like an unnecessary hindrance when you're not familiar with them, but when you understand that they are replacing much more vexing and difficult runtime errors, you never want to go back.


Right, some compile-time errors for me are like; wow, thank god that didn't happen in a runtime environment


I think you're generally right. One thing I can really appreciate from TypeScript, though, is that is strikes a really nice balance in compensating for point 2. If something turns out to be really hard to type, or simply too much work, the `any` type makes it really easy to opt out of typing in a specific location. Once you've settled on the API and code that works, you can introduce the typings anyway. (Although I must say: exactly for those uses where I am expecting to be refactoring soon, I find I feel much safer to experiment knowing that TypeScript will help me with that.)

Furthermore, with TypeScript's current popularity, point 3 is starting to be less and less relevant too. When I start a React project, I just run `npx create-react-app --typescript`, and I'm up and running. TypeScript support by default is getting more widespread, which can make it a no-brainer to use in personal projects as well.


This is why I'm so bullish on clojurescript!

The 'type system' is called spec - so the idea is that you can move fast but once you know some invariants you'd like to keep you can put them in the specification. Having the spec as your type system maintains sanity and gives programmers that are new to the code a high level overview.

It just makes so much sense, really feels like having & eating the typing cake.


There is some truth to your point. And now think about using TypeScript, that gives you the edge right from the beginning. :)

I can assure you that you want a somewhat stable software basis when the traction hits your software product. It is extremely hard to fix (dumb) bugs during those times. Certain edge case bugs suddenly become not so edgy bugs due to more customers using your product.

For me it is an anti-pattern nowadays not using TypeScript or even React/Angular.


I just switched from python to go, and I’m the most productive I’ve ever been as a programmer, honestly. I don’t think types slow you down, personally.


AirBnB is a copy of ages-old vrbo with modifications. It wasn't a breakneck race to see who could launch code first. Coders overestimate the criticality of code to the success of a non tech business that operates on the web/apps. Airbnb isn't high tech, it's a business with modern business process of automation software.


There are endless articles written about this. If it's a taboo, nobody seems to be paying any attention to it.


> You don't want to be labeled as an "anti-good-practice" person.

Oh god, so much this. Engineering theatre is a menace.


If HN gold was a thing, I would gild you.


Thanks! I'm considering writing a blog post about rapid prototyping and aggressive refactoring, which sometimes contradicts some of these best practices.


There's been some discussion regarding what these Javascript devs are calling "The Typescript Tax" ( Presumably the perceived increased cost of development in a statically typed language ). What greater 'tax' could you possibly conceive of than a whopping 38% of your development time on an enterprise product being spent debugging problems caused by type inference/coercion? You can directly quantify the costs, and it'd probably make management's heads spin. Surely Node.js is going to be looked back upon as one of the worst mistakes in the history of 21st century programming.


> What greater 'tax' could you possibly conceive of than a whopping 38%

I always find it fascinating looking at the number of lines of code which do the same thing in different languages. Foundationdb maintains official bindings in Java, Go, Ruby and Python. The implementations have almost complete feature parity. They're all maintained by the same team, and as far as I can tell they all follow best practices in their respective languages.

The sizes (minus comments and blank lines) are[1]:

Python: 4053

Ruby: 2397

Go: 3968

Java: 10077

38% seems like a lot but language expressiveness dominates that. If ruby and java code were equally difficult to write, you'd be paying a 320% tax moving from ruby to java for an equivalent program.

I'd love to see a "programming language benchmark game" that only allowed idiomatic code and compared languages based on code length.

I have no idea what the equivalent size ratio is for javascript and typescript, but having worked with both, I find typescript projects end up bigger. I write more typescript because the type system and the tooling encourages verbose and explicit over terse and clever. (In typescript I end up with more classes, longer function names and fewer polymorphic higher order functions).

The typescript tax is real. That said, I believe the 38% defect figure too. Depending on what you're doing the tax might be worth it.

[1] Measured from commit 48d84faa3 using tokei "code lines". Includes JNI binding code and makefiles, but not documentation


Java size has nothing to do with types and everything to do with the language.

Look at a file like this: https://github.com/apple/foundationdb/blob/master/bindings/j...

You have equals, hashCode, toString which are all uncommon in other languages.


> Java size has nothing to do with types and everything to do with the language.

Yes. I'm making two claims. First to disagree with the GP's claim that a 38% difference between languages was 'whopping':

> What greater 'tax' could you possibly conceive of than a whopping 38%

The fact that the type system isn't entirely to blame is also clear looking at the ruby and python sizes. Ruby and python have very similar type systems, but the ruby code is only about half the size of its python equivalent.

My second claim is that if we did the same comparison with javascript and typescript, typescript would be bigger. I don't have any stats for this though - just lots of experience. The type system seems to push code in a bit more of a classical OO direction because there aren't higher kinded types, associated types, or any of that fun stuff.

As others have said it'd be fascinating to compare typescript and purescript / elm codebases. I'd expect the haskell-derived languages would come out way ahead on expressiveness. Its constantly surprising to me that we don't have any actual numbers.


> First to disagree with the GP's claim that a 38% difference between languages was 'whopping'

??? You're comparing LOCs and number of bugs. That 320% larger java code might have 50% fewer bugs and may have taken 50% less time to write. Terse code is harder and usually takes longer to write than expressive code.


When writing in Ruby the "what the hell is this object I'm dealing with here and what methods does it have available?" Tax is like 320% for me. I love Ruby. It's so much fun and it's just neat in general but I find it much more taxing than a nice statically typed codebase I can navigate and reason about with ease.

Everything is just taxed differently. No such thing as duty free programming.


Is LOC really a significant measurement? Sure you have to type more characters when working with a strictly typed language, but that takes a couple of seconds opposed to however long it might take to track down a bug in a massive enterprise system. Not to mention the type system often forces you to consider cases you might otherwise overlook when writing dynamic code, hopefully resulting in better code to begin with—it imposes a bit of discipline.

In my opinion whatever perceived tax a type system entails is superficial—sure, coming from a dynamic language, having to specify types for everything seems like taking on an extra chore—but this is just the surface experience of using a type system—once you’ve used one for a while and furthermore, had the benefit of reading typed code, you really start to appreciate having types around and any so called “tax” involved quickly becomes negligible.

While it might be a “tax” when writing code it’s defintely a boon when reading code—it’s much faster to look at types to get a handle on an api than it is to have to dig through and read the actual implementation to try and decipher what exactly a function expects and returns. When using a Haskell library, for example, I rarely ever have to read documentation extensively or read implementations—looking at the types of functions and a brief description (one sentence) is often sufficient. It takes a couple of seconds. This is the norm with Haskell code. Contrarily, this is an exceptional case for dynamic languages—if the documentation isn’t superb you always wind up having to peek at the implementations at some point—this typically takes longer than glancing at types and a one sentence description.

Dynamic languages might be more "expressive" from a writer's perspective (since you can leave out types) but they're far from expressive from a reader's perspective—since the writer's convenience often puts the onus of sussing things out on the reader. In enterprise systems, reader convenience is arguably more important.


> Is LOC really a significant measurement?

It is. Number of bugs per line of code is nearly the same across languages. So the best way to reduce bugs is to reduce LoC


>Number of bugs per line of code is nearly the same across languages.

Has this been studied?


I can't give a more complete answer than this SO answer: https://softwareengineering.stackexchange.com/a/185684 :)


15-50 being quite a big range and the 2004 publication date of the book (the comments hint at the reference being earlier) make me reluctant to take it at face value without looking into it in more depth, especially regarding new technology like TS, Rust, or enterprise Haskell.


Read it as 1.5-5% of lines contain bugs.

In the original post AirBnB said 38% of their bugs could be prevented by static typing. This would bring this to 0.9-3%. Not insignificant, but LoC is still more important.


1-line Perl programs are looking at this comment with approval.


> 1-line Perl programs are looking at this comment with approval.

1-line Perl programs are far from being representative of real systems and generally only exist for code golfing.


[Citation Needed]?


I can't give a more complete answer than this SO answer: https://softwareengineering.stackexchange.com/a/185684 :)


> Sure you have to type more characters when working with a strictly typed language

You might be interested to hear about type inference.


I expect that Java codebase isn't comparatively enormous due to types.

Do they have a Haskell or OCaml client? I'd expect them to be fairly terse and have even better compile-time and run-time guarantees.


Haskell has the reputation that if it compiles, it works.

Whether it’s true or not, it would be interesting to consider languages in their ability to produce correct code more quickly.

While I don’t like the idea of writing 3x the amount of code, if one implementation is only 30% longer, for example, and is more likely correct on the first run, it’s still a huge productivity improvement.


This is my experience with statically typed languages. It may be a bit more verbose in the implementation, but it takes much less time to reach a correct solution.

I find that whatever "savings" there are in terms of code volume in dynamically typed languages is usually more than made up for in the need for more tests.


Java has a lot more boilerplate—or near boilerplate—than the others. For example, you probably want to implement some kind of toString method, if only for your own sanity during debugging. You can do that in Python, but __str__ is often good enough.

I also wonder how many Java "lines" are just a closing brace or something trivial.


and haskell is even smaller - `deriving Show` and `deriving Eq`!


> I'd love to see a "programming language benchmark game" that only allowed idiomatic code and compared languages based on code length.

You just described the old Debian language benchmark game. You could select the various coefficients to apply to code size, program speed etc. I remember it played an important role when I decided to give OCaml a try: I selected 1 for size, 1 for speed, 1 for RAM and zero for everything else, and that resulted in OCaml and Pascal, the two languages I was prejudiced against from school years. Of course I went with OCaml.

From the same era:

http://blog.gmarceau.qc.ca/2009/05/speed-size-and-dependabil...

Also relevant: https://redmonk.com/dberkholz/2013/03/25/programming-languag...


Java is a terrible exemplar for a statically typed language. It's incredibly verbose, and is often held up as a bad example for how to do typing. More modern languages have type systems which are more effective at identifying issues at compile time while also being more flexible in terms of the design patterns they allow.

You might have also noticed that Go is statically typed, and it's actually less lines of code than Python (maybe a lot less if trailing braces are counted in that LOC number).


LOC is a meaningless metric.


Except when it comes to comparing verbosity.


If half the lines in your programs are

    {
and

    }
it hardly does that.


>What greater 'tax' could you possibly conceive of than a whopping 38% of your development time on an enterprise product being spent debugging problems caused by type inference/coercion?

Is there a link where they say they spent 38% of their development time on this? The slide just says 38% of bugs and fixing bugs is not the only way development time is spent and not all bugs are equal in terms of time to debug.


> Is there a link where they say they spent 38% of their development time on this?

No there is not. The comment you're replying to has no factual basis for this claim.

Nor is there a basis in the article for 'what these Javascript devs are calling "The Typescript Tax"'. In the article there's one comment that refers to such a tax, but the commenter doesn't claim to agree, and every reply disagrees.

Nor is there a basis in the article for "Surely Node.js is going to be looked back upon as one of the worst mistakes in the history of 21st century programming." The article says nothing about the total quantity of bugs, bugs per line of code, or other comparable metric.

The comment you're replying to seems to have a predetermined agenda tangentially related to the article.


My apologies, I was commenting very generally regarding attitudes typical of Javascript developers I have encountered in my professional life, rather than addressing the article directly. Also, you're right. 38% of development time is not the same as 38% of bugs, mea culpa, but there is an overall point to my statement about the productivity cost that I believe remains correct regardless.


You gained more reputation by owning your mistake than you lost by making it.


You’re correct, but it’s also true that debugging is often a greater percentage of time/effort than the original coding, so 38% of bugs is nothing to sniff at.


Although typing bugs are often among the simpler types of bugs to fix.


The bigger problem is that "Simple fix" !== "Simple ramifications". Runtime issues caused by missing very simple typing bugs can bring down critical systems in Node. I worked at a company where a payment batching system (not built by me) with a MongoDB database failed to work correctly because the aggregation query was checking that a field was `null`, without checking if it was there in the first place... No one even noticed for several weeks until someone finally investigated the low volumes. That's a very kind example too; just because these bugs are simple to debug when they pop up doesn't mean they won't cause massive destruction in prod.
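
For illustration, a hedged TypeScript sketch of that class of bug (the record shape and field names are invented, not taken from the system described above):

    interface PaymentRecord {
      amount: number;
      batchId?: string | null; // may be null, or missing entirely
    }

    // Checking only for null silently skips records where the field was never set:
    const isUnbatchedNaive = (r: PaymentRecord) => r.batchId === null;

    // A safer check covers both null and undefined:
    const isUnbatched = (r: PaymentRecord) => r.batchId == null; // == also matches undefined

    console.log(isUnbatchedNaive({ amount: 100 })); // false -- record quietly ignored
    console.log(isUnbatched({ amount: 100 }));      // true

Even here the annotation is only as honest as whoever wrote it; data coming out of a database still needs validation.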


This is only true if the bug is caught by the runtime checker. Lots of type bugs aren't, especially in languages with type coercion.
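
For instance, a small sketch of coercion sliding past everything (this compiles as TypeScript too, since JSON.parse returns any):

    const quantity = JSON.parse('"3"');  // a string that merely looks like a number
    const padded = quantity + 2;         // "32" -- string concatenation, no error anywhere
    if (padded > 30) {
      // "32" > 30 coerces again and is true, so the wrong branch runs silently
    }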


Until they’re not. I’ve had some codebases where it took 3-4 hours to figure out that some really bizarre issue was really just a type bug. And I don’t consider myself a terrible debugger.


Obviously. Any distribution will have some exemplars toward the tail.


I don't think that's really true as a general rule. Passing a variable of an unexpected type or nullable could have completely unpredictable behavior depending on what assumptions the code makes about the incoming parameter. It could have no effect or a catastrophic one, there's really no way to know without reading actual code.


So true and simpler to detect too.

The kinds of bugs which TypeScript prevents are those which could have been caught by the simplest of integration tests using the simplest inputs.

Every time I rename or change something in TypeScript, I have to change it in so many different places. In JavaScript, changes were much more localized.


Not at all. Typing bugs often leave code looking perfectly fine until you discover that one of the variables you expected to be y was actually x. And the only way to find out is to just print out everything.


Can happen with TypeScript too. There is no runtime type checking so if you parsed a JSON object and cast some unknown property `const result = someRemoteObject.someUnknownPropertyWhichIsAString as number`, it doesn't guarantee that the result is actually a number. You still need to do actual schema validation just like with old school JavaScript.
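
A minimal sketch of that pitfall plus one common fix, a user-defined type guard (the payload shape is invented):

    const payload: unknown = JSON.parse('{"amount": "42"}'); // a string, not a number

    // The cast only silences the compiler; nothing is checked at runtime:
    const amountUnsafe = (payload as { amount: number }).amount;
    // typeof amountUnsafe is "string" at runtime, despite the declared type.

    // A type guard actually validates the shape before it is trusted:
    function hasNumericAmount(x: unknown): x is { amount: number } {
      return typeof x === "object" && x !== null &&
             typeof (x as { amount?: unknown }).amount === "number";
    }

    if (hasNumericAmount(payload)) {
      console.log(payload.amount + 1);
    } else {
      console.log("payload failed validation"); // this branch runs for the string case
    }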


Better languages solve this. In Haskell you define a datatype for the JSON before you parse it.


> Surely Node.js is going to be looked back upon as one of the worst mistakes in the history of 21st century programming.

Why is that? There were dynamically-typed backend languages long before Node.js became popular. And you now can easily have a TypeScript-based Node.js backend, which gives you the nice benefit of having the exact same language and types for both your backend and frontend. I'm not saying the Node runtime is perfect, but I don't understand why having a runtime that lets you use JS/TS as a backend or local language is quite such a bad thing.


I have absolutely no way to back this up, but my gut feeling is that the node ecosystem's tendency towards small modules, coupled with a lack of explicit interfaces in the language, increases the number of places where a developer is likely to introduce a type error.


It is a bad idea to intermingle the problems of dependency management with the concerns of a type system as though these are somehow related concepts. If you are not managing your dependencies, sloppy things will follow regardless of the type system in place. There is an unrealistic expectation that JavaScript developers shouldn't have to write code because somebody has already written it for them in an NPM package.


I don't just mean external dependencies. The culture of "small modules" also means internal modules are vulnerable to this.

Edit to add: on further thought, I think it would be a mistake not to look at dependency management and type-safety at the same time. If my external dependency exposes a defined interface, that gives me more information about that dependency which I can use to manage it (replacing it, for instance) than without.


If a NuGet package I import into my C# code changes a signature behind my back, or even correctly versions the change as “we have breaking changes”, I’ll know at compile time. With an NPM package I won’t know until runtime.
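
A self-contained TypeScript sketch of the difference, using an invented `charge` function as a stand-in for a dependency's shipped type declarations:

    // Pretend a dependency's declarations changed between versions:
    // v1:  charge(amountCents: number): Promise<void>
    // v2:  charge(opts: { amountCents: number; currency: string }): Promise<void>
    type ChargeV2 = (opts: { amountCents: number; currency: string }) => Promise<void>;
    declare const charge: ChargeV2; // the dependency silently moved to v2

    export async function payDeposit(): Promise<void> {
      // @ts-expect-error -- the old v1-style call no longer type-checks, so the
      // breaking change surfaces at compile time rather than in production.
      await charge(500);
    }

Without published types, the same call builds fine and only misbehaves at runtime.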


Do you have integration tests to ensure your application does all the things it claims to do and that dependencies don't break those things? If dependency health isn't a part of your product testing, bad things can happen.

Another thing to consider is that maybe you might not need a given dependency. Sometimes it is faster to include a bunch of dependencies at the beginning of a project just to produce something functional in the shortest time. As a product matures there is a lot of opportunity to clean up the code and slowly refactor the code to not need certain dependencies.


I’ve found just the opposite, especially going into companies with less mature engineers and “architects” who’ve been at the company forever and haven’t had much outside experience. They both tend to reinvent the wheel badly and write code for cross-cutting concerns that have already been solved better, like custom authentication and custom logging.

Then again, I’m a big proponent of avoiding “undifferentiated heavy lifting” at every layer of the stack.


I think this comes down to reinventing the wheel. I frequently see less mature developers misuse that term to avoid writing code or making architectural decisions. That term isn't an excuse to offload your work onto someone else's reinvented wheel. The idea is to avoid unnecessary churn, whether that means you writing original code or using a dependency, and even then the concept is not always the best course of action.

As an example, using a framework to avoid touching the DOM is an external reinvented wheel. As a leaky abstraction the need for that framework may dissolve over time as requirements evolve. That evolution of change can result in refactoring or a growth of complexity. Accounting for those changes means reinventing the same wheels whether or not the abstraction is in place. That is because a framework serves as an architecture in a box, but if you are later making architectural decisions anyways you have outgrown at least some of the framework's value.


I’ve seen it before where “I don’t need a framework”. Then after the project grows, the “architect” ends up reinventing the same framework badly. I have no problem with opinionated frameworks especially with larger organizations. It also makes it easier to onboard developers. As a new dev on a team, I’d much rather come into a company that uses Serilog than used AcmeSpecialSnowflakeLogManager.


I have seen that as well. Many people will complain that if you don't use a framework you will end up writing one anyways, which is completely false. For many people who have never worked without frameworks writing original code means inventing a new framework, because that framework-like organization is how they define programming to themselves. This behavior is safely described as naive, or a lack of exposure to differentiating methodologies. This reminds me of that quote: when you're a hammer everything looks like a nail.

https://www.dictionary.com/browse/naive


So what is your definition of an agreed upon method that an organization uses to produce results....


Depends on the goals and constraints of the organization.


And what happens once you have 3 or 4 developers?


I think what you are ultimately getting at is: How do you ensure they write code the same way? That isn’t a real problem and so it isn’t something you need to address. This comes up because insecurity is a very real concern.

Instead provide a business (not code) template to define requirements for integration. This should define what is needed in their documentation, what they output/return, their inputs, automation test requirements, and so forth. That list deliberately does not include code style or other matters of vanity. Most of this can be automated but you need code reviews to hold people to account and to use as a training opportunity. Be honest and critical during the code reviews. If people find respectful honesty offensive talk to them about it outside the group and if they still can’t get with it transfer them out of your team.


So you’re suggesting that one person decides that they like JQuery on the front end and one decides that they want to use plain Vanilla JS and one decides they like the look of Material UI, the other Bootstrap, and one happens to like sites that look like MySpace that’s ok?

And on the backend, if one developer liked log4net for logging, the other Serilog, and yet another developer likes to build his own logging framework that’s okay?

Let’s take this to a logical extreme, since one person is an old school VB developer and another likes F#, let’s just do that and throw in a little COBOL.Net for good measure....

And when a new dev comes in and takes forever to ramp up? No big deal. While we are at it, let’s not follow REST semantics either and no one needs those pesky frameworks like Express and ASP.Net. Let’s just go bare metal...

Or you could just have agreed upon standards....

If you’re going to have standards anyway, why not just use frameworks, so you can open a req for the required framework and get developers who you already know have the basics?

You’d be surprised at how many cheap developers you can get to throw together your yet another software as a service CRUD app using React.


> So you’re suggesting that one person decides that they like JQuery on the front end and one decides that they want to use plain Vanilla JS and one decides they like the look of Material UI, the other Bootstrap, and one happens to like sites that look like MySpace that’s ok?

The expectation is that developers are literate in their line of work, even though most companies don't hire like that. Since developers are expected to be literate, don't waste time worrying about how to write code. If a developer suggests a tool that helps them with the "how to write the code", reject the tool.

Set business principles that specifically communicate the acceptable scope of work. Such principles can be something like this:

* code will execute within X milliseconds.

* code will allow up to X total dependencies (choose wisely).

* code will not exceed X bytes total size. If they want to include jQuery their code solution just increased by 69KB.

The common failing is confusion around standards. Many developers would find it easier if everybody just did everything the same way so that things appear standard. That approach ignores everything relevant to product quality and the goals of the business, which is stupid. It is stupid because it sacrifices simple for easy at the lowest level of leadership by lazy or weak managers. You don't get to offload management onto a framework and still hope to be a strong effective manager. Ultimately the success of a manager is tied to the work of that manager's team and whether the team can retain/promote effective people.

When you set concrete and specific business limitations, the limits are easily understood and based on objective criteria. Innovation and creativity are not stifled, as developers are free to work within the constraints any way they please.

This all makes sense once you accept that the group is worth more than the individual, but the product is worth more than the group. Sacrificing business goals for group cohesion is backwards, just as sacrificing group integrity for an individual's opinion is backwards.

> You’d be surprised at how many cheap developers

That is also a common failure. Developers are never cheap unless you offshore. In hoping for cheap you will actually get developers that cost the same, do crappy work, don't value themselves, and that aren't valued by the organization. That is toxic. Instead hire people with potential to do great things and manage them effectively.


Dependencies should be tested, yes. But I enjoy this sentence in the top comment:

> "extensive" unit testing is a poor man's type checker

:)


I wouldn't be surprised if it is actually faster to develop in statically typed languages once projects reach a certain size. It makes refactoring so much easier, and it's a lot easier to reason about what a function's arguments could be. I've had cases before where the easiest way to tell what a function accepted as an argument was to just run the program and print out the input. If it was statically typed, I know exactly what the arguments are, and can easily see the effects of modifying one of those arguments. With dynamic typing, I spend a lot of time just looking at how a function is used elsewhere in the code to make sure I don't break anything.
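
A tiny sketch of that refactoring point, with invented names:

    interface User { id: string; email: string }

    function sendReceipt(user: User, amountCents: number): void {
      console.log(`emailing ${user.email}: ${amountCents / 100}`);
    }

    // If amountCents later becomes { value: number; currency: string }, every
    // call site like this one turns into a compile error, so none are missed:
    sendReceipt({ id: "u1", email: "a@example.com" }, 2500);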


"Tax" may indeed be the right word. You pay something up front, and in return you get benefits that tend to outweigh the costs, and which would be even more expensive to pay for directly. I say that as a seasoned dynamic typer, who finds himself wishing frequently that the X3J13 standardized optional typing in Common Lisp a bit more.


> What greater 'tax' could you possibly conceive of than a whopping 38% of your development time on an enterprise product being spent debugging problems caused by type inference/coercion?

As opposed to 86 percent of all statistics, which are basically made up. (Or presented grossly out of context, which is pretty much just as bad as "made up").

Seriously, it takes a lot more to trigger an assessment of "whopping" significance in my mind than a side-angle photo of some random slide presented with no background context whatsoever.

> Surely Node.js is going to be looked back upon as one of the worst mistakes in the history of 21st century programming.

Wholehearted agreement with you there, at least.


> Surely Node.js is going to be looked back upon as one of the worst mistakes in the history of 21st century programming.

Why? You did not elaborate on this.


I might be wrong, but I think you have misunderstood this, or we've come to wildly different conclusions. My understanding is that they were using a dynamic language without types, and 38% of these bugs would have been prevented if they were using static types. My conclusion is that developers would spend at least 38% less time debugging type issues. This is because code editors (e.g. VS Code) have really good TypeScript support, such as autocomplete, and you get instant feedback if you ever make a mistake.

You seem to have arrived at the opposite conclusion, which is that developers would require the same (or more) effort to prevent and fix these bugs while using TypeScript. I think you've missed the fact that most type issues don't take that much effort in the first place. Once you set up the "guard rails" by making a decision about types (or just assigning a value to a variable), writing the rest of the code is pretty effortless.

I would take it a step further and say that TypeScript development is actually more productive than plain JavaScript. (Again because of the better autocompletion and instant type checking in my editor.) I make far fewer mistakes, but I also write code faster.


This, and also: You don't want to deploy errors to production ;D


There's a hidden tradeoff. For example, if you have to spend 1000 extra cumulative hours to use types, and you've spent an extra 500 cumulative hours fixing all of the bugs associated with not using types, then, from a business perspective, it would have been worth it to not use types (and vice-versa).


This completely ignores the fact that bugs which make it into production erode the reputation of the company in the eyes of their customers.

Lost reputation is much, much worse than being wasteful with dev hours.


It depends entirely on the bugs. If a bug's reproduction-path is rare enough that a non-critical mass of users runs into it, then it's not going to make a difference. Reddit, for example, is currently bogged down with login bugs, redirect bugs, redesign bugs, etc. and they're simply not prevalent enough to get people to mass exodus off of Reddit. Reddit will continue to grow fast even with these bugs.

The vast majority of user-seen bugs are not common. The common ones are usually caught with testing.


I used to believe that to always be the case. But then I worked for a company that didn’t really care about quality; they just wanted to add features to mark off a checklist in a pursuit to either get more funding or get acquired.

That can work - see Twitter.


Right, but when customers are putting up with a crappy product, they are just waiting for a better alternative to come along. That's a huge liability to be sitting on -- just ask MySpace.


To your point though, I can see a strategic path of "crappy until we get enough funding and users that we can go back and redo it".

In practice though, most companies never seem to polish a shipped feature -- they instead focus on adding more mediocre features.

I would love to see this trend reversed of course! But software in our age seems doomed to short lifespans and mediocre quality.


That actually proves my point. The original founder sold to News Corp just as Facebook was taking over in 2009. The strategy was a success depending on where you stand....


Our difference of opinion hinges on the ambiguity of “success”. Here you use “success” strictly in the sense of making a financial gain, while MySpace’s users would have called it a failure.


Isn’t that the metric that all investor funded startups use - either being acquired or going public at some multiple of their original investment?

Unless I’m working at a job where I’m serving some greater good, my definition of success is did I help achieve the company’s objectives, get paid well, and will I be able to use my coworkers and management as references in the future.


Your position is that financial gain is the only metric that matters and that the quality or longevity of the product is irrelevant, and my position is that that's a concise summary of everything that's wrong with "tech". We agree to disagree.


How much of the code run by most user-facing businesses is the same code that was used 10 years ago, no matter how good it is?

No matter how great my Perl code was running 15 years and 5 companies ago, I doubt they are still using it.

The code that runs Amazon.com, Google.com, etc. is nothing like what they ran 10 years ago.

Besides, we aren’t talking about feeding starving children here. We are talking about the death of a social media platform. But more generically, if yet another software-as-a-service CRUD app dies, who’s hurt besides the investors who expect most of their startups to fail and the developers who can walk down the street and get another job in a week?

As an individual, why am I going to fight the losing battle of trying to implement great code quality and test coverage when I’m being incentivized by how many features my team can pump out as cheaply as possible by hiring a bunch of poorly paid developers overseas?

We call what we do “engineering” but there is a world of difference. If my program running a SaaS app fails, the user gets a 500 error. If a mechanical engineer rushes, people die.


Are you implying that dynamically typed programming languages are a mistake?


IMO, dynamically typed languages made more sense when type systems in mainstream statically typed languages were much more primitive, and that limited what you could do - and it was often verbose when you could.

But the kind of type systems your average dev has to deal with improved a lot in their expressiveness since 90s. Type inference is basically mainstream, too. There are still things that cannot be described nicely, but they tend to be a lot more "magic" - the kind of stuff that lets you write code faster, but not necessarily write more readable and understandable code.


Like everything else, there are pros and cons. It’s not like dynamic languages are worse or better per-se, but they surely make it harder to scale a team or a codebase 10x the original size without introducing what effectively is some sort of typed system (or strict “typed” conventions that the team agrees upon) in place. Which may be done right, or disastrously wrong. A typed language generally doesn’t leave this step up for interpretation later on, since everybody agrees on it since day one.


As a newbie to clojure, I’m very curious to see what a large codebase looks and feels like. The clojure attitude seems to be that types come at the cost of reduced code reuse, which I tend to agree with. After a decade spent in strongly-typed OO, it is starting to feel like a bunch of sound and fury which seems to end up eventually just getting in my way. I’m definitely open to new ideas in this area, and am looking forward to taking a deeper dive into clojure this year.


Alternatively, you can break the project up into modules, each of which is maintained by smaller teams. I think that is a cleaner approach.


Absolutely not, but horses for courses. I use all of these dynamically typed languages regularly and I enjoy programming in them too, but choosing the right tool for the job should take precedence over personal preferences.


So then you are saying every single use case which node.js currently fulfills should use statically typed languages? This does not match my experience of node.js being used in many of the same cases as other dynamically typed languages. What cases specifically are dynamically typed languages suited for?


> debugging problems caused by type inference

Type inference has never been the cause of bugs in any language.

