When to Use TypeScript – A Detailed Guide Through Common Scenarios (khalilstemmler.com)
331 points by stemmlerjs on April 7, 2019 | hide | past | favorite | 240 comments

I agree with this article. Good stuff. One minor gripe:

> if you care about the code, you need to have unit tests for it.

That's not really true. You can test at a different granularity and still have the same confidence in your code. Scores of brittle unit tests crowding out your productivity is not what you do to code you care about. I wish people would stop propagating this detrimental rumor, which is backed by zero evidence. I prefer to keep my unit test suite light and sweet (test what is very sensitive at the unit level: easy-to-miss or algorithmic things) and then get the most out of my acceptance test suite. Sorry, but I've just worked on too many systems where the unit test suite breaks no matter what you touch and you spend more time updating the tests than actually getting shit done.

If you write a library, or expose a public interface, I do expect a lot of unit tests. Unit tests can run offline, fast, at build time. I don't mind integ-unit tests that, say, simulate a database in memory to avoid mocking the entire storage layer. I'd rather have tests that are technically integration tests, as they include more than, well, a unit, as long as they are idempotent, test mainly one thing, and can run fast and offline. Not sure if there is a term for it, but I call these integunit tests... (probably the worst name possible)

But unit / unit++ tests are not just for you, they are for me: they give me confidence that I can touch your code without inadvertently breaking something critical that you may have missed in your acceptance tests. It gives you peace of mind to be able to refactor. An acceptance test may tell you that something broke, but a good unit test will show you exactly where.

> breaking something critical that you may have missed in your acceptance tests.

So you can miss things in your acceptance tests but can't miss them in your unit tests? Why do you feel more confident with unit tests than with acceptance tests? Maybe because unit tests break more often, so they are providing you with a false sense of security? Do you know that when a test fails for a reason other than the specified assertion, it's called a false negative and the failure is meaningless? Do you know that if your tests fail and no functional or non-functional requirements of the program have changed, the failure is also meaningless (you are testing implementation)?

Ask yourself how often your group gives you time to do these hypothetical large refactorings that give merit to large unit test suites. How many times, even, do you do small refactorings and find all the tests have to go away or change significantly anyway, because they made some implementation assumption that is no longer true? How many times have you gone into a test suite and seen that it's been patched up so many times that you can swear it's testing something, but you just don't know what?

I know that Uncle Bob tells us about tests for refactoring and this and that, but I just don't buy it any more. The emperor is naked, god dammit. Uncle Bob and the like are very famous because people swallow their anecdotes without due scrutiny.

One thing I can't find much evidence of is Uncle Bob's experience delivering commercial software. I get the feeling he works mainly on toy examples, which might explain why some of the things he advocates, like you say, don't match my experience at all.

If you write a library or expose a public interface, I expect functional tests that cover that interface. Unit testing would require going below that level, and test implementation details - which is exactly why those tests tend to be so brittle, and the overhead of maintaining them is so high.

I agree. I think it's important that your test suite gives you something useful with every test, instead of writing a bunch of pointless extra tests just to get to 100% coverage because you read somewhere that that's good.

I feel like I ought to write more on this one of these days, but I have a few testing pet peeves, as far as tests you shouldn't write. Don't write tests that are a copy-paste of the code you're testing. Don't write tests to validate things that should be proven by your type system and lack of compile/parse errors. Beware of tests so tied to implementation details that they make your code harder to refactor. Don't write tests for things talking to external services - the only real test is that it correctly handles the actual service responses.

Tests should be testing behavior, not implementation (this was the original intent of unit testing but it got sidetracked over time).
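To illustrate the distinction with a toy example (a hypothetical cart module, plain assertions rather than any particular test framework):

```typescript
// A behavior-level test only pins down inputs -> outputs, so the
// implementation (loop, reduce, whatever) can be refactored freely.
function cartTotal(prices: number[], taxRate: number): number {
  const subtotal = prices.reduce((sum, p) => sum + p, 0);
  // Round to cents to avoid floating-point noise.
  return Math.round(subtotal * (1 + taxRate) * 100) / 100;
}

// Survives any internal rewrite of cartTotal:
console.assert(cartTotal([10, 20], 0.1) === 33);
// An implementation-coupled test (e.g. "reduce was called exactly once")
// would break on a refactor without catching any real regression.
```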

Biggest benefit of having a compiler in your toolchain is that it often serves as the majority of your integration testing layer.

That said, you should proliferate unit tests for algorithmic logic. That can be a small subset of your code, but a large part of your domain (depending on richness). Integration testing can be written manually, or you can pick a compiled language that does a lot of the lifting by type checking and producing the binary. Acceptance tests guide the overall happy path and can protect against weird regressions.

"Write tests. Not too many. Mostly integration."

+1, though this is an unpopular opinion.

IMO appreciation for automated test ROI is a great litmus test for a "good programmer".

Thanks for reading the article. Perhaps I should have been more specific about what needs testing. In The Clean Coder by Uncle Bob, he says that your unit tests should approach the asymptote of 100% test coverage.

Here's where I agree with his statement:

On the backend, I agree that all of the domain-layer code should be tested. This is a hard requirement for me. It's also code that has 0 dependencies so it should be easy to test. TDD pairs really nicely with DDD.

Unit tests give you the needed confidence in order to do refactoring. You can't refactor code without tests. If you do, there's a risk involved. Therefore, in order to safely improve the design of existing code, it needs to have tests. This "detrimental rumor backed by zero evidence" is one of the fundamental takeaways from Martin Fowler's book on "Refactoring" in addition to Uncle Bob's chapter in The Clean Coder on Unit Testing.

Here's where I disagree with his statement on 100% test coverage.

I used to spend a lot of time writing brittle tests by testing front-end UI code. I used tools like Selenium and Cypress. Because the front-end is the most susceptible part of a system to change, I found myself spending an equivalent amount of time maintaining these tests in addition to adding new code. This is a hard place to find a balance. In Angular, I merely ensure that I write tests for services. In React, I spend a lot less time writing Enzyme tests on rendering and a lot more time testing the redux operators.

Uncle Bob clears this up in his book by saying USUALLY, it's not necessary to write UI tests. I'd say by UI tests, he means "rendering tests". His solution is to ensure that you have a way to run acceptance tests that work through the API, as it should be a lot less susceptible to constant regressions. This way, you're essentially testing the features that the API is executing. If you're using DTOs, the inputs and outputs of your system should remain relatively stable anyway, and you should spend less time changing old tests and more time adding new ones.

> You can't refactor code without tests

See this is where the conversation goes south. You will use the phrase "unit tests" when it suits you, and then, like a magician using indirection, generalize to the word "tests" when it suits you.

Why are you listening to Uncle Bob so much?

I mean you can do as you like. However, the strong "Uncle Bob" style assertions should probably be left out of most articles in favor of a more humble "this is what worked for me/us" approach. Universal rules for any discipline are very hard to come by. Sharing what worked for you on your projects is great, but it becomes somewhat obnoxious when you try to generalize it into universal rules that work in all environments for everyone.

Also, I think that large unit test suites, where most of the tests are redundant, come about in competitive environments where, if you try to make a commit without a test, some other competitive coder will use it as an "I know better" stepping stone and call you out on the change: "Where is the test?" After 3 years of this behavior, what do you think you're going to have? A nice clean suite of tests, or a monstrous big ball of stubs, mocks, fakes, and copy-pasta that resulted from defensive social coding?

Honestly, write a light suite of tests that get to the point, and keep your ability to code swiftly and make changes quickly. As the code matures you'll find the trouble spots and focus testing on those areas. Don't blindly follow methodologies that will have you writing a large test suite for version pre-alpha 0.0, and then updating that large test suite for every micro-change just to make it to beta 1.

Here is an old article by DHH where he rails against unit testing dogma. Ironically it was ideas from the Ruby community that formed a lot of the basis of the mad dog McCarthyan style unit test dogma. https://dhh.dk/2014/tdd-is-dead-long-live-testing.html

Agreed. Acceptance tests hit the sweet spot for me in terms of productivity and quickly catching the causes of regressions (coupled with a well-groomed, atomic commit log).

> will most often write vanilla React.js apps when: the codebase is small

This one never ceases to amaze me. Any codebase is small, until it gets big. And once it's big, the effort to rewrite is hard to justify.

You don't build a house and add the foundation later.

Rewriting vanilla JS to Typescript isn't that big of a deal though. It would be more like building a house on a budget, then adding some fancy paint, furniture and alarm system.

In my experience, if you spend a few weeks in the "exploratory" phase writing ES6, rewriting to TS won't take more than one or two days.

Nowadays I'm a lot better at Typescript and will use it from the get go, but for someone who is less skilled in it, it might be much faster to produce working code first and add types later.

Alternatively, you can write in Typescript from the get go with most of the strict checks disabled, then only enable a certain check when you see the need.
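A hypothetical tsconfig.json for that approach might look like this (the flag names are real compiler options; the staging order is just one possibility):

```jsonc
// tsconfig.json: start permissive, then flip checks on one at a time.
{
  "compilerOptions": {
    "strict": false,            // master switch off to begin with
    "noImplicitAny": false,     // enable first: cheap, high value
    "strictNullChecks": false,  // enable later: usually the biggest cleanup
    "allowJs": true,            // let untyped .js files coexist during migration
    "checkJs": false
  }
}
```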

I see so many HN comments endorsing "build the product to exact specification using the minimum reasonable tool set." When you then ask "what happens when the specification changes?" you get hilarious answers like "just say no to the user", or "just extend the app!"

This is why I always over engineer a bit, especially on new projects. I'll happily, for instance, add a framework before it's strictly necessary. I've never regretted that decision in the end.

If people here want to use a 5 gallon bucket on an initial 5 gallon job, go for it. Best of luck to you. I'll be over here starting with a 10 gallon bucket and not sweating when the customer needs to add another gallon...

One of the things that I've learnt is that the world isn't clear-cut enough for YAGNI[1] all the time, and that it's usually worth building things slightly more generic/flexible than the original ask, because it's rare that the original requestor understood the problem enough first time round.

[1] https://martinfowler.com/bliki/Yagni.html

There are so many automated safe refactors and build-time safeguards you get with a statically typed language that I'm much more comfortable making major changes and extensions with TypeScript than with vanilla JS.

The problem with overengineering is that you don't have a good grasp on a generalized architecture until you have two or three use cases. Of course, anything dealing with cross-cutting concerns like logging and authentication is easier to add up front.

I don’t know any modern front end frameworks, but when working within a team, I’m all for opinionated popular frameworks. It’s s lot easier to onboard people and from a completely selfish standpoint, developers who are concerned with their careers want to be able to put a transferable toolset on their resume.

It’s rare that I find a comment which validates my own practices like this. I personally love creating systems and abstractions, and admittedly I will sometimes create one when a simpler solution would suffice. However those “over-engineered” solutions have frequently saved me lots of time much later when business requirements change.

Your stance sounds different to the grandparent's.

There's a world of difference between adding a framework and over-engineering something that could have used a simpler solution.

My experience has been the exact opposite of yours, whenever I added something complicated, it either never got used and unnecessarily complicated the code, or when the time came to use the fandangled cleverness I lovingly wrought, it never quite met the need I actually ended up having.

So I stopped doing that years ago and now always write the simplest code. Never regretted it. I will add frameworks early though.

That is an excellent approach for new projects. On stable projects it's pretty common to have lots of little "special" bits where someone over-designed and created an unholy mess. (I'm guilty of this as well, of course.) In stable systems simple is almost always best.

I think it depends on what you're engineering. For example, adding static typing to a language has immediate gains regardless of code base size. It is also low risk and easy to implement.

Designing a spring quartz scheduler backed by a DB when a simple cron job would do? That's overkill.

There’s a difference between an extra layer of abstraction/indirection and going crazy with the a bunch of new hot (unnecessary/inapplicable) technology.

Now the non-strawman version: it's a 5 gallon job, vanilla React.js is already a 10 gallon bucket, and typescript comes in buckets up to 100 gallons.

It's often a matter of how much you think the job is likely to expand, how much you should frontload work to make that simple.

Saying "Don't do premature optimization" is easy. By saying "premature" you're already implying that you can tell when something is premature. What's hard is when you don't know when something early will come back to bite you later.

I had this problem last year: I ended up rewriting an entire web application in a couple of weeks. It wasn't a big or complicated app, and it was only going to be used by a few people, but one requirement change made me redo a lot of work. If I had "over engineered" a little bit more, it would have been totally fine.


I always focus more on the "foundation" part; in this case, that's common React Hooks and their API for the components to use.

The good part is, if some hooks are wrong, we can just write a new one and replace the old ones in existing components, without having to keep maintaining the old, wrong Hooks.

Aren't Hooks too new for you to make claims about what you "always" do with them?

Before, it was HOC and RenderProps. This is just an example with Hooks, it's mostly how i approach the problem though.

Right on, makes sense.

I couldn't agree more with you on this.

I often see good, actually simple solutions get turned down due to people saying "that's too complicated for what we need", only to later hear, after a couple of months in production, that nobody wants to touch the codebase anymore since it's unapproachable and the rewrite will not happen due to high risk since it's critical production code now.

There must be some bias here in what you're remembering. Are the projects that didn't need to be extended, the ones simply doing what they need to, just not sticking out in your memory?

You might be right and maybe there is some bias in the projects which I chose to think of. Certainly there are many projects that don't require the same amount of thought from day one.

What I'm referring to in my previous comment is when people underestimate the complexity added by the easy, quick solution today versus its high cost over time. Especially in critical projects.

> This one never ceases to amaze me. Any codebase is small, until it gets big.

Do y'all work on the Linux kernel or what? Most of the projects I've seen across multiple jobs are less than 5000 LOC. And they never will "get bigger", because they are not "products" that get evolved over time.

I don't know why people make such a big deal of types. They don't add that much time to typing the code.

I've also noticed that if you write daily in some code style (for example, using types), it takes time to switch to another style, for example one without types. So you are better off using types: you're used to them, and you will code faster that way anyway.

For me, it is less about typing and more about being sidetracked with hard-to-debug Typescript issues.

I also use HOCs a lot and it's painful with Typescript.

Well, in React's case if you use PropTypes, the rewrite is mostly mechanical, and can be automated via things like https://github.com/lyft/react-javascript-to-typescript-trans...
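As a rough sketch of why the rewrite is mostly mechanical (a hypothetical component, reduced to a plain function so the example is self-contained):

```typescript
// A PropTypes declaration maps almost 1:1 onto a TypeScript interface,
// which is why codemods can automate most of the conversion.
//
// Before (plain React + prop-types):
//   Greeting.propTypes = {
//     name: PropTypes.string.isRequired,
//     excited: PropTypes.bool,
//   };
//
// After (TypeScript):
interface GreetingProps {
  name: string;       // .isRequired -> non-optional property
  excited?: boolean;  // optional prop -> `?`
}

function greeting({ name, excited }: GreetingProps): string {
  return `Hello, ${name}${excited ? "!" : "."}`;
}
```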

But many tools (Flow, Python's optional type hints, TS) allow you to incrementally add types to an existing codebase. That way refactoring isn't some herculean effort, but something you can do for new features, or gradually over time.

So many of these "do you really need" articles are this: authors who don't understand the idea of scaling.

You don't need to rewrite to add types.

If you want type-safe code you will usually have to rewrite some parts of your app, or most of it if someone butchered the initial implementation.

In theory, maybe you are right. In practice it's never just adding types. E.g. you need to add null checks, rewrite old parts to follow the same style (in case there are old parts in the code), etc.

This usually happens anyway when you figure out how to do something better. Then need to apply it everywhere else.

This doesn't seem like a common occurrence, judging from all the praise of TypeScript sung on this thread and elsewhere, but I've had an overall negative experience working with TypeScript so far.

Here's some thoughts on what has contributed to that so far:

1. TypeScript pushes you towards its own build pipeline (based on tsc) that doesn't play nicely all the time with mainstream JS build pipelines (usually based on babel).

With Babel 7 came @babel/preset-typescript, which I hoped would narrow the gap. But so far it's very clearly a second-class citizen in the TypeScript ecosystem: new features are built only with tsc-based pipelines in mind (Project References is the one that stands out, because we sorely need it for our monorepo codebase but can't use it since we chose to adopt @babel/preset-typescript), and I've personally run into several issues stemming from what seem to be fundamental incompatibilities between tsc and Babel that have no real workarounds (here's one I can remember off the top of my head: https://github.com/babel/babel/issues/8361).

The reason this is so frustrating is that TypeScript could have been just a type checker, like Flow. Instead, it had to introduce its own compilation tooling, which is still vastly inferior to babel in terms of overall flexibility and extensibility. All of this seems to be due to what I believe were a few fundamentally poorly-thought-out decisions at the beginning of the project: allowing TypeScript to specify its own language features with runtime semantics (things like Enums and class visibility modifiers, neither of which is anywhere close to becoming standardized in JS proper, by the way, and both of which are completely orthogonal to TypeScript's main responsibility of type checking), and implementing them in a separate build pipeline instead of as extensions to babel and its own type checker.

If I were to use a JS type system for a new project today, I'd personally choose Flow over TypeScript in a heartbeat because of this.

(this is getting a bit long so will continue in a reply)

2. TypeScript's type checker, at least in its current state, has been downright painful to work with for functional programming with functional composition and higher order functions in general, with errors that are incredibly opaque and unhelpful, and its poor inference introduces so much seemingly avoidable type-related verbosity that it completely distorts the signal to noise ratio in our code.

A prime example for this is ramda's pipe function: https://github.com/DefinitelyTyped/DefinitelyTyped/blob/mast...

A simple function that composes functions together in reverse order requires:

1) The definition to be an exhaustive list of all the variations of input functions that pipe can accept (which of course would technically require an infinite number of declarations to fully specify, so ramda had to call it quits at 10).

2) The user of pipe to specify output types for every single function being composed at every step, even though that should be possible to infer using the return type of each function in the pipeline.

These kinds of limitations plague our codebase everywhere we try to define higher order functions in our own code as well, because it means we have to ask users of the function to specify the result of the function argument as a generic parameter even though it should be perfectly inferrable from the function argument itself, and the noise buildup becomes exponential as you start composing higher order functions together due to pipe suffering the same limitation. The issue also rears its head when using higher order components with recompose, which we make heavy use of as well.
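To make this concrete, here is a minimal hand-rolled pipe (not ramda's actual typings, just a sketch of the same overload-list approach):

```typescript
// Each supported arity needs its own overload; the list is necessarily
// finite, which is why ramda's typings stop at 10 functions.
function pipe<A, B>(f1: (a: A) => B): (a: A) => B;
function pipe<A, B, C>(f1: (a: A) => B, f2: (b: B) => C): (a: A) => C;
function pipe<A, B, C, D>(
  f1: (a: A) => B,
  f2: (b: B) => C,
  f3: (c: C) => D
): (a: A) => D;
function pipe(...fns: Array<(x: any) => any>): (x: any) => any {
  return (x: any) => fns.reduce((acc, fn) => fn(acc), x);
}

// Note the annotations on every step: without good inference through
// composition, the intermediate types have to be spelled out by hand.
const slug = pipe(
  (s: string) => s.trim(),
  (s: string) => s.toLowerCase(),
  (s: string) => s.replace(/\s+/g, "-")
);
```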

I'd love to hear how other teams working with functional programming in TypeScript work with these limitations. Perhaps we're missing something fundamental that could vastly improve our experience?

3. TypeScript only supports positional generic parameters (i.e. you can't give them names).

It's well known that positional semantics don't scale when it comes to argument lists, because when changing the API of something with a positional argument list, adding an argument in an arbitrary position becomes a breaking change that requires all usages to be updated.

Unfortunately, part of the fallout from point 2 means we often need to define higher order functions that require a decent number of generic parameters, all of which have to be positional due to this limitation.

There's a hack that we discovered that allows you to have poor-man's named generics by using a single generic argument with named properties, inspired by this comment: https://github.com/Microsoft/TypeScript/pull/23696#issuecomm...

But it's incredibly verbose, and doesn't support optional generic arguments or defaults as far as I can tell. We still use it despite this, because we believe the verbosity and having to specify every generic argument is the lesser of two evils compared to offering APIs that requires breaking changes for every addition. But that we have to make this choice points to a glaring omission in the design of the type system.
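For illustration, the hack looks roughly like this (all names here are made up for the example, not from any library):

```typescript
// "Poor man's named generics": one generic parameter whose properties
// play the role of named type arguments.
interface TransformParams {
  Input: unknown;
  Output: unknown;
}

function transform<P extends TransformParams>(
  input: P["Input"],
  fn: (x: P["Input"]) => P["Output"]
): P["Output"] {
  return fn(input);
}

// Call sites name each "argument", so adding a new property to
// TransformParams later doesn't break existing usages the way a new
// positional generic parameter would.
const len = transform<{ Input: string; Output: number }>("hello", (s) => s.length);
```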

You can kind of give generics names by declaring them in an interface and passing T extends Interface as the generic parameter.

This is probably not what you wanted to hear, but I’m glad they don’t optimize for Ramda. On projects where I’ve had to use it, it just generally results in a bunch of fanciful syntax flourishes from the Ramda enthusiast on the team. They’ll jump on anything that looks like array manipulation, eager to try some neat Ramda tricks. In the end a “dumb” solution with loops usually ends up being shorter and simpler (not to mention, much easier on the memory without all the copying), and this is the style of code that Typescript optimizes for.

... and then they go, leave the code behind and make another team happy.

You realize that your problems with Ramda were mostly fixed in TS 3.4?

This is exactly my experience. If your project doesn't have a nice build pipeline for TS then it can become quite awkward and take more work, especially if you have isomorphic code that isn't all in TS already. Also, a lot of the interface syntax can get pretty weird when you are trying to type untyped libraries, which definitely slows me down. I love types in languages like java where it feels like the types have first class support--but TS by definition often involves calling out to libs that aren't written in TS which feels awkward and loses a lot of the advantages.

My experience is exactly the opposite of yours.

Babel has always caused me headaches. Setting up Babel-based build systems is troubling—the same setup will work sometimes and not others even on the same machine. TypeScript vastly simplified working with modern JS for me. It’s consistent, requires far fewer dependencies, and is much more succinct.

YMMV, of course...

I don't know if what you've described is an _opposite_ experience. I think TypeScript would work fine for me too if I never needed anything compilation-wise outside of what tsc is already capable of. But as soon as you do, the lack of extensibility of tsc compared to babel becomes painfully obvious.

I think it is the opposite. Even without TypeScript Babel has grown to be overly complex.

If you’re talking about front end libraries like React you’re better off ditching Babel altogether and going with something like Parcel. It just works.

The TS developer experience has improved working with JS enormously for me when I have to do it, and nudged me toward simpler, smarter development environment setups.

I can’t speak for you or anyone else. Just adding my two cents.

Babel is designed first and foremost with extensibility in mind. Of course an opinionated build tool with no extensibility to speak of (tsc) is going to be easier to set up and get started with than one designed with extensibility as a core principle.

That doesn't mean one's necessarily fundamentally better or worse than the other, but the great thing about extensible tools is that they can often be used as building blocks for other more opinionated tools that abstracts away configuration details from the user by providing an opinionated set of defaults, and still offer the ability to customize when users' needs are not adequately met by the defaults.

Case in point - Parcel actually uses babel under the hood:


All it does is configure a default set of plugins & presets for you. You can override any and all compilation behavior Parcel provides by default simply by adding the appropriate babel config files.

TypeScript could have done the same with their build tooling: extend babel with plugins/presets and provide users exactly the same set of compilation features tsc currently offers (which is currently, afaik, a strict subset of what babel + its plugin ecosystem can provide).

That would give users the exact same out-of-the-box experience as tsc does currently, and offer them the opportunity to extend compilation behavior with additional plugins and presets when the need arises.

That extending the compilation behavior of tsc is, after so many years of development and wide adoption, still not even on the roadmap is the core of my disappointment with their choice to develop their own _non-extensible_ compilation tooling instead of just extending babel, where they would have had extensibility for free from day 1.

It's the Maven (dumb) vs. SBT (very clever) build tool argument. Wherever I've been, there are people in the first camp and people in the second camp. I personally don't like highly extensible build tools because I can't manage them; they end up adding lots of complexity of their own, taking time away from building features.

You can do tsc -> esnext syntax -> Babel -> whatever, though it's a pain to set up.

I do agree on the extra stuff beyond types (enums, etc) being a mistake that Typescript would be better off deprecating at some point.

The article mentions object-oriented programming several times as a helpful paradigm (especially for domain-constrained problems where DDD is helpful).

I’d also like to point out that functional programming is tremendously helpful for solving these types of problems, especially when combined with use of modules. TypeScript is absolutely capable of modeling, checking, and otherwise handling types in a functional programming context. — this is one of its best strengths in my opinion, and one reason I now prefer TypeScript over C#.

I really hope I don't come off sounding needlessly contrarian, but I've had the complete opposite experience when it comes to TypeScript and functional programming.

I posted about the specific issues I ran into here, and they seem to be pretty fundamental w.r.t the ability of TypeScript's type inference to work with function composition in general: https://news.ycombinator.com/item?id=19600475

But the fact that you (and presumably plenty of others) seem to be having a great time with functional programming in TypeScript leads me to believe that there might be something I'm missing that could vastly improve my experience with functional programming in TypeScript as well. I'd love to hear any further thoughts anyone might have on this topic.

> But the fact that you (and presumably plenty of others) seem to be having a great time with functional programming in TypeScript leads me to believe that there might be something I'm missing

I've noticed "functional programming" turned into an umbrella term.

For some people it means Haskell: strongly typed lazy evaluation with a focus on ADTs and functions that operate on them (i.e. methods) to achieve composition. Heavy use of pattern matching.

For others it means Clojure: dynamic language that favors raw duck-typed data manipulation. Heavy use of macros.

Then there's the pointfree crowd: very little use of literal functions, most of them are just composition of curried/partially-applied primitives.

For many it just means "I use map/filter/reduce".

And for most, it's just a mix-and-match of subsets of all those.

My experience matches yours: Typescript is mostly fine with map-filter-reduce and that's it. I'd love to be proved wrong though.

I found functional programming in TypeScript a bit cumbersome. I think Elm is much more accessible if you’re looking to do functional programming in a JavaScript environment.

Although I haven't used TypeScript extensively, I fail to see how it has better support for functional programming over C#.

Could you provide some examples?

> TypeScript over C#

TypeScript is a transpiler for JS. C# is a server-side language. Can you please clarify how the former is a replacement for the latter?

I don’t want to speak for the OP here, but when I refer to Typescript in comparison to other languages I tend to mean Javascript + the type system provided by the Typescript.

Typescript is not even necessarily the transpiler nowadays. I use Babel for that and use Typescript exclusively for static type analysis in CI and during development.

They could just be referring to the language itself e.g. syntax and structure, not necessarily its primary use case.

JS is also a server-side language.

I didn’t mean as a replacement for C# or an alternative for a given project. I meant my personal preference as a developer (I do often get to choose what language to use, if only by where I choose to work or what projects I take on).

"C# is a server side language"

Node.js is very popular, making TS/JS available 'on the server side'

It's a default for me now, on every new project. It doesn't slow down development because TypeScript allows you to easily "step down" type constraints as desired, and writing nontrivial code without any kind of type hints is just unimaginable to me now.

Yep I feel the same. TypeScript improves my workflow tremendously and I use it for everything now. Going back to regular javascript feels like driving without a seatbelt. Sure you can but why? Type-checking and smarter autocompletion mean I don't make mistakes as often and my productivity is higher. (No longer run the app, see I misspelled a function call, fix and run again, etc.)

I also feel unit-level tests are less necessary with TypeScript. I used to write a lot of JavaScript tests that essentially just confirmed that "this sh*t is all tied together correctly". With TypeScript, if a code change or refactor broke something structural, you know immediately!

> I also feel unit-level tests are less necessary with TypeScript.

Especially with strict compiler settings, including strict null checks, and the good type conventions they encourage. To give a small example, if you have a value that can either contain an error or a successful result, you can model it as the union of a success and error state. This prevents you from accessing the value until you check the result is successful (and the compiler will nicely infer the type in subsequent usages once you do.)
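
A minimal sketch of that success/error union. The names (`Result`, `Ok`, `Err`, `parsePort`) are illustrative, not from any particular library:

```typescript
// Hedged sketch: model a value as the union of a success state and an
// error state, discriminated by the `kind` field.
type Ok<T> = { kind: "ok"; value: T };
type Err = { kind: "err"; error: string };
type Result<T> = Ok<T> | Err;

function parsePort(raw: string): Result<number> {
  const n = Number(raw);
  return Number.isInteger(n) && n > 0 && n < 65536
    ? { kind: "ok", value: n }
    : { kind: "err", error: `invalid port: ${raw}` };
}

const r = parsePort("8080");
// Accessing r.value here would be a compile error; we must narrow first.
if (r.kind === "ok") {
  console.log(r.value); // compiler has inferred r as Ok<number>
}
```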

For me, with rigorous compiler settings and a good proficiency in type modeling, I've reached the point where I rarely have any runtime errors at all once the code compiles. Sure it might take 2% more time to write code and think through my data model, but I spend 95% less time debugging it and the end result is much more maintainable. Even using typed languages like C# and Go now leaves me slightly dissatisfied!

Honestly, good integration of unions (algebraic data types) is a good chunk of what makes Rust such a comfy programming language for personal projects. Being able to use a common, standard method to say "there may be a result or there may not, and if there is, here it is" is so expressive. Since the Option type is built into the language, it comes with lots of utility methods to make things even easier.

With C#, I am pretty sure you can get some of what you mentioned by using stricter compiler settings.

This doesn't come as a surprise to me, since Anders Hejlsberg, who created TypeScript, was the technical lead of the team that developed C#.

I'm using TS for the front end and back end, but you do need runtime checking in a few places as well, even though the compiler tells you it's redundant.

Relying strictly on compile-time types is dangerous: data coming back from database queries might not match the type you declared, and you can't tell until the code has run.
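
One hedged way to handle that boundary is a runtime type guard on data the compiler can't vouch for. The `User` shape and the raw JSON here are hypothetical:

```typescript
// Hedged sketch: a runtime type guard at the database boundary.
interface User { id: number; name: string }

function isUser(x: unknown): x is User {
  return (
    typeof x === "object" && x !== null &&
    typeof (x as Record<string, unknown>).id === "number" &&
    typeof (x as Record<string, unknown>).name === "string"
  );
}

// A query result arrives as unknown; validate before trusting the type.
const row: unknown = JSON.parse('{"id": 1, "name": "Ada"}');
if (!isUser(row)) throw new Error("unexpected row shape");
console.log(row.name); // row is now statically typed as User
```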

The major advantage of static types is replacing tests with proofs. Tests always have (limited) coverage, but type proofs are invariant. That said, it's very possible to misunderstand what a test or a type actually means.

> unimaginable

I imagine adding types gradually as a design solidifies, rather than slow velocity early on. Best not to be dogmatic on these things.

I have never understood why typed languages make you slower at any point in time (early, late etc.).

Because you have to hit more keystrokes to write your program?

That doesn't compute. Typing is the thing you do the least of when programming. All best practices teach us not to save keystrokes: name your variables expressively, write small functions, document code, write tests. Nobody questions those "excess" keystrokes. But when you have to type "... : number", it's slow velocity?

In my opinion people vastly over-estimate the productivity boost of saving keystrokes. It's the tedious, boring part of the work that you would like to skip entirely, so excess keystrokes feel like they slow you down far more than they actually do.

Although it doesn't matter much, you actually save keystrokes: autocomplete, automated refactors, and fewer low-level unit tests easily compensate for the handful of type annotations.

I completely agree with you, but the criticism usually isn't about typing `number`; it's about typing complex higher-order functions and that kind of thing, where the time lost isn't in typing out the code but in figuring out what the type should be, or (worse) what a cryptic type error means.

I don’t tend to write or encounter many complex higher order functions in other languages. Are they common in JS?

It is not the typing per se. Anecdotally, I started adding types to my python code and it is somewhat useful.

That usefulness is diminished when you are fighting the type system instead of getting things done.

Like when you have to make exceptions in TypeScript by using `any`. Like when the type definitions for a JS library are missing and you don't want to do all that work yourself.

Or when an API returns a field with a different type between two calls and you have to make exceptions in the code when parsing with Jackson in Java.

Python’s type system is a particularly bad example. It’s not ergonomic and the type checking is immature.

Not sure what immature or ergonomic means in this context, but I'm pretty sure handling of "private" variables (self._foo) is abysmal.

Though you have a point: writing down type annotations can seriously slow you down when you are doing exploratory (prototyping) programming. It can also make your code less readable, as the annotations can get in the way of the logic you really care about (easily fixed in an IDE, but people still want to use Notepad to edit and read code). Then there is figuring out what the annotation should be in the first place; again, easily enough automated in an IDE, but the Notepad problem remains.

What usually slows me down about dynamically typed code is that some variable is returned by a function denoted only by `var myVariable = myFunction();` and I'm sitting there staring at it, thinking: "What are the contents of that variable? What properties do I have access to on that object? What can I do with it?"

And figuring that out becomes a massive time suck when that's a constant across an entire code base. Statically typed stuff means I can very quickly find that out even if it's the first time i'm exploring that particular area of the code.

I agree; in a type-inferred language, or a language where annotations can be suppressed, this is just another matter of IDE affordances.

Some dynamically typed languages support type annotations that are only enforced dynamically; Julia is the primary example of such a language ATM.

A well designed dynamic language like clojure can have explosive velocity. Stuff just fits together smoothly: a predicate can be a function or a set or a map or a regex, but most importantly, libraries are built of these mutually compatible blocks. Reminds me of (Sussman's?) description of lisp codebases as organic.. of course, "explosive" can be very literal when type mistakes propagate and the data model goes to bizarroworld. In clojure, you can defend yourself with spec, which catches these things at the site of the mistake. It's a tradeoff I often like (velocity vs. compile time checks). It all depends on the domain.

Yes. Extra typing is not a huge drain, I agree, but it is a minor one. Shorter is better until it impacts readability. Visual noise is a thing.

Code is read much more often of course. You may argue you can read a fifteen-page book as fast as a ten-pager, but it's pretty clear that's not the case.

Of course tools and inference change the balance, but even so I avoid putting code into the kiln until the design is fully understood and at least satisfactory. And that can take a few revisions.

You'll find that the more you work with types, the less this becomes an issue. You begin to think in types as a first class construct.

Types don't slow me down, and I can't imagine working without them.

You also have to type, refactor, and test them. I work in multiple languages and the typed are definitely slower at the beginning, but pay off later. The strategy I outlined is the way to get the best out of both.

Also working in both I don't have this issue. I generally find types make refactors faster - I have less bugs from the refactor.

A mature refactor yes, a change in design often not.

> types as a first class construct

A wild Idris programmer appears

Well, imagine type-first programming, where you design your system first just by writing the types, look how it works, and only when satisfied go bother with implementation.

I'm not sure TypeScript is good enough for doing that (I don't know TypeScript that well), but it's simply much better than starting with code.
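
For what it's worth, TypeScript can support a rough version of that flow: sketch the domain types first, add a stub-level implementation, and let the compiler check that the shapes compose. All names here (`Order`, `LineItem`, `total`) are illustrative:

```typescript
// Hedged sketch of type-first design: write the domain types before
// any real logic.
type LineItem = { sku: string; qty: number; unitPriceCents: number };
type Order = { id: string; items: LineItem[] };

// A stub-level implementation, just enough to see the shapes fit together.
function total(order: Order): number {
  return order.items.reduce((sum, li) => sum + li.qty * li.unitPriceCents, 0);
}

const order: Order = {
  id: "o-1",
  items: [
    { sku: "a", qty: 2, unitPriceCents: 500 },
    { sku: "b", qty: 1, unitPriceCents: 250 },
  ],
};
console.log(total(order)); // 1250
```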

I recently had to write a tool to output a Swagger file from a proprietary api testing format, and writing all the interfaces for Swagger based on the Swagger specification was hugely helpful. I’m not sure I would have been capable of completing the project without TypeScript.

Also coming from JavaScript development first, TypeScript made learning C# and templates a breeze.

What if the specification changes? Being static has both advantages and disadvantages. You can do the checks without running the code, but it doesn't guarantee run-time correctness. I think a better strategy is to type check the actual code via inference, maybe adding some doc-comment type annotations to help the static analyzer, then add run-time checks where things are likely to break. And you can make your API easier to use by checking the parameters, throwing helpful and descriptive errors, and allowing several different types as input, so users don't have to convert their model to your model.

The spec changing is the best part! (Or conversely, the worst part when you are 100% dynamic). You change your types to match the new spec (eg adding a case to a discriminated union) and you immediately get feedback on places in your code that are definitely wrong/incomplete now. It won’t find every change you need to make but it’s a big load off and frees up some cognitive capacity for what’s left.
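
A small sketch of that feedback loop, with illustrative `Shape` cases: with an exhaustiveness check via `never`, adding a member to the discriminated union turns every non-updated `switch` into a compile error:

```typescript
// Hedged sketch: a discriminated union with an exhaustiveness check.
// If the spec later adds { kind: "rect"; w: number; h: number } to the
// union, the `never` assignment below stops compiling until the switch
// handles the new case.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle": return Math.PI * s.radius ** 2;
    case "square": return s.side ** 2;
    default: {
      const unhandled: never = s; // compile error if a case is missing
      return unhandled;
    }
  }
}

console.log(area({ kind: "square", side: 3 })); // 9
```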

TypeScript works reasonably well for this approach; it's how I tend to do things. It has some gaps when you want to do really advanced stuff, but it gives you tools like typecasting for crossing those gaps. One of the great benefits of JavaScript type systems is that since they're overlaid on a dynamic language, you can just "turn them off" as needed (when the type system fails to understand that what you're doing is safe). You can comment out type checking for a particular line, or even let your whole project successfully build while type errors remain. The type system then becomes purely a tool, and never a roadblock.
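
For example, the escape hatches mentioned (a typecast and per-line suppression) might look like this; the JSON payload is made up:

```typescript
// Hedged sketch of TypeScript's "turn it off" escape hatches.
const data: unknown = JSON.parse('{"count": 3}');

// Typecast: assert a shape the compiler can't verify (unchecked at runtime).
const count = (data as { count: number }).count;

// The @ts-ignore directive suppresses the type error on the next line only.
// @ts-ignore
const loose: number = "not actually a number";

console.log(count, typeof loose); // 3 "string"
```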

> design your system first just by writing the types,


> look how it works,

Beautiful, except it doesn't work yet, because it's just types that make some shape. You could literally draw the thing with pen and paper.

> go bother with implementation.

And only then discover that something you have thought of as implementation detail is one of the main problems with your solution.

No problem. I'll just refactor some type. It's easy. Less beautiful, but at least it'll work. Except it doesn't. After butchering your elegant design for hours, you come to the hard-earned realisation that the shape you imagined, wrote down, and carefully named all your types to represent actually has nothing to do with the one you need.

If you are a bad person, you ship the mangled typed monstrosity that you just barely got working, with traces of your struggle fossilised for any future maintainers.

If you are a good person, you toss everything into the trash (save for a few hard-earned implementations) and start fresh with knowledge and humility, wishing you had worked with TypeScript, where you can start with the implementation and, only after you've figured out the hard parts, add types that reflect your solution (finding a few additional bugs in the process and getting more confident about the thing you wrote).

> You could literally draw the thing with pen and paper.

Paper does not tell you when something you draw is wrong.

> And only then discover that something you have thought of as implementation detail is one of the main problems with your solution.

Which is no different from learning that something you implemented is wrong, except that you spent less time finding the problem. Or do you expect some magical design tool that will lead you to the right solution before you understand your problem?

> Paper does not tell you when something you draw is wrong.

With some things it does, at least better than your brain does, and a type system does it a bit better still. But you still end up with a picture, not a program, before you start writing the implementation.

> What is no different from learning that something you implemented is wrong [...]

What is different is that, with all the types you wrote, the process of problem discovery that happens during implementation is stifled by what you wrote so far. You also have an additional temptation to contort your types and leave them in a bad state, because you wrote so much already and you'd very much like to avoid tossing it out.

My point is that you discover problem through experimentation and types make experimentation harder because they make the code in some ways harder to change. TypeScript avoids that by allowing you to use as much types as you want at any stage of problem/solution discovery but not forcing you to.

Well, again, what of the things you said do not apply to any form of planning?

The only benefit of a plan is that you may discover it's wrong, and it's easier to throw away than the done thing. If you are not willing to reform or throw away your plans, you are doing it wrong.

What you described is the waterfall approach, which brings so many problems in most real world software development scenarios.

The opposite of waterfall is not “no planning whatsoever”.

It doesn't, as you've already made that decision when adding a property. You almost always get rid of the property entirely and replace it with a new one (and a new type); you extremely rarely change the type in place.

And you will almost always know what type a property needs to be. Name? string. IsActive? boolean. Last modified? Date. Etc.

Also, TypeScript just has `number`, so you're not even faffing around thinking "int, float, long, double, decimal, unsigned, signed" when adding a number, although that is ultimately a bad thing.

It does; I'm talking more about module/system architecture than properties. I might go through three designs on a brand-new project, and futzing with types and tests while factoring it is an inefficient use of time. Waiting until I'm satisfied with the design is my advantage.

Why do you imagine types slow velocity? My experience is the opposite. I occasionally prototype in Go and then backport to Python (to integrate with our application) because I can iterate so much more quickly (yes, Python has type annotations, but the ergonomics are poor and the type checker is immature).

Experience. Again, not talking about a mature project, but a new one in the design phase. Types (like unit tests) are extra work at this point until a good design is discovered. Which takes time to think about.

Wait, type checking is good again?

Suits me.

Welcome to 1985 folks :-)

Yes, but not in the rigid way of traditional typing; rather, applied to duck typing.

I don't like the pedantic attitude just because some recent paradigm matches one from a while ago. Back then, you didn't know that dynamic types wouldn't be welcomed either.

Also apparently SQL is #good now. But only because all the people who just spent the last 6 years doing Node said so.

I'm also using it as my default. I also find it great that with every release they push features that promote immutability, like ReadonlyArray and const assertions, to make the developer workflow better and safer.

> It doesn't slow down development

I don’t know about the rest of the world, but I’ve never once been slowed down by a compiler saying “It won’t work like that. Don’t waste your time”.

> > It doesn't slow down development

> I don’t know about the rest of the world, but I’ve never once been slowed down by a compiler saying “It won’t work like that. Don’t waste your time”.

I'm right there with you, I really don't understand this train of thought that catching errors faster somehow makes you slower. I like having the compiler there to catch my errors when I make them rather than some number of minutes/hours/days/months later when everything is on fire and I've forgotten the context of what I was doing.

Maybe learning a statically typed language (nearly) first damages your brain irreparably, and there's no longer any hope for me.

Your remarks apply only to cranking out new code. Static typing prohibits making fundamental changes to an existing body of code, like changing a core, widely-used data structure. To make such a change, you have to be extremely sure that you want it and that you're doing it right. Then you can commit to the days and weeks of refactoring at the end of which you once again have a program you can run.

Under dynamic typing, you just do a small part of the job, yet the program builds; you can get in there and try things. Decisions about how to proceed can be guided by errors you run into when you step on the things broken by the partial refactoring. Seeing things working right away can be a motivator to get the whole thing done. Or it can help you see that, oops, the change is not worth doing or bad for whatever reason: you dodged a bullet with a small amount of sunk cost (the few changes you made that you can throw away without a whole lot of regret).

Static typing helps with small refactorings where the ripple effect is small. The compiler pinpoints things that have to change; they happen to be few in number, and don't have too many ripple effects of their own. If your change breaks everything, then it's moot; the compiler tells you there are breaking changes everywhere.

There is another angle: under dynamic typing, changes can be done in ways that things continue to work. You can write modules that have "absolute genericity": they work with objects of any type without being recompiled. Such things are bullet-proof against all conceivable refactoring.

> Your remarks apply only to cranking out new code. Static typing prohibits making fundamental changes to an existing body of code, like changing a core, widely-used data structure. To make such a change, you have to be extremely sure that you want it and that you're doing it right.

I’m reading this and I have just no idea what to say.

With static typing you don’t have to be very considerate in this work, because you will have a compiler telling you if anything is broken, not to mention refactoring-tools which can help you reliably and automatically make those changes across your entire code base in a verified 100% correct way.

With dynamic typing however, you just have to guess and pray.

Clearly static typing is the superior option in this specific scenario where you are calling out static typing to be a disadvantage.

So what am I missing?

I don't necessarily want the compiler diagnostics to go away; I just don't want them to prevent me from running the program.

A good language has a run-time type in every object, regardless of the type checking system, and allows programs to be run safely even if they are incompletely checked or if inconsistencies have been positively identified.

My pet theory is that it's proof of the Mandela Effect[0]... That or different people think differently and value different things. Nah, probably the Mandela Effect.

[0]: https://en.wikipedia.org/wiki/False_memory#Commonly_held_fal...

> Your remarks apply only to cranking out new code. Static typing prohibits making fundamental changes to an existing body of code, like changing a core, widely-used data structure. To make such a change, you have to be extremely sure that you want it and that you're doing it right. Then you can commit to the days and weeks of refactoring at the end of which you once again have a program you can run.

I have exactly the opposite experience here. If you're making a sweeping change to a core data structure, I would feel much more comfortable making a large change in a statically typed system. It would take much longer for me to feel comfortable releasing a change like that in a dynamically typed system.

> Under dynamic typing, you just do a small part of the job, yet the program builds; you can get in there and try things. Decisions about how to proceed can be guided by errors you run into when you step on the things broken by the partial refactoring. Seeing things working right away can be a motivator to get the whole thing done. Or it can help you see that, oops, the change is not worth doing or bad for whatever reason: you dodged a bullet with a small amount of sunk cost (the few changes you made that you can throw away without a whole lot of regret).

Is this how you work? That's terrifying. How much time do you have to spend poking and prodding after you make a change like that in order to be sure you've fixed everything? How much time do you spend tracking down the source of bugs caused by this weeks or months later?

> Static typing helps with small refactoring where the ripple effect is small. The compiler pinpoins things that have to change; they happen to be few in number, and don't have too many ripple effects of their own. If your change breaks everything, then it's moot; the compiler tells you there are breaking changes everywhere.

If your change breaks everything, it's too much to change at one time. Period. Smaller refactors that deliver some amount of value are so much faster than cleaning up for months after throwing caution to the wind.

To use the example of a change to a core data structure: If you want to make a breaking change to a given struct, the first place I'd look is to see if you can convert between the old style and the new style seamlessly.

If that's possible, then you copy the data structure, make your changes, and add a method on each to convert to the other. Then you can slowly roll out that new data structure to the rest of the application until the compiler allows you to delete the original. Speaking from experience this route is so much faster than swapping it out at the source, fixing it up everywhere, and hoping things continue to work as they should.
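
As a sketch of that strategy, with hypothetical `UserV1`/`UserV2` shapes:

```typescript
// Hedged sketch of the side-by-side migration: keep old and new shapes
// with converters while call sites move over one at a time.
type UserV1 = { fullName: string };
type UserV2 = { firstName: string; lastName: string };

function v1ToV2(u: UserV1): UserV2 {
  // Naive split on the first space; a single-word name gets an empty lastName.
  const [firstName, ...rest] = u.fullName.split(" ");
  return { firstName: firstName ?? "", lastName: rest.join(" ") };
}

function v2ToV1(u: UserV2): UserV1 {
  return { fullName: `${u.firstName} ${u.lastName}`.trim() };
}

// Once nothing constructs UserV1 anymore, the compiler lets you delete it.
console.log(v1ToV2({ fullName: "Ada Lovelace" }));
```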

If it's not possible, not all is lost, but I'd need to see the specific example to give advice on what to do.

> There is another angle: under dynamic typing, changes can be done in ways that things continue to work. You can write modules that have "absolute genericity": they work with objects of any type without being recompiled.

If your code can work on literally any group of assorted types, but still do something of value, then I'm interested to see what it's doing. Sounds like it doesn't do much of anything except pass pointers around. Great for a collection struct, but that's about it. As soon as you need to get things out of it and do something of value, you'll need to have some idea what type you're getting back.

> Such things are bullet-proof against all conceivable refactoring.

I am extremely cynical of any promise regarding code that is "bullet-proof against all conceivable refactoring". OOP famously promised that for many years, but my experience has been that it's the cause of far more refactoring pain than it saves.

Hold on. Are you conflating OO languages with static typing? If your experience with static typing is Java then I actually think I understand where you're coming from now.

I find that the first 50 lines are faster / more immediate without types. By 500 lines it's largely even. By 5000 you will have to pry types off my cold dead hands.

I wish I could be convinced by the likes of Rich Hickey that types don't actually help, because he's so smart and charming and eloquent, but my experience screams the opposite.

Yeah, exactly. I think people add way too many zeros to those figures when they try to come up with advantages for dynamic-typing. There certainly are some, but I don't think agility is in dynamic-typing's favor.

The usual saying is that faster development is more important because your business probably won't get users anyways. As if Ruby is going to give you a two year head start before any chickens come home to roost.

I don't know about that. Maybe you will get a half-day head start on me if I'm using a statically-typed language I haven't used before and I'm digging through old projects to remember how to write my .gradle hello world.

Yeah, I haven’t used Lisp enough, but I want to believe there is something inherent in its simplicity that makes it categorically different from all the other dynamically typed languages I’ve tried, which would all benefit from static types.

Hi, I'm from the rest of the world. I have successfully made small changes to a program which basically broke it, just to do a proof of some concept. I was able to run the thing and test the concept, while not stepping on broken things.

I don't want to commit to refactoring a large amount of code for the sake of a small change, just so that some code-writing robot gives me permission to run the whole thing.

The amount of fanboyism in these comments is astounding.

TypeScript is a great tool.

At the same time, people have written apps with vanilla JavaScript for a very long time now, and it works just fine. If types are really that big a deal that you have a hard time writing an application without a compiler checking your types, you should reevaluate what you're doing. It's not "dangerous" to use plain ol' JavaScript, and implying that others are foolish for not using it reeks of software snobbery.

Whoa, you mean people really still use plain JavaScript, bruh? I mean, don't you need like punchcards for that? That's how grandpas program, bruh. You can't even, like, scale an app without type-checking. An app written without TypeScript is like a house of cards, dude.

>If types are really that big a deal that you have a hard time writing an application without a compiler checking your types, you should reevaluate what you're doing.

Implying that everyone has exceptional short term memory, and reading old code has virtually no cost.

I never said that TypeScript wasn't helpful. The picture painted by some people that frontend applications without compile-time type-checking are ready to fall apart at the seams and have knobs and springs go flying everywhere, like something from a Looney Tunes cartoon, is patently absurd.

Given what I said, even for fairly simple examples it would still be more bug-prone, simply because humans don't have perfect cognition. At scale that can add up.

Also keep in mind that this both enables, and follows a trajectory of increasingly complex frontend applications, previously a lot of interactive stuff was done server side.

Patently absurd, if you take it as a strawman, sure. You ended up using that to paint the complete opposite picture, which was essentially that Typescript doesn't actually add any real value, only "perceived" value. Which is true, if all you value is the execution environment. Keep in mind that Typescript was not the first attempt at trying to "tame" Javascript. One example that comes to mind is Coffeescript.

The natural reductio ad absurdum is thus: well, why aren't we just writing in ASM? It all boils down to that anyway, right? People were doing fine then too... Could it also be said that accusing people of needing "crutches" is also snobbery?

> The picture painted by some people that frontend applications without compile-time type-checking are ready to fall apart at the seams and have knobs and springs go flying everywhere, like something from a Looney Tunes cartoon, is patently absurd.

This is a great analogy for my experience with JS projects. Most bugs are found at runtime, which is incredibly frustrating and time-consuming.

It's like making your swords out of bronze when steel is just sitting there.

I'm sure there are users of Dart who feel the same way about TypeScript.

> If types are really that big a deal that you have a hard time writing an application without a compiler checking your types, you should reevaluate what you're doing.

Speaking of software snobbery, what does this statement reek of? This attitude is just as condescending and utterly non-constructive as the fanboyism you're describing.

I'm currently building a fairly large application using TypeScript both on the front end and backend specifically because I "have a hard time writing an application without a compiler checking your types". It's working great but according to you this is a valid reason to reevaluate what I'm doing?

People have written apps in C for a very long time now, and they work just fine, but it's not exactly the first language of choice for new projects these days outside of some very specific niches.

Maybe I'm failing to get my point across. I'm not saying that JS is the end-all-be-all of frontend languages, but that some people are speaking as if TS is the messiah and that any JS compiled without it is flimsy garbage. That tells me that people either have limited experience with TypeScript or they were poorly educated when they first began using JavaScript. TypeScript is probably a good tool for them to use. TypeScript is becoming more popular, perhaps with good reason, but the negative attitude TS fans have towards JavaScript is often very immature.

As you say, there are still niches where C is good, and C is going to be around for a very long time because it's used for so many things. To claim that someone can't write a well-built application that is maintainable and scalable is asinine. I myself probably wouldn't choose to write an application in C, but that doesn't mean I'm going to thumb my nose at anyone who decides to write applications with C. C is perfectly fine, and if someone is frustrated with it, then maybe it's just not the right programming language for them.

This is not fanboyism.

" people have written apps with vanilla JavaScript for a very long time now, and it works just fine. "

No, it doesn't, or else TS would not exist.

TS just outclasses JS on almost every front. It's not a religious statement; it's generally true: TS is the standard because it simply has too many advantages over JS to let JS rule the roost.

The article is only misleading where it compares TS with other languages for solving other problems.

> " people have written apps with vanilla JavaScript for a very long time now, and it works just fine. "

> No, it doesn't, or else TS would not exist.

Think about what you just wrote there.

It's also entirely possible that a given tool can work fine, but other people can build on top of it without irrationally claiming that said tool doesn't work. Elixir and Kotlin are examples of tools that improve upon other runtimes(Erlang and Java), but I've yet to hear anyone from either language community act so dismissive to anyone who chooses to write applications in Erlang or Java as some TypeScript users are towards vanilla JavaScript users.

Typescript is wonderful, especially as someone used to writing a lot of java/groovy. Typescript is like pouring cement around your javascript house of cards. It makes writing front end code painless and predictable.

It also has amazing tooling in intelli-j. Code completion, linting, package recognition, all the good stuff.

> It also has amazing tooling in intelli-j. Code completion, linting, package recognition, all the good stuff.

... and in the free Visual Studio Code, also crossplatform like IntelliJ.

(Nothing against IntelliJ, JetBrains is a cool company with amazing products IMO, I just prefer VSCode myself.)

I think VS Code is almost like a little brother of IntelliJ now but better at a few places. (Especially rendering.)

>It also has amazing tooling in intelli-j. Code completion, linting, package recognition, all the good stuff.

Yes, this makes writing Typescript actually faster for me than vanilla JS. The code completion in IntelliJ is really amazing.

I wonder what people are using to write code with for those that complain vanilla JS works fine. Vanilla vim?

Regarding TypeScript, the benefit of catching way more errors at compile time greatly outweighs the extra work of adding types. If you're starting a new JavaScript project, you should think long and hard before deciding not to use TypeScript.

And the best part is that TypeScript is just a superset of JS. If you're new to TS, you can use it like you've been using JS. Just by giving your variables the `any` type, or disabling null checking, and so on, you can write JS like you always have. No pressure.

And, eventually, you'll be lulled into properly typing your variables, because damn it those type annotations are super handy! Oh, and you start defining your own interfaces because it makes your code more readable and easier to reason about. Oh, and...etc etc etc.

If you start by gradually adopting the extra features that TS provides, you'll never want to go back.
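A sketch of that gradual path (the names here, like `formatName`, are made up for illustration):

```typescript
// Step 1: rename user.js to user.ts. Annotating the parameter as `any`
// keeps the old dynamic behavior — this is still valid TypeScript.
function formatName(user: any): string {
  return `${user.firstName} ${user.lastName}`;
}

// Step 2, later: an interface makes the shape explicit, so a typo like
// user.fristName becomes a compile-time error instead of "undefined" at runtime.
interface User {
  firstName: string;
  lastName: string;
}

function formatNameTyped(user: User): string {
  return `${user.firstName} ${user.lastName}`;
}
```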

> greatly outweighs the amount of extra work for adding types

this is such a bad excuse, imo - the "work" required to get your wheels off the ground in a JS -> TS transition is literally just changing the filename. 99% of syntax / TS errors that present themselves after that change are bugs that your code already had.

Why does everyone treat TypeScript like a full language? Besides some features like generics, 98% of it is just JavaScript with type annotations (sorta similar, including in syntax, to Python 3 with annotations). Viewed that way, it's probably foolish not to use it, as the cost of these annotations is so low relative to the refactoring/safety/ease-of-use/code-completion/etc. they offer. These type annotations are so easy to add!

I can't even imagine writing JS without these annotations esp given all the odd behaviors of the language.
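To illustrate the point, the delta between the two can be a handful of annotations (hypothetical `add` helper):

```typescript
// Plain JS version: function add(a, b) { return a + b; }
// Nothing stops add("1", 2) from silently producing the string "12".

// The same function with annotations — now add("1", 2) no longer compiles.
function add(a: number, b: number): number {
  return a + b;
}
```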

TypeScript is such an unbelievably powerful add-on that I simply refuse to use vanilla JS unless absolutely necessary. It eliminates entire classes of bugs, encourages me to write better code by thinking in terms of contracts (interfaces), and vastly improves my productivity with the aforementioned annotations. ("What was the signature for method X of class Y in library Z? Ah, that's it. Thanks, TS/VSCode!")

I know... when to use typescript = all the time? It's not a huge shift.

Some people love JS for its functional approach, mainly because they hated the whole OOP paradigm and the design-pattern baggage that came with it.

But now JS is adding classes, and there's TypeScript; now it's like Java all over again.

Welcome to programming, where the fads go back and forth in cycles.

I also read a comment here saying people love React because now they can add code all over their view file.

Well, adding code to the view file is ugly, and there's a reason we tried to get away from doing that.

Sounds to me people want to just go back to the 90s-00s!

> it's actually dangerous for your project to NOT be written using TypeScript today.

Wow! Bye bye Javascript! Such arrogance! In a way it always feels like Typescript developers are way beyond all those poor suckers still coding in Python, Javascript, Ruby, Coffeescript, etc.. Dynamically typed languages are DANGEROUS!!! just as C is DANGEROUS!!! I'm so happy C is still being used and not abandoned in favor of C# or so.

I know I can be way more popular preaching TypeScript nowadays; it would make me really cool, smart, and up to date. Not going for TypeScript proves I'm mediocre at best. This is not cynicism, this is what I actually hear when I talk to fellow web developers.

I believe static type checking should ideally be done by the IDE; we shouldn't need an entire new language for that, with all its shortcomings and issues. And we'll see what's left when the hype is over and the next big thing in the JavaScript world comes around. At the very least, heaps of TypeScript code bases that need to be rewritten.

> static type checking should ideally be done by the IDE, we shouldn't need an entire new language

I am not sure what you are trying to say here. Javascript is barely typed, so presumably you do need a new language to perform type checking.

Unless you mean that your IDE should be able to infer types, which is unlikely because typing defines intent, and we've all read plenty of code where we can't figure out what the code author intended.

Your comment seems to be a result of ignorance of what TypeScript is and how it works.

The main point you're missing is that TypeScript is gradual. It's a superset of JavaScript, meaning that you can use TypeScript when you want it or ignore it when you don't want it.

Any valid JavaScript file is also a valid TypeScript file.

> Dynamically typed languages are DANGEROUS!!!

See above. TypeScript is dynamically typed by default. It just also has a static type checker that you can opt into.
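The opt-in nature shows up directly in `tsconfig.json` — a sketch (the flag names are real; defaults vary across TypeScript versions, and tsconfig.json permits comments):

```json
{
  "compilerOptions": {
    "allowJs": true,          // keep compiling existing .js files as-is
    "noImplicitAny": false,   // start permissive: untyped values stay `any`
    "strictNullChecks": false // flip to true later, once the code base is ready
  }
}
```

Tightening checking is then a matter of flipping these flags to true when the project is ready.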

A good practice in both JavaScript and TypeScript is to use "const" instead of "var" or "let" anyway.

> I believe static type checking should ideally be done by the IDE

JavaScript doesn't have enough explicit information for this to be possible. The IDE can do a lot, but it can't do nearly as much if the developer's intentions are implicit. In TypeScript, the developer has the option (again, not the requirement) to make her intentions explicit.

> we shouldn't need an entire new language for that with all its shortcomings

Again, see above. TypeScript is a superset of JavaScript, not an entirely new language.

> At least heaps of Typescript code bases that need to be rewritten.

No, they won't. TypeScript compilers will still be available, even if they're not actively developed. They produce JavaScript, so worst-case scenario, you'll just have a JavaScript code base.

One challenge I have with TS is that when writing interfaces for untyped libs, it's easy to make a mistake. I've had issues where VS Code was telling me something was the incorrect type because someone else on the team had written an incorrect type in. Instead of debugging it like I would in normal JS, I banged my head against the wall because I was convinced that if TypeScript was telling me something was a certain type, it HAD to be true.

Coming from a C# background, using TypeScript for front-end programming was a breeze for me. It helped me pick up newer frameworks and saved countless hours on debugging and build-time errors. I liked the strict validation of props using interfaces while working with RESTful APIs.

Such a guide can be reduced to:

If you only have a single file that has a limited set of responsibilities (e.g., just a small tool) and is not shared / distributed then JS may still be okay, otherwise always go for TypeScript.

Honestly, TypeScript never slows me down. The enhanced completion / IDE knowledge, compile-time type checking, and transpilation supporting newer ES features + React is a huge boost.

I wonder if some of the people here, who like TypeScript just fine, would consider a less mainstream typed-JS solution.

E.g. Elm, ReasonML/OCaml/BuckleScript, PureScript, Haskell with GHCJS, Rust with WebAssembly compilation, or even wasm from Go?

I am somebody who really likes to dabble, but is not a frontend person, so I am wondering: what would make you consider switching?

What I mainly liked about TS is that it's basically plain JS with some type info sprinkled on top. So no barrier to entry for existing JS developers.

Now I work on a project with Elm for the frontend. It's not as easy to just jump into, so wouldn't use it for a project where lots of developers sometimes have to make additions. But once up to speed, Elm is great.

I know a lot of languages already, so unless there is a significant advantage to using a given language I'd probably pick one of the many I know already.

Typescript has significant advantages over vanilla JS, so that's why I use it. If those other options offered a significant improvement over Typescript AND said features were something I needed, I wouldn't hesitate to use it. For now, getting better at the languages I already know seems like a better use of time.

> I wonder if some of the people here, who like TypeScript just fine, would consider a less mainstream typed-JS solution.

Would love to but TypeScript integrates so seamlessly with JavaScript the language, and the ecosystem, that it's difficult to justify using my preferred language (Scala/Scala.js).

Another thing to note is that working TypeScript with JavaScript feels idiomatic; I can't say that for any other typed-to-js language I've seen or worked with.

Finally, the ecosystem, this is the deal breaker. With TypeScript it's similar to languages on the JVM, you get a huge ecosystem for free. The alternative is manually writing wrappers/interfaces for JavaScript libraries, or, if you're lucky, interface generators are available, but even then there are often caveats/tweaks required to get things working as expected.

Always, unless it's a very short script that uses packages that have no type definitions.

The issue I run into (and maybe there is a quick solution): when I work on a TS React project and want to add a library that does not have typings, I get into a world of hurt trying to quickly add my own typings or fix TS errors without disabling major compiler features. I usually spend an hour or two and give up on the library. Is there a good way to deal with this scenario?

That is a pain point, but I think of it as spending a few hours to save a few weeks down the road.
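For untyped dependencies specifically, the few-hours version is often just a local declaration shim (a sketch; `some-untyped-lib` and `doThing` are placeholders for the real package and the exports you use):

```typescript
// types/some-untyped-lib/index.d.ts
// Minimal shim: declare only the exports you actually use; everything
// else about the module stays invisible to the checker.
declare module "some-untyped-lib" {
  export function doThing(input: string): Promise<string>;
}

// Or, as a last resort, an untyped escape hatch — imports become `any`:
// declare module "some-untyped-lib";
```

Keeping the file somewhere the compiler already includes (or pointing `typeRoots` at it) is enough to make the shim visible; the exact setup depends on your tsconfig.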

Honestly, the only downside to me of typescript is occasional grief from @type libraries or some build complications when you're first getting setup. But having working code completion and non-surprising return values/argument parameters is so much worth the initial minor pain points.

The other issue is dealing with module compilation... this part still always screws me over.

Once it's up and running TS is freaking amazing

I recently learned you can actually have TypeScript annotations without [imo] polluting the syntax or requiring the TS compiler:
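(Presumably something along these lines — a sketch of JSDoc annotations that the TypeScript checker understands; `Point` and `midpoint` are made-up names:)

```javascript
// @ts-check
// point.js — plain JavaScript; the types live in JSDoc comments that the
// TypeScript checker (and VS Code) understand, no compile step needed.

/** @typedef {{ x: number, y: number }} Point */

/**
 * @param {Point} a
 * @param {Point} b
 * @returns {Point}
 */
function midpoint(a, b) {
  return { x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 };
}
```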


Does anyone know if there are limitations to this style?

Yes, I wanted to do this initially, but some things, like function overloads, are kind of a mess. It depends really on what tags you need. You should look through the TypeScript issues related to JSDoc. Additionally, it's a bit of a mess trying to output declarations if you mix TS and JS, because enabling `checkJs` disallows emitting declarations, and declarations are only emitted for TS. You can get around this by setting your editor to check JS but leaving it off in the config. But at that point I have no idea how you'd create documentation for it.

In general, contrary to what you'd expect, documentation options for TS seem poor. TypeDoc is painful and ugly. I'll be trying DocFX next, but it looks like it has a complicated workflow.

Also, you should imo have some TypeScript experience to be able to write the JSDoc annotations in a way that works how you expect (e.g. typing a parameter as an object does not work how you would expect).

So next thing I thought was to just write js and accompany each file with a manual .d.ts definition. This works when you consume the exports of the file but it doesn't when you're within that file (see https://github.com/Microsoft/TypeScript/issues/30304) which makes it useless imo. Also even if it worked, now that I think about it, I'm again not sure how creating documentation for that would work.

In the end I've reluctantly settled for just switching completely to typescript and hopefully the documentation situation will improve when tsdoc gets farther along.

No limitations (other than being a bit more verbose than plain TS); this is canonically supported by the TypeScript-powered tooling that VS Code provides by default on a JS project.


Cool, thanks. I generally love TS, but I have one project which I converted to TypeScript, and then reverted back to ES7, because the project is quite complex and the overhead of TS was not worth it. I wonder if by using this Jsdoc/ts strategy in critical sections I can get 80% of the benefit.

> the overhead of TS

Do you have any concrete examples? I would argue that a complex project that is already in TS benefits more from remaining in TS, so I’m curious what your reasoning was.

1) Import/export syntax hell. This project is 148 Javascript files, 12k LOC. Everything runs in both Node and Browser environments. Without TypeScript I just use CommonJS syntax and rewrite the scripts on the fly via a server-side pass through for a browser environment. But because of the module hell (import/export/script modules/relative vs absolute paths/defaults) I couldn't find a good formula for write once, run everywhere for this large project. I needed to be outputting 2 different builds to 2 different places which led to all kinds of path problems. This is very much unique to this project though, which has unique runtime require requirements.

2) Compile-time overhead when iterating. For many visual development tasks it just takes too long to change something, compile, and see results in the browser. Without TS it's 0 latency. With TS it was a few seconds. Without TS I could make 10 changes with 2 seconds latency, and perhaps I make 1 type mistake costing me 20 seconds of debug time, for a total of 40 seconds. With TS I make 10 changes with 5 seconds latency and zero mistakes, but now the total overhead is 50 seconds. I want to be able to get the documentation and static type checking benefits without giving up that instant feedback.

I suspect you can't do some of the fancier type contortions. For the simple case, I think it works just fine.

My day-to-day value from TypeScript is that when you're on a make-things-happen team that's burdened by type pedants or grouchy C#/Java/whatever "devs" who find themselves career "transitioning" to JavaScript/web apps, you throw them TypeScript as a Turing tarpit, and then everyone shuts up and makes a thing. Dumb arguments get relegated/channeled into PRs/issues on the tsconfig.

I say that half in jest but one of the dumbest & most destructive things in software engineering is tribalism or militant belief systems. Having a “neutral” way to deal with them (or something like “data” seeming like it’s neutral) once and for all does wonders for productivity

People who don't like or trust TypeScript: what's your good-faith case against it? Personal productivity or ugly/confusing syntax are valid reasons, I think.

> I certainly didn't see the benefit... up until I started experiencing some really annoying stuff. Things like builds not failing when they should, buggy code and typos finding their way into production code somehow started to get to me

It's hard for me to understand how anyone can't see the benefit of types at compile time. I mean, if you write more than 100 LOC you are bound to make a mistake that a type system would catch, and think "I wonder if there is a way my build process could catch such a silly error". I can only imagine that such programmers are "write only", in that they add lots of features but rarely need to come back and maintain their code. An IDE with autocomplete may mitigate some mistakes, and give a false sense of security about non-static typing.

On the other hand. If your code base has become so big that you need something like TypeScript to make your life bearable, you're probably doing it wrong.

The appeal of strongly typed languages has much more to do with developers who can't live without the autocomplete feature in their IDE. Especially the army of ASP.NET developers who are used to working with Visual Studio.

We'll probably be seeing more of those code generator patterns with TypeScript soon.

Not saying TypeScript is a bad thing. But strongly typed languages do come with their pitfalls. Massive amounts of code providing a simple CRUD interface. An entire team working on their complex microservice solution which is just wrapping some already existing API. Things that can easily be achieved with a couple of lines of cough PHP cough.

I'm writing a DDD helper library in Typescript (https://www.npmjs.com/package/@node-ts/ddd), but found it more difficult doing something similar in Javascript. The richness and safety that static typings bring to a project compounds as time goes by.

I've also found it easier for new developers to become productive sooner when the language and framework guides them in the direction they want to go. I'm all for anything that supports a positive developer experience, and it's more concise to express my design in terms of static types than it is through mounds of documentation or exploring the code.

Seems like you should just learn javascript instead. Though I can see how Typescript might be easier, depending on your background.

Just use it. I found with auto import and other stuff actually writing TypeScript is almost always faster than JavaScript.

The type system is a little more advanced than languages with nominal type systems like Java. Sometimes typing old code is tricky. But the good news is you can escape by using `any` at any time.

> For example, it's almost always expected that your app is going to still work offline in some capacity; and when users ARE online, it's also usually expected that they're going to get real-time notifications without having to refresh the page.

I never expect something to work while I'm offline, or do they mean the cached contents like a page would work if it were just cached HTML and CSS? As for real-time notifications, I don't know anybody who uses those, the other day someone was telling me how pissed off they were that they always get them from some news site, and they didn't mean to enable them, now they can't find where to turn them off!

Yup- Typescript is mostly used in large-scale enterprise software development where rigorous unit tests are not in place.

I still suspect that a large part of Typescript adoption comes from developers who are used to working in an IDE (Visual Studio, Eclipse, etc), and are uncomfortable with javascript's natural "textfile->compiler->test" workflow.

It's also worth noting that support for TypeScript (that isn't Microsoft marketing) tends to come from outwith established tech environments. Using TypeScript outwith MS environments can be challenging.

Isn't anyone worried that TypeScript will go the way of CoffeeScript?

No, it has a great team and the powerful Microsoft behind it, and MS is dogfooding it enough to make me feel they are not going to pull the plug.

Given the sheer number of Microsoft's own popular products that have significant amounts of code written in TS - from Office web apps to Visual Studio Code - it's not going away any time soon.


I am not. CoffeeScript was never as popular as TypeScript already is. TypeScript is a tool which solves a problem; CoffeeScript is a language (IMHO) no one needed.

To be fair to CoffeeScript, it introduced and popularized features that eventually made it to the EcmaScript standard. You could make a case for it being the kick that started the wave of improvements to JS that we've seen in the past decade.

As for GP's post, I can only hope that Typescript leaves a similar legacy, even if the language itself ceases to exist.

That kind of historical retrospective is an article I would definitely read. As much as Coffeescript seems like a dying language when compared to where JS is today, its appeal was very strong for it to have become a default include in Rails 3.1, and to have been the primary choice for teams like Dropbox and Github [0].

0: https://en.wikipedia.org/wiki/CoffeeScript#Adoption

I'm pretty sure Jeremy Ashkenas has commented before on how he's satisfied with the role CoffeeScript ended up playing. Don't know if there are full-fledged articles on the topic, and I can't tag him here, but if you're really curious you can always reach out at https://twitter.com/jashkenas!

Its syntax and design was very appealing to Ruby developers. I was more surprised when Fog Creek chose it for Trello in '12, but yeah back then the raw Javascript experience was fairly poor, and Typescript would still take a couple of years to be ready for production.

CoffeeScript was born out of frustration with JavaScript’s (then major) shortcomings compared to back-end languages like Ruby and Python.

Try building an app in ES3 JavaScript (i.e. it has to support IE7 with no transpiling) if you haven’t before. It can be rather frustrating if you’re used to modern features.

Prototype.js was the kick that eventually led to the standard library updates in ES5, and (as another post has said) CoffeeScript was the kick that eventually led to language- and syntax-level updates in subsequent versions.

Babel can transpile TypeScript now and strip the types off. So I think it's pretty easy to back out if you ever need to.

FYI, the link to "concrete classes" is incorrectly linking to https://khalilstemmler.com/articles/when-to-use-typescript-g....

I assume the correct link should be pointing to https://khalilstemmler.com/wiki/concrete-class/

I appreciate that. Thank you!

At the risk of being downvoted into Hades, I feel that if your project is complex enough to warrant using TypeScript, it's complex enough to warrant using a more robust and comprehensive language. After using TypeScript extensively on a back-end application in a complex problem domain, I feel that while it is definitely a fantastic improvement to the core language, it doesn't really escape JavaScript's worst issues. Issues like Promise fatigue, a poor ecosystem, and terrible native types are still there under the hood. For the extra tax you pay in terms of build pipeline, tooling, and dealing with the ecosystem, you may as well upgrade to a language more suitable for this kind of development for little extra cost. I just feel this debate is somewhat of a false dichotomy. People in this thread are debating TypeScript vs JavaScript as if these are the only two possible options for web development.

> Promise-fatigue

JavaScript actually handles async IO nicely, and I've never heard this term before (callback hell, yes). So promises and async/await sugar actually make it pretty nice.

> poor ecosystem

The ecosystem is fine, just don't jump on everything new. The well-known problem is the standard library; it is indeed a problem, usually addressed with a mix of additional packages.

> For the extra tax you pay in terms of build pipeline

This pipeline is the norm in frontend development, so if you have people proficient in the tooling, the tax is not that high (of course, it is hard if you only have Java developers).

> Typescript vs Javascript as if these are the only two possible options for web development

They are not, but the nice thing about it is that you can have almost the same tooling as on your frontend, if you have some sort of complex application. So that increases speed of development, and TypeScript gives you some sense of scalability.

The last part is that TypeScript can be adopted incrementally, while other languages would require a complete rewrite (and different deployment, etc.).

I actually just made the term up myself to describe my feelings about the fact that all of the methods in our service layer are just `await this(); await that(); await another(); //...` and so on. Please be aware that this post is describing my own experiences in back-end web-app development using Node. I feel I'm courting more controversy here, but if you're using `async/await` ad nauseam, your app might not actually be as async as you think it is. Of course this is a godsend compared to the callback hell it replaced, but I can't help feeling that this is the right solution to the wrong problem. In my experience, most of the server-side applications I've seen written in Node are only practically asynchronous at the router level. Once you get into the controllers they tend to become entirely procedural and effectively completely synchronous, with the exception of the rare `Promise.all(...).then(...)`. If you're using `await` on every single function call, it stops being syntactic sugar and just becomes syntactic salt.

I completely disagree. Or rather I'd say if you're making a bunch of remote calls, and you need the results before taking the next step, then you're going to need to be doing something like this in any case. FWIW I feel a backend controller in Node is much easier to write, and to understand, with async/await than the equivalent patterns in Java.

As you put it, "Once you get into the controllers they tend to become entirely procedural and effectively completely synchronous" but that's because for most people it's far easier to think about a problem as a series of individual steps. Even then, if you have a bunch of serial awaits it's usually pretty trivial to go back and then await on a Promise.all() if you realize things can be run concurrently.
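That refactor can be a one-line change (sketch; `loadUser`/`loadOrders` are hypothetical stand-ins for real remote calls):

```typescript
// Hypothetical async loaders standing in for real remote calls.
const loadUser = async (id: number) => ({ id, name: "user" + id });
const loadOrders = async (id: number) => [id * 10, id * 20];

// Serial: step 2 needlessly waits for step 1 even though they're independent.
async function handlerSerial(id: number) {
  const user = await loadUser(id);
  const orders = await loadOrders(id);
  return { user, orders };
}

// Concurrent: both calls start immediately; one await collects both results.
async function handlerConcurrent(id: number) {
  const [user, orders] = await Promise.all([loadUser(id), loadOrders(id)]);
  return { user, orders };
}
```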

> easier to think about a problem as a series of individual steps

You make it sound like we can be innovative in how we code, but how can steps that do

- Parse inputs

- Query database

- Return output

be asynchronous?

Unless I'm writing a background job that is decoupled from HTTP requests, everything after the router is synchronous.

I think I might not have explained myself too well up there. I was referring to entirely sequential async calls, like:

    Thing thing = await loadThing();
    Thing other = await thing.doSomething();
    thing = await saveThing();
    // repeat x1000...

I see this over and over.

And what exactly is the problem with that?

"Thing thing = await loadThing(); ": Start loading thing, then await so the main thread can continue doing other stuff while thing is loaded.

"Thing other = await thing.doSomething();": Invoke some other function that does some stuff in the background, then await so the main thread can continue doing other stuff while thing does something.

"thing = await saveThing();": Saving can often be done in parallel so no need to block the App while something is saved. Invoke the save function, then await so the main thread can continue doing other stuff while thing is saved.

This gets you the advantages of async programming while still maintaining a legible coding style that looks similar to common synchronous code.

We have await this() and await that() in Python's aiohttp too, we will have co_await this() and co_await that() in C++, and we have await this() and await that() in C#. This is not something specific to JS/TypeScript...

I never mind writing out the awaits. But that might be because I remember what writing asynchronous code was like 10 years ago, and damn, await is nicer.

I agree. Everything should be synchronous unless stated otherwise. 'await' and 'async' appear almost every few lines, but that is Node.js' problem, not TS'.

I think most people believe async/await gives them all the previous benefits of async, but it doesn't.

All the previous benefits of async? Did you mean promises, callbacks or async? async/await is just syntax sugar over promises.

What benefits don't we get with async/await?

Other languages can be adopted incrementally as well, e.g. ReasonML and ClojureScript. I am not sure how you concluded that other languages need a different deployment. On the front end, everything gets delivered to the browser...

I am not sure about other languages, but I have been programming in Java for many years, and Typescript does what Java does (minus performance on a long running VM on CPU bound computations) at a rate that I assume should be sufficient for most (business) apps, without any bloat.

Can you elaborate on promise fatigue? In our setup we use async/await and the latest ES (although most of us prefer not to use decorators if we don't have to; we only used them with MobX for UI code), bundled through webpack.

I am not sure what you mean by poor ecosystem either. I feel JS (and increasingly TS) has the second-best ecosystem (in terms of libraries and tooling) after a mainstream language like Java.

There are some warts in JS, I think (around numbers, stream handling in pipes), but very few languages don't have any, and it depends on how likely you are to be fiddling in those areas.

Extra tax on build pipeline cannot be escaped for anything with any complexity.

I feel like anyone who can write a complex app in good JS, refactoring along the way at good speed, meeting deadlines without introducing too many regressions, and come back and maintain it after a 6-month context switch, either has way too much time on their hands and/or works too many hours, or is much smarter than I am and has it figured out.

I've had too many terrible experiences having to maintain and work with buggy and poorly designed Node/Typescript packages. I'm especially thinking of the terrible choice of ORMs available for Node, Typescript in particular. TypeORM seems the de-facto standard and my experiences with it have been horrific. It looks like Hibernate but works like a dumpster fire. There isn't anything as robust as ORMs like EF or Hibernate for Node. I feel that the shortcomings of the NPM community have been pointed out one too many times, but on too many occasions I've felt that development of a product was made much harder by the choices of poorly developed frameworks as dependencies.

FWIW I've helped a team of Java developers transition to Node and the first questions I got were usually about standard library and hibernate. One guy was quite convinced that the NPM world just needed a reimplementation of Apache Commons to see the value in monolithic packages. I also had to do a lot of explanation about strategies for code reuse without inheritance/turning everything into classes.

These are all, at the end of the day, XY-style problems whose suboptimal answers can snowball into unmaintainable code. I do think there is a serious lack of training material in this area, though.

The ORM was your first mistake. They almost always get in the way and are way more complex than the relatively simple language (SQL) they are intended to abstract over.

> performance on a long running VM on CPU bound computations

I'm interested in learning more about this, any other keywords I should throw in if I start Googling about it?

True, because you cannot avoid JavaScript. If you are building a new web startup, most likely you are going to write a LOT of React/frontend code.

So it makes absolute sense to write the backend in the same language...if you can. Typescript goes a long way in making this pleasant.

Nothing is stopping you from writing the backend in different language...but as an organization, you lose the advantage of shared thinking in one stack.

Part of the appeal of TypeScript is that you could fairly easily migrate a team to it from JavaScript if they're all fairly comfortable and competent with the current JS ecosystem. For most JS devs I've worked with, this wouldn't be true with a language like C# or Kotlin.

I'd argue that buying into a more comprehensive language would probably be worth it if everyone was into it, but in my own experience JS devs really, really like JS and have very little interest in learning something better suited to the problem they're solving. This is true outside of software too. People love familiar tools and methods, even if they aren't the best ones.

There's also the fact that your team's output will be best based on their enjoyment and engagement. Even if the tooling isn't perfect, they'll probably build better software if they're enjoying it. JS and TS are good enough in most cases to get teams where they need to be.

What other languages would you recommend for frontend web development, given that you'll still need a build pipeline and so on?

My post was directed at back-end development in Typescript/Javascript. My apologies if that wasn't clearer. I don't really have any experience in front-end development in Typescript, so I can't really comment on that. It could be a real improvement there for all I know.

While your parent clarifies that he meant backend, I'll suggest trying one of BuckleScript, Scala.js, or Elm. They are all very well done and actually have wide adoption.

> Issues like Promise-fatigue

Actually, async-await is a nice feature of the JS paradigm that simplifies things and that I miss in the threaded world of Java.

But yes - use TS where you would have used JS, but TS up against other, more classical languages may not be the right choice for a lot of non-web projects.

Yeah, it's a strength in my book. Node makes it trivial to sequence asynchronous tasks, stuff you may be juggling in a server route.

You can start this promise early, await two parallel DB requests + one API request, branch off some more async work depending on a response, await that initial promise that had a head start because now you actually need it, and then run 10 tasks with no more than 3 in flight at a given moment. And the code would look exactly how I just described it. And it's ubiquitous.

It's a good tool in the belt and imo the most trivial async implementation out there.

I'm just jumping into TypeScript after a decent break from production coding. Prior to this break it was Rails, well before that Java.

In my own Hades moment -- Microsoft know how to build great developer tools. The developer ergonomics of TypeScript and Visual Studio Code are excellent. It's really quite surprising to see how far the JavaScript world has come.

Similarly, the DevOps world is becoming much (much) more JavaScript friendly. Serverless, Netlify, and things like AWS Amplify -- all with their foibles, but pushing JavaScript in meaningful ways. I don't think this is the case for many other languages/ecosystems.

I would actually agree with your statement about Typescript and vscode. Working with TS has actually been quite pleasant as far as getting up and running goes. I only ever had to consult the docs a handful of times, and each time my questions were easily answered.

Poor ecosystem? Have you used web frameworks like Express/Koa + a good linter + a modern IDE (e.g. VS Code) and of course npm? It's a breeze working with those tools. Productivity skyrockets, and if you think a little bit about your design decisions, then TypeScript (plus the proper support for it) gives you a huge hand in writing correct code on the first try. I think that if you want your opinion to be taken seriously, you should propose an alternative. I would be interested to see a better ecosystem.

That's why there are compile-to-Js languages that see widespread adoption.

Elm, bucklescript, Scala.js to name a few.

... Clojurescript. A React app is 10x more complicated than a Reagent app. http://reagent-project.github.io/

In the following code segment, are the `[:div`, `[:p`, et al. Reagent or ClojureScript?

  (defn simple-component []
    [:div
     [:p "I am a component!"]
     [:p.someclass
      "I have " [:strong "bold"]
      [:span {:style {:color "red"}} " and red "] "text."]])

Other than Ruby and PHP, I've only really worked with JS/TS on the backend. Which strongly-typed programming language would you recommend on the backend?

Depending on what it is you're doing, I feel that .NET and Java are far superior choices. These are pretty heavy languages, so obviously decide for yourself whether the size of your app justifies using a language like this. My thoughts run along the lines of: "If people are justifying using TypeScript because their projects are large and critical, they're probably large and critical enough to use a language designed for enterprise-scale development." I personally haven't used Go, but I've heard good things about its use in this domain too.

I would like to second this. The idea that everything will move more quickly if the same language is used on the client and server sides is compelling. But .NET and Java have been on the server side a long time, and there are projects for which they are well suited.

TypeScript is very easy to use with Node. You can also use the debugger in WebStorm with it just fine, and you'll be stepping through your TS code instead of your generated JS code.

The problem is really the Node ecosystem, which is a mess and a circus of security issues. Deno[1] may fix most of JS/Node's problems eventually.

For now, if you want a type system that's comparable to TS with a great ecosystem, the best you can get is either F# or Kotlin. If you're not a fan of ML, then you might want to start with Kotlin.

1. https://github.com/denoland/deno
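Stepping through TS instead of generated JS, as described above, depends on source maps being emitted; a minimal tsconfig sketch with the commonly used settings (these particular values are illustrative, not taken from the comment):

```json
{
  "compilerOptions": {
    "target": "es2019",
    "module": "commonjs",
    "outDir": "dist",
    "sourceMap": true,
    "strict": true
  }
}
```

With `sourceMap: true`, Node debuggers in WebStorm or VS Code map breakpoints in the emitted `dist/*.js` back to the original `.ts` files.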

Your DDD link is dead (so technically it’s a DDDD link :)

Haha. Yeah, that's the next article for me to write. I just started this blog recently. My approach has been to track which dead links have been clicked the most and then write about that topic next. DDD is far in the lead. Expect it soon.

Thanks for reading!

This sounded more like an OOP pitch than a TS one.

People don't seem to point this out, but you need a decent editor to benefit from using TS. If your editor doesn't support live checking, then it probably feels like boring additional work, going back and forth between code and compilation error logs.

Recent Typescript convert here: I find Typescript a GREAT tool for those experimental projects. They are the ones that need constant refactoring as you build, which is where Typescript shines.

By the way, fun fact: the script utility for recording a TTY session saves a file whose default name is "typescript".

The link to domain-driven design wasn't useful. The post should explain more about that and SOLID.

Four of the most popular language creators (https://www.youtube.com/watch?v=csL8DLXGNlU) agree that type systems are useful.

I would suggest definitely not using vanilla JS. There are excellent type systems over JS: TypeScript, Scala.js, and BuckleScript to name a few, each with its pros and cons (I don't know what cons BuckleScript has, though; maybe a relatively smaller lib ecosystem).

Not that I don't agree with their opinion, but being a language creator most likely biases you greatly toward overestimating the usefulness of certain features. As such, I wouldn't hold language creators' opinions on the value of certain language features over, say, the opinions of CTOs or VPs of engineering at large companies, especially as it relates to productivity, maintenance, and onboarding.

I use BuckleScript/ReasonML extensively.

The biggest con is definitely the smaller lib ecosystem. Creating bindings for functions is generally relatively painless, but if you're using JavaScript libraries with large API surfaces, that will be a large time sink. Otherwise it's been a joy to use.

Larry Wall is endorsing strong types? I'll have to watch that now. I guess you can use Perl 6 in a way that approximates it, but it's very optional.

I believe the idea is 'gradual typing', where you add types as you firm up an interface.

Lots of the perl6 code I've seen types most stuff at least at interface boundaries.

Does anyone have a timestamp for Larry Wall talking about types?

There are lots of alternatives to vanilla JavaScript. Why stop at types and encapsulation?

