Insights from Adopting TypeScript at Scale (techatbloomberg.com)
180 points by robpalmer 3 months ago | 105 comments



I envy this. If you guys are ever hiring, I'd do anything to be a part of such an innovative place!

This is the kind of work that gets me excited to live every day.

This really shows, to the point of the article, that TypeScript at scale lives up to its name, albeit with some quirks (and in this case, some of those quirks are also due to the environment it operates in). My experience over the years has also been positive. Most notably, the points about keeping all your packages across projects/repos up to date ring true; in particular, keeping your TS versions rolling upward to prevent definition-file incompatibilities is necessary.

I'm curious, though: as I'm sure there are several authored libraries (not every repo, I assume, is a pure app), do you guys run into issues where you have to use a lot of complex typings (like using a function type to infer a given type even though the type you're inferring isn't a function, and other type dances), or has this not been the case? I've not had much of an issue with this, though I've had to do some pretty complex `extends [[type]] ?` conditional types to get inference to work correctly in situations where I want things to be as generic as possible.
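To illustrate the kind of type dance I mean, here's a minimal sketch (my own toy example, not code from the article) using conditional types with `infer`:

```typescript
// Infer the element type of an array, falling back to the type itself.
type ElementOf<T> = T extends (infer U)[] ? U : T;

// Infer a function's return type -- the pattern behind the built-in
// ReturnType<T> utility -- even when T is otherwise never called.
type MyReturnType<T> = T extends (...args: any[]) => infer R ? R : never;

// Type-level checks: these annotations only compile if inference worked.
const el: ElementOf<string[]> = "hello";     // ElementOf<string[]> is string
const ret: MyReturnType<() => number> = 42;  // MyReturnType<() => number> is number

console.log(el, ret);
```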

This was an informative and interesting read, and I'm grateful you guys published it. Keep up the excellent work!


Thanks for the feedback - I'm pleased you enjoyed it.

It's true the codebase has some gnarly advanced types (generic, conditional, mapped) for expressing types for constructs created prior to TypeScript being introduced. I suspect we are pushing the limits of the compiler - quite literally TypeScript has fixed limits on recursion depth that have caused us breaks in the past. Hopefully as we refactor code with TypeScript in mind we can reduce the overall type complexity.

And only because you asked... if you wish to join us, please check out https://careers.bloomberg.com


I've seen some shops do things like this: making their own tools to get around odd shitshows they've gotten themselves into. I suspect a number of people within the org aren't fans of Webpack or some other set of tools, and that this drove some of the decisions with the consequences described in the article. They didn't quite state it, but I suspect they're trying to make TypeScript work on a specific stack instead of a more obvious one.

edit: just adding though, that this isn't surprising with a dev base of 2000 js developers.


I think the key here is:

> Back in 2005, the company started migrating those apps from Fortran and C/C++ to server-side JavaScript

Since their use of Javascript predates Node (and the rest of that ecosystem, like Webpack or Babel or whatever), they've built up their own Javascript environment that doesn't rely on Node or Node conventions at all. Not so much a "get around odd shitshows" situation as much as a parallel evolution.


Yes - you deduced correctly!

There was a long period of parallel evolution. Over the last few years we've been able to get back on the standards track. The article describes this as one of the guiding principles. It's why we participate in TC39.

Nowadays our tooling stack uses TypeScript, Babel, Rollup & Terser, which I regard as the most mainstream of choices. And we go out of our way to keep things aligned with ECMAScript, e.g. preventing the use of "experimentalDecorators".
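As a rough sketch (illustrative only, not our exact configuration), a tsconfig that stays on the standards track might look like:

```json
{
  "compilerOptions": {
    "strict": true,
    "target": "es2020",
    "module": "esnext",
    // Stay aligned with ECMAScript: no legacy decorator transform,
    // and standard class-field semantics.
    "experimentalDecorators": false,
    "useDefineForClassFields": true
  }
}
```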


Bloomberg often gets accused of "not invented here" syndrome. But oftentimes in the past, it's been the case that what they needed hadn't been invented yet.


Would that make it "invented here" syndrome?


Traditionally NIH is the reason for rejecting things. The inverse would be accepting things because they are invented here.


This is endemic in the finance industry. Build something cutting-edge, rake in a bunch of money, and then let your tech rust.


Why did Bloomberg adopt server-side JS in 2005?


I really want to know the answer to this question, seems a hell of a risk to move to JS prior to Node (for me it’s still a risk even after node because JS!).


In the early 2000s it was recognized that the C/C++ code-build-run-debug development feedback loop was taking too long for efficient application development. 10-30 minutes to try out changes. It would have been risky not to try something new.

JavaScript revolutionized this, bringing the developer feedback loop time back down to a few seconds. Andrew Paprocki's 2011 JSConf talk demonstrates this. https://www.youtube.com/watch?v=ODgs0eWAIKc

The main competing technology at the time was Lua, which would probably have been a fine alternative. JS won. Four years later, Node was released, taking server-side JS to a mainstream audience and helping validate the choice. Ten years later came the ES6 renaissance and the rise of JS as a credible language for application development. Now TypeScript takes it to the next level, enabling large, robust systems to be efficiently built using JS.

I'm pleased we bet on JS.


> The main competing technology at the time was Lua

The main competing technology for what exactly? I could easily think of 10 other server side choices off the top of my head from the 2005 time frame; like Java or C# to name a couple. Something is missing here.


Interpreted languages with no compile time it seems.


It wasn't an option back then, but now, if you wanted both a typed language and rapid code-build-run-debug for the server side, you could go even faster and simpler (snappy single-step compile to native executable) with Go. If you were starting now, would that work even better for you, or would other benefits of server-side JS outweigh the more complex toolchain and deployment?


One benefit of working with JS is that it was an easy transition to providing both client-side and server-side JS within the same application/project. Go would work on the server, but integrating it into a native MSVC Windows app, let alone Chromium, would be... not fun.


This seems to imply something about the client side of your strategy that I wasn't aware of (but maybe I should have realized) that probably matters: that your clients will (almost?) all be Windows machines running the new-MS-strategy MSVC native apps written (at least partly) in JS, where "JS" on Windows now (as in Node) means Chromium specifically.

In a situation where you get to build the client all the way to the hardware, you're like Apple. You can probably get more advantage than most by matching client and server more deeply than the usual talk-to-the-API level. In your case, it sounds as though it's Microsoft's TS-in-Chromium spanning both.


Yes to Chromium, not so much for anything specifically MS-related. Using Chromium as a low-level GDI replacement lets you leverage higher-level constructs and work in web-space to implement rendering, and TS nicely spans between client-side Chromium and server-side V8.

Note -- it doesn't need to be specifically Chromium or specifically Node. It just so happens that Node bet on V8 and Chromium is embeddable, so you can get even more leverage out of using the two together. (We actually also use TS with server-side Spidermonkey as well, so TS has definitely proven itself as a better way to interop at the tooling level across these different implementations.)

This all comes with a Titanic-iceberg-sized caveat, however. Keeping up with Chromium is difficult. Any embedder is not Google, and embedded use cases are not the web. You can leverage all the collective work put into making the web work quickly, but you have to back it up with your own resources to leverage the source and make changes and contributions where necessary.[1] It's definitely a balancing act!

[1]: https://github.com/bloomberg/chromium.bb/branches/active


I still cannot understand why anyone would use JavaScript for server-side components. Even Python would be a better choice at that point if you could get around its multithreading limitations (and if they can build their own node, they can). Java was a well proven tech at that point.

My immediate thought is that by choosing JS, they attracted young (and cheap) developers.


They adopted in 2005. It was almost certainly the opposite then.

This predated node, and was just about the time GMail came into the picture.

It's almost certain that finding actual developers (as opposed to web designers with a sprinkling of JS knowledge) with JS experience was extremely difficult then, so it's not likely to have been a reason at all.


No one should interpret a deep-dive article like this to indicate a language monoculture - we also use many other languages on the server-side including Python and a lot of C++.


That is very true. But I still have a hard time seeing the justification for using JS on the server.


I think TS on the server is a great choice for all the reasons that make TS interesting -- esp. the expressive structural typing system which is not found in the other languages. What is missing is a better runtime that supports true shared-memory (w/o serialization overhead between workers) multi-threading (for async thread pools or true parallel workloads) like other modern servers. This would be ideal for me at the moment.


I like the structural type system, but in my experience its full power isn't unquestionably a good thing for long-lived projects. The simpler something is, the easier it is to maintain.


Agreed, but I think it excels at its intended purpose as a "gradually typed" system, where types can be added in various places and inferred types that "meet in the middle" will structurally check as intended. If you want to simulate nominal typing in places, you can always use tagged unions, as TS now offers several different ways to do this. (However, this info typically does not get erased at run-time, so you get reflection whether you like it or not :))
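For instance, a tagged (discriminated) union gives you nominal-style discrimination, and the tag survives at runtime (a generic sketch, not tied to any particular codebase):

```typescript
// The `kind` tag is a runtime value, so narrowing works in plain JS too.
type Circle = { kind: "circle"; radius: number };
type Square = { kind: "square"; side: number };
type Shape = Circle | Square;

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "square":
      return s.side ** 2;
  }
}

console.log(area({ kind: "square", side: 3 })); // 9
```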


I'll offer one advantage we have nowadays, which is to permit writing and atomically deploying apps that have both client and server parts - there's no need to preserve compatibility or worry about coping with independent versions.

Our platform allows you to write this all inside one project - so the ability to use a single language across both sides helps app developers maintain mental flow and reduces context-switching. It's even better now that we have TypeScript to perform instant type-checking across the client-server boundary.

Gary Bernhardt shows off similar powers in his awesome video "End-to-End TypeScript: Database, Backend, API, and Frontend": https://www.youtube.com/watch?v=GrnBXhsr0ng


I can see the benefit, but the context switch overhead of having two different languages in the same repository is not a productivity issue in my experience.


It's more about setting up the guard-rails ahead-of-time to avoid falling into a hole, rather than getting out of one. We know the way to do this is to stick to standards like ECMAScript.

ES Modules are the glue that binds the whole JS ecosystem together. Whilst ESM is the standard, today a lot of people are using ESM only as an authoring format that later gets converted to CommonJS before being published or executed. Many of today's tools rely on this, meaning the ecosystem is partially tied to CommonJS. Migration is happening but it's slow and non-trivial.

In a way, we bypassed the CommonJS era and skipped directly from AMD to ESM. AMD and ESM are pretty much isomorphic and differ only in syntax. You just run a codemod to get from AMD syntax to ESM syntax - semantics are preserved. Whereas the step from CommonJS to ESM does not fully preserve semantics. CommonJS module initializers are always synchronous. ESM can be asynchronous - you can use `await` in the module initializer.
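A tiny illustration of that semantic gap (a generic example, not code from our stack): an ES module initializer can suspend on `await`, which a synchronous CommonJS `require()` cannot express.

```javascript
// An ES module initializer may be asynchronous: the module body itself
// suspends until the promise settles, and any importer transparently
// waits. CommonJS cannot do this -- require() must return synchronously,
// which is why a mechanical CJS -> ESM conversion can change timing.
const settings = await Promise.resolve({ retries: 3 });
console.log(settings.retries); // 3
```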

The article covers a few of the things we've done to retain a very standard ES Module system that is ECMAScript compliant. I think it is this pursuit of standards and desire for robust interoperable packages that led to some of the surprising discoveries.


The article mentions they have around 2000 engineers, at that scale investing in custom tooling can easily become a significant net gain in productivity and stability, especially when dealing with the JS ecosystem.


Agreed. The article touches on this. When you have hundreds of projects that all target the same tightly-managed evergreen runtime, there's not much justification for having each project select and maintain a different toolchain. It just leads to the same problems being solved over again, and makes it harder for developers to switch between projects.

The fact that we can consider high-quality sourcemaps a solved problem (during both debugging and consolidated crash telemetry) really helps developers focus on building functionality rather than debugging a build system.

I think most large companies have dedicated tooling teams for these kinds of reasons. A related viewpoint in a related thread: https://www.reddit.com/r/typescript/comments/jrgi8z/10_insig...


Corporations seem to be desperately trying to create more jobs at the moment. TypeScript, by being less efficient and adding complexity, allows massive job creation to take place.

Nothing to do with bugs, efficiency, etc... It's all about job creation so that companies can justify receiving more, bigger government contracts and loans on favorable terms from banks and they get more political influence by coercing their employees into voting for specific candidates (more headcount = more voting power).


Busted.

It's true. The Big TypeScript lobby has been secretly working for years on a product called TypeScript Enterprise Edition. Many jobs will be created to support all those AbstractVirtualFactoryManagers.

Undeniable evidence: https://twitter.com/drosenwasser/status/1259946589902106624

This is just as serious as Bjarne Stroustrup's famous leaked interview in which he revealed why he created C++: https://www.stokely.com/lighter.side/stroustrup.html


Fascinating that Bloomberg started adopting server-side JS in 2005 and client-side JS only in 2012.

And that they use their own deno-like JS engine.


The server-side JS architecture created in 2005 was presented at JSConf 2011 by Andrew Paprocki.

https://www.youtube.com/watch?v=ODgs0eWAIKc

The talk describes the system architecture and shows how the IDE is used to create applications.


Please keep in mind the date and that there are many things that are a bit dated in the video! I cringe a bit whenever this comes up — I’m sure others can relate :)


I'd be interested to see numbers comparing the amount of server-side J(ava)Script to client-side in the decade up to 2005, because it was as common as Perl or PHP for much of the first part of my career.


Server-side JavaScript has existed since the language was introduced; Netscape had an application server that used it.


Interesting how much emphasis they seem to place on treating TS as precisely JS + types and nothing more, even to the extent of excluding evolutionary TS features.

That does seem a reasonable argument for avoiding enum types in TS. Those have semantics that go beyond how enumerations work in most languages and implementing them in the underlying JS is going to introduce some runtime cost.

I wonder whether their coding standards still permit const enums, though, as they are also new relative to JS but work more like traditional enumerations in C-family languages. These ought to be entirely compiled away, so it seems like the same arguments around efficiency and portability/compatibility wouldn’t apply.


There isn't much of a difference between a regular enum and a const enum when it comes to usage of the enum keyword. The big difference is what they compile to: a const enum evaporates, inlining its values into the usage sites.
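Sketched concretely (an illustrative example):

```typescript
// A regular enum emits a runtime object (with a reverse mapping for
// numeric members), so Color.Green is a property lookup at runtime.
enum Color { Red, Green, Blue }

// A const enum emits no object at all; the compiler inlines each use
// site, so `Size.Small` below compiles away to the literal 0.
const enum Size { Small, Medium, Large }

const c = Color.Green;  // runtime lookup -> 1
const s = Size.Small;   // compiles to: const s = 0 /* Small */;

console.log(c, s); // 1 0
```

(Note that const enums require whole-program compilation; single-file transpilers such as Babel or `isolatedModules` builds can't inline them.)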

enum is a keyword that is reserved in ECMAScript and therefore may one day clash with TypeScript. It's unlikely any ECMAScript-defined semantics for enum would match today's TypeScript enum. So it's not on any standards track.

There is already a proposal for JS enum that has different semantics to TS enum, and it was created by someone on the TypeScript team: https://github.com/rbuckton/proposal-enum

So the const form doesn't really change the hazard.


Interesting. So even though the const form isn’t affected by the same runtime considerations, you do still prohibit it because of the potential conflict with a future use of the enum keyword by JS? I can understand the reasoning there, particularly at the scale you’re operating at.

I wonder how likely it is that ECMAScript would introduce its own incompatible form of enum now that TS has become so popular. With static type checking as in TS, enums are very useful, but in the more dynamic environment of JS, the benefit is much smaller. So even if there’s nothing wrong with ECMAScript using its reserved keyword in principle, it feels like a bad idea to introduce a potential source of confusion and maybe subtle bugs when so many developers now use both languages.


I would never say never for any potential ECMAScript feature. Each time people say "JS will never do x", a few years later it does. Brendan Eich has a slide on this.

https://www.slideshare.net/BrendanEich/jslol-9539395/110-Alw...

In terms of whether JS will introduce things that conflict with TS, it's hard to say. TypeScript team themselves are active participants in TC39, championing recent features such as Optional Chaining and Nullish Coalescing. So if there were any conflicts, or even opportunities for confusion, it would all be managed way ahead of time. Several TC39 delegates are users of TypeScript so there is no risk of accidents here (in my opinion).


> I would never say never for any potential ECMAScript feature.

Sure, and again, at the scale in your organisation, taking an absolute position on this makes a lot of sense and I can respect the principled stance.

On the other hand, the trade-off in this case is giving up a tool that is widely useful immediately in exchange for a potential/hypothetical benefit later. For other development teams, perhaps those alternatives would be weighted differently. And then that in turn might affect how any future changes were viewed by the relevant language committees, who as you say would surely be aware of the implications.

Anyway, thanks for sharing your insights, both here and in the original article. It’s somehow reassuring to me that I’m not the only person in the world who wants their front-end code to continue working for more than five minutes, when it seems like a lot of the front-end community would consider that a pretty good working lifetime for code these days!


That's a very real trade-off and something we have to carefully make a judgement call on every time. In this specific case the decision is made a lot easier due to the fact that string unions are often a simpler alternative.

  type Color = "red" | "green" | "blue";
Glad to hear you appreciated the article!


> Those have semantics that go beyond how enumerations work in most languages and implementing them in the underlying JS is going to introduce some runtime cost.

Do you think this runtime cost will make much of a difference compared to everything else that goes on in the code? My impression is that in the last few years pure code performance has become so fast that it almost doesn't matter anymore, unless, of course, you manage to do incredibly stupid things.


For our purposes, the primary problem with enums is not the runtime overhead. The runtime code for TS enums is a little verbose but it's not huge. The semantics are a little wacky as described in Axel Rauschmeyer's article: https://2ality.com/2020/01/typescript-enums.html

The primary problem is the potential conflict with future ECMAScript. Multiple TypeScript team members have talked about how this feature is now regretted. The article features Orta's famous meme-slide communicating why runtime features (specifically calling out enum) should not really be in the TypeScript language. Anders talked about it three weeks ago during the Q&A session at TSConf: https://www.youtube.com/watch?v=vBJF0cJ_3G0

enum is highly unlikely to be removed from TypeScript because of the strong commitment to backwards compatibility. But that doesn't mean we should encourage proliferation. Especially when many use-cases can be served more simply by string unions.

  type Color = "red" | "blue" | "green";


> Do you think this runtime cost will make much of a difference compared to everything else that goes on in the code?

Perhaps not. It’s hard to imagine a program that you’d write in a language like JS in the first place where the small overhead of a look-up would be significant even for a feature as commonly used as enums.

However, there is a wider issue here than just whether to use one specific TS language feature or not. If you’re establishing principles that are going to affect thousands of developers working on thousands of different programs, which appears to be the case at Bloomberg from what I can see, then erring on the side of caution is a reasonable position to take. It’s difficult to objectively evaluate risk at that scale, whether from individually small but pervasive runtime performance hits or from potential future (in)compatibility with JS.


> Back in 2005, the company started migrating those apps from Fortran and C/C++ to server-side JavaScript, with client-side JavaScript arriving around 2012

Okay, I need another article about why, in 2005, they chose to rewrite everything in server-side JavaScript. I don't want to say it was a bad decision that led to all the issues and effort this article describes, but it kind of seems like it. In any case, it seems an odd choice to make, especially in 2005, and especially coming from a Fortran/C++ code base. So I'd love to hear about that and a retro on it.


I'd say you're very wrong. Having worked there, I can say the JS ecosystem was a tremendous advantage.

It’s also important to keep in mind that Bloomberg is very willing to have their best developers go ahead and build actual tooling at the compilers (and language) levels to resolve issues they may face.

This occasionally leads to issues when they start diverging from the mainstream too much, but they’re also good about then stepping in and pivoting back to the mainstream development branches, which is kind of what’s happening in this article.


Sorry, I didn't mean to criticize; rather, I find that choice (in 2005) very surprising. So I'm just curious why it would have been made and how it played out.

> Having worked there the JS ecosystem was a tremendous advantage

How so? And to maybe ground my question, in 2005 I'd have expected a switch to Java, C# or Scala. So what advantages were there compared to those for JavaScript?

> It’s also important to keep in mind that Bloomberg is very willing to have their best developers go ahead and build actual tooling at the compilers (and language) levels to resolve issues they may face

That's awesome, but from my read of the article, seems like a lot of fighting to rebuild things that Java just has. And now this whole move to TypeScript to backfill in types, again, it appears to me like maybe if they'd gone with Java, much less time would have been needed on the devs building their own tooling, compilers and doing massive project migrations.


Yes, you're right the JS@Bloomberg origin story deserves to be told in full. Probably by Andrew Paprocki. SpiderMonkey has a starring role.

Until then this is as much as I can share right now: https://news.ycombinator.com/item?id=25068119

Also to be clear, we don't have a language monoculture. The app layer and hundreds of services use JS/TS. But the majority of the backend remains C++.


That's an interesting take. I'm now a Clojure programmer, so I totally understand the importance of a tight feedback loop. I guess in 2005 Lisp would have still been considered too "different" even though it had a longer history of backend development compared to JavaScript and an even tighter loop?

I'd be very interested in an origin story. The emphasis on a tight feedback loop feels very novel for the time, versus going with Java, for example, which could have been a middle ground between feedback loop and type safety, with much more tooling available at the time. And now, with types brought back in and build times slowing the feedback loop again, it seems safety has been favored over it. Was that a change of heart? What led to it?

Anyways I'll patiently await a maybe blog post about it :)

Thanks for the write-up here; it was very fascinating.


"9. Generated declarations can inline types from dependencies"

Why do they generate .d.ts files instead of publishing .ts files alongside .js files? This should solve inlining issue, no?


That's an excellent suggestion!

It's true you could just have every package publish the raw TypeScript source-code. So that when an app imports a library, it type-checks against the original source code of the library. No need for DTS files from your dependencies!

I have never seen this done in practice for a large system. It's not as scalable as using bare-minimum type declarations that you find in DTS files. It requires more parsing, and potentially more time to accumulate the types. Whereas DTS emit in TypeScript can flatten the resulting type into the DTS file.

But if performance didn't matter, I think this would probably work. And it would eliminate the class of bugs where the generated DTS files are not 100% semantically identical to the original source code. That's another edge-case finding I omitted from the document but hope to write up another day.


It works quite well. You can forget about src/ and lib/ directories and just structure the package as you want (i.e. files at the root), generating .js files alongside .ts in the same directory (relative paths to assets stay the same, and import paths are the same in JS/TS, which is very useful). Add a VS Code rule to auto-hide a .js file if a .ts file with the same name exists: `{"files.exclude": {"**/*.js": {"when": "$(basename).ts"}}}`. Coding is a pleasure: you just see and work with TS, and there's no need to generate .js files most of the time (on prepublish, yes).


Given that Bloomberg has already moved most of their infrastructure to TS projects and incremental compiles, performance might not matter or change substantially if they moved to TS-only builds, so long as they preserved project boundaries. Under the hood, the TS compiler in incremental multi-project builds still essentially builds the equivalent of minimal DTS information for each project, but it caches it in a build file that is more compiler-internal than the externally shareable DTS.

A developer with a clean/fresh clone would still notice a big performance difference, but amortized over a number of builds it might not matter in general practice. In theory at least; as Bloomberg notes, projects and incremental builds are still newish and don't yet always have the performance characteristics they should have in the wild. That said, my impression is that a lot of what's driving projects/incremental compiles is monorepos inside Microsoft (at the very least) that have moved, or are moving, to entirely TS-only builds with few or no DTS intermediaries. (That's just me reading between the lines, of course; I could be mistaken.) The drive behind projects/incremental compiles looks like an attempt to handle massive scales of TS files.


Another consideration is that it only works if all the code agrees on tsconfig settings (e.g. strictness settings and path resolution). That can work in a monorepo but won't work if you mix libraries.


Yes, they mention taking over tsconfig management because even generated .d.ts files are sensitive to tsconfig; the article mentions a type being different depending on a tsconfig setting.

But in general, publishing the most strict TS should work fine when used from a weaker tsconfig.

Personally I don't use custom path resolutions, can't comment on that.

ps. oh wait, but why does it matter? A .d.ts file won't be different if you change strictness in tsconfig, so why would it make a difference for published .ts files?


Suppose project B uses lax tsconfig settings. Now suppose strict project A imports from B. If A uses B via d.ts it may be ignorant of most of B's choices. But if A uses B as source, A's strictness settings will now attempt to apply to B.
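A hypothetical sketch of that mismatch (my own example), supposing B was authored with `strictNullChecks` off:

```typescript
// Library B, authored with "strictNullChecks": false. B's own build is
// happy to let callers pass null here, and never checks for it:
function firstChar(s: string): string {
  // (B would export this)
  return s.charAt(0);
}

// If strict project A consumed B only via a generated .d.ts, A would see
// the signature and nothing else. Consuming B *as source* means A's
// compiler re-checks B's implementation under A's strict settings, so
// internal code that B's own lax build accepted can suddenly error.
console.log(firstChar("abc")); // "a"
```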


But you can use compiler option skipLibCheck, right?


It seems what they really want is something that is not javascript.

The TS team was adamant they wanted to not be a runtime, i.e. it was JS + Typing only.

Part of me wishes they'd just dumped ECMA and made TS a proper language, but perhaps that's just Dart.


I can empathize with this view.

JavaScript is bound by backwards compatibility. It is expressed by the soundbite "don't break the web". I would love for typeof null to not be "object", and it seems appealing to say "well if JS won't fix it, maybe TS should."

The problem is that this outcome has many negative effects including loss of trust. Orta (a member of the TypeScript team) describes this exact scenario in his video "How Does the TypeScript Team Try to Avoid Negative Effects on the JS Ecosystem" in which he talks about TypeScript "Embracing and Extending" JavaScript: https://www.youtube.com/watch?v=qr0TnQ2mHwY

The TS team seem to be firmly going in the direction of ECMAScript alignment and the JS + Types model, which is something the article advocates for too. Overall this increases my trust in the technology.

On a related topic Bjarne Stroustrup once said "There are only two kinds of languages: the ones people complain about and the ones nobody uses."


Yes, the market dictated that move, a non-TS JS may have fallen flat.

That said, it's MS, not some side project. They could feasibly take the road to 'pure TS' especially now that they have the core following and MS has some brand trust.

There's probably a huge following of folks that would jump on a Pure TS train as long as it wasn't cloistered by all the .Net legacy.

Like PureTS on Mono VM type thing.


> undesirable features by preventing their use.

I wonder if they have published those rules, I'd like to see them as I'm just diving into TS.


These rules are currently a not-readily-separable part of our build tooling. To provide earlier feedback for developers, we're intending to standardise on an ESLint ruleset that will probably include the `no-restricted-syntax` rule: https://eslint.org/docs/rules/no-restricted-syntax
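A hypothetical sketch of such a ruleset (the selectors here are illustrative; this is not our published config):

```javascript
// Fragment of an .eslintrc.js (sketch). no-restricted-syntax matches AST
// node selectors, so with @typescript-eslint's parser you can ban TS-only
// runtime features such as enums and namespaces at lint time.
const eslintConfig = {
  rules: {
    "no-restricted-syntax": [
      "error",
      {
        selector: "TSEnumDeclaration",
        message: "Prefer a string union type over enum.",
      },
      {
        selector: "TSModuleDeclaration",
        message: "Prefer ES modules over namespaces.",
      },
    ],
  },
};
// module.exports = eslintConfig;
```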


What is the proportion of JS libraries in the wild which provide TS type declaration files, either via DefinitelyTyped or provided by the lib itself? Is it normal to have to write your own declarations, or do most libs provide them these days?


It's actually quite rare to see a library without either first- or third-party support these days. Many new libraries are written in TypeScript, or the authors have taken care to create declaration files or contribute to DefinitelyTyped. If a library reaches any degree of popularity, it usually isn't long before a third-party declaration appears.

Older libraries that are long abandoned by their maintainers usually don't have anything. At work we pretty much ignore all libraries that don't have types from somewhere these days, but that doesn't eliminate many things.


I agree on ignoring libraries without definitions.

Having to write the definitions myself usually isn't a huge pain, but it's a big red flag showing that it's probably abandoned, and there's no community support around it anymore (or possibly ever).


Yep, these days it's basically a proxy for the kinds of smells you look out for when picking a library.


In those cases I will sometimes offer to write their definitions (or, more rarely, a build step that generates definitions from slightly modified existing JSDoc comments) and PR them to the project if they like. How a project responds (or doesn't) to that issue often tells you a lot about its current maintenance habits.


Almost all of them provide them, although sometimes they are slightly out of date. I think I've had to write my own declaration files (or type a library as `any`) for only one or two libraries in years.


As an outsider I'm puzzled how any TS library repo keeps up with the pace of change in TS. With JS you can more or less write it once.


Whilst TS does technically contain breaks, these are normally simple increases in the power of the type checking. It's finding more errors in code that previously seemed fine. Think of it like adding more ESLint rules.

It's rare for there to be a breaking change in the JS emit.

The TypeScript team work hard to preserve compatibility. Breaking changes are explicitly managed and communicated ahead of time. There is a concept of a breakage budget per release. This means if you stay up to date, the cost of each upgrade should not be huge. Orta and Nathan on the TypeScript team talk about this in this podcast episode: https://dev.to/devteam/devnews-s1e4-typescript-4-0-gitee-chr...

So "keeping up" is not too hard.


Do the breaks tend to affect the TS declaration files, or are they more to the language itself?


The majority of breaks are due to the checker getting better. So code that passed now errors. Most of the time pre-existing published DTS files still operate fine.


Yeah, in most cases the breaks are because of how DTS files are interpreted in consuming projects as new strictness checks light up. Most of the time that has seemed to be what the DTS files really intended, and the previous interpretation was less useful.

A useful example is Strict Null Checks, which started to treat unadorned types as non-nullable (`string` versus optional `string?` versus `string | null`). In most cases the runtime behavior of libraries was already to throw errors when nulls were passed in, and the extra checks helped immensely. In rarer cases it was "oh yeah, null is totally a valid thing to send to this API and it has some defined behavior that we document/should document", and the DTS files were encouraged to annotate their types to include that. In the worst cases, where libraries had such APIs but failed to update their DTS files, there were explicit casting workarounds in consuming projects (the awkward `null as any` and the so-called "damn it" operator, `possiblyNullString!`).
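A minimal sketch of that migration in TypeScript (the function names are hypothetical, not from any real library), showing both an explicitly nullable signature and the non-null-assertion workaround:

```typescript
// Under strictNullChecks a bare `string` parameter rejects null, so a
// library whose runtime genuinely accepts null must say so in its types.
function shout(s: string): string {
  return s.toUpperCase();
}

function shoutOrDefault(s: string | null): string {
  return s === null ? "(silence)" : s.toUpperCase();
}

function readInput(): string | null {
  return "hi"; // stands in for a possibly-null source
}

const maybe = readInput(); // typed string | null

// shout(maybe);                    // compile error: `maybe` might be null
const forced = shout(maybe!);       // the "damn it" (non-null assertion) workaround
const safe = shoutOrDefault(maybe); // fine: the declaration admits null
console.log(forced, safe);
```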


The breaking changes don't change runtime behaviour, just the type checking. It's more like the linter is getting stricter over time. Your tests will still pass at runtime, but the code might start failing at compile time.

Also, for the cost of dealing with a breaking linter once in a while, it helps enormously with upgrading JavaScript libraries, provided they have up-to-date type definitions. If an API changes, you are immediately alerted to many of the places where your previous assumptions are now wrong.

It also completely changes the way you write code. Often you can just go to the leaves of your application, make the change you want, and then follow the compiler errors all the way back up to the top. When it compiles, there is a decent chance that it will run successfully the first time.


Keeping up with JS library upgrades is a massive pain, and TS makes it a lot easier


I thought Bloomberg was into Bucklescript/ReasonML. Warring factions?!


Can't beat a better ecosystem, better integration with that ecosystem, and letting another company (Microsoft) do the work for you.


The TypeScript ecosystem is amazing - partly because of the large number of users, but also because of the sense of community. It is run as a true OSS project. Roadmaps and release plans are all public on GitHub. Even as outsiders we've been able to contribute significant features, e.g. Private Fields in TypeScript 3.8.

https://devblogs.microsoft.com/typescript/announcing-typescr...


Yeah, TypeScript is the only thing that makes me hate/doubt M$ a little less...


TypeScript was in the right place at the right time, and the team prioritized the right features in the beginning, allowing for the great adoption.

But I'd still place TypeScript firmly in the "worse is better" category (products that are technically inferior to the state of the art, but succeed due to circumstance); it's a band-aid on top of the broken JS ecosystem. Its ad hoc, hodge-podge type system and poor meta-programming are sad to see in such a popular language.

I'd rather write TypeScript than JS, but as soon as there's an opportunity to write software that works across all platforms without dealing with JS, and with a type system that's actually built on a solid foundation, I'm taking it.


> But I'd still place TypeScript firmly in the "worse is better" category (products that are technically inferior to the state of the art, but succeed due to circumstance); it's a band-aid on top of the broken JS ecosystem.

I think it's important to distinguish between "worse is better" and path dependence, which is what you're getting at here.

"Worse is better" is a blank slate design philosophy. It says a system with more failures modes can be a better than a more robust system if allowing those failure modes keeps the system significantly simpler and more understandable to users. A bicycle is "worse is better" compared to a car. You have to check the tire pressure frequently, and clean the chain fairly often. But it's easy to put air in the tires and you can see the chain and tell when it needs to be cleaned.

"Path dependence" means your design is constrained by historical choices that don't benefit future users but are part of the reality you have to design for today. Train railway gauges today are not the best width for optimizing shipping efficiency. It was chosen in the 1800s before anyone really knew, and it's impossible to change because existing trains would be imcompatible with the new gauge.

TypeScript is an example of path dependence. If the world didn't have a few billion lines of JavaScript code, you could certainly design a better language. Simpler, cleaner, easier to statically type, more efficient. But the world does have all that JS code, and TS is the railway that can still run those old trains.


Yeah, I agree it was lazy of me to define "worse is better" the way I did; I think they're two separate thoughts. But I still think TypeScript uses the "worse is better" approach to design, in the sense that C used it.

My usage of "worse is better" means "worse is more marketable, and easier to implement", as opposed to actually being better when adopted. The original use of "worse is better" was not advocating for it as a design style in the general case, but commenting on the survival and adoption characteristics of systems designed in this style, as well as noting that it's often a good approach at the beginning of a project.

TypeScript makes a simple promise: add types to your existing code base using an easy-to-understand structural type system, and it works on top of JS, just as C was designed as a simple layer on top of assembly.

If C ("Worse is Better") is now being disrupted by Rust ("The Right Thing"), I think we'll see a similar player for JS / TypeScript.

If you're in a position where you can decide what stack a large swath of developers are going to use, you should perhaps keep an eye open to JS / TypeScript alternatives with growing ecosystems. Your choice could turn into a competitive advantage, even if it means a smaller community in the beginning.


What about VS Code?


Yeah I use VS Code,

But y'know, Electron yadda yadda. I end up loading it up with too many plugins, so it can be a little slow.

Also the telemetry...

Also, although I love WSL for being able to use Linux on my Windows work machine, it is really slow for me. Trying to use a docker-magento image on Windows under WSL gave me minute-long page loads.

I also hit a bug the other day that would put my VPS at 100% CPU when working remotely with the VS Code SSH remote extension...

So all things considered, VS Code is okay-ish, but I'm not really a fan. For Mac I like the new Nova editor from the Panic guys, albeit its newness grants fewer community plugins!


The bucklescript author now works for Facebook.


Oh wow, they went from C/C++ to JavaScript? That's just trading one dangerous footgun for another. Glad to see they finally found a better tool. I can't function in JavaScript without TypeScript... I constantly have to be looking up docs because I don't have IntelliSense, and once something works I'm terrified to ever touch it again. TypeScript made front-end development reasonable again... not perfect, but not so pathetically bad that I want to rage quit like I did before.


That seems to be a pretty common statement about JavaScript, but strangely not one I share.

I have been programming since the early 80s, and my experience with JS has been pretty positive, especially compared to the slog through the unreadable mud that is enterprise Java. (If I was going to rage quit on something, it would be the verbose, obtuse, repetitive Java sludge I’ve had to read the last decade and a half)

Typescript solves problems I don’t have. The Jetbrains IDEs do autocomplete just fine on vanilla JS, and the documentation window automatically shows the initialization and any JSDoc for most every symbol the editor cursor touches. I avoid writing code with thousands of global symbols.

OOP languages like Object Pascal and C++ are good for making lemonade from the lemon of manual memory management, but if you have efficient garbage collection and use functional programming techniques rather than OOP bloat, many self-inflicted problems go away.

I hope TS doesn’t turn JS into Enterprise Java, The Next Generation.

Nothing personal, most people seem to prefer that style. Just understand, there is a reason some people run away from the C++ / Java / C# / TS milieu.


> I hope TS doesn’t turn JS into Enterprise Java, The Next Generation.

No chance. And frankly, I don't get the comparison you are making to C++/Java/C#. TS is fundamentally different from those languages. From structural typing to a default functional/imperative paradigm, it really is just JS with better static analysis.
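The structural-typing point is easy to demonstrate with a minimal sketch (the names here are illustrative):

```typescript
// An object satisfies an interface by shape alone; no `implements`
// clause or class hierarchy required, unlike nominal typing in Java/C#.
interface Named {
  name: string;
}

function greet(x: Named): string {
  return `hello, ${x.name}`;
}

const point = { name: "origin", x: 0, y: 0 }; // never mentions Named
const msg = greet(point); // accepted: the shape matches
console.log(msg);
```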

Your IDE is not as good for JS as it would be for TS. The entire impetus for TS is/was to provide better tooling/ergonomics around JS. Whether or not you need better tooling is a subjective matter.


Honestly, the IDE could be using TS typings for your JS. Visual Studio does this for the JavaScript we use at work, and we don't use TS.


Have you used TS much? In my experience idiomatic TS isn't much different to idiomatic JS, and a far cry from Java-style OOP code (the Angular ecosystem, which was admittedly an early adopter of TS, is an exception and best avoided).


I used TS for several years and also don't like it. Nothing wrong with the language itself, but the transpilation step adds significant complexity and is very slow, so it breaks my train of thought and slows down my development iteration speed significantly. In addition to slow iteration speed, I got tired of encountering source mapping issues in some environments, version compatibility issues, and TS config issues (especially when config property names change between versions).

Also, static typing adds no value at all to programming if your project code is well structured. It's only useful for spaghetti code monoliths. If you make your code modular with good tests, you won't need type safety because your modules will be simple enough that your tests will easily catch these errors.

My view is that TypeScript encourages developers to write overly complex spaghetti code because it pushes devs to pass around active object references instead of simple primitives like strings and numbers.

Alan Kay, one of the founders of OOP, himself said that the point of OOP is not objects: "The big idea is messaging". Passing around complex typed instances goes completely against that. An instance is not a message. Complex instances (which have their own state and methods) should generally not be used for messaging between different components (i.e. as arguments or return values).


This is interesting. The things you list as problems are the same set of things we tried to solve:

- automatic compilation that is fast

- sourcemaps work

- decoupling of packages to help compatibility

- automatic tsconfig management

I wonder if you would have a different conclusion with these things taken care of.

I agree that TypeScript makes it easier to have functions with complex parameters - because they are now documented and more understandable. So yes, this demands more discipline on the API designer to keep APIs simple.


For simple method and field references, sure, but once you push it with method chains and transformations it becomes pretty underwhelming quickly. TypeScript also isn't invincible here, but it's better. If you are going into a small, well-scoped codebase, vanilla JS should be fine. For a large codebase with plenty of package sharing, good luck in either. To me the investment in TS sounds like someone crossed their arms and said "I'm not doing Java". Their loss; as a Java dev I rarely think about languages. Stuff gets done fast, with few or no errors and no need for special tooling or forced approaches. Kotlin is an area of interest for me these days, but that's about it.


> The Jetbrains IDEs do autocomplete just fine on vanilla JS

I find that hard to believe because it's not a solvable problem without type annotations.


Maybe not in the general case, but practically speaking, static analysis can infer the types for most of the completions you would need in day-to-day development.


This is contrary to my experience. For instance if the types of function parameters aren't specified, the IDE won't have any way of knowing what the types of those parameters are and will therefore not be helpful with completions.
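Both sides of this can be sketched in a few lines (illustrative code; the parameters are typed `any` explicitly here so the sketch compiles under strict settings, standing in for what an unannotated parameter effectively is):

```typescript
// Locals and callback parameters get precise inferred types for free:
const nums = [1, 2, 3];               // inferred as number[]
const doubled = nums.map(n => n * 2); // `n` inferred as number via map's signature

// But a bare function parameter has nothing to infer from, so it is
// effectively `any`: no completions and no checking on `xs` or `x`.
function scale(xs: any, factor: any) {
  return xs.map((x: any) => x * factor);
}
console.log(doubled, scale([1, 2], 10));
```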


What I've found is that senior developers with 10+ years of experience with many different paradigms and projects tend to enjoy JavaScript and many hate TypeScript (particularly because of the build step and complexity that it brings). But for some reason Microsoft is running a lot of propaganda campaigns to promote TypeScript. My guess is that they're trying to push all the JS devs to TS and then once they have enough adoption, they can slowly start diverging from JS standards and take ownership of the ecosystem - This is an old Microsoft trick.


The scenario you are describing (TypeScript diverging from JavaScript) is unlikely in my opinion.

Orta, a member of the TypeScript team, describes the reasons why in his video "How Does the TypeScript Team Try to Avoid Negative Effects on the JS Ecosystem" https://www.youtube.com/watch?v=qr0TnQ2mHwY

It directly answers the question of "What would a malicious version of TypeScript look like" and explains why this is unlikely to occur.


I have found the exact opposite.


C++ is still very much used, along with Python. These languages are generally used for the overwhelming majority of our backend services.

JS/TS are used in the Terminal's view layer. You can make services in these languages as well, AFAIK, but I don't think it is very common.


Glad to hear you are finding TypeScript useful too!

Just to expand on the conversion, whilst all of our apps (and hundreds of services) were migrated from C++ to JS, we still have much more C++ on the backend than we do JS. Thankfully it is not a language monoculture ;-)


Fashion.


You’re telling a team that’s contributed significant amounts of code to C++ and are major contributors to setting the standards for C++ that they moved to JS in 2005 (!) for fashion. Ok.



