This is the kind of work that gets me excited to live every day.
This really shows, to the point of the article, that TypeScript at scale lives up to its name, albeit with some quirks (and in this case, some of those quirks are due to the environment it operates in). Over the years my experience has also been positive when adopting, most notably, the points about keeping all your packages across projects/repos up to date. In particular, keeping your TS versions rolling upward to prevent definition file incompatibilities is necessary.
I'm curious though, as I'm sure there are several authored libraries (not every repo, I assume, is a pure app): do you guys run into issues where you have to use a lot of complex typings (like using a function type to infer a given type even though the type you're inferring isn't a function, and other type dances), or has this not been the case? I've not had much of an issue with this, though I've had to do some pretty complex `T extends X ? ...` conditional types to get inference working correctly in situations where I want things to be as generic as possible.
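For a concrete example of the kind of dance I mean (using a function type purely to drive inference, even though the type being inferred isn't a function), the classic union-to-intersection trick:

```typescript
// We wrap U in a function type solely so that `infer` runs in a
// contravariant position, which makes the compiler *intersect* the
// inference candidates instead of unioning them.
type UnionToIntersection<U> =
  (U extends any ? (arg: U) => void : never) extends (arg: infer I) => void
    ? I
    : never;

type A = { a: number };
type B = { b: string };

// Resolves to { a: number } & { b: string }
type AB = UnionToIntersection<A | B>;

// Compile-time check: a value of type AB must carry both properties.
const ab: AB = { a: 1, b: "x" };
console.log(ab.a, ab.b);
```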
This was an informative and interesting read, and I'm grateful you guys published it. Keep up the excellent work!
It's true the codebase has some gnarly advanced types (generic, conditional, mapped) for expressing types for constructs created prior to TypeScript being introduced. I suspect we are pushing the limits of the compiler - quite literally, TypeScript has fixed limits on recursion depth that have caused breakages for us in the past. Hopefully as we refactor code with TypeScript in mind we can reduce the overall type complexity.
And only because you asked... if you wish to join us, please check out https://careers.bloomberg.com
edit: just adding, though, that this isn't surprising with a dev base of 2,000 JS developers.
There was a long period of parallel evolution. Over the last few years we've been able to get back on the standards track. The article describes this as one of the guiding principles. It's why we participate in TC39.
Nowadays our tooling stack uses TypeScript, Babel, Rollup & Terser, which I regard as the most mainstream of choices. And we go out of our way to keep things aligned with ECMAScript, e.g. preventing the use of "experimentalDecorators".
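As a sketch of that stance (illustrative option values, not an actual published config), a tsconfig enforcing it might look like:

```jsonc
{
  "compilerOptions": {
    // Stay on the standards track: reject the legacy decorator transform.
    "experimentalDecorators": false,
    // Author and emit standard ES modules; Babel/Rollup/Terser take it from there.
    "module": "esnext",
    "target": "es2019",
    "strict": true
  }
}
```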
The main competing technology at the time was Lua which would probably have been a fine alternative. JS won. Four years later Node was released which took server-side JS to a mainstream audience and helped validate the choice. Ten years later saw the ES6 renaissance and the rise of JS as a credible language for application development. Now TypeScript takes it to the next level, enabling large robust systems to be efficiently built using JS.
I'm pleased we bet on JS.
The main competing technology for what exactly? I could easily think of 10 other server side choices off the top of my head from the 2005 time frame; like Java or C# to name a couple. Something is missing here.
In a situation where you get to build the client all the way to the hardware, you're like Apple. You can probably get more advantage than most by matching client and server more deeply than the usual talk-to-the-API level. In your case, it sounds as though it's Microsoft's TS-in-Chromium spanning both.
Note -- it doesn't need to be specifically Chromium or specifically Node. It just so happens that Node bet on V8 and Chromium is embeddable, so you can get even more leverage out of using the two together. (We actually also use TS with server-side Spidermonkey as well, so TS has definitely proven itself as a better way to interop at the tooling level across these different implementations.)
This all comes with a Titanic-iceberg-sized caveat, however. Keeping up with Chromium is difficult. An embedder is not Google, and embedded use cases are not the web. You can leverage all the collective work put into making the web work quickly, but you have to back it up with your own resources to dig into the source and make changes and contributions where necessary. It's definitely a balancing act!
My immediate thought is that by choosing JS, they attracted young (and cheap) developers.
This predated node, and was just about the time GMail came into the picture.
It’s almost certain that finding actual developers with JS experience (as opposed to web designers with a sprinkling of JS knowledge) was extremely difficult then, so it’s not likely to have been a reason at all.
Our platform allows you to write this all inside one project - so the ability to use a single language across both sides helps app developers maintain mental flow and reduces context-switching. It's even better now that we have TypeScript to perform instant type-checking across the client-server boundary.
Gary Bernhardt shows off similar powers in his awesome video "End-to-End TypeScript: Database, Backend, API, and Frontend": https://www.youtube.com/watch?v=GrnBXhsr0ng
ES Modules are the glue that binds the whole JS ecosystem together. Whilst ESM is the standard, today a lot of people are using ESM only as an authoring format that later gets converted to CommonJS before being published or executed. Many of today's tools rely on this, meaning the ecosystem is partially tied to CommonJS. Migration is happening but it's slow and non-trivial.
In a way, we bypassed the CommonJS era and skipped directly from AMD to ESM. AMD and ESM are pretty much isomorphic and differ only in syntax. You just run a codemod to get from AMD syntax to ESM syntax - semantics are preserved. Whereas the step from CommonJS to ESM does not fully preserve semantics. CommonJS module initializers are always synchronous. ESM can be asynchronous - you can use `await` in the module initializer.
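To illustrate (module and function names hypothetical): AMD and ESM map syntax-to-syntax, while the CommonJS gap shows up in module initializers:

```typescript
// AMD and ESM map one-to-one, so a codemod suffices:
//
//   // AMD                                    // ESM
//   define(["./logger"], function (log) {     import * as log from "./logger";
//     return { run: () => log.info("hi") };   export const run = () => log.info("hi");
//   });
//
// CommonJS is different: require() must return synchronously, while an
// ESM initializer may await. With top-level await a module can do:
//
//   const res = await fetch("https://example.com/config.json");
//   export const config = await res.json();
//
// A CommonJS consumer has no way to say "wait for this module to finish
// initializing", which is why CJS -> ESM migration doesn't always
// preserve semantics.

// Minimal runnable illustration: an async "initializer" that a
// synchronous require() could never wait on.
async function initialize(): Promise<{ ready: boolean }> {
  return { ready: true };
}
const modulePromise = initialize();
modulePromise.then((m) => console.log(m.ready));
```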
The article covers a few of the things we've done to retain a very standard ES Module system that is ECMAScript compliant. I think it is this pursuit of standards and desire for robust interoperable packages that led to some of the surprising discoveries.
The fact that we can consider high-quality sourcemaps a solved problem (during both debugging and consolidated crash telemetry) really helps developers focus on building functionality rather than debugging a build system.
I think most large companies have dedicated tooling teams for these kinds of reasons. A related viewpoint in a related thread: https://www.reddit.com/r/typescript/comments/jrgi8z/10_insig...
Nothing to do with bugs, efficiency, etc... It's all about job creation so that companies can justify receiving more, bigger government contracts and loans on favorable terms from banks and they get more political influence by coercing their employees into voting for specific candidates (more headcount = more voting power).
It's true. The Big TypeScript lobby has been secretly working for years on a product called TypeScript Enterprise Edition. Many jobs will be created to support all those AbstractVirtualFactoryManagers.
This is just as serious as Bjarne Stroustrup's famous leaked interview in which he revealed why he created C++: https://www.stokely.com/lighter.side/stroustrup.html
And that they use their own deno-like JS engine.
The talk describes the system architecture and shows how the IDE is used to create applications.
That does seem a reasonable argument for avoiding enum types in TS. Those have semantics that go beyond how enumerations work in most languages and implementing them in the underlying JS is going to introduce some runtime cost.
I wonder whether their coding standards still permit const enums, though, as they are also new relative to JS but work more like traditional enumerations in C-family languages. These ought to be entirely compiled away, so it seems like the same arguments around efficiency and portability/compatibility wouldn’t apply.
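To illustrate the difference (a rough sketch of what tsc emits, not the exact output):

```typescript
// A regular enum exists at runtime: TypeScript emits an object with
// forward and reverse mappings.
enum Direction {
  Up,   // 0
  Down, // 1
}
// The emitted JS is roughly:
//   var Direction;
//   (function (Direction) {
//     Direction[Direction["Up"] = 0] = "Up";
//     Direction[Direction["Down"] = 1] = "Down";
//   })(Direction || (Direction = {}));

// A const enum leaves no object behind: members are inlined at use sites.
const enum Axis { X, Y }
const chosen = Axis.Y; // compiles to just: const chosen = 1;

console.log(Direction[0]); // "Up" - a runtime reverse-mapping lookup
console.log(chosen);
```

(Note that `const enum` is rejected by single-file transpilers under `isolatedModules`, which is another portability wrinkle.)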
enum is a keyword that is reserved in ECMAScript and therefore may one day clash with TypeScript. It's unlikely any ECMAScript-defined semantics for enum would match today's TypeScript enum. So it's not on any standards track.
There is already a proposal for JS enum that has different semantics to TS enum, and it was created by someone on the TypeScript team: https://github.com/rbuckton/proposal-enum
So the const form doesn't really change the hazard.
I wonder how likely it is that ECMAScript would introduce its own incompatible form of enum now that TS has become so popular. With static type checking as in TS, enums are very useful, but in the more dynamic environment of JS, the benefit is much smaller. So even if there’s nothing wrong with ECMAScript using its reserved keyword in principle, it feels like a bad idea to introduce a potential source of confusion and maybe subtle bugs when so many developers now use both languages.
In terms of whether JS will introduce things that conflict with TS, it's hard to say. TypeScript team themselves are active participants in TC39, championing recent features such as Optional Chaining and Nullish Coalescing. So if there were any conflicts, or even opportunities for confusion, it would all be managed way ahead of time. Several TC39 delegates are users of TypeScript so there is no risk of accidents here (in my opinion).
Sure, and again, at the scale in your organisation, taking an absolute position on this makes a lot of sense and I can respect the principled stance.
On the other hand, the trade-off in this case is giving up a tool that is widely useful immediately in exchange for a potential/hypothetical benefit later. For other development teams, perhaps those alternatives would be weighted differently. And then that in turn might affect how any future changes were viewed by the relevant language committees, who as you say would surely be aware of the implications.
Anyway, thanks for sharing your insights, both here and in the original article. It’s somehow reassuring to me that I’m not the only person in the world who wants their front-end code to continue working for more than five minutes, when it seems like a lot of the front-end community would consider that a pretty good working lifetime for code these days!
Do you think this runtime cost will make much of a difference compared to everything else that goes on in the code? My impression that in the last few years the pure code performance is so fast that it almost doesn’t matter much anymore. Of course unless you manage to do incredibly stupid things.
The primary problem is the potential conflict with future ECMAScript. Multiple TypeScript team members have talked about how this feature is now regretted. The article features Orta's famous meme-slide communicating why runtime features (specifically calling out enum) should not really be in the TypeScript language. Anders talked about it three weeks ago during the Q&A session at TSConf: https://www.youtube.com/watch?v=vBJF0cJ_3G0
enum is highly unlikely to be removed from TypeScript because of the strong commitment to backwards compatibility. But that doesn't mean we should encourage proliferation. Especially when many use-cases can be served more simply by string unions.
type Color = "red" | "blue" | "green";
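Expanding on that, a sketch of how a string union covers a typical enum use-case with zero runtime footprint:

```typescript
type Color = "red" | "blue" | "green";

// No runtime object is emitted for Color, yet the compiler still
// enforces exhaustiveness and rejects invalid values.
function hex(color: Color): string {
  switch (color) {
    case "red":
      return "#f00";
    case "blue":
      return "#00f";
    case "green":
      return "#0f0";
  }
  // Unreachable: the compiler knows every member is handled, so no
  // "function lacks ending return statement" error.
}

console.log(hex("red")); // "#f00"
// hex("purple");        // compile-time error
```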
Perhaps not. It’s hard to imagine a program that you’d write in a language like JS in the first place where the small overhead of a look-up would be significant even for a feature as commonly used as enums.
However, there is a wider issue here than just whether to use one specific TS language feature or not. If you’re establishing principles that are going to affect thousands of developers working on thousands of different programs, which appears to be the case at Bloomberg from what I can see, then erring on the side of caution is a reasonable position to take. It’s difficult to objectively evaluate risk at that scale, whether from individually small but pervasive runtime performance hits or from potential future (in)compatibility with JS.
It’s also important to keep in mind that Bloomberg is very willing to have their best developers go ahead and build actual tooling at the compiler (and language) level to resolve issues they may face.
This occasionally leads to issues when they start diverging from the mainstream too much, but they’re also good about then stepping in and pivoting back to the mainstream development branches, which is kind of what’s happening in this article.
> Having worked there the JS ecosystem was a tremendous advantage
> It’s also important to keep in mind that Bloomberg is very willing to have their best developers go ahead and build actual tooling at the compilers (and language) levels to resolve issues they may face
That's awesome, but from my read of the article, it seems like a lot of fighting to rebuild things that Java just has. And now this whole move to TypeScript to backfill types; again, it seems to me that if they'd gone with Java, much less time would have been spent on devs building their own tooling and compilers and doing massive project migrations.
Until then this is as much as I can share right now:
Also to be clear, we don't have a language monoculture. The app layer and hundreds of services use JS/TS. But the majority of the backend remains C++.
I'd be very interested in an origin story. The emphasis on a tight feedback loop feels very novel for the time, compared with going with Java, for example, which could have been a middle ground between feedback loop and type safety, with much more tooling available at the time. And now, with types being brought back in, build times slow down again and the feedback loop suffers; it seems safety has now been favored over it. Was it a change of heart? What led to that?
Anyways I'll patiently await a maybe blog post about it :)
Thanks for the write-up here, it was very fascinating.
Why do they generate .d.ts files instead of publishing .ts files alongside .js files? This should solve inlining issue, no?
It's true you could just have every package publish the raw TypeScript source-code. So that when an app imports a library, it type-checks against the original source code of the library. No need for DTS files from your dependencies!
I have never seen this done in practice for a large system. It's not as scalable as using bare-minimum type declarations that you find in DTS files. It requires more parsing, and potentially more time to accumulate the types. Whereas DTS emit in TypeScript can flatten the resulting type into the DTS file.
But if performance didn't matter, I think this would probably work. And it would eliminate the class of bugs where the generated DTS files are not 100% semantically identical to the original source code. That's another edge-case finding I omitted from the document but hope to write up another day.
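A sketch of the flattening mentioned above (library and names invented for illustration):

```typescript
// Hypothetical library source (index.ts).
const defaults = { retries: 3, verbose: false };

export type Options = Partial<typeof defaults>;

export function run(opts: Options): number {
  return opts.retries ?? defaults.retries;
}

// Declaration emit "flattens" the computed type; the generated .d.ts
// is roughly:
//
//   export declare type Options = Partial<{ retries: number; verbose: boolean }>;
//   export declare function run(opts: Options): number;
//
// Consumers type-check against this small closed form instead of
// re-deriving it from the implementation - which is what makes DTS
// consumption cheaper than re-checking raw library sources.

console.log(run({}));
```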
A developer with clean/fresh clones would still notice a big performance difference, but amortized over a number of builds it might not matter in general practice. In theory at least: as Bloomberg notes, project references and incremental builds are still newish and don't yet always have the performance characteristics they should in the wild. That said, the impression is that a lot of what's driving project references/incremental compiles is monorepos at Microsoft (at the very least) that have moved to, or are moving to, TS-only builds with few or no DTS intermediaries. (That's just me reading between the lines, of course; I could be mistaken.) The drive behind project references/incremental compiles sounds like an attempt to handle massive scales of TS files.
But in general, publishing the most strictly-checked TS should work fine when consumed from a weaker tsconfig.
Personally I don't use custom path resolutions, can't comment on that.
ps. oh wait, but why does it matter? A .d.ts file won't be different if you change strictness in tsconfig - so why would it make a difference for published .ts files?
The TS team was adamant they wanted to not be a runtime, i.e. it was JS + Typing only.
Partly I wish they'd just dump ECMA and make TS proper, but perhaps that's just Dart.
The TS team seem to be firmly going in the direction of ECMAScript alignment and the JS + Types model, which is something the article advocates for too. Overall this increases my trust in the technology.
On a related topic Bjarne Stroustrup once said "There are only two kinds of languages: the ones people complain about and the ones nobody uses."
That said, it's MS, not some side project. They could feasibly take the road to 'pure TS' especially now that they have the core following and MS has some brand trust.
There's probably a huge following of folks that would jump on a Pure TS train as long as it wasn't cloistered by all the .Net legacy.
Like PureTS on Mono VM type thing.
I wonder if they have published those rules, I'd like to see them as I'm just diving into TS.
Older libraries that are long abandoned by their maintainers usually don't have anything. At work we pretty much ignore all libraries that don't have types from somewhere these days, but that doesn't eliminate many things.
Having to write the definitions myself usually isn't a huge pain, but it's a big red flag showing that it's probably abandoned, and there's no community support around it anymore (or possibly ever).
It's rare for there to be a breaking change in the JS emit.
The TypeScript team work hard to preserve compatibility. Breaking changes are explicitly managed and communicated ahead of time. There is a concept of a breakage-budget per-release. This means if you stay up to date, the cost of each upgrade should not be huge. Orta and Nathan on the TypeScript team talk about this in this podcast episode: https://dev.to/devteam/devnews-s1e4-typescript-4-0-gitee-chr...
So "keeping up" is not too hard.
A useful example is strict null checks, which started to treat unadorned types as non-nullable (string versus string | null). In most cases the runtime behavior of libraries was already to throw errors when nulls were passed in, and the extra checks helped immensely. In rarer cases it was "oh yeah, null is totally a valid thing to send to this API and it has some defined behavior that we document/should document", and the DTS files were encouraged to annotate their types to include that. In the worst cases, where libraries had such APIs but failed to update their DTS files, there were explicit casting workarounds in consuming projects (the awkward `possiblyNullString as string` and the so-called "damn it" operator `possiblyNullString!`).
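A minimal sketch of those workarounds (hypothetical names throughout):

```typescript
// Under strictNullChecks, `string` no longer admits null.
function shout(s: string): string {
  return s.toUpperCase();
}

// Hypothetical API that may legitimately return null.
function fromEnv(): string | null {
  return "ada"; // stand-in for e.g. a config/env lookup
}

const maybeName = fromEnv();          // type: string | null

// shout(maybeName);                  // error: 'string | null' not assignable
const a = shout(maybeName!);          // the "damn it" (non-null assertion) operator
const b = shout(maybeName as string); // explicit cast workaround

// The honest fix: narrow before calling.
const c = maybeName !== null ? shout(maybeName) : "";

console.log(a, b, c);
```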
It also completely changes the way you write code. Often you can just go to the leaves of your application, make the change you want, and then follow the compiler errors all the way back up to the top. When it compiles, there's a decent chance it will run successfully the first time.
But I'd still place TypeScript firmly in the "worse is better" category (products that are technically inferior to the state of the art, but succeed due to circumstance); it's a band-aid on top of the broken JS ecosystem. Its ad hoc, hodge-podge type system and poor metaprogramming are sad to see in such a popular language.
I'd rather write TypeScript than JS, but as soon as there's an opportunity to write software that works across all platforms without dealing with JS, and with a type system that's actually built on a solid foundation, I'm taking it.
I think it's important to distinguish between "worse is better" and path dependence, which is what you're getting at here.
"Worse is better" is a blank-slate design philosophy. It says a system with more failure modes can be better than a more robust system if allowing those failure modes keeps the system significantly simpler and more understandable to users. A bicycle is "worse is better" compared to a car: you have to check the tire pressure frequently and clean the chain fairly often, but it's easy to put air in the tires, and you can see the chain and tell when it needs to be cleaned.
"Path dependence" means your design is constrained by historical choices that don't benefit future users but are part of the reality you have to design for today. Railway gauges today are not the best width for optimizing shipping efficiency; the standard was chosen in the 1800s before anyone really knew, and it's impossible to change because existing trains would be incompatible with the new gauge.
My usage of "worse is better" means "worse is more marketable, and easier to implement", as opposed to actually being better when adopted. The original use of "worse is better" was not advocating for it as a design style in the general case, but commenting on the survival and adoption characteristics of systems designed in this style, as well as that it's often a good approach in the beginning of a project.
TypeScript makes a simple promise: add types to your existing code base using an easy-to-understand structural type system, and it works on top of JS, just as C was designed as a simple layer on top of assembler.
If C ("Worse is Better") is now being disrupted by Rust ("The Right Thing"), I think we'll see a similar player for JS / TypeScript.
If you're in a position where you can decide what stack a large swath of developers are going to use, you should perhaps keep an eye open to JS / TypeScript alternatives with growing ecosystems. Your choice could turn into a competitive advantage, even if it means a smaller community in the beginning.
But y'know, Electron, yadda yadda,
I end up blowing it with too many plugins so it can be a little slow.
Also the telemetry...
Also, although I love WSL for being able to use Linux on my Windows work machine, it is really slow for me. Trying to use a docker-magento image on Windows under WSL gave me minute-long page loads.
I also hit a bug the other day that would put my vps server at 100% CPU when working remotely with the VSCode ssh remote extension...
So all things considered, VS Code is okay-ish but I'm not really a fan. For Mac I like the new Nova editor from the Panic guys, albeit its newness grants it less community/plugins!
I have been programming since the early 80s, and my experience with JS has been pretty positive, especially compared to the slog through the unreadable mud that is enterprise Java. (If I was going to rage quit on something, it would be the verbose, obtuse, repetitive Java sludge I’ve had to read the last decade and a half)
Typescript solves problems I don’t have. The Jetbrains IDEs do autocomplete just fine on vanilla JS, and the documentation window automatically shows the initialization and any JSDoc for most every symbol the editor cursor touches. I avoid writing code with thousands of global symbols.
OOP languages like Object Pascal and C++ are good for making lemonade from the lemon of manual memory management, but if you have efficient garbage collection, use functional programming techniques rather than OOP bloat, and many self inflicted problems go away.
I hope TS doesn’t turn JS into Enterprise Java, The Next Generation.
Nothing personal, most people seem to prefer that style. Just understand, there is a reason some people run away from the C++ / Java / C# / TS milieu.
No chance. And frankly, I don't get the comparison you are making to C++/Java/C#. TS is fundamentally different than those languages. From structural typing to a default functional/imperative paradigm, it really is just JS with better static analysis.
Your IDE is not as good for JS as it would be for TS. The entire impetus for TS is/was to provide better tooling/ergonomics around JS. Whether or not you need better tooling is a subjective matter.
Also, static typing adds no value at all to programming if your project code is well structured. It's only useful for spaghetti code monoliths.
If you make your code modular with good tests, you won't need type safety because your modules will be simple enough that your tests will easily catch these errors.
My view is that TypeScript encourages developers to write overly complex spaghetti code because it encourages devs to pass around active object references instead of simple primitives like strings, numbers or other primitive data types.
Alan Kay, one of the founders of OOP himself said that the point of OOP is not objects, "The big idea is messaging". Passing around complex typed instances goes completely against that. An instance is not a message. Complex instances (which have their own state and methods) are not messages; they should generally not be used for messaging between different components (I.e. as arguments or return values).
- automatic compilation that is fast
- sourcemaps work
- decoupling of packages to help compatibility
- automatic tsconfig management
I wonder if you would have a different conclusion with these things taken care of.
I agree that TypeScript makes it easier to have functions with complex parameters - because they are now documented and more understandable. So yes, this demands more discipline on the API designer to keep APIs simple.
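A sketch of what I mean by documented complex parameters (hypothetical API, names invented):

```typescript
// The options type documents the parameters, the compiler checks call
// sites, and editors autocomplete the fields.
interface RetryOptions {
  attempts: number;                // how many times to try
  onGiveUp?: (err: Error) => void; // called after the final failure
}

function withRetry<T>(fn: () => T, opts: RetryOptions): T {
  let lastError = new Error("never attempted");
  for (let i = 0; i < opts.attempts; i++) {
    try {
      return fn();
    } catch (e) {
      lastError = e instanceof Error ? e : new Error(String(e));
    }
  }
  opts.onGiveUp?.(lastError);
  throw lastError;
}

console.log(withRetry(() => 42, { attempts: 3 }));
// withRetry(() => 42, { atempts: 3 }); // typo caught at compile time
```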
I find that hard to believe because it's not a solvable problem without type annotations.
Orta, a member of the TypeScript team, describes the reasons why in his video "How Does the TypeScript Team Try to Avoid Negative Effects on the JS Ecosystem" https://www.youtube.com/watch?v=qr0TnQ2mHwY
It directly answers the question of "What would a malicious version of TypeScript look like" and explains why this is unlikely to occur.
JS/TS are used in the Terminal's view layer. You can make services in these languages as well, AFAIK, but I don't think it is very common.
Just to expand on the conversion, whilst all of our apps (and hundreds of services) were migrated from C++ to JS, we still have much more C++ on the backend than we do JS. Thankfully it is not a language monoculture ;-)