Hacker News | DanRosenwasser's comments

Hey bcherny! Yes, dog-fooding (self-hosting) has definitely been a huge part in making TypeScript's development experience as good as it is. The upside is the breadth of tests and infrastructure we've already put together to watch out for regressions. Still, to supplement this I think we will definitely be leaning a lot on developer feedback and will need to write more TypeScript that may not be in a compiler or language service codebase. :D

Interesting! This sounds like a surprisingly hard problem to me, from what I've seen of other infra teams.

Does that mean more "support rotations" for TS compiler engineers on GitHub? Are there full-stack TS apps that the TS team owns that ownership can be spread around more? Will the TS team do more rotations onto other teams at MSFT?


Hi folks, Daniel Rosenwasser from the TypeScript team here. We're obviously very excited to announce this! RyanCavanaugh (our dev lead) and I are around to answer any quick questions you might have. You can also tune in to the Discord AMA mentioned in the blog this upcoming Thursday.

Hey Daniel.

I write a lot of tools that depend on the TypeScript compiler API, and they run in a lot of JS environments, including Node and the browser. The current CJS codebase is even a little tricky to load into standard-JS-module-supporting environments like browsers, so I've been _really_ looking forward to what Jake and others have said will be an upcoming standard-modules-based version.

Is that still happening, and how will the native compiler be distributed for us tools authors? I presume WASM? Will the compiler API be compatible? Transforms, the AST, LanguageService, Program, SourceFile, Checker, etc.?

I'm quite concerned that the migration path for tools could be extremely difficult.

[edit] To add to this as I think about it: I maintain libraries that build on top of the TS API, and are then in turn used by other libraries that still access the TS APIs. Things like framework static analysis, then used by various linters, compilers, etc. Some linters are integrated with eslint via typescript-eslint. So the dependency chain is somewhat deep and wide.

Is the path forward going to be that just the TS compiler has a JS interop layer and the rest stays the same, or are all TS ecosystem tools going to have to port to Go to run well?


I think they answered in their FAQ here: https://github.com/microsoft/typescript-go/discussions/455#d....

If I got it correctly, they created a native Node module that allows synchronous communication over standard I/O between external processes.

So this Node module will make communication possible between the Go TypeScript compiler process, which will expose an "API server compiler", and a client-side JavaScript process.

They don’t think it will be possible to port all APIs and some/most of them will be different than today.


Reading the article, it looks like they are writing Go, so they will probably be distributing Go binaries.

Maybe they'll also distribute WASM builds, which are easier to integrate with JavaScript codebases.

Would running WASM be any faster than running JS in V8?

In my experience it is pretty difficult to make WASM faster than JS unless your JS is really crappy and inefficient to begin with. LLVM-generated WASM is your best bet to surpass vanilla JS, but even then it's not a guarantee, especially when you add js interop overhead in. It sort of depends on the specific thing you are doing.

I've found that as of 2025, Go's WASM generator isn't as good as LLVM's, and it has been very difficult for me to even get parity with vanilla JS performance. There is supposedly a way to use a subset of Go with LLVM for faster WASM, but I haven't tried it (https://tinygo.org/).

I'm hoping that Microsoft might eventually use some of their WASM chops to improve Go's native WASM compiler. Their .NET WASM compiler is pretty darn good, especially if you enable AOT.


I think the Wasm backends for both Golang and LLVM have yet to support the Wasm GC extension, which would likely be needed for anything like real parity with JS. The present approach is effectively including a full GC implementation alongside your actual Golang code and running that within the Wasm linear memory array, which is not a very sensible approach.

The major roadblocks for WasmGC in Golang at the moment are (A) Go expects a non-moving GC which WasmGC is not obligated to provide; and (B) WasmGC does not support interior pointers, which Go requires.

https://github.com/golang/go/issues/63904#issuecomment-22536...
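For anyone unfamiliar with the term, an "interior pointer" is just a pointer into the middle of an object, which ordinary Go code creates all the time. A rough illustration (the `node` type is made up):

```go
package main

import "fmt"

// A made-up AST-ish struct, just to show the pointer shapes involved.
type node struct {
	kind int
	pos  int
}

func main() {
	nodes := make([]node, 3)

	// A pointer to one element of a slice points into the middle of the
	// backing array: an interior pointer.
	p := &nodes[1]
	p.kind = 42

	// A pointer to a single field is likewise interior to the struct.
	q := &nodes[1].pos
	*q = 7

	fmt.Println(nodes[1].kind, nodes[1].pos) // 42 7
}
```

WasmGC references identify whole objects, so neither `p` nor `q` has a direct WasmGC equivalent.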


These are no different than the issues you'd have in any language that compiles to WasmGC, because the new GC'd types are (AIUI) completely unrelated to the linear "heap" of ordinary WASM - they are pointed to via separate "reference" types that are not 'pointers' as normally understood. That whole part of the backend has to be reworked anyway, no matter what your source language is.

Go exposes raw pointers to the programmer, so from your description I think those semantics are too rudimentary to implement Go's; there would need to be a WasmGC 2.0 to make this work.

It sounds like it would be a great fit for e.g. Lua though.


I don't think Go supports any pointer arithmetic out of the box (outside of package unsafe)? What it has in the base language is effectively references.

> the Wasm GC extension, which would likely be needed for anything like real parity with JS

Well, for languages that use a GC. People who are writing WASM that exceeds JS in speed are typically doing it in Rust or C++.


Yeah. If I remember it correctly, you need to compile the GC to run on WASM if the GC extension is not supported.

The GC extension is supported within browsers and other WASM runtimes these days - it's effectively part of the standard. Compiler developers are dropping the ball.

I did some perf benchmarks a few years ago on some JS code vs C code compiled to WASM using clang and running on V8 vs the same C code compiled to x64 using clang.

The few cases that performed significantly better than the JS version (like >2x speed) were integer-heavy math and tail-call optimized recursive code, some cases were slower than the JS version.

What surprised me was that the JS version had performance similar to the x64 version compiled with -O3 in some of my benchmarks (like float64 performance).

This was a while ago though when WASM support had just landed in browsers, so probably things got better now.


Apparently not good enough, given the decision to use Go.

Very likely. Migrating compute-intensive tasks from JavaScript was one of the explicit goals behind the invention of WASM.

Interop with a WASM-compiled Go binary from JS will be slower but the WASM binary itself might be a lot faster than a JS implementation, if that makes sense. So it depends on how chatty your interop is. The main place you get bogged down is typically exchanging strings across the boundary between WASM and JS. Exchanging buffers (file data, etc) can also be a source of slowdown.

Like others, I'm curious about the choice of technology here. I see you went with Go, which is great! I know Go is fast! But it's also a more 'primitive' language (for lack of a better way of putting it) with no frills.

Why not something like Rust? Most of the JS ecosystem that is moving toward faster tools seems to be going straight to Rust (Rolldown, rspack (the webpack successor), SWC, OXC, Lightning CSS / Parcel, etc.), and one of the reasons given is that it has really great language constructs for parsers and traversing ASTs (I think largely due to the existence of `match`, but I'm not entirely sure).

Was any thought given to this? And if so, what were the deciding factors for Go vs. something like Rust or another language entirely?


| with no frills.

People say this like it's a bad thing. It's not, it's Go's primary strength.


I can see the appeal. Not having to write C#-style OOP probably gave the team a huge productivity boost. I bet it compiles hundreds of times faster, making the team, CI/CD, and dev efforts substantially more productive. Cohesive, integrated, modern tooling is also a huge plus. Project structure is considerably simpler... I am not really a Go fan, but I would choose it over C# in the majority of cases as well.

I think they missed out by not going with Rust. It seems like the social factors won out; it was probably hard to quickly assemble a Rust team within MSFT. Again, though, that makes Go a practical choice. I don't see why people are so confused by it. Go is a pretty widely used and solid choice for getting things done reliably and quickly these days.


The reason they didn't do Rust is that porting the compiler was faster and more reliable this way, and Go was a strong match, particularly because of its struct layout, types, and concurrency, but most importantly because it's native code with automatic garbage collection, which Rust simply doesn't have. There's a video of Anders talking specifically about this.

The automatic GC doesn't seem like an actual deal breaker, though. They probably just didn't want to redesign a bunch of data types that assumed one existed.

I'm not saying they made a bad call. I think what they did was smart given the options in front of them and whatever budget they have. The world isn't kind to idealism, but ideally it could have been written in Rust, in my opinion.


Yes. For Webservers. Not for compilers. I wrote a bunch of compilers, and Go is not a language I would choose for this.

Go is exceptionally fast for a transpiler; esbuild is a great example. It's not clear Rust would offer significant gains to justify the costs in adoption and support.

We did anticipate this question, and we have actually written up an FAQ entry on our GitHub Discussions. I'll post the response below. https://github.com/microsoft/typescript-go/discussions/411.

____

Language choice is always a hot topic! We extensively evaluated many language options, both recently and in prior investigations. We also considered hybrid approaches where certain components could be written in a native language, while keeping core typechecking algorithms in JavaScript. We wrote multiple prototypes experimenting with different data representations in different languages, and did deep investigations into the approaches used by existing native TypeScript parsers like swc, oxc, and esbuild. To be clear, many languages would be suitable in a ground-up rewrite situation. Go did the best when considering multiple criteria that are particular to this situation, and it's worth explaining a few of them.

By far the most important aspect is that we need to keep the new codebase as compatible as possible, both in terms of semantics and in terms of code structure. We expect to maintain both codebases for quite some time going forward. Languages that allow for a structurally similar codebase offer a significant boon for anyone making code changes because we can easily port changes between the two codebases. In contrast, languages that require fundamental rethinking of memory management, mutation, data structuring, polymorphism, laziness, etc., might be a better fit for a ground-up rewrite, but we're undertaking this more as a port that maintains the existing behavior and critical optimizations we've built into the language. Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.

Go also offers excellent control of memory layout and allocation (both on an object and field level) without requiring that the entire codebase continually concern itself with memory management. While this implies a garbage collector, the downsides of a GC aren't particularly salient in our codebase. We don't have any strong latency constraints that would suffer from GC pauses/slowdowns. Batch compilations can effectively forego garbage collection entirely, since the process terminates at the end. In non-batch scenarios, most of our up-front allocations (ASTs, etc.) live for the entire life of the program, and we have strong domain information about when "logical" times to run the GC will be. Go's model therefore nets us a very big win in reducing codebase complexity, while paying very little actual runtime cost for garbage collection.

We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code.

Acknowledging some weak spots, Go's in-proc JS interop story is not as good as some of its alternatives. We have upcoming plans to mitigate this, and are committed to offering a performant and ergonomic JS API. We've been constrained in certain possible optimizations due to the current API model where consumers can access (or worse, modify) practically anything, and want to ensure that the new codebase keeps the door open for more freedom to change internal representations without having to worry about breaking all API users. Moving to a more intentional API design that also takes interop into account will let us move the ecosystem forward while still delivering these huge performance wins.


This is a great response but this is "why is Go better than JavaScript?" whereas my question is "why is Go better than C#, given that C# was famously created by the guy writing the blog post and Go is a language from a competitor?"

C# and TypeScript are Hejlsberg's children; C# is such an obvious pick that there must have been a monster problem with it that they didn't think could ever be fixed.

C# has all that stuff that the FAQ mentions about Go while also having an obvious political benefit. I'd hope the creator of said language who also made the decision not to use it would have an interesting opinion on the topic! I really hope we find out the real story.

As a C# developer I don't want to be offended but, like, I thought we were friends? What did we do wrong???


Anders answers that question here - https://www.youtube.com/watch?v=10qowKUW82U&t=1154s

Transcript: "But I will say that I think Go definitely is much more low-level. I'd say it's the lowest level language we can get to and still have automatic garbage collection. It's the most native-first language we can get to and still have automatic GC. In contrast, C# is sort of bytecode-first, if you will. There are some ahead-of-time compilation options available, but they're not on all platforms and don't really have a decade or more of hardening. They weren't engineered that way to begin with. I think Go also has a little more expressiveness when it comes to data structure layout, inline structs, and so forth."


Thanks for the link. I'm not fully convinced by Anders' answer. C# has records, first-class functions, structs, Span. That's a lot of control, and I'd say more than Go offers. I'd even say C# is much closer to TS than Go is. You can use records for the data structures. The only little annoyance is that you need to write the functions as static methods. So an argument for easy translation would lead to C#. Also, C# has advantages over Go, e.g. null safety.

Sure, AOT is not as mature in C#, but is this reason enough to be a show stopper? It seems there are other reasons Anders doesn't want to address publicly. Maybe reasons as simple as "Go is 10 times easier to pick up than C#" and "language features don't matter when the project matters". Those would indeed hurt the image of C#, and Anders obviously doesn't want that.

But I don't see it as big drama.


I don't think there are other reasons.

The side-by-sides that show how Go code is closer to the current TS code (visually) than C# would be are pretty compelling. He made it pretty clear they're "porting" not rewriting.


This is a great link, thank you!

For anyone who can't watch the video, he mentions a few things (summarizing briefly just the linked time code, it's worth a watch):

- Go being the lowest level language that still has garbage collection

- Inline structs and other data structure expressiveness features

- Existing JS code is in a C-like function+data structure style and not an OOP style, this is easier to translate directly to Go while C# would require OOPifying it.


An unpopular pick that is probably more low level than Go but also still has a GC: D. Understandable why you wouldn't pick D though. Its ecosystem is extremely small.

I think you D fans need to dogfood a startup based around it.

It's a fascinating language, but it lacks a flagship product.

I feel the same way about Haxe. Someone created an amazing language, but it lacks a big enough community.

Realistically languages need 2 things for adoption. Momentum and ease of use. Rust has more momentum than ease, but arguably can solve problems higher level languages can't.

I'm half imagining a hackathon like format where teams are challenged to use niche languages. The foundations behind these languages can fund prizes.


Did my post come off as a fan? I directly criticized its ecosystem. It wouldn't be my first pick either. I was just making conversation that there are other options.

And AFAIK Symmetry Investments is that dogfood startup.


A missed opportunity to improve C# by dogfooding it with the TS compiler rewrite.

They are trying to finish their current project and not redo all the projects which their current project may depend upon.

"Finish"?

C# is too old to change that drastically, just like me

> "given that C# was famously created by the guy writing the blog post"

What is this logic? "You worked on C# years ago so you must use C# for everything"?

"You must dictate C# to every team you lead forever, no matter what skills they have"?

"You must uphold a dogma that C# is the best language for everything, because you touched it last"?

Why aren't you using this logic to argue that they should use Delphi or TurboPascal because Anders Hejlsberg created those? Because there is no logic; the person who created hammers doesn't have to use hammers to solve every problem.


Yes, but C# is the Microsoft language, and I would say TypeScript is 2nd place Microsoft language (sorry F# folks - in terms of popularity not objective greatness of course).

So it's not just that the lead architect of C# is involved in the TypeScript changes. It's also that this is under the same roof and the same sign hangs on the building outside for both languages.

If Ford made a car and powered it with a Chevy engine, wouldn't you be curious what was going on also?


funny you bring up this analogy. tons of auto manufacturers these days will license other mfgs' engines and use them in their cars. e.g. a fair number of Ford's cars have had Mazda engines and a fair number of Mazdas have had Ford engines.

Could you give some examples of both? Also, why did they choose to do this?

Toyota 86 and Subaru BRZ are basically the same car. The car was designed by Toyota while Subaru supplied the engine. Just one example.

F# isn't in the running for third either.

Maybe top ten, behind MSSQL, PowerShell, Excel formulae, DAX, etc.


hey, there are dozens of us F# users! dozens!

I do love F#, but its compiler is a rusty set of monkey bars. It's somehow single pass, meaning the type checker will struggle if you don't reorder certain expressions - but also dog slow, especially for `inline` definitions (which work more like templates or hygienic macros than .net generics, and are far more powerful.) File order matters, bafflingly! Newer .net features like spans and ref structs are missing with no clear path to implementation. Doing moderately clever things can cause the compiler to throw weird, opaque, internal errors. F# is built around immutability but there's no integration with the modern .net immutable collections.

It's clearly languishing and being kept alive by a skeleton crew, which is sad, because it deserves better, but I've used research prototypes less clunky than what ought to be a flagship.


> Newer .net features like spans and ref structs are missing with no clear path to implementation

Huh? They're already implemented! It took years and they've still got some rough edges, yes, but they've been implemented for a few years now.

Agreed with the rest, though. As much as I love working with F#, I've jumped ship.


> "So it's not just that the lead architect of C# is involved in the TypeScript changes."

Anders Hejlsberg hasn't been the lead architect of C# for like 13 years. Mads Torgersen is:

https://dotnetcore.show/episode-104-c-sharp-with-mads-torger... - "I got hired by Microsoft 17 years ago to help work on C#. First, I worked with Anders Hejlsberg, who’s sort of the legendary creator and first lead designer of C#. And then when he and I had a little side project with others to do TypeScript, he stayed over there. And I got to take over as lead designer C#. So for the last, I don’t know, nearly a decade, that’s been my job at Microsoft to, to take care of the evolution of the C# programming language"

Years later, "why aren't you using YOUR LANGUAGE, huh? What's the matter, you don't like YOUR LANGUAGE?" is pushy and weird; he's a person with a job, not a religious cult leader.

> "If Ford made a car and powered it with a Chevy engine, wouldn't you be curious what was going on also?"

Like these? https://www.slashgear.com/1642034/fords-powered-by-non-ford-...


> "why aren't you using YOUR LANGUAGE, huh? What's the matter, you don't like YOUR LANGUAGE?" is pushy and weird

It's also not what anyone said.

> It's best not to use quotation marks to make it look like you're quoting someone when you're not. <https://news.ycombinator.com/item?id=21643562>


It's a bad look for both C# and TypeScript. Anybody starting a new code base now would be looking for ways to avoid both and jump right to Go.

I'm struggling to understand how this is a bad look for Typescript. Do you mean that the specific choice of Go reflects poorly on Typescript, or just the decision to rewrite the compiler in a different non-TS language?

If it's the latter, I think the pitch of TS remains the same — it's a better way of writing JS, not the best language for all contexts.


I think a lot of folks downplay the performance costs for the convenience of a shared code-base between the front and backend.

If the TS team is getting a 10x improvement moving from TS to Go, you might imagine you could save about 10x on your server CPU. Or that your backend would be 10x more responsive.

If you have dedicated teams for front and back anyhow, is a 10x slowdown really worth a shared codebase?


if I had to use Go I’d change my career and go do some gardening :)

I actually really enjoy Go. Sure it has a type system I wish was more powerful with lots of weird corners ( https://100go.co/ ), but it also has REALLY GOOD tooling- lots of nice libraries, the compiler is fast, the editor tooling is rock solid, it's easy to add linters to warn you about many issues (golangci-lint), and releasing binaries and updating package repositories is super nice (Goreleaser).

I'd probably have said the same 5 years ago; it's surprising how easily you change sides once you actually use it in a team.

I was mostly joking… some of the most amazing shit code-wise I have seen has been in "non-mainstream" languages (Fortran leads the way here)

I like Anders' answer there. "But you can achieve pretty great things with it".

I had to, and I do think a lot about gardening these days...

If they're writing (actually porting) a _compiler_, perhaps.

Go doesn't run in the browser however (except WASM but that is different).

> Why aren't you using this logic to argue that they should use Delphi or TurboPascal because Anders Hejlsberg created those?

as you know full well, Delphi and Turbo Pascal don't have strong library ecosystems, don't have good support for non-Windows platforms, and don't have a large developer base to hire from, among other reasons. if Hejlsberg was asked why Delphi or Turbo Pascal weren't used, he might give one or more of those reasons. the question is why he didn't use C#, for which those reasons don't apply.


I'm not saying this to start a language war, but: look at the cognitive complexity and tooling complexity involved in a C# project. Seriously, for every speed bump you hit in your IDE, think about how many pieces of knowledge you have to assemble to solve it. Similarly, think about the overhead in designing both the software and the tests. Think about cross-platform builds and the tooling required to stand up ops infrastructure. Measure the compilation time. Think about the impedance mismatch between TS and C#.

Compare that to Go. It's not even close. I see comments bickering about the size of executable files... Almost no major product cares about that within an order of magnitude.

Go is a wild choice to write a compiler in. Literally in my top 10 things I never want to do. Everything else about it drove them to do it.


GP's answer is a great answer to why Go instead of Rust, which u/no_wizard asked about. And the answer to that boils down to the need to traverse data structures in ways which Rust makes difficult, and the simplicity of a GC.



C# is a decently-designed language, but its first principles are being microsoft-y and java-y, which are perhaps two of my least favorite principles. That aside, I've worked on C# backends deployed to lots of Linux boxes and it's not really second-rate these days.

Microsoft's implementation has been cross platform for almost a decade now. You're way too late to the Mono FUD party.

Almost a decade? Amazing. Considering Go has been cross-platform since its inception (nearly twice as long as that), and Rust too, it's no wonder developer mindshare is elsewhere.

It’s a political anti-benefit in most of the open-source world. And C# is not considered a high quality runtime once you leave Windows.

This is Anders Hejlsberg, the creator of C#, working on a politically important project at Microsoft. That's what I mean by political benefit. The larger open source world doesn't matter for this decision which is why this is a simple announcement of an internal Microsoft decision rather than an invitation for comments ahead of time.

I'm sure Microsoft's strategy department would disagree with you. As a C# devotee, I get that you're upset. And you may want to update your priors on where C# sits in Microsoft's current world. But I think it's a mistake to imagine this isn't a well-reasoned decision.

They can disagree if they want but as a career-long Microsoft developer they can't fool me that easily. I'm not even complaining, I'm just stating a fact that high-level steering decisions like this are made in Teams meetings between Microsoft employees, not in open discussion with the community. It's the same in .NET, which is a very open source project whose highest-level decisions are, nonetheless, made in Teams meetings between Microsoft employees and then announced to the public. I'm fine with this but let's not kid ourselves about it.

That said, I must have misstated my opinion if it seems like I didn't think they have a good reason. This is Anders Hejlsberg. The guy is a genius; he definitely has a good reason. They just didn't say what it is in this blog post (but did elsewhere in a podcast video linked in the HN thread).


> The larger open source world doesn't matter for this decision

It obviously does because the larger open source world are huge users of Typescript. This isn't some business-only Excel / PowerBI type product.

To put it another way, I think a lot of people would get quite pissed if tsc was going to be rewritten in C# because of the obvious headaches that's going to cause to users. Go is pretty much the perfect option from a user's point of view - it generates self-contained statically linked binaries.



It would have a substantial risk for the typescript project. Many people would see it as an unwanted and hostile push of a Microsoft technology on the typescript community.

And there would be logistical problems. With Go, you just need to distribute the executable, but with C#, you also need a .NET runtime, and on any platform that isn't Windows that almost certainly isn't already installed. And even if it is, you have to worry about whether the runtime is sufficiently up to date.

If they used C#, there is a chance the community might fork TypeScript, or switch to something else, and that might not be a gamble MS would want to take just to get more exposure for C#.



Okay, not to be petty here, but it's worth noting that on his GitHub he has not starred the dotnet repository but has starred multiple Go repos and multiple other C++ and TS repos.

Modern C# (.NET Core and newer) works perfectly fine on Linux.

> And C# is not considered a high quality runtime once you leave Windows.

By who?


Usually by someone who hasn't used C# since 2015 (when this opinion was fairly valid)

It's always the same response: C# was crappy but it's not crappy anymore. Well, guess what, Go has been not-crappy for a lot longer than C# has been not-crappy; maybe that's part of the reason people like it more.

.NET executables require a runtime environment to be installed.

Go executables do not.

TSC is installed in too many places for that burden to be imposed all of a sudden. It's the same reason Java has had a complicated acceptance history: it's fine in the places where it's pre-installed, but nowhere else.

Node/React/TypeScript developers do not want to install .NET all of a sudden. If that reaction seems overblown, pretend they decided to write it in Java and ask whether you think Node/React/TypeScript developers WANT to install Java.


FYI this hasn’t been the case with C# for a very long time now.

.NET has been able to build a self-contained single-file executable for both the JIT and AOT targets for quite some time. Java also does not require the user to install a runtime: JLink and JPackage have both been around for a long time.

Maybe some other runtimes do this, or it has changed, but in the past, self-contained single-file .NET deployment just meant that it rolled all the files up during publishing; when you ran it, it extracted them to a folder. Not really a single statically linked executable.

You can indeed produce a compiled native executable with minimal bloat: https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...

It hasn't done that in years.

C# AOT filesizes are huge compared to Go.

Do you have data backing that up? Per https://github.com/MichalStrehovsky/sizegame:

C#: 945 kB
Go: 2174 kB

Both are EXEs you just copy to the machine, no separate runtime needed, talks directly to the OS.


I personally find Go miles easier than Rust.

Is this the ultimate reason: Go is fast enough without being overly difficult? I'm humbly open to being wrong.

While I'm here, is there any reason Microsoft isn't sponsoring a solid open-source game engine?

Even a bit of support for Godot's C# (helping them get it working on the web) would be great.

Even better would be a full C# engine with support for web assembly.

https://github.com/godotengine/godot/issues/70796


> Even a bit of support for Godot's C#( help them get it working on web), would be great.

They did that. https://godotengine.org/article/introducing-csharp-godot/

At least some initial grant to get it started.

Getting C# working on the web would be amazing. It is already on the roadmap, but some sponsorship would help tremendously, for sure.


Ok. Credit where credit is due, but considering the sheer value of having the next generation of programmers comfortable with .NET, Microsoft *should* chip in more.

Hasn't Microsoft largely hitched their horse to Go these days, though (not just this project)? They even maintain their own Go compiler: https://github.com/microsoft/go

It is a huge company. They can do more than one thing. C#/.NET certainly isn't dead, but I'm not sure they really care if you do use it like they once did. It's there if you find it useful. If not, that's cool too.


We're talking about a nominal amount of funding to effectively train tens of thousands of developers.

I think Microsoft can find the money if they wanted to.


I'm sure Microsoft could find the money to do a lot of different things. But why that instead of the infinite alternatives that the money could be spent on instead?

History has shown Microsoft abandoning any gamedev toolkit or sdk they “support”. Managed DirectX, XNA, etc.

Personally, I would like them to never touch the game dev side of the market.


"any reason Microsoft isn't sponsoring a solid open source game engine"

I can see them doing this in the future tbh; given how large their Xbox gaming ecosystem is, this path makes a lot of sense since they could cut costs while giving options to their studios or indie developers.


While I'm dreaming of things that will never ever happen, I would absolutely love for them to buy the game engine side of Unity and open source it.

Unless I missed Unity sorting a ton of stuff out, I assume they're going to have to sell themselves off for parts at some point, after the runtime fee fiasco that was supposed to make them profitable led to developers being angry or outright leaving the ecosystem. If that happens, my assumption is MS buys it for this reason, unless the DOJ gets involved for some reason.

> we're undertaking this more as a port that maintains the existing behavior and critical optimizations we've built into the language. Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.

Cool. Can you tell us a bit more about the technical process of porting the TS code over to Go? Are you using any kind of automation or translation?

Personally, I've found Copilot to be surprisingly effective at translating Python code over to structurally similar Go code.


I find the discussion about the choice quite interesting, and many points are very convincing (like the GC one). But I am a bit confused about the comparison between Go and C#. Both should meet most of the criteria like GC, control over memory layout/allocation and good support for concurrency. I'm curious what the weaknesses of C# for this particular use case were that lead to the decision for Go.

Anders answers this in the video. Go is lower level and also closer to JavaScript's programming style. They didn't want to go fully object-oriented for this project.

C# is fine. But last I checked, the AOT compilation generates a bunch of .dll files, which are not suitable for a CLI program like Go's zero dependencies binary.

C# can create single-binary executables, even without native AOT.

They are still going to be significantly bigger than the equivalent Go binary because of the huge .NET runtime, no?


Since this is just Hello World: TinyGo gets it down to 644 kB.

Is this a fair comparison? Won't doing anything more significant than `print` in C# require the .NET framework to be installed (200 MB+)?

No. This is the normal native compilation mode. As you reference more features from either the standard library or dependencies, the size of the binary will grow (sometimes marginally, sometimes substantially if you heavily use struct generics with virtual members), but on average it should scale better than Go's compilation model. Even JIT-based single-file binaries, with trimming, come to about 13-40 MB depending on the task. The runtime itself, AFAIK, if installed separately, is below 100 MB (installing the full SDK takes more space, which is a given).

Spending ages slamming your head on your keyboard because you get a DLL error or similar running a .NET app and just can't find the correct runtime version / download is a great pastime.

Then, when you find the correct version, you have to install both the x86 and x64 versions because the first one you installed doesn't work.

Yeah, great ecosystem.

At least a Go binary runs 99.99999% of the time when you start it.


Depends on how well trimming works. It's probably still larger than Go even with trimming, but Go also has a runtime and won't produce tiny binaries.

You can choose how the linking process is done, just like you can choose to have a Go binary with dependencies.

C# has an option to publish to a single self-contained file.

It would be big enough that people would find it annoying (unless using AOT which is hard).

It seems like, without mentioning any language by name, this answers "why not Rust" better than "why not C#."

I don't think Go is a bad choice, though!


Personally, I want to know why Go was chosen instead of Zig. I think Zig is really more WASM-friendly than Go, and it's much more similar to JavaScript than Rust is.

Memory management? Or a stricter type system?


The first reason in my mind is that there isn't an abundance of Zig programmers internally at Microsoft, in the job market, or in open source. It's probably a fine choice if you're using it for your passion project, e.g. Hashimoto.

Zig isn't memory safe, has regular breaking changes, and doesn't have a garbage collector.

For being production-ready?

So when can we expect Go support in Visual Studio? I am sold by Anders' explanation that Go is the lowest-level language you can use that has garbage collection!

You can also have GC in C++ and generate even faster code.

Thanks for the thoughtful response!

Go is quite difficult to embed in other applications due to the runtime.

What do you see as the future for use cases where the typescript compiler is embedded in other projects? (Eg. Deno, Jupyter kernels, etc.)

There's some talk of an inter-process API, but only vague hand-waving about the technical details so far. What's the vision?

In TS7 will you be able to embed the compiler? Or is that not supported?


Go has buildmode=c-shared, which compiles your program to a C-style shared library with C ABI exports. Any first call into your functions initializes the runtime transparently. It's pretty seamless and automatic, and it'll perform better than embedding a WASM engine.

We are sure there will be a way to embed via something like WebAssembly, but the goal is to start from the IPC layer (similar to LSP), and then explore how possible it will be to integrate at a tighter level.

Golang is actually pretty easy to embed into JS/TS via wasm. See esbuild.

Esbuild is distributed as a series of native executables that are selectively installed by looking at arch and platform. Although you can build esbuild in wasm (and that's what you use when you run it in the browser), what you actually run from .bin in the CLI is a native executable, not wasm.

Why embed it if you can run a process alongside yours and use efficient IPC? I suppose the compiler code should not be in some tight loop where an IPC boundary would be a noticeable slowdown. Compilation occurs relatively rarely, compared to running the compiled code, in things like Node / Deno / Bun / Jupyter. LSPs use this model with a fairly verbose JSON-RPC IPC, and they don't seem to feel slow.

Because running a parallel process is often difficult. In most cases, the question becomes:

So, how exactly is my app/whatever supposed to spin up a parallel process in the OS and then talk to it over IPC? How do you shut it down when the 'host' process dies?

Not vaguely. Not hand wave "just launch it". How exactly do you do it?

How do you do it in environments where that capability (spawning arbitrary processes) is limited? eg. mobile.

How do you package it so that you distribute it in parallel? Will it conflict with other applications that do the same thing?

When you look at, for example, a jupyter kernel, it is already a host process launched and managed by jupyter-lab or whatever, which talks via network chatter.

So now each kernel process has to manage another process, which it talks to via IPC?

...

Certainly, there are no obvious performance reasons to avoid IPC, but I think there are use cases where having the compiler embedded makes more sense.


> So, how exactly is my app/whatever supposed to spin up a parallel process in the OS and then talk to it over IPC?

Usually the very easiest way to do this is to launch the target as a subprocess and communicate over stdin/stdout. (Obviously, you can also negotiate things like shared memory buffers once you have a communication channel, but stdin/stdout is enough for a lot of stuff.)

> How do you shut it down when the 'host' process dies?

From the perspective of the parent process, you can go through some extra work to guarantee this if you want; every operating system has facilities for it. For example, in Linux, you can make use of PR_SET_PDEATHSIG. Actually using that facility properly is a bit trickier, but it does work.

However, since the child process, in this case, is aware that it is a child process, the best way to go about it would be to handle it cooperatively. If you're communicating over stdin/stdout, the child process's stdin will close when the parent process dies. This is portable across Windows and UNIX-likes. The child process can then exit.

> How do you do it in environments where that capability (spawning arbitrary processes) is limited? eg. mobile.

On Android, there is nothing special to do here as far as I know. You should be able to bundle and spawn a native process just fine. Go binaries are no exception.

On iOS, it is true that apps are not allowed to spawn child processes, as far as I am aware. On iOS you'd need a different strategy. If you still want a native code approach, though, it's more than doable. Since you're on iOS, you'll have some native code somewhere. You can compile Go code into a Clang-compatible static library archive, using -buildmode=c-archive. There's a bit more nuance to it to get something that will link properly in iOS, but it is supported by Go itself (Go supports iOS and Android in the toolchain and via gomobile.) Once you have something that can be linked into the process space, the old IPC approach would continue to work, with the semantic caveat that it's not technically interprocess anymore. This approach can also be used in any other situation you're doing native code, as long as you can link C libraries.

If you're in an even more restrictive situation, like, I dunno, Cloudflare Pages Functions, you can use a WASM bundle. It comes at a performance hit, but given that the Go port of the TypeScript compiler is already roughly 3.5x faster than the TypeScript implementation, it probably will not be a huge issue compared to today's performance.

> How do you package it so that you distribute it in parallel? Will it conflict with other applications that do the same thing?

There are no particular complexities with distributing Go binaries. You need to ship a binary for each architecture and OS combination you want to support, but Go has relatively straightforward cross-compiling, so this is usually very easy to do. (Rather unusually, it is even capable of cross-compiling to macOS and iOS from non-Apple platforms. Though I bet Zig can do this, too.) You just include the binary into your build. If you are using some bindings, I would expect the bindings to take care of this by default, making your resulting binaries "just work" as needed.

It will not conflict with other applications that do the same thing.

> When you look at, for example, a jupyter kernel, it is already a host process launched and managed by jupyter-lab or whatever, which talks via network chatter.

> So now each kernel process has to manage another process, which it talks to via IPC?

Yes, that's right: you would have to have another process for each existing process that needs its own compiler instance, if going with the IPC approach. However, unless we're talking about an obscene number of processes, this is probably not going to be much of an issue. If anything, keeping it out-of-process might help improve matters if it's currently doing things synchronously that could be asynchronous.

Of course, even though this isn't really much of an issue, you could still avoid it by going with another approach if it really was a huge problem. For example, assuming the respective Jupyter kernel already needs Node.JS in-process somehow, you could just as well have a version of tsc compiled into a Node-API module, and do everything in-process.

> Certainly, there are no obvious performance reasons to avoid IPC, but I think there are use cases where having the compiler embedded makes more sense.

Except for browsers and edge runtimes, it should be possible to make an embedded version of the compiler if it is necessary. I'm not sure if the TypeScript team will maintain such a version on their own, it remains to be seen exactly what approach they take for IPC.

I'm not a TypeScript Compiler developer, but I hope these answers are helpful in some way anyways.


Thanks for chiming in with these details, but I would just like to say:

> It will not conflict with other applications that do the same thing.

It is possible not to conflict with existing parallel deployments, but depending on your IPC mechanism, it is by no means assured when you're not forking and are instead launching an external process.

For example, it could by default bind a specific default port. This would work in the 'naive' situation where the client doesn't specify a port and no parallel instances are running. ...but if two instances are running, they'll both try to use the same port. Arbitrary applications can connect to the same port. Maybe you want to share a single compiler service instance between client apps in some cases?

Not conflicting is not a property of parallel binary deployment and communication via IPC by default.

IPC is, by definition, intended to be accessible by other processes.

Jupyter kernels, for example, are launched with a specified port and a secret via CLI arguments, if I recall correctly.

However, you'd have to rely on that mechanism being built into the typescript compiler service.

...i.e., it's a bit complicated, right?

Worth it for the speedup? I mean, sure. Obviously there is a reason people don't embed postgres. ...but they don't try to ship a copy of it alongside their apps either (usually).


> Not conflicting is not a property of parallel binary deployment

I fail to see how starting another process under an OS like Linux or Windows can be conflicting. Don't share resources, and you're conflict-free.

> IPC is, by definition intended to be accessible by other processes

Yes, but you can limit the visibility of the IPC channel to a specific process, in the form of a stdin/stdout pipe between processes, which is not shared by any other processes. This is enough of a channel to coordinate creation of a more efficient channel, e.g. a shmem region for high-bandwidth communication, or a Unix domain socket (under Linux, you can open a UDS completely outside of the filesystem tree), etc.

A Unix shell is a thing that spawns and communicates with running processes all day long, and I'm yet to hear about any conflicts arising from its normal use.


This seems like an oddly specific take on this topic.

You can get a conflicting resource in a shell by typing 'npm start' twice in two different shells, and it'll fail with 'port in use'.

My point is that you can do non-conflicting IPC, but by default IPC is conflicting because it is intended to be.

You cannot bind the same port, semaphore, whatever if someone else is using it. That's the definition of having addressable IPC.

I don't think arguing otherwise is defensible or reasonable.

Worrying that a network service might bind the same port as another copy of the same service deployed on the same target by another host is entirely reasonable.

I think we're getting off into the woods here with an arbitrary 'die on this hill' point about semantics which I really don't care about.

TLDR: If you ship an IPC binary, you have to pay attention to these concerns. Pretending otherwise means you're not doing it properly.

It's not an idle concern; it's a real concern that real actual application developers have to worry about, in real world situations.

I've had to worry about it.

I think it's not unfair to think it's going to be more problematic than the current, very easy, embedded story, and it is a concern that simply does not exist when you embed a library instead of communicating using IPC.


> It is possible not to conflict with existing parallel deployments, but depending on your IPC mechanism, it is by no means assured when you're not forking and are instead launching an external process.

Sure, some IPC approaches can run into issues, such as using TCP connections over loopback. However, I'm describing an approach that should never conflict since the resources that are shared are inherited directly, and since the binary would be embedded in your application bundle and not shared with other programs on the system. A similar example would be language servers which often work this way: no need to worry about conflicts between different instances of language servers, different language servers, instances of different versions of the same language server, etc.

There's also some precedent for this approach since as far as I understand it, it's also what the Go-based ESBuild tool does[1], also popular in the Node.JS ecosystem (it is used by Vite.)

> For example, it could by default bind a specific default port. This would work in the 'naive' situation where the client doesn't specify a port and no parallel instances are running. ...but if two instances are running, they'll both try to use the same port. Arbitrary applications can connect to the same port. Maybe you want to share a single compiler service instance between client apps in some cases?

> Not conflicting is not a property of parallel binary deployment and communication via IPC by default.

> IPC is, by definition intended to be accessible by other processes.

Yes, although the set of processes which the IPC mechanism is designed to be accessible by can be bound to just one process, and there are cross-platform mechanisms to achieve this on popular desktop OSes. I cannot speak to why one would choose TCP over stdin/stdout, but I don't expect that tsc will pick a method of IPC that is flawed in this way, since it would not follow precedent anyway. (e.g. tsserver already uses stdio[2].)

> Jupyter kernels for example are launched with a specified port and a secret by cli argument if I recall correctly.

> However, you'd have to rely on that mechanism being built into the typescript compiler service.

> ...ie. it's a bit complicated right?

> Worth it for the speedup? I mean, sure. Obviously there is a reason people don't embed postgres. ...but they don't try to ship a copy of it along side their apps either (usually).

Well, I wouldn't honestly go as far as to say it's complicated. There's a ton of precedent for how to solve this issue without any conflict. I can not speak to why Jupyter kernels use TCP for IPC instead of stdio, I'm very sure they have reasons why it makes more sense in their case. For example, in some use cases it could be faster or perhaps just simpler to have multiple channels of communication, and doing this with multiple pipes to a subprocess is a little more complicated and less portable than stdio. Same for shared memory: You can always have a protocol to negotiate shared memory across some serial IPC mechanism, but you'll almost always need a couple different shared memory backends, and it adds some complexity. So that's one potential reason.

(edit: Another potential reason to use TCP sockets is, of course, if your "IPC" is going across the network sometimes. Maybe this is of interest for Jupyter, I don't know!)

That said, in this case, I think it's a non-issue. ESBuild and tsserver demonstrate sufficiently that communication over stdio is sufficient for these kinds of use cases.

And of course, even if the Jupyter kernel itself has to speak the TCP IPC protocols used by Jupyter, it can still subprocess a theoretical tsc and use stdio-based IPC. Not much complexity to speak of.

Also, unrelated, but it's funny you should say that about postgres, because actually there have been several different projects that deliver an "embeddable" subset of postgres. Of course, the reasoning for why you would not necessarily want to embed a database engine are quite a lot different from this, since in this case IPC is merely an implementation detail whereas in the database case the network protocol and centralized servers are essentially the entire point of the whole thing.

[1]: https://github.com/evanw/esbuild/blob/main/cmd/esbuild/stdio...

[2]: https://github.com/microsoft/TypeScript/wiki/Standalone-Serv...


Javascript is also quite difficult to embed in other applications. So not much has changed, except it's no longer your language of choice.

TypeScript compiles to JavaScript. It means both `tsc` and the TS program can share the same platform today.

With a TSC in Go, it's no longer true. Previously you only had to figure out how to run JS, now you have to figure out both how to manage a native process _and_ run the JS output.

This obviously matters less for situations where you have a clear separation between the build stage and runtime stage. Most people complaining here seem to be talking about environments were compilation is tightly integrated with the execution of the compiled JS.


This is awesome. Thanks to you and all the TypeScript team for the work they put on this project! Also, nice to see you here, engaging with the community.

Porting to Go was the right decision, but part of me would've liked to see a different approach to solve the performance issue. Here I'm not thinking about the practicality, but simply about how cool it would've been if performance had instead been improved via:

- porting to OCaml. I contributed to Flow once upon a time, and a version of TypeScript in OCaml would've been huge in unifying the efforts here.

- porting to Rust. Having "official" TypeScript crates in rust would be huge for the Rust javascript-tooling ecosystem.

- a new runtime (or compiler!). I'm thinking here of an optional, stricter version of TypeScript that forbids all the dynamic behaviours that make JavaScript hard to optimize. I'm also imagining an interpreter or compiler that can then use this stricter TypeScript to run faster or produce an efficient native binary, skipping JavaScript altogether and using types for optimization.

This last option would've been especially exciting since it is my opinion that Flow was hindered by the lack of dogfooding, at least when I was somewhat involved with the project. I hope this doesn't happen in the TypeScript project.

None of these are questions, just wanted to share these fanciful perspectives. I do agree Go sounds like the right choice, and and in any case I'm excited about the improvement in performance and memory usage. It really is the biggest gripe I have with TypeScript right now!


Not Daniel, but I've ported a typechecker from PHP to Rust (with some functional changes) and also tried working with the official Hack OCaml-based typechecker (a precursor to Flow).

Rust and OCaml are _maybe_ prettier to look at, but for the average TypeScript developer Go is a much more understandable target IMO.

Lifetimes and ownership are not trivial topics to grasp, and they add overhead (as discussed here: https://github.com/microsoft/typescript-go/discussions/411) that not all contributors might grasp immediately.


I am curious why .NET was not considered; it should run everywhere Go does, with NativeAOT added on top, so I am especially curious given the folks involved ;)

(FWIW, It must have been a very well thought out rationale.)

Edit: watched the relevant clip from the GH discussion; makes sense. Maybe push NativeAOT to be as good?

I am (positively) surprised Hejlsberg has not used this opportunity to push C#: a rarity in the software world where people never let go of their darlings. :)


Discussion and video link here for anyone else interested: https://github.com/microsoft/typescript-go/discussions/411#d...

And lightly edited transcript here: https://github.com/microsoft/typescript-go/discussions/411#d...


It was considered and tested, just not used in the end.

Well-optimized JavaScript can get to within about 1.5x the performance of C++ - something we have experience with having developed a full game engine in JavaScript [1]. Why is the TypeScript team moving to an entirely different technology instead of working on optimizing the existing TS/JS codebase?

[1] https://www.construct.net/en


Well-optimized JavaScript can, if you jump through hoops like avoiding object creation and storing your data in `Uint8Array`s. But idiomatic, maintainable JS simply can't (except in microbenchmarks where allocations and memory layout aren't yet concerns).

In a game engine, you probably aren't recreating every game object from frame to frame. But in a compiler, you're creating new objects for every file you parse. That's a huge amount of work for the GC.


I'd say that our JS game engine codebase is generally idiomatic, maintainable JS. We don't really do anything too esoteric to get maximum performance - modern JS engines are phenomenal at optimizing idiomatic code. The best JS performance advice is to basically treat it like a statically typed language (no dynamically-shaped objects etc) - and TS takes care of that for you. I suppose a compiler is a very different use case and may do things like lean on the GC more, but modern JS GCs are also amazing.

Basically I'd be interested to know what the bottlenecks in tsc are, whether there's much low-hanging fruit, and if not why not.


Note that games are based on main loops + events, for which JITs are optimized, while compilers are typically single run-to-completion, for which JITs aren't.

So this might be a very different performance profile.

*edit* I had initially written "single-pass", but in the context of a compiler, that's ambiguous.


In other words, you write asm.js, a textual precursor of WebAssembly that is also valid JavaScript, and hope your browser has an asm.js JIT compiler, which it doesn't, because asm.js was replaced by WebAssembly.

Our best estimate for how much faster the Go code is (in this situation) than the equivalent TS is ~3.5x

In a situation like a game engine I think 1.5x is reasonable, but TS has a huge amount of polymorphic data reading that defeats a lot of the optimizations in JS engines that get you to monomorphic property access speeds. If JS engines were better at monomorphizing access to common subtypes across different map shapes maybe it'd be closer, but no engine has implemented that or seems to have much appetite for doing so.


I used to work on compilers & JITs, and 100% this — polymorphic calls is the killer of JIT performance, which is why something native is preferable to something that JIT compiles.

Also for command-line tools, the JIT warmup time can be pretty significant, adding a lot to overall command-to-result latency (and in some cases even wiping out the JIT performance entirely!)


> If JS engines were better at monomorphizing access to common subtypes across different map shapes maybe it'd be closer, but no engine has implemented that or seems to have much appetite for doing so.

I really wish JS VMs would invest in this. The DOM is full of large inheritance hierarchies, with lots of subtypes, so a lot of DOM code is megamorphic. You can do tricks like tearing off methods from Element to use as functions, instead of virtual methods as usual, but that's quite a pain.


"Well optimized Javascript", and more generally, "well-optimized code for a JIT/optimizer for language X", is a subset of language X, is an undefined subset of language X, is a moving subset of language X that is moving in ways unrelated to your project, is actually multiple such subsets at a minimum one per JIT and arguably one per version of JIT compilers, and is generally a subset of language X that is extremely complicated (e.g., you can lose optimization if your arrays grow in certain ways, or you can non-locally deoptimize vast swathes of your code because one function call in one location happened to do one thing the JIT can't handle and it had to despecialize everything touching it as a result) such that trying to keep a lot of developers in sync with the requirements on a large project is essentially infeasible.

None of these things say "this is a good way to build a large compiler suite that we're building for performance".


Please note that compilers and game engines have extremely different needs and performance characteristics—and also that statements like "about 1.5x the performance of C++" are virtually meaningless out-of-context. I feel we've long passed this type of performance discussion by and could do with more nuanced and specific discussions.

Who wants to spend all their time hand-tuning JS/TS when you can write the same code in Go, spend no time at all optimizing it, and get 10x better results?

> Why is the TypeScript team moving to an entirely different technology

A few things mentioned in an interview:

- Cannot build native binaries from TypeScript

- Cannot as easily take advantage of concurrency in TypeScript

- Writing fast TypeScript requires you to write things in a way that isn't 'normal' idiomatic TypeScript. It's easier to onboard new people onto a more idiomatic codebase.


The message I hear is: don't use JS, don't use async. Music to my ears.

All Go is async though.

What kind of C++ and what kind of JS?

- C++ with thousands of tiny objects and virtual function calls?
- JavaScript where data is stored in large Int32Arrays and operated on like a VM?

If you know anything about how JavaScript works, you know there is a lot of costly and challenging resource management.


While Go can be considered entirely different technology, I'd argue that Go is easy enough to understand for the vast majority of software developers that it's not too difficult to learn.

(disclaimer: I am a biased Go fan)


It was very explicitly designed with this goal. The idea was to make a simpler Java that is as easy as possible to deploy and as fast as possible to compile, and by those measures it is a resounding success.

Does "well-optimized JavaScript" mean "you can't use Objects"?

In JavaScript, you can't even put 8M keys in a Hashmap; inserts take > 1 second per element:

https://issues.chromium.org/issues/42202799


Well-optimized JS isn't the only point of operation here. There's a LOT of exchange, parsing and processing that interacts with the File System and the JS engine itself. It isn't just a matter of loading a JS library and letting it do its thing. Every call that crosses the boundaries from JS runtime to the underlying host environment has a cost. This is multiplied across potentially many thousands of files.

Just going from ESLint to Biome is more than a 10x improvement... it's not just 1.5x because it's not just the runtime logic at play for build tools.


Sometimes, the time required to optimize is greater than the time required to rewrite.

Are you comparing perfectly written JS to poorly written C++?

Numeric code can, but compilers have to do a lot of string manipulation which is almost impossible to optimise well in JS.

It sounds like the C++ is not well-optimized then?

How does that scale with number of threads?

I'm not sure how it is in Construct, but IME "well-optimized" JavaScript quickly becomes very difficult to read, debug, and update, because you're relying heavily on runtime implementation quirks and micro-optimizations that make a hash of code cleanliness. Even you can hit close to native performance, the native equivalent usually has much more idiomatic code. The tsc team needs to balance performance of the compiler against keeping the codebase maintainable, which is especially vital for such a core piece of web infrastructure as TypeScript.

Your JS code is way uglier than their Go code, if you're doing those kinds of shenanigans.

JS is 10x-100x slower than native languages (C++, Go, Rust, etc) if you write the code normally (i.e. don't go down the road of uglifying your JS code to the point where it's dramatically less pleasant to work with than the C++ code you're comparing to).


There's no such thing as a native language unless you're talking about machine code.

It's kind of annoying how even someone like Hejlsberg is throwing around words like "native" in such an ambiguous, sloppy, and prone-to-be-misleading way on a project like this.

"C++" isn't native. The object code that it gets compiled to, large parts of which are in the machine's native language, is.

Likewise "TypeScript" isn't non-native in any way that doesn't apply to any other language. The fact that tsc emits JS instead of the machine's native language is what makes TypeScript programs (like tsc itself) comparatively slow.

It's the compilers that are important here, not the languages. (The fact that the TypeScript team was committed to making the typescript-go compiler nearly line-for-line equivalent to the production TypeScript compiler written in TypeScript itself really highlights this.)


Why not AOT compiled C#, given the team's historical background?

There is an interview with Anders Hejlsberg here: https://www.youtube.com/watch?v=ZlGza4oIleY

The question comes up and he quickly glosses over it, but by the sound of it he isn't impressed with the performance or support of AOT compiled C# on all targeted platforms.


https://www.youtube.com/watch?v=10qowKUW82U

[19:14] why not C#?

Dimitri: Was C# considered?

Anders: It was, but I will say that I think Go definitely is -- it's, I'd say, the lowest-level language we can get to and still have automatic garbage collection. It's the most native-first language we can get to and still have automatic GC. In C#, it's sort of bytecode first, if you will; there is some ahead-of-time compilation available, but it's not on all platforms and it doesn't have a decade or more of hardening. It was not geared that way to begin with. Additionally, I think Go has a little more expressiveness when it comes to data structure layout, inline structs, and so forth. For us, one additional thing is that our JavaScript codebase is written in a highly functional style -- we use very few classes; in fact, the core compiler doesn't use classes at all -- and that is actually a characteristic of Go as well. Go is based on functions and data structures, whereas C# is heavily OOP-oriented, and we would have had to switch to an OOP paradigm to move to C#. That transition would have involved more friction than switching to Go. Ultimately, that was the path of least resistance for us.

Dimitri: Great -- I mean, I have questions about that. I've struggled in the past a lot with Go in functional programming, but I'm glad to hear you say that those aren't struggles for you. That was one of my questions.

Anders: When I say functional programming here, I mean sort of functional in the plain sense that we're dealing with functions and data structures as opposed to objects. I'm not talking about pattern matching, higher-kinded types, and monads.
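To make the "functions and data structures" point concrete, here is an illustrative TypeScript sketch in that style (not actual compiler source): plain data interfaces for nodes plus free functions, no classes.

```typescript
// Illustrative sketch of the style Anders describes: plain data interfaces
// plus free functions, no classes.
interface Identifier { kind: 'Identifier'; text: string; }
interface NumericLiteral { kind: 'NumericLiteral'; value: number; }
type SyntaxNode = Identifier | NumericLiteral;

// Factory functions instead of constructors.
function createIdentifier(text: string): Identifier {
  return { kind: 'Identifier', text };
}

// Behavior lives in functions that switch on the node's kind tag.
function nodeToString(node: SyntaxNode): string {
  switch (node.kind) {
    case 'Identifier': return node.text;
    case 'NumericLiteral': return String(node.value);
  }
}

console.log(nodeToString(createIdentifier('x'))); // prints: x
```

This maps to Go almost mechanically: structs for the node types, functions for the operations, which is part of why the port involved less friction than a move to an OOP-first language.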

[12:34] why not Rust?

Anders: When you have a product that has been in use for more than a decade, with millions of programmers and, God knows how many millions of lines of code out there, you are going to be faced with the longest tail of incompatibilities you could imagine. So, from the get-go, we knew that the only way this was going to be meaningful was if we ported the existing code base. The existing code base makes certain assumptions -- specifically, it assumes that there is automatic garbage collection -- and that pretty much limited our choices. That heavily ruled out Rust. I mean, in Rust you have memory management, but it's not automatic; you can get reference counting or whatever you could, but then, in addition to that, there's the borrow checker and the rather stringent constraints it puts on you around ownership of data structures. In particular, it effectively outlaws cyclic data structures, and all of our data structures are heavily cyclic.

(https://www.reddit.com/r/golang/comments/1j8shzb/microsoft_r...)
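A minimal sketch of the cyclic shape Anders refers to: AST nodes pointing both down (children) and up (parent). A tracing GC reclaims such cycles for free; under reference counting or Rust's ownership model this shape needs Rc/Weak pairs, arenas, or index-based links instead.

```typescript
// Minimal sketch (illustrative, not compiler source) of a cyclic AST shape.
interface AstNode {
  kind: string;
  parent?: AstNode;    // up-pointer: creates the cycle
  children: AstNode[]; // down-pointers
}

function addChild(parent: AstNode, kind: string): AstNode {
  const child: AstNode = { kind, parent, children: [] };
  parent.children.push(child);
  return child;
}

const root: AstNode = { kind: 'SourceFile', children: [] };
const id = addChild(root, 'Identifier');
console.log(id.parent === root); // prints: true
```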


I wonder if they explored using a Gc like https://github.com/Manishearth/rust-gc with Rust. I think that probably removes all the borrow checker / cycle impedance mismatch while providing a path to remove Gc from the critical path altogether. Of course the Rust Gc crates are probably more immature, maybe slower, than Go’s so if there’s no path to getting rid of cycles as part of down-the-road perf optimization, then Go makes more sense.

>C# is heavily OOP-oriented, and we would have had to switch to an OOP paradigm to move to C#

They could have used static classes in C#.


he went into more detail about C# in this one: https://youtu.be/10qowKUW82U?t=1154s

He says:

- C# Ahead of Time compiler doesn't target all the platforms they want.

- C# Ahead of Time compiler hasn't been stressed in production as many years as Go.

- The core TypeScript compiler doesn't use any classes; Go is functions and datastructures whereas C# is heavily OOP, so they would have to switch paradigms to use C#.

- Go has better control of low level memory layouts.

- Go was ultimately the path of least resistance.



Anders explained his reasoning in this interview (transcript):

https://github.com/microsoft/typescript-go/discussions/411#d...


this is the "official" response at this point, since it is in the FAQ linked in the OP

I'm not involved in the decisions, but don't C# applications have a higher startup time and memory usage? These are important considerations for a compiler like this that needs to start up and run fast in e.g. new CI/CD boxes.

For a daemon like an LSP I reckon C# would've worked.


Yes, in fact that's one of the main reasons given in the two linked interviews: Go can generate "real" native executables for all the platforms they want to support. One of the other reasons is (paraphrasing) that it's easier to port the existing mostly functional JS code to Go than to C#, which has a much more OOP style.

The C# compiler is written in C# and distributed to multiple platforms. Along with the JIT that runs on all kinds of devices.

Graph for the differences in Runtime, Runtime Trimmed, and AOT .NET.

https://learn.microsoft.com/en-us/aspnet/core/fundamentals/n...


Native AOT exists, and C# has many C++ like capabilities, so not at all.

It exists but isn’t the same as a natively compiled binary. A lot gets packed into an AOT binary for it to work. Longer startup times, more memory, etc.

Just like Go, there is no magic here.

Where do you think Go gets those chubby static linked executables from?

That people have to apply UPX on top.


Go’s static binaries are orders of magnitude smaller than .Net’s static binaries. However, you are right, all binaries have some bloat in order to make them executable.

This is flat out incorrect if you are doing AOT in C#

Not when compiled with NativeAOT. It also produces smaller binaries than Go and has better per-dependency scalability (due to metadata compression, pointer-rich section dehydration, and stronger reachability analysis). This also means you can use F# for this instead, which is excellent for langdev (provided you avoid printf "%A", which is incompatible, a small sacrifice).

What is the cross compilation support for NativeAOT though? This is one of the things that Go shines (as long as you don't use CGO, that seems perfectly plausible in this project), and while I don't think it would be a deal breaker it probably makes things a lot easier.

What is the state of WASM support in Go though? :)

I doubt the ability to cross-compile tsc would have been a major factor. These artifacts are always produced on dedicated platforms via separate build stages before publishing and sign-off. Indeed, Go is better at native cross-compilation, whereas .NET NativeAOT can only do cross-arch and limited cross-OS by tapping into the Zig toolchain.


> What is the state of WASM support in Go though? :)

I am sure it is good enough that the team decided to choose Go either way OR it is not important for this project.

> I doubt the ability to cross-compile TSC would have been a major factor.

I never said it was a major factor (I even said "I don't think it would be a deal breaker"), but it is a factor nonetheless. It definitely helps a lot during cross-platform debugging, since you don't need to set up a whole toolchain just to test a bug on another platform; instead you can simply build a binary on your development machine and send it to the other machine.

But the only reason I asked this is because I was curious really, no need to be so defensive.


Seeing that Hejlsberg started out with Turbo Pascal and Delphi, and that Go also has a lot of Pascal-family heritage, he might hold some sympathy for Go as well...

Yes, there is that irony. However, when these kinds of decisions are made by the very folks with historical roots in how .NET and C# came to be, the .NET team cannot wonder why .NET keeps lagging in adoption versus other ecosystems at companies that aren't traditional Microsoft shops.

Not involved, but there's a faq in their repo, and this answers your question, perhaps, a bit: https://github.com/microsoft/typescript-go/discussions/411

Thanks, but it really doesn't clarify why a team with roots in the .NET ecosystem decided C#/Native AOT isn't fit for purpose.

I don't understand what Anders' past involvement with C# has to do with this. Would the technical evaluation be different if done by Anders vs someone else?

C# and Go are direct competitors and the advantages of Go that were cited are all features of C# as well, except the lack of top level functions. That's clearly not an actual problem: you can just define a class per file and make every method static, if that's how you like to code. It doesn't require any restructuring of your codebase. There's also no meaningful difference in platform support, .NET AOT supports Win/Mac/Linux on AMD64/ARM i.e. every platform a developer might use.

He clearly knows all this so the obvious inference is that the decision isn't really about features. The most likely problem is a lack of confidence in the .NET team, or some political problems/bad blood inside Microsoft. Perhaps he's tried to use it and been frustrated by bugs; the comment about "battle hardened" feel like where the actual rationale is hiding. We're not getting the full story here, that's clear enough.

I'm honestly surprised Microsoft's policies allowed this. Normally companies have rules that require dogfooding for exactly this reason. Such a project is not terribly urgent, and it has political heft within Microsoft. They could presumably have got the .NET team to fix bugs or make optimizations they need, at least a lot easier than getting the Go team to do it. Yet they chose not to. Who would have any confidence in adoption of .NET for performance sensitive programs now? Even the father of .NET doesn't want to use it. Anyone who wants to challenge a decision to adopt it can just point at Microsoft's own actions as evidence.


Yea, I came here to say the same thing. Anders' reasons for not going with C# all seem either dubious or superficial and easily worked around.

First he mentions the no classes thing. It is hard to see how that would matter even for automated porting, because like you said, he could just use static classes, and even do a static using statement on the calling side.

Another one of his reasons was that Go was good at processing complex graphs, but it is hard to imagine how Go would be better at that than C#. Which language feature that Go has, but C# lacks, supports that? I don't think anyone will be able to demonstrate one. This distinction makes sense for Go vs Rust, but not for Go vs C#.

As for the platform / AOT argument, I don't know as much about that, but I thought it was supposed to be possible now. If it isn't, it seems like it would be better for Microsoft to beef that up than to allow a vote of no confidence to be cast like this.


Thanks, this is a good way to frame it, someone else also phrased similar sentiment which I'm in total agreement with: https://x.com/Lon/status/1899527659308429333

It is especially jarring given that they are a first-party customer who would have no trouble in getting necessary platforms supported or projects expedited (like NativeAOT-LLVM-WASM) in .NET. And the statements of Anders Hejlsberg himself which contradict the facts about .NET as a platform make this even more unfortunate.


I wonder if there's just some cultural / generational stuff happening there too. The fact that the TS compiler is all about compiling a highly complex OOP/functional hybrid language yet is said to use neither objects nor FP seems rather telling. Hejlsberg is famous for designing object oriented languages (Delphi, C#) but the Delphi compiler itself was written largely in assembly, and the C# compiler was for a very long time written in C++ iirc. It's possible that he just doesn't personally like working in the sort of languages he gets paid to design.

There's an interesting contrast here with Java, where javac was ported to Java from C++ very early on in its lifecycle. And the Java AOT compiler (native image) is not only fully written in Java itself, everything from optimizations to code generation, but even the embedded runtime is written in Java too. Whereas in the .NET world Roslyn took quite a long time to come along, it wasn't until .NET 6, and of course MS rejected it from Windows more or less entirely for the same sorts of rationales as what Anders provides here.


> Roslyn

It was introduced back then with .NET Framework 4.6 (C# 6) - a loong time ago (July 2015). The OSS .NET has started with Roslyn from the very beginning.

> And the Java AOT compiler (native image) is not only fully written in Java itself, everything from optimizations to code generation, but even the embedded runtime is written in Java too.

NativeAOT uses the same architecture. There is no C++ besides GC and pre-existing compiler back-end (both ILC and RyuJIT drive it during compilation process). Much like GraalVM's Native Image, the VM/host, type system facilities, virtual/interface dispatch and everything else it could possibly need is implemented in C# including the linker (reachability analysis/trimming, kind of like jlink) and optimizations (exact devirtualization, cctor interpreter, etc.).

In the end, it is the TypeScript team members who worked on this port, not Anders Hejlsberg himself, which is my understanding. So we need to take this into account when judging what is being communicated.


> In the end, it is the TypeScript team members who worked on this port, not Anders Hejlsberg himself, which is my understanding

no? https://github.com/microsoft/typescript-go/graphs/contributo...


Ah, I see. Thanks for the clarification. Well, doubly unfortunate then. I wonder if we'll ever know what happened behind the scenes.

Ah C# 6 not .NET 6, thanks for the correction. Cool to hear that the NativeAOT stuff follows the same path.

Yes, when the author of the language feels it is unfit for purpose, it is a different marketing message than a random dude on the Internet on his new startup project.

Pure speculation, but C# is not nearly the first class citizen that go binaries are when you look at all possible deployment targets. The “new” Microsoft likely has some built-in bias against “embrace and extend” architectural and business decisions for developers. Overall this doesn’t seem like a hard choice to me.

Cue rust devotees in 3, 2, ..


> Cue rust devotees in 3, 2, ..

If you are a rust devotee, you can use https://github.com/FractalFir/rustc_codegen_clr to compile your rust code to the same .NET runtime as C#. The project is still in the works but support is said to be about 95% complete.


Link to interview with Anders. (linked from the thread as well) https://www.youtube.com/watch?v=10qowKUW82U&t=1154s

I write a lot of Go and a decent amount of TypeScript. Was there anything you found during this project that you found particularly helpful/nice in Go, vs. TypeScript? Or was there anything about Go that increased the difficulty or required a change of approach?

I'd be curious to hear about the politics and behinds the scenes of this project. How did you get buy-in? What were some of the sticking points in getting this project off of the ground? When you mention that many other languages were used to spike the new compiler, were there interesting learnings?

I feel like you'll need to provide a wasm binary for browser environments and maybe as a fallback in node itself. Last time I checked, Go really struggles to perform when targeting wasm. This might be the only reason I'd like to see it in Rust but I'm still glad you went with Go.

Are there any insights on the platform decision?


Honestly, the choice seems fine to me: the vast majority of users are not compiling huge TypeScript projects in the browser. If you're using Vite/ESBuild, you're already using a Go-based JS toolchain, and last I checked Vite was pretty darn popular. I don't suspect there will be a huge burden for things like playground; given the general performance uplift that the Go tsc implementation already gets, it may in fact be faster even after paying the Wasm tax. (And even if it isn't, it should be more than fine for playground anyways.)

I'm pretty sure that a lot of Vite users with hot reload will run tsc inside the browser (TanStack, React Router)

I am not a Vite expert, however, when running Vite in dev mode, I can see two things:

- There is an esbuild process running in the background.

- If I look at the JavaScript returned to the browser, it is transpiled without any types present.

So even though the URLs in Vite dev mode look like they're pointing to "raw" TypeScript files, they're actually transpiled JavaScript, just not bundled.

I could be incorrect, of course, but it sure seems to me like Vite is using esbuild on the Node.js side and not tsc on the web browser side.


> While we’re not yet feature-complete

This is a big concern to me. Could you expand on what work is left to do for the native implementation of tsc? In particular, can you make an argument why that last bit of work won't reduce these 10x figures we're seeing? I'm worried the marketing got ahead of the engineering


It's fine; if it's 2x faster after being feature-complete, I don't really mind. It's still a free speedup for all existing codebases. Developers won't need to do anything other than install the latest version of TypeScript, I presume

Thanks for answering questions.

One thing I'm curious about: What about updating the original Typescript-based compiler to target WASM and/or native code, without needing to run in a Javascript VM?

Was that considered? What would (at a high level) the obstacles be to achieving similar performance to Golang?

Edit: Clarified to show that I indicate updating the original compiler.


It's unlikely that you would get much performance benefit from AOT-compiling a TypeScript codebase. (At least not without a ton of manual optimization of the native code, and if you're going to do that, why not just rewrite in a native-first language?)

JavaScript, like other dynamic languages, runs well with a JIT because the runtime can optimize for hotspots and common patterns (e.g. this method's first argument is generally an object with this shape, so write a fast path for that case). In theory you could write an AOT compiler for TypeScript that made some of those inferences at compile time based on type definitions, but

(a) nobody's done that

(b) it still wouldn't be as fast as native, or much faster than JIT

(c) it would be limited - any optimizations would die as soon as you used an inherently dynamic method like JSON.parse()


So basically, TypeScript as a language doesn't allow compiling to machine code as efficient as Go's? (Edit) And I assume it's not practical to alter the language so that this kind of information can be added (such as adding a typed version of JSON.parse()).

Amazing news, but I'm wondering what will happen to Monaco editor and all the SaaS that use typescript in the browser?

Not sure if it does, but the video linked in the post might answer your question. I think he is compiling VS Code, which includes the Monaco editor, and that is where they get the 10x-faster stat. (I might be wrong here.) [0]

[0] https://youtu.be/pNlq-EVld70?feature=shared&t=112


Yeah, I saw that, but whether they'll maintain a browser-compatible version is another question

Ah, inception compiling. The issue isn't compiling the Monaco editor, but rather whether the Monaco editor will compile TypeScript 7 in the browser?

That is a good question.


This might be an oddly specific question, but do you think performance improvements like this might eventually lead to features like partial type argument inference in generics? If I recall correctly off the top of my head, performance was one of the main reasons it was never implemented.

thank you, to both of you, for so many years of groundbreaking work. you've both been on the project for, what, 11 years now? such legends.

Will we still have compiler plugins? What will this mean for projects like ts-patch?

> You can also tune in to the Discord AMA mentioned in the blog this upcoming Thursday.

Will the questions and answers be posted anywhere outside of Discord after it's concluded?


Daniel, please make this a priority. Post the Q&A transcript to GitHub, at least.

Since the new tsc is written in go, will I be able to pull it into my go web server as a middleware to dynamically transpile ts?

We'll be working on an API that ideally can be used through any language - that would be our preferred means of consuming the new codebase.

What is the forward paths available for efforts like the TS Playground under Typescript 7 (native)?

One of the nice advantages of JS is that it can run in so many places. Will TypeScript still be able to enjoy that legacy going forward, or is native-only what we should expect in 7+?


We anticipate that we will eventually get a playground working on the new native codebase. We know we'll likely compile down to WebAssembly, but a lot of how it gets integrated will depend on what the API looks like. We're currently giving a lot of thought to that, but we have good ideas. https://github.com/microsoft/typescript-go/discussions/455

Will this be a prerequisite of the 7.0 release?

This is very exciting! I'm curious if this move eventually unlocks features that have been deemed too expensive/slow so far, e.g. typing `ReactElement` more accurately, typing `TemplateStringsArray` etc

I'm curious about the choice of Go to develop the new toolchain. Was the support for parallelism/concurrency a factor in the decision?

Is 10x a starting point or could we expect even more improvements in the future?

Hi Daniel! What's your stance on support for yarn pnp?

pnp is still very cool, and it would be great if we can find a better API story that works well with pnp!

Congrat on the announcement, this is a great achievement!

Amazing!! I did not see timing. When might we see this in VS Code? Edge?

Daniel, congrats! I'm _so_ excited about everything y'all have achieved in the last few years.

When can we just replace the JS runtime with TS and skip the compiler altogether? Start fresh, if you will.

Why Go?

Will the refactor possibly be an occasion for ironing out a spec?

Your patience with Michael Saboff is incredible.

[flagged]


> inexpressive type system

Simplicity is a feature, not a bug. Overly expressive languages become nightmares to work with and reason about (see: C++ templates)

Go's compilation times are also extremely fast compared to Rust, which is a non-negligible cost when iterating on large projects.


Have you considered a closer to metal language to implement the compiler in like c or rust ? Have you evaluated further perf improvements ?

I don't think c or rust are really 'closer to the metal' than golang (what they're using)

Considering Go is the only language with a garbage collector out of the three languages you mentioned, I'm not sure how you reach the conclusion they're all as close to the metal.

C and Rust both have predictable memory behaviour, Go does not.


When I read the article it was very clear, due to the compiler's in-memory graphs, that they needed a GC.

(IE, as opposed to reference counting, where if you have cyclic loops, you need to manually go in and "break" the loop so memory gets reclaimed.)


> When I read the article it was very clear, due to the compiler's in-memory graphs, that they needed a GC.

It's actually pretty easy to do something like this with C, just using something like an arena allocator, or honestly, leaking memory. I actually wrote a little allocator yesterday that just dumps memory into a linkedlist, it's not very complicated: http://github.com/danieltuveson/dsalloc/

You allocate wherever you want, and when you're done with the big messy memory graph, you throw it all out at once.

There are obviously a lot of other reasons to choose go over C, though (easier to learn, nicer tooling, memory safety, etc).


I get the impression they'd use smart pointers (C++) or Rc/Arc (Rust)

Go isn't that bad in terms of memory predictability to be honest. It generally has roughly 100% overhead in terms of memory usage compared to no GC. This can be reduced by using GOGC env variable, at the cost of worse performance if not careful.

Hi Daniel!

Really interesting news, and uniquely dismaying to me as someone who is fighting tooth and claw to keep JS language tooling in the JS ecosystem.

My question has to do with Ryan's statement:

> We also considered hybrid approaches where certain components could be written in a native language, while keeping core typechecking algorithms in JavaScript

I've experimented deeply in this area (maybe 15k hours invested in BABLR so far) and what I've found is that it's richly rewarding. Javascript is fast enough for what is needed, and its ability to cache on immutable data can make it lightning fast not through doing more work faster, but by making it possible to do less work. In other words, change the complexity class not the constant factor.

Is this a direction you investigated? What made you decide to try to move sideways instead of forwards?
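The caching-on-immutable-data idea described above can be sketched like this (a toy example, not BABLR code): results are cached by node identity in a WeakMap. Because nodes are immutable, cached results never go stale, so re-analysis after an edit only touches rebuilt nodes.

```typescript
// Toy sketch: memoize a computation by node identity over immutable data.
interface Tree {
  readonly value: number;
  readonly children: readonly Tree[];
}

const sizeCache = new WeakMap<Tree, number>();

function size(t: Tree): number {
  const hit = sizeCache.get(t);
  if (hit !== undefined) return hit; // O(1) for already-seen subtrees
  const n = 1 + t.children.reduce((acc, c) => acc + size(c), 0);
  sizeCache.set(t, n);
  return n;
}

// Shared immutable subtrees are computed once and reused.
const leaf: Tree = { value: 1, children: [] };
const root: Tree = { value: 0, children: [leaf, leaf] };
console.log(size(root)); // prints: 3
```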


> as someone who is fighting tooth and claw to keep JS language tooling in the JS ecosystem

Have you considered the man-years and energy you're making everyone waste? Just as an example, I wonder what the carbon footprint of ESLint has been over the years...

Now, it pales in comparison to Python, but still...


I'm no more thrilled than you at the cost of running ESLint, but using a high-level language doesn't need to mean being wasteful of resources.

TS currently wastes tons of resources (most especially people's time) by not being able to share its data and infrastructure with other tools and ecosystems, but while there would be much bigger wins from tackling the systemic problem, you wouldn't be able to say something as glib as "TS is 10x faster". Only the work that can be distilled to a metric is done now, because that's how to get a promotion when you work for a company like Microsoft.


If I could choose between Typescript speeding up 10x or all the surrounding tooling speeding up 20x, I'd take Typescript in a heartbeat. Slow type checking is the biggest pain point in my daily dev cycle.

Thank you Typescript team for chasing those promotions!


Go is an extremely strange choice, given the ecosystem you're targeting. I've got quite a bit of experience in it, TS, Rust and C++. I'd pick any of those over Go for productivity and (in the case of C++ and Rust) thread-safety, simply because Go's type system is so impoverished.

From a performance perspective, I'd expect C++ and Rust to be much easier targets too, since I've seen quite a few industrial Go services be rewritten in C++/Rust after they fail to meet runtime performance / operability targets.

Wasn't there a recent study from Google that came to the same conclusion? (They see improved productivity for Go with junior programmers that don't understand static typing, but then they can never actually stabilize the resulting codebase.)


This isn't a course about generative AI if that's what you're getting at.

Mentioning this because I did assume that from the title.


I guess generative AI using copilot or ChatGPT might overtake model-driven engineering as a time-saving technique, which seems worrisome for the "software factory" industry. At the same time, I doubt it will replace MDE as a formal method (i.e. if you need to prove that your software does what it is supposed to do).


Hi all, I work on the TypeScript team. There's already a lot of feedback on the issue itself from users urging the authors not to make this decision, so I will hold back from adding to the noise on that issue. Every team is entitled to make the decisions that they feel are best for them, and I don't think it'd be productive to change anyone's mind in this case.

Instead I'll just mention that I always welcome thoughts on some of the challenges teams encounter when writing in TypeScript. It helps make the language better. If there's anything you often hit, you can comment here, create an issue on the issue tracker, or reach out to me at Daniel <dot> Mylastname <at> microsoft <dot-com>


In our team it's causing a lot of confusion and inconsistency that types and interfaces overlap 90% in functionality but have different syntaxes.

You can find many articles trying to explain which is best under which circumstances, but there are no correct answers.

I would wish for TS to deprecate one of the syntaxes (probably interfaces because they read as statements rather than expressions), and instead extend the other with the 10% functionality that would be lost.

For instance a type could be declared as open, to make it possible to reopen and extend it.


I even saw a Youtube video recently making a strong case that types are almost always the better option unless you run into a few niche use cases like needing interface merging.


I’d just like to say thank you for your team’s hard work on TypeScript. I can’t imagine writing JavaScript without TypeScript.


Y'all have been ignoring the largest DX issue in TypeScript for years: https://stackoverflow.com/questions/57683303/how-can-i-see-t...

Even as someone who is very proficient in TypeScript, and an advocate, it's a huge PITA having to constantly ctrl+click through large type hierarchies trying to build a mental model of fully resolved types.
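For what it's worth, a common community workaround for this is an identity mapped type (conventionally named Prettify or Expand; this is not a built-in TypeScript API). It forces the compiler to flatten intersections, so editor hovers show the resolved shape instead of a chain of alias names.

```typescript
// Identity mapped type: same assignability, flattened display in hovers.
type Prettify<T> = { [K in keyof T]: T[K] } & {};

type A = { a: number };
type B = { b: string };

// Hovering `Flat` in an editor shows { a: number; b: string } rather than A & B.
type Flat = Prettify<A & B>;

const example: Flat = { a: 1, b: 'x' };
console.log(example.a + example.b.length); // prints: 2
```

It only expands one level of aliasing, so it doesn't fully solve deep hierarchies, but it helps with day-to-day hovers.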


Typescript has been revolutionary and made me love doing frontend work. It’s solved so many issues with reliability that I’d never suggest using an untyped framework like rails for a new project.


One of the key things that we've focused on with TypeChat is not just that it acts as a specification for retrieving structured data (i.e. JSON), but that the structure is actually valid - that it's well-typed based on your type definitions.

The thing to keep in mind with these different libraries is that they are not necessarily perfect substitutes for each other. They often serve different use-cases, or can be combined in various ways -- possibly using the techniques directly and independent of the libraries themselves.


Hi there! I'm one of the people working on TypeChat and I just want to say that we definitely welcome experimentation on things like this. We've actually been experimenting with running Llama 2 ourselves. Like you said, to get a model working with TypeChat all you really need is to provide a completion function. So give it a shot!


Whoops - thanks for catching this. Earlier iterations of this blog post used a different schema where `size` had been accidentally specified as a `number`. While we changed the schema, we hadn't re-run the prompt. It should be fixed now!


Hey all, TypeScript PM here.

I understand the desire here. Runtime type checking is often necessary for data validation, and we can see lots of libraries developed to help fill the gap here. But I think the fact that there are so many libraries with different design decisions is pretty indicative that this is not a solved problem with an obvious solution. We knew this going into the early design of TypeScript, and it's a principle that's held up very well.

What we've been happy to find is that we've grown TypeScript to be powerful enough to communicate precisely what runtime type-checking libraries are actually doing, so that we can derive the types directly. The dual of this is that people have the tools they need to build up runtime type validation logic out of types by using our APIs. That feels like a reasonable level of flexibility.


I'm relatively new to programming and had a question about TypeScript's functionality. Is there any specific reason why TypeScript doesn't allow for the creation of custom and intricate data types? For example, I'm unable to define a number type within a specific range, or a string that adheres to a certain pattern (like a postal code).

I'm imagining a language where I could define a custom data type with a regular function. For instance, I could have a method that the compiler would use to verify the validity of what I input, as shown below:

    function PercentType(value: number) {
      if (value > 100 || value < 0) throw new Error();
      return true;
    }

Is the lack of such a feature in TypeScript (or any language) a deliberate design decision to avoid unnecessary complexity, or due to technical constraints such as performance considerations?


TypeScript actually does have some support for the kinds of types you're suggesting. For example, a US postal code can be defined like so:

    type PostalCode = `${Digit}${Digit}${Digit}${Digit}${Digit}`;

    type Digit = '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9';
You could trivially define a `parsePostalCode` function that accepts a string and yields a PostalCode (or throws an error if it's the wrong format).

Ranges like percent are much trickier—TypeScript would need to compute the return type of `Percent + Percent` (0 <= T <= 200), `Percent / Percent` (indeterminate because of division by zero or near-zero values), and so on for all the main operators. In the best case scenario this computation is very expensive and complicates the compiler, but in the worst case there's no clear correct answer for all use cases (should we just return `number` for percent division or should we return `[0, Infinity]`?).

In most mainstream programming languages the solution to this problem is to define a class that enforces the invariants that you care about—Percent would be a class that defines only the operators you need (do you really need to divide Percent by Percent?) and that throws an exception if it's constructed with an invalid value.
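As a sketch of that mainstream-language approach (the class and method names here are made up for illustration):

```typescript
// Enforce the 0..100 invariant at construction time; expose only the
// operations you actually need, re-validating or clamping their results.
class Percent {
  constructor(readonly value: number) {
    if (value < 0 || value > 100) {
      throw new RangeError(`Not a percentage: ${value}`);
    }
  }

  // Example operation: addition, clamped back into range.
  plus(other: Percent): Percent {
    return new Percent(Math.min(100, this.value + other.value));
  }
}
```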


This is a feature some (experimental) programming languages have - look into dependent types. The long-and-short of it is that it adds a lot of power, but comes at an ergonomic cost - the more your types say about your code, the more the type checker needs to be able to understand and reason about your code, and you start to run up against some fundamental limits of computation unless you start making trade-offs: giving up Turing-completeness, writing proofs for the type checker, stuff like that.

Another interesting point of reference are "refinement types", which allow you to specify things like ranges to "refine" a type; the various constraints are then run through a kind of automated reasoning system called an SMT solver to ensure they're all compatible with each other.


Awesome, thanks for the pointers!


> Is the lack of such a feature in TypeScript (or any language) a deliberate design decision to avoid unnecessary complexity, or due to technical constraints such as performance considerations?

It makes a lot of things impossible. For example, if you defined two different types of ranges, OneToFifty and OneToHundred similarly to your PercentType above, the following code would be problematic:

    let x: OneToFifty = <...>;
    let y: OneToHundred = <...>;
    y = x;
Any human programmer would say the third line makes sense because every OneToFifty number is also OneToHundred. But for a compiler, that's impossible to determine because JavaScript code is Turing-complete, and so it can't generally say that one is certainly a subset of the other.

In other words, any two custom-defined types like that would be unassignable from and to each other, making the language much less usable. Now add generics, co-/contravariance, type deduction, etc., and suddenly it becomes clear how much work adding a new type to the type system is; much more than just a boolean function.

That said, TypeScript has a lot of primitives, for example, template string types for five-digit zip codes:

    type Digit = '0' | '1' | '2' | '3' | <...> | '9';
    type FiveDigitZipCode = `${Digit}${Digit}${Digit}${Digit}${Digit}`;
(Actually, some of these features are Turing-complete too, which means type-checking will sometimes fail, but those cases are rare enough for the TS team to deem the tradeoff worthwhile.)


It's the fundamental programming language design conundrum: Every programming language feature looks easy in isolation, but once you start composing it with everything else, they get hard. And hardly anything composes as complexly as programming languages.

There's sort of a meme where you should never ask why someone doesn't "just" do something, and of all the people you shouldn't ask that of, programming language designers are way, way up there. Every feature interacts not just with itself, not just with every other feature in the language, but also in every other possible combination of those features at arbitrary levels of complexity, and you can be assured that someone, somewhere out there is using that exact combination, either deliberately for some purpose, or without even realizing it.


Thanks :)


Some of this is possible in the type system, like a range. From Stack Overflow: https://stackoverflow.com/questions/39494689/is-it-possible-...

  type Enumerate<N extends number, Acc extends number[] = []> = Acc['length'] extends N
    ? Acc[number]
    : Enumerate<N, [...Acc, Acc['length']]>

  type NumberRange<F extends number, T extends number> = Exclude<Enumerate<T>, Enumerate<F>>

  type ZeroToOneHundred = NumberRange<0, 100>
One limitation is that this has to be bounded on both ends so constructing a type for something like GreaterThanZero is not possible.

Similarly for zip codes you could create a union of all possible zip codes like this:

  type USZipCodes = '90210' | ...
Often, with the idea you have in mind, the solution is to implement an object whose constructor does a runtime check of the requirements: if the checks pass, instantiate the instance; otherwise, throw a runtime error.

In functional programming this is often handled with the Option type, which can be thought of as an array that always contains exactly 0 or 1 elements: 0 elements when a constraint is not met, and 1 element when all constraints are met.
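A bare-bones version of that idea (illustration only; real Option libraries add map, flatMap, getOrElse, and much more):

```typescript
// "Zero or one elements": a tagged union standing in for the container.
type Option<T> = { some: true; value: T } | { some: false };

const some = <T>(value: T): Option<T> => ({ some: true, value });
const none: Option<never> = { some: false };

// A validating constructor that returns None instead of throwing.
function percent(n: number): Option<number> {
  return n >= 0 && n <= 100 ? some(n) : none;
}
```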

This [0] is a library I wrote for JS/TS that provides an implementation of Options. Many others exist and other languages like Rust and Scala support the Option data structure natively.

[0] https://github.com/sbernheim4/excoptional


Interesting, thanks!


> or a string that adheres to a certain pattern (like a postal code).

This is somewhat supported with template literal types, https://www.typescriptlang.org/docs/handbook/2/template-lite.... However, there are good technical/performance reasons they don't support, for example, plain regexes - there is a HUGE discussion on this topic here, https://github.com/microsoft/TypeScript/issues/6579.


You can do exactly this with io-ts and fp-ts: https://gist.github.com/golergka/64b06f711e4cb07c67367bdace7...


Maybe official preprocessor plugins for TypeScript compiler could help?

I understand that everybody who needs it can already put their own preprocessor that generates runtime objects from type information before the code is passed to tsc for compilation.

But the effort is inconsistent and distributed.

If TypeScript officially supported pluggable preprocessor and plugin ecosystem for it some good solutions might get discovered.


What's next? An official TypeScript UI framework, will it be Vue, React, Next.js or Svelte? An official TypeScript date library?


What's the point of any of those?

Generating code based on type annotations is a frequently requested feature directly related to the core feature of TypeScript, which is its type system.


This is the curse of guest languages: after the initial adoption pain, everyone wants idiomatic libraries and pretends the underlying platform doesn't exist.

Until they hit a roadblock caused by a leaky abstraction that proves otherwise.


TypeScript does a very good job of not hiding the underlying platform. In its essence it is just a development-time linter and does not interfere with the JavaScript runtime at all (except for enums).

And I think that's actually the reason why it won the competition against Google's Dart. Google even used Microsoft's TypeScript for Angular instead of their own language, Dart.


Indeed, and that is why the Typescript team is against any feature that steps away from that relationship.


I'll take an official WYSIWYG editor while we're at it


A Microsoft-hosted MacroScript, as a TypeScript plugin or top-level wrapper, would solve this problem.

You just need to spark it; the community will help maintain it.

It would solve all codegen needs, from generating clients to runtime type assertions and many interesting problems in between.


Thoughts on the other poster asking for type info to be kept around in the class objects after compilation?


Hi there! I work on the TypeScript team and I respect your feedback. Of course I do think TypeScript is worth it, and I'll try to address some of the points you've raised with my thoughts.

i. Dependency management is indeed frustrating. TypeScript doesn't create a new major version for every more-advanced check. In cases where inference might improve or new analyses are added, we run the risk of affecting existing builds. My best advice on this front is to lock to a specific minor version of TS.

ii. My anecdotal experience is that library documentation could indeed be better; however, that's been the case with JavaScript libraries regardless of types.

iii. Our error messages need to get better - I'm in full agreement with you. Often a concrete repro is a good way to get us thinking. Our error reporting system can often take shortcuts to provide a good error message when we recognize a pattern.

iv. Compilation can be a burden from tooling overhead. For the front-end, it is usually less of a pain since tools like esbuild and swc are making these so much faster and seamless (assuming you're bundling anyway - which is likely if you use npm). For a platform like Node.js, it is admittedly still a bit annoying. You can still use those tools, or you can even use TypeScript for type-checking `.js` files with JSDoc. Long-term, we've been investigating ways to bring type annotations to JavaScript itself and checked by TypeScript - but that might be years away.

I know that these points might not give you back the time you spent working on these issues - but maybe they'll help avoid the same frustrations in the future.

If you have any other thoughts or want to dig into specifics, feel free to reach out at Daniel <dot> MyLastName at Microsoft <dot-com>.


Thanks for your work, TS saves me time every day. I was saying something similar to the op 3-4 years ago but really cannot picture working without some kind of type safety in JS now.


> TS saves me time every day.

Hmm, not my experience. I do TS for years now and still today I'm spending more time on fighting/pleasing TS compared to the actual code.

JS with all its node_modules dependencies is a complete nightmare to get the typing right. I regularly have to change good solid code to please TS, but at the same time TS often doesn't complain when the typing is obviously wrong.

I once started with Assembly, Pascal, C and C++. So please don't start to explain to me what strict typing is and the benefits and so on, I know. JS uses dynamic typing by design. And I remember how awesome it felt when JS came out and we could write code without doing type juggling. And I believe that with type inference and some other tooling in the IDE we really don't need TS at all.


I’m noticing a pattern in your arguments.

You need to understand that I (and I suspect many others) don’t have the same experience as you. I don’t _fight/battle with_ the type system, I work with the type system - and I enjoy it. It saves me countless hours. I don’t actually use javascript without TypeScript anymore - it’s simply not worth _not_ using it - for me.

You ask whether it’s worth using it - and you keep telling people not to explain the primary benefits to you. The answer is yes for many people. It seems like you’re looking to be convinced that it’s worth it but you don’t want anyone to tell you what you already know. I’ve done this myself in the past - I’m not saying the situation is the same for you but it might be worth looking inside at: when I did this it was because I knew that x was worth it but I’d put myself in a position where getting down off my hill and accepting that x was worth it would require me to admit that I’d been wrong about it. Now I could double down on my position that x was simply not worth it, or I could come down slowly and start to enjoy the benefits of x more openly.

If you kinda feel that what I’ve just said might be a factor for you, then that’s already incredibly brave. If you’re interested in taking it further I can recommend role playing: for a week (just a week) role play as someone who thinks x _is_ worth it. Adopt the positions on the benefits that you already know. Act like you love it, act like the type checker REALLY helps you and saves your time, act like the types aren’t all that bad and CAN be used in usefully-constricting ways, and of course, help make your code even more self-documenting. You’ve gotta convincingly act, as if you’re going to win an Oscar. The audience fully believe you’re a true, light-seeing advocate for x.

Being able to change your mind is a great, noble and immensely valuable skill, and I can see that’s what you’re trying to do. Consider role playing as an advocate like I suggested above and perhaps you’ll have a new tool in your toolbox.


> a pattern in your arguments

> You ask whether it’s worth

> you keep telling people

I just want to point out that the account you're replying to isn't the OP.


It sounds like you are essentially recommending therapy for those who don't like TS.


S&M people enjoy their pain too ;-) TypeScript for consumers is great, but as soon as you must write your own complex typings, it is anything but a joy.


> I do TS for years now and still today I'm spending more time on fighting/pleasing TS compared to the actual code.

Admittedly, this mirrors my own experience, at least in some cases, which I shall not deny.

I was called in to help with one particular TypeScript codebase that used React a while back, and it was a mess. I suspect that some of the chosen abstractions were overengineered and overcomplicated (and perhaps underdocumented), but at the same time TypeScript made things more finicky, and as a result the development velocity tended to be on the lower side when compared to similar projects in JS. This was especially noticeable when new developers needed to be onboarded, though at the very least refactoring could be done with more confidence. Essentially they were dealing with not just sub-optimal code structure/architecture, not just the ever-increasing complexity of React, but also how it all integrated with TypeScript on top of that.

It's a bit of a double edged sword, because when done right, TypeScript is objectively better than JS in at least some regards (refactoring for one, the type system obviously but also how well IDEs can figure out autocomplete because of it, or highlight issues that would otherwise manifest at runtime only with JS), however there is also potential for things to get much worse than your typical JS codebase, when done wrong.

This might be a silly comparison, but I'll compare it to something like PHP: when used correctly, you'll get something that lets you iterate pretty fast and just ship stuff, whereas if you go about it the wrong way you'll probably have a badly developed unreadable mess that's full of bugs or security issues, for a variety of reasons. In my experience, TypeScript codebases range from very good to very bad, whereas JS tends to be more mediocre in general, at least in regards to complexity and iteration speed. In regards to maintenance, TypeScript will win in most cases.

Use TypeScript responsibly and you'll have a decent time. Give it to someone who wants to be clever about things and you'll have lots of accidental complexity to deal with, more so than with the alternatives. Then again, personally I think that Angular did TS better than React or Vue, so this might be a niche view in of itself.


Sharing your experience is fine. Rebutting someone saying “this saves me time”, much less in a comment where they’re thanking someone for something they help make, is a bit ridiculous.


> I remember how awesome it felt when JS came out and we could write code without doing type juggling.

You enjoy working with dynamic typing. It more aligns with how you think and program. That's okay!

TypeScript may never be worth it, to you, and that's okay too! Not everyone likes and appreciates static typing.

If you ever do come to the dark side and think TypeScript is worth your time, it will be because that part is not a time waster but a time saver.


> Hmm, not my experience. I do TS for years now and still today I'm spending more time on fighting/pleasing TS compared to the actual code.

What are you doing exactly? I like TS, because it's one of the easiest to use type systems.

I'm also intrigued that you used C++ and think TS is bad; C++ error messages are legendary for being hard to understand.


What's Math.sqrt("hello")?


I use TS daily and I think this sort of argument doesn't give TS the credit it deserves.

Sure, you _could_ use it to check that you aren't making obvious errors like this (but this seems constrained to the "convince me that it's worth it" level of functionality, as it is just a nice-to-have for an existing working pattern).

Where TS shines for me is that it ENABLES new ways of "ad-hoc" coding where it's no longer risky to just create "convenience objects" to represent state when prototyping/refactoring, since you can avoid specifying concrete required types across a load of middle-man code and compose types at each level. This enables the pattern of splatting (composition over inheritance) a bunch of inputs to your data together, and then routing them to the points where they are needed. This scales nicely when you introduce monadic fun (processing some data with children, or something delay-loaded) since your type constraints basically write the boilerplate for you (I'm sure co-pilot will make this even more so in the far future).

There's also the fact that you can have your back-end APIs strongly typed on the front-end via something like GQL or Swagger, and this saves a TON of time for API discoverability.


Or 5/"potato"

In JS it's NaN, in TypeScript (or any other language with a remotely sane type system) it's a compilation error that saves you from running into a random NaN at runtime.


[flagged]


Please make your substantive points without breaking the site guidelines.

https://news.ycombinator.com/newsguidelines.html


What's Math.sqrt(value_from_a_package_that_was_definitely_a_number_before_you_updated_it)?


So then you have to make sure any inputs that get passed into Math.sqrt aren't strings. You can pay the cost at runtime or at compile time, and doing it at compile time saves you headaches later.
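The two places to pay that cost look like this (function names are made up for illustration):

```typescript
// Pay at runtime: validate the argument on every call.
function sqrtChecked(x: unknown): number {
  if (typeof x !== 'number') {
    throw new TypeError('expected a number');
  }
  return Math.sqrt(x);
}

// Pay at compile time: the annotation makes sqrtTyped("hello")
// an error before the code ever runs, with no runtime check at all.
function sqrtTyped(x: number): number {
  return Math.sqrt(x);
}
```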


I agree that is not a good example, but the exact same thing can happen in more subtle ways that are hard to catch. For instance, you might have a function that returns a user. Do you pass in the user ID argument as a string or a number?


>I agree that is not a good example

It's actually a great example, because in JS Math.sqrt("4") returns 2 because of JS's idiotic type coercion rules. So if you're passing in user input and don't typecheck it, it will work fine until someone inputs a letter instead of a number.


You won’t get far with an attitude like that.


You know you can just use the transpileOnly option and ignore some errors? I use it like that; it’s helpful for modeling data and speeds up development.


Assembly? No types there, just registers. The only distinction is floating point and non-floating point registers.


If you're just looking for general feedback, constructor typing has made my life really hard trying to type an existing JS library. In a JS object instance like `const user = new User()` you can call `this.constructor.staticMethod()` and it calls `staticMethod()` on `User` or up the inheritance chain. But TS doesn't type `.constructor` so you're out of luck. In the simple case you call `User.staticMethod()` but that doesn't work for an instance method on a superclass that wants to call the method of the constructor of the instance.

I understand why JS makes this difficult to type because you can mess around with the constructor. But for normal every day code you just expect `this.constructor` on an instance of `User` to be `User` and it really sucks that it isn't!


> you can call `this.constructor.staticMethod()` and it calls `staticMethod()` on `User` or up the inheritance chain.

This is where I have come to like TypeScript. 4 years ago, I would have agreed with you, but TS has moved me into a world where I wouldn't write that kind of code any more, and it honestly makes me sick to look at.

Instead I would just do `User.staticMethod` because this accomplishes several things:

  - super classes shouldn't know about inheriting classes. If they do, TS gives you interfaces and abstract classes for this purpose.
  - it still crawls up the proto chain, so the inheriting class doesn't need to know about every super class
  - no risk of `this` pointing to something unintentional (if someone invokes your method via call or apply)
  - shorter code, easier to read IMO, especially for less experienced devs


I might agree with you, but sometimes we have to type up JS code that is written in different styles to what we'd personally prefer. TS should (a) let me write types for the patterns used in the JS (there are areas like proxies where that's just not possible, but in this case it feels like there's a disparity between TS and JS over classes), and (b) let me work with this code in my editor without red lines all over the place on perfectly valid code.

Superclasses don't need to 'know' about the inheriting classes for static inheritance to work. E.g., here is a simplified version of the problem in the library I'm trying to add types to:

superclass:

  static hasField(name) { return false }
  constructor() {
    if(this.constructor.hasField('id')) { ... }
  }
subclass:

  static hasField(name) { if(name === 'id') return true; }
(it doesn't really look like that but you get the idea). That just works in JS, but in TS you get `Property 'hasField' does not exist on type 'Function'`. In the TS definition there are a couple of ways I can trick it to return the right thing for `this.constructor` but if I'm looking at a JS file in vscode with TS' checkJs flag on then this pattern should just work in my opinion.

edit: And I don't think I should have to trick it, that makes the definition harder to read and essentially wrong somehow.
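For what it's worth, one workaround pattern (not an endorsement of the current behavior, and the class names here are made up) is to re-declare the type of `constructor` on the base class. `declare` emits no JavaScript, so the runtime semantics are unchanged:

```typescript
class Model {
  // Refine this.constructor from Function to typeof Model (type-level only).
  declare ['constructor']: typeof Model;

  static hasField(_name: string): boolean {
    return false;
  }

  hasId(): boolean {
    // Without the declare above, TS reports:
    // Property 'hasField' does not exist on type 'Function'.
    return this.constructor.hasField('id');
  }
}

class User extends Model {
  static hasField(name: string): boolean {
    return name === 'id';
  }
}
```

At runtime, `new User().hasId()` dispatches through the prototype chain to `User.hasField`, while `new Model().hasId()` falls back to the base implementation, just as the untyped JS version does.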


Great of you to hop in. Just want to say that while I can typically navigate error messages in TS, I do occasionally have to do some googling on some error messages (specifically ones around generics that have a super type that apparently doesn’t necessarily agree with a subtype or something — still don’t quite get it), and it’s nice to see that the TS team recognizes the obtuseness of these messages is an issue.


Giving error messages TS#### codes was a brilliant idea to make them more ~googleable~ bingable at least.


Isn't that pretty standard for compiler error messages? The C# compiler uses CS\d{4} for example. MSVC, MSBuild, various other build tools (at least on the Microsoft side) all use similar patterns with different prefixes.

That being said, it seems like Clang or GCC don't do that at all, which perplexes me a bit. Perhaps it doesn't matter much when error messages are never localized.


I guess TypeScript and C# having Anders Hejlsberg in common probably helps with things like that?


(A bit) off topic - ChatGPT managed to infer your email address based on this comment. It required some hints though - most likely due to my lack of experience (first try). I was curious to see how it can improve indexing in general.


Thanks for responding, and thanks for your work for the community! I sometimes place myself in the shoes of devs building TypeScript, especially when I am a little frustrated, and most of the time I realize that a lot of these issues are incredibly hard to solve.

> i. Dependency management is indeed frustrating. TypeScript doesn't create a new major version for every more-advanced check. In cases where inference might improve or new analyses are added, we run the risk of affecting existing builds. My best advice on this front is to lock to a specific minor version of TS.

In my recent case, I needed to update Apollo Server to v4, which needs a newer version of TypeScript (see https://www.apollographql.com/docs/apollo-server/migration#t...), which in turn broke a type used from ProtobufJS. I am still navigating ProtobufJS source code to figure out what is the correct fix here.

> ii. My anecdotal experience is that library documentation could indeed be better; however, that's been the case with JavaScript libraries regardless of types.

Actually I think documentation is almost universally bad, I don't think Go or other languages are that much better (I don't want to wade into that debate though). The thing is, having TypeScript means you need more documentation. Even some pretty well documented JS/TS libraries completely neglect TypeScript and the end effect is that you end up having to guess things, or start reading source code. I don't actually know how you could improve this situation.

> iii. Our error messages need to get better - I'm in full agreement with you. Often a concrete repro is a good way to get us thinking. Our error reporting system can often take shortcuts to provide a good error message when we recognize a pattern.

I will look closer at this and start to think of how it could be better when I see a confusing message. I would probably count this as the biggest area that could yield improvement, because most of the time frustration is born of not being able to understand an error message. Often fixing things leads to trial and error. Can I just open an issue in the TypeScript repo for this sort of thing if I have a concrete suggestion?

> iv. Compilation can be a burden from tooling overhead. For the front-end, it is usually less of a pain since tools like esbuild and swc are making these so much faster and seamless (assuming you're bundling anyway - which is likely if you use npm). For a platform like Node.js, it is admittedly still a bit annoying. You can still use those tools, or you can even use TypeScript for type-checking `.js` files with JSDoc. Long-term, we've been investigating ways to bring type annotations to JavaScript itself and checked by TypeScript - but that might be years away.

Once it is part of the language, that will help a lot :) I considered using Deno or Bun to get me there on the server side, but need to be careful with production services.


Re: libraries incompatible with certain TypeScript versions - e.g. the protobufjs fix - it’s been my experience that you want to try to use only compilers specific to each library and compile libraries separately. It’s unfortunate, but the JS community often tries to run all JS for a project through the same single compiler toolchain, using one global version of the compiler instead of relying on and effectively linking the JS output for each library. Unless you routinely rewrite third-party libraries to match your toolchain’s expectations, you’re going to have a hard time doing that.

For a library that generates code, that’s a special case, as the code it generates must target a particular language version. You have three choices: 1. Upstream a fix as you propose; 2. Side-by-side install both TS 4.6 and TS 4.7 using workspaces or sub-projects and have some of your code compile with 4.6 and then link the results or 3. Find a replacement that is updated to 4.7. For example, https://github.com/stephenh/ts-proto has 4.7 support listed in its readme.


We do generate the protobuf from a different repo which gets published on npm, and we could generate it for different versions of TS. I suppose all of this work is part of the overhead I am not so happy about using TypeScript.


This is a very interesting idea!


> Actually I think documentation is almost universally bad, I don't think Go or other languages are that much better (I don't want to wade into that debate though). The thing is, having TypeScript means you need more documentation. Even some pretty well documented JS/TS libraries completely neglect TypeScript and the end effect is that you end up having to guess things, or start reading source code. I don't actually know how you could improve this situation.

I think Rust's approach is the best one so far: every package published on crates.io gets an entry on docs.rs (created automatically when you publish your crate). Microsoft could do something similar, creating an entry for every published package on a domain specifically for JS docs. If a project has no docs its entry will look empty, but devs will slowly adopt it, to the point that major libraries will improve their docs compared to what we have today.


> Can I just open an issue in the TypeScript repo for this sort of thing if I have a concrete suggestion?

Yes. There are even issue templates to guide you through writing an issue that the team will be able to address effectively.


>Once it is part of the language, that will help a lot :)

If you want to follow along, the proposal to allow type syntax to be part of JavaScript is here:

https://github.com/tc39/proposal-type-annotations

(To repeat Daniel, there is still a huge amount of work ahead)


> Can I just open an issue in the TypeScript repo for this sort of thing if I have a concrete suggestion?

aozgaa has already answered this one - but yes! If you have a concrete suggestion, that's fair game and we can brainstorm on the issue to think of something. We might not come up with something general enough to implement, but it's often a good seed to plant.

> which in turn broke a type used from ProtobufJS.

I am curious to hear what sort of issue you ran into. Was this the Apollo fork (https://github.com/apollographql/protobuf.js), or the original?


It's the original, which is being compiled on our internal protobuf definitions in a different repository and then installed post compile as an npm module.

I spent a while trying to grok the TS error but started to suspect that it didn't make a lot of sense, so I removed node_modules (rm -rf), reinstalled, and the error went away.

It would be hard to figure out what was at fault, but it's probably a combination of the node module system, protobuf, ProtobufJS and TypeScript. I do sometimes get funny type errors and restarting TypeScript makes them go away, in this case I had to go a step further.

I'll let you know if I get this again, or figure out what happened.


As the FAQ of the proposal mentions (https://github.com/giltayar/proposal-types-as-comments/#FAQ), we believe adding runtime type checking would be untenable. This stems from both performance concerns and concerns around evolving whatever type-checking would be built in. For that reason, not only do we not expect engines to perform runtime checks, we would specify that these type annotations have no runtime effect.


Yes, and having spent many years working on browsers, I am saying that if the annotations have no effect, there will be code that ends up depending on the engine not performing those checks. At that point the syntax is burned, because enforcing the checks would break existing content.

I am unsure why you believe performing the checks is untenable when TS already does them. The runtime checks should be relatively cheap, and in performance-critical code, if the type checks turned out to be too expensive, there are a bunch of optimizations engines could do, or the impacted functions could drop the annotations.

Adding a significant parsing surface, with the related page-load perf impact and the potential (I'm going to be generous) to burn the syntax: this seems like nothing but downside.
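As a rough sketch of what per-call enforcement might look like (hypothetical: `checkNumber` and the wrapper pattern are illustrative, not part of any engine or the proposal), an enforcing engine could conceptually insert a guard per annotated parameter at function entry:

```typescript
// Hypothetical: the kind of check an engine could insert on entry to
// a function declared as `function add(a: number, b: number)`.
function checkNumber(name: string, v: unknown): number {
  if (typeof v !== "number") {
    throw new TypeError(`${name} is not a number`);
  }
  return v;
}

function add(a: unknown, b: unknown): number {
  // Under enforcement these checks would be implicit, not hand-written.
  return checkNumber("a", a) + checkNumber("b", b);
}

console.log(add(1, 2)); // 3
// add("not ", "used"); // would throw TypeError under enforcement
```

This is the cheap, early-bound case; whether it stays cheap across the whole language (generics, structural types, unions) is exactly what the debate above is about.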


> TS already does them?

TS runs the checks at compile time and compiles to JavaScript without annotations. There is no runtime cost.
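To illustrate (a minimal sketch, assuming standard tsc behavior of erasing annotations on emit):

```typescript
// TypeScript source: annotations exist only at compile time.
function add(a: number, b: number): number {
  return a + b;
}

// tsc rejects this at *compile* time:
// add("not ", "used"); // error TS2345

// But the emitted JavaScript is just:
//   function add(a, b) { return a + b; }
// so nothing checks a mistyped call at runtime:
const result = (add as any)("not ", "used"); // "not used"
console.log(result);
```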

> related page load perf impact

Type annotations would likely still be stripped by minifiers, just like comments. No change.

> burn the syntax

As a dev that regularly uses TypeScript, being able to use debug builds with annotations and paste TypeScript into the console would save a lot of time. This would be part of a larger trend of adding type annotations to dynamic languages: Python, Ruby, and others have blazed the trail here. There are ways to add enforcement in the future if needed; just look at 'use strict'.


> > TS already does them?
>
> TS runs the checks at compile time and compiles to JavaScript without annotations. There is no runtime cost.

If there were performance costs, they could be mitigated very easily: being able to do compile-time checks means TS is designed largely around early binding, at least in the default case. A JS engine can do this trivially.

> > related page load perf impact
>
> Type annotations would likely still be stripped by minifiers, just like comments. No change.

You're kidding right? You're proposing permanently changing the language, in a way that burns the type syntax for a feature that you're now saying will be stripped before a JS engine ever sees it?

> As a dev that regularly uses TypeScript, being able to use debug builds with annotations and copy paste TypeScript into the console would save a lot of time.

That sounds like a feature for your console, not the language.

> This would be part of larger trend of adding type annotations to dynamic languages. Python, Ruby, and other dynamic languages have blazed the trail here.

Which enforced the types. Understand I am not opposed to the type annotations, I am opposed to this because it does not enforce those types. That means that broken content will exist, and thus burn the syntax.

> There are ways to add enforcement in the future if needed, just look at 'use strict';

Which will never happen again. As the person who implemented strict mode in JSC I can tell you that the cost of a mode switch is massive, in spec and implementation complexity. Adding another mode switch is simply not going to happen. The only reason I was ever ok with adding strict mode to javascript was the removal of the this conversion. No other part of strict mode would have warranted the cost.


> A JS engine can do this trivially

I'm not one to argue, computers are fast. However, this would be a huge paradigm shift for the JavaScript ecosystem, where compile-to-JavaScript-tools like TypeScript are dominant. These tools have explicit goals to do type checking at compile time only and avoid generating anything at runtime. Runtime checking is redundant in this paradigm.

Now if you're arguing that the JS engines like V8 should be able to use type information for performance gains, I understand that wish. WebAssembly seems to be taking over the performance angle these days though, as it was designed from the start for that purpose, unlike JavaScript.

> You're kidding right? .. will be stripped before a JS engine ever sees it?

Yes. We're talking JavaScript here - billions of lines of which are compiled every year to ES5 ;-; with polyfills. Throwing away hundreds of useful features of ES2016+. This is par for the course, part of the JavaScript paradigm which this proposal understands.

> sounds like a feature for your console

That is fair, though it seems unlikely Chrome would adopt such a feature for their console without motivation. Perhaps.

> Which enforced the types

Python does not enforce types at runtime? You might be referring to something I'm not familiar with. I fired up Python 3.9, and this works fine even though I'm passing strings for int:

  def add(a: int, b: int):
      return a + b
  add("not ", "used")  # 'not used'

> the cost of a mode switch is massive

But earlier you said a JS engine can do this trivially? I don't understand how this is now difficult. Ultimately, if the types are enforced, you may as well go the full monty and support TypeScript in the browser, with Deno leading the way. Stop beating the JavaScript horse and all. That's the dream. This proposal is a compromise without a doubt, with the understanding that TypeScript in the browser is politically untenable at this point.


So parsing and enforcing these types would be fine, performance-wise, but having a flag at the start of the module that says "discard types yes/no" would be a massive performance hit?

I really don't understand.

It wouldn't have to support any smaller granularity, and it wouldn't need to change parsing at all.

