Moving from TypeScript to Rust / WebAssembly (nicolodavis.com)
500 points by nicolodavis on July 9, 2020 | 405 comments

Whenever I write Rust, I have a lot of fun, but I'm still not sold on it for most web dev.

The analogy that comes to mind is that Rust is a really nice sports car with a great engine. It handles like a dream and you can feel the powerful engine humming while driving it. With the right open road, it's a great time. You can really optimize code, make great abstractions and work with data easily.

Unfortunately, web dev generally isn't a wide open road. It's a lot of little turns and alleys and special cases. Driving Rust in web dev is a lot of accelerating only to screech to a halt. I wrote a backend API for a side project in Rust. And true to what I said, Rust handled like a dream. But I didn't need the power. I spent most of my time screeching to a halt worrying about the plumbing between the various immature Rust libraries. And that's on the backend, which is way more mature compared to the frontend Rust libraries.

Judging by this post, OP managed to find a great open road in web dev to use Rust. I only wish I could find one as worthwhile.

This really nails why languages like PHP and Ruby have won out over statically typed, compiled ones for application-level web development. The web is a massive collection of disjointed, loosely coupled APIs that all (just barely) interoperate to provide a massive array of functionality. Languages that tend to work well with it are those that are highly flexible around the corner cases of these technologies, and allow you to quickly iterate through the process of gluing them together.

I’ve worked on several large Ruby codebases, and the thing is, the same qualities that make it easy to get a Ruby project up and working quickly make it a complete nightmare to maintain later. Its expressiveness and malleability mean that you can never really be sure that you understand how code is being used, and the complexity that emerges from a few years of that is incredibly overwhelming. Nowadays I would much rather slog through ten times as much Java boilerplate, because later I’d expect that I could still refactor code without fearing that the whole thing would collapse around me.

Exactly; I guess a lot of people just work on projects/jobs and then move on. Once you need to go back to systems you forgot about (things you wrote 5, 10, 15, 20+ years ago), Ruby (in your example, and indeed in my experience) is a nightmare on speed. The (strange to me) idea people have that their code won't be around that long hits me in the face every time a client asks me to 'connect to something made by someone some time ago'. I go check it, and it's almost always something PHP/RoR/(lately) Node built by someone who left years ago, and no one has touched it since because it works.

When you actually do touch it, of course, it is completely out of date and nothing current works, because the flexibility of these languages and systems means things change way too fast. Really unneeded API breaks in packages are crazy to me; coming from Java, I expect people to keep things backward compatible, but nope, just toss it out! Node/npm is the worst offender here, and PHP the least: PHP is actually a joy to work with in this kind of spelunking, as it is remarkably stable, including its (older) libraries. And no, not everyone uses a framework with PHP 'nowadays'; I run into many PHP projects a few years old that are just plain PHP, more often than frameworks, and that actually makes it easier to jump in, since plain PHP from a decade ago still works after an update.

And then, once you have it running on your laptop, it is usually full of these 'corner cases' (lazy input/output handling, I would call it) which don't age well and often hide bugs.

On the other hand, running into Java or C# projects is hardly ever an issue; it's a lot of boilerplate (many request/response/DTO models, entities, and layers), but it's readable immediately, and if it still runs, it means that all data is validated and I can trust it.

The dilemma of boilerplate vs dynamic languages is a thing of the past. Kotlin makes code even clearer than Java while having the sexiest syntax out there. It's 100% compatible with your Java code, so you can incrementally migrate starting now!

The dilemma was always a false choice. You don't need Kotlin to get rid of your boilerplate. Java is just as capable of being concise as any other language. Java's verbosity is a cultural artifact, not a technical one.

No, it is not as capable. The cultural verbosity is only a subset of the verbosity in Java. The cultural kind is great, as those long API names allow for standardized and maximal clarity. The kind induced by language syntax is accidental.

https://www.mediaan.com/mediaan-blog/kotlin-vs-java Things like state-of-the-art type inference, data classes, lambda syntax, no semicolons, and a ton of other things allow for reduced syntactic noise.

Don’t get me wrong. I use Java every day and quite enjoy it, but when I compare it to writing in Go or Kotlin, Java does sometimes feel a little like metaprogramming. Especially if it’s a Spring app, which is quite common.

This is obviously a matter of taste but I enjoy Java more when I stay away from Spring.

So, technically, you can get rid of a lot of the more egregious clutter with things like Project Lombok.

But I'd argue that, if you need to lean on compiler plugins to do it, it's sort of a rhetorical Pyrrhic victory.

And I would argue that adopting guest languages, alongside the extra complexity that they bring on board regarding extra IDE plugins, more layers to debug (has to pretend to be Java to the JVM), FFI issues (calling coroutines from Java), and creating their own library eco-system is not worth the trouble, beyond some short term feel good.

As long as the platform exists, its main language is the one holding the keys.

Can you describe this a little more? How did the verbosity enter the culture?

Kotlin has sealed its fate by becoming Android's main language.

On the JVM, its novelty will fade while Java adopts whatever features prove relevant to the millions of Java developers out there, just as happened with all the others that came before.

And I remember the days back when Groovy was supposed to be the scripting language for JEE, and Spring was going full speed with it as a Java alternative; we even got a strongly typed variant.

I've discovered in large, long-lived Java codebases that it's common to Greenspun expressiveness and malleability back into the language through heavy reliance on things like Spring, stringly typed mechanisms, and dynamic code generation.

The impact on maintainability is about what you'd expect. Maybe, on an abstract level, the software isn't as complex as what you can do in Ruby, but the verbosity of the language brings its own maintenance challenges. I can't speak to Ruby specifically, but, in practice, I haven't found that I'm all that much more afraid to change things in a large Python codebase than a large Java one.

I'm now coming to think that the real challenge is dynamism in general, regardless of whether it's built into the language or implemented as a library solution. So, if Java has an advantage here, it's simply that it tends to discourage people from overdoing it by making it awkward to do.

What I find interesting about dynamism is that every templating library I’ve ever used has a tendency to become PHP over time. Even templating libs in PHP! It’s fascinating to watch, since each new feature really makes sense: certain logic is trivial in the template that would be way more complicated elsewhere, and so the template lib is under pressure to add little things. Next thing you know, the complexity and feature set of the lib is its own beast to deal with.

Agree here, admittedly I have no prior Ruby experience, but I've inherited a code base in Rails. There is so much magic that goes on implicitly. I end up jumping around everywhere to see why something unexpected is happening.

Have you used sorbet on these codebases? I'd be curious if there'd be a benefit here in terms of maintainability.

Sorbet is the only way I can function in anything larger than a 10 line script these days. I’ve started adding it to personal projects because it makes revisiting them later so much easier.

Without the LSP implementation out in the open, though, sorbet is just a type checker. It hasn't made much sense for me to start integrating it into my personal projects just yet, since I only get the benefits when I run tc manually. Once Stripe opens up the LSP, however, I can see it gaining widespread adoption.

I haven’t tried the open source release lately but it looks like master has the lsp option.

sorbet is relatively new and I've been avoiding ruby for the last two years

> Nowadays I would much rather slog through ten times as much Java boilerplate, because later I’d expect that I could still refactor code without fearing that the whole thing would collapse around me

Sure, but if you don't reach market fast enough you'll have no reason to slog through anything, period. Can you imagine writing something like Facebook using Rust? It's an almost ludicrous proposition.

Static typing requires that you know what you're building at a relatively fine-grained level--how the pieces will fit together at the source code level, at least roughly. No amount of automated refactoring could ever match the flexibility of dynamic typing when you're just trying to bang out features, let alone explore the feature space, which almost by definition is what any novel solution is doing--exploring. That's why I love using embedded scripting, particularly Lua. You can gradually move components outside the dynamic environment as their place in the architecture becomes fixed and well understood.[1] So-called gradual typing doesn't really permit the same kind of flexibility because it's not the static typing, per se, that burdens development, but rather that static typing has the effect of forcing a more rigid hierarchy of higher-order abstractions and interface boundaries. What you want is the ability to slowly solidify the architecture from the outside to the inside, not the inside to the outside.

Microservices are another way to address the problem, at least in theory, but in practice they don't actually resolve the real dilemmas. At best they just multiply the number of environments in which you're faced with the problem, which can both help and hurt.

[1] Statically typed scripting languages seem rather pointless to me, and more a reflection of a fleeting fascination with REPL.

I think if you were starting today, you would build the app as a set of backend APIs connected to an array of front ends (apps, mobile web, desktop web, 3rd party, etc).

Building some or all of those backend APIs in Rust from day 1 would not be crazy, or less productive than any other language.

I think differences in proficiency drive bigger differences in productivity than inherent language differences.

So if you were starting today to build Facebook, using a language you are already effective with would matter more than which language you use (there are plenty of people out there for whom that could be Rust, or Haskell, or Ruby, or PHP, etc).

After working with a 300,000 LoC code base of PHP I would gladly rewrite Facebook in Rust. It allows effortless atomic commits. When I build something I make an MVP of a first version, refactor, make an MVP of the next version, refactor, and so on.

Rust lets me make sure my code still works through refactors and code changes. With PHP I could easily break something that would only cause an error when that code actually runs. That's right, there is no sanity check at all until the parser reads that code.
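To make the contrast concrete, here is a small Rust sketch of the kind of check the compiler gives for free; the enum and names below are invented for illustration, not taken from the commenter's project. If a refactor renames or adds a variant, every non-exhaustive `match` stops compiling, instead of only erroring when that code path finally runs, as in PHP:

```rust
// Hypothetical order-status type; all names are illustrative.
enum OrderStatus {
    Pending,
    Shipped,
    Cancelled,
}

// An exhaustive match: if a refactor adds or renames a variant,
// this function fails to compile instead of failing at runtime.
fn describe(status: &OrderStatus) -> &'static str {
    match status {
        OrderStatus::Pending => "waiting for payment",
        OrderStatus::Shipped => "on its way",
        OrderStatus::Cancelled => "cancelled",
    }
}

fn main() {
    println!("{}", describe(&OrderStatus::Pending));
}
```

Add a `Refunded` variant and `cargo check` flags every `match` that needs updating, which is exactly the sanity check PHP defers until execution.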

Facebook uses a bunch of Rust, incidentally. They also use a lot of statically typed languages, and have even made some of their own.

Exactly this.

I was involved in developing large PHP and Ruby codebases, and they're so hard to maintain.

These days I just use Spring Boot. It strikes a good balance between speed to market (PHP/Ruby), maintainability, and performance (the next version should support GraalVM). Can't complain so far.

Won? Ruby fell off a cliff once it got to the point where people had to maintain that shit in production. I'm currently working on a large, mature RoR codebase and I'm switching jobs ASAP because it's incredibly painful to work with and feels like a dead end career-wise - and I like the gig otherwise: good product and a decent team - but the technology is draining so much of my life energy on nonsense it's ridiculous.

And the language is built for unmaintainability: standard methods abbreviated to a single letter, 2 or 3 aliases for a single standard method (map/collect and such), Rails overriding the conventions of the language (the ! postfix). And then there's the architectural mess of fat models, fat controllers, and mixins ("concerns") that completely break encapsulation - all actively killing code reuse.

PHP has long been a meme and is mostly legacy stuff or bottom tier work.

Dynamic languages are adding optional static typing because the value provided by tooling as projects scale is undeniable.

It took ES6/TypeScript to drag JS out of the dump it was in, with every library adding its own take on a class system; the maintainability jump from ES5 to TS is incomparable for anything larger than 1k LOC.

I say this a lot, but if you are this irritated by Rails, then you ought to see the 20+ year old legacy C/C++ some of us get to work with ;)

I started on C++ back in the VS6 days, and I wrote tons of it when I was into game dev and graphics programming - but I would not want to work on those systems and kinds of problems when I can get paid the same to work on higher-level stuff. The tooling quality and the slow iteration cycle when I'm really stuck on something lead to a lot of stress. I just want a low-stress dev environment where I focus on solving the problems I'm being paid to solve - not problems created by the tools, and then having to explain to stakeholders why something seemingly simple takes 3x the estimate. I like environments where I'm confident in my estimates; C# proved to be good in this regard simply because the tooling is there even when I hit a wall, and it has a relatively consistent uniformity that makes it easier to navigate unknown territory.

Funny, after 10 years of Java and other high level languages I was very happy to go work on C++ at a lower level and have been pushing myself lower down the stack as I go.

C++ these days is a far more pleasant beast than in the VS6 days; the tooling is pretty good, and the language can be tamed into something pretty elegant with the right coding standards and discipline.

I agree. My team is just more productive in C#, so we stay there as much as possible. As you say, the ability to work on just solving the actual problem is very undervalued.

This is one nice thing about working at Google. We have C++ codebases that are 20 years old, but you'd never be able to tell because they are still continuously worked on and even if not there's a team of people at Google who constantly run company-wide refactorings and the like to modernize things.

This is, IMO, one of the most underrated aspects of the monorepo, that has such a valuable impact over time.

It only takes a handful of extremely passionate engineers (and I really mean a handful, like less than 2% of engineers in an org) to raise the quality-floor of the entire codebase.

For sanity, sustainability and fairness I would rather that

> It only takes a handful of extremely passionate engineers

read:

> It only takes a handful of paid engineers, full-time assigned to the task

Anecdotally (I don't work at Google, and this might not apply there because it's such a large company): At the monorepo companies I've worked at, of the 2% passionate I mentioned, maybe half have naturally gravitated towards working FT on the infra teams, the other half make very occasional but astronomical contributions, while working full-time on an actual product.

What I mean to say is that with a monorepo, you leave the door open to essentially passion-driven contributions made by the other half of that 2%. In non-monorepo environments, the barrier to uniform adoption is too high for someone that isn't full-time assigned to the task to justify.

Some people are genuinely passionate about this stuff, and want to take on the task of migrating the entire code base, because it's a fun challenge and it'll improve their day-to-day work every day for the remainder of their employment. They're very rare, but they do exist, and with a monorepo those people are better-equipped to drag the company forward.

I don't get this - changing the code is not the problem - this is a political / marketing issue.

I have some news - passion is not enough, and more than 2% of people are working on things they are passionate about - it's just that only 2% get lucky.

I can think of a few instances in my recent career at a large monorepo company where I have built something on the scale of "passionate, could improve our working lives".

One died because ... well, I gave up. Two are used locally by my team and those I could persuade, whilst other solutions built by others for the same fix have gone on to be blessed officially (i.e. replaced by grassroots competition), and one was replaced by a fully mandated and funded project that spotted the need and just steamrollered over all the local fixes.

One is still outstanding and I think well worth pushing still.

But I don't dream of the big win where suddenly the Board says "why, without your glasses Miss Moneypenny you look ravishing".

I get paid, I work, and I try to make the world a bit better where I can. I am passionate about it. But I don't write blog posts about how passionate I am.

Maybe I should :-)

Ya, FB feels the same way, fwiw. I'm very sold on the benefits of monorepos now.

I've got a large codebase I've been maintaining since 1999 that I've continued to evolve and that is still in use today. I've been able to steadily improve it without breaking much. IDK, maybe it's being > 50, but I can appreciate less "dynamic" environments. I do web dev in Vue with a Go back end, but I know the Vue part is going to be a rewrite in 5 years. Who wants to keep doing that shit? Not this 52-year-old guy.

Modern refactoring and static analysis tools really make it far easier to keep old code bases up to date.

As a hobby thing, I'm taking an almost 30-year-old code base written in C and reworking it into C++17. I've become somewhat adept at this transformation over the years :-)

Yeah, the problem is that the large majority of the world isn't Google, software isn't their main business, and their codebases are maintained by a continuous rotation of external contractors that are paid by the tickets that they get to implement.

Yep, and don't get me wrong. Stray out of the monorepo and into the wilds of various git hosted code bases at Google and you'll find code rotting in the fields, maintained to a far lower standard (IMHO).

I'll see your legacy C/C++ and raise you legacy Fortran 77 :)

We needn't fill the young Rubyist's head with nightmares that deep, friend :)

I think you would really appreciate Rust for backend web dev. Quite a few people who were once upon a time hardcore rubyists gravitated to Rust and became some of its greatest contributors: Steve K., Sean G., Carl L., Florian G., and many more.

I like Rust a lot - especially since I have a C++ background - unfortunately I don't see any opportunities to move to it ATM, but I haven't really tried hard enough - maybe I'll take a month off after my current project to write something substantial in it and try to find a Rust gig.

PHP is definitely not a meme, it's got a ton of actively developed frameworks, PHP 7 was a huge step forward for the language and PHP 8 is shaping up to be as well.

It's been used at every workplace I've been at, sometimes well, sometimes not so well, just like any language. No idea what the point of blanket statements like this is.

Especially when said statements are just wrong. Anecdotal evidence from individual companies does not allow you to make representative statements about the entire internet...

https://w3techs.com/technologies/details/pl-php claims ~79% of websites whose backend is known use PHP.

https://w3techs.com/technologies/details/pl-ruby shows Ruby has grown from 2.5% to 3.5% over the last year.

Which standard function in Ruby is a single letter?

b, to_s, to_i, to_a, to_c, to_h, to_r, etc.

Should have said methods but you get what I mean. And I've encountered ambiguous and pointless abbreviations all over the place.

It's weird: when I was younger I used to love short-form syntax like that, as well as removing unnecessary punctuation.

But now that I'm older, I appreciate things being more descriptive and orderly, including strict use of semicolons, functions that say what they are doing (e.g. to_string), or being explicit about converting (e.g. static_cast<type>)

I think it's because I find trying to make everything as succinct as possible ends up trying to be too clever.

When I was younger I tended to write more code than I read. As I got older I transitioned to reading more code than I wrote. As that happened, my preference shifted toward code that self-documents.

I type fast enough that somewhat longer names don't hurt. Also tab completion has existed both in IDEs and in shells for many years now. But just like very nearly everyone else I fall into the "7 ± 2 items in working memory at once" capability range. Having to remember what a command/function/syntax element does instead of reading a word that says what it does takes away from that working memory.

Long-term memory is similar, while it's not bounded in the same way it takes time and effort to develop. I'd rather type `git new-branch` than have to remember `git checkout -b` and I certainly don't alias it to `git nb` or something: I can type `git n<TAB>` and let that complete it.

Well said. "Clean Code" goes into this a lot and it helped me realize why I was starting to become more and more bothered by looking back at the short variable names I wrote years ago on my projects.

When I was first learning Java I loathed the verbosity, but I was coding everything in vi. Nowadays, with good IDEs, I don't mind Java's verbosity or even the boilerplate.

I find myself more annoyed by dynamic languages, because the tooling just isn't at the same level. All sorts of 'hints' that I rely on in Java simply aren't there in other languages, and the IDE throws up its hands and is like '...? I guess this is right? Godspeed, sir', and I wind up having to go look up documentation rather than ctrl-clicking into underlying functions and code. It's really annoying, and it's made me more appreciative of statically typed languages.

Interesting, I’d never seen or used .b or .p that someone else mentioned. I agree those definitely should not be part of the standard library. Not sure about to_i etc.; we use those pretty often and they've never really been an issue.


I think you took the phrase too literally. I'm assuming that by "won" they didn't mean that Ruby and PHP will dominate the web landscape for eternity. I'll just point out that dynamic programming languages have, for most of the web's history, been the primary tool for developers.

The meme here is that your comment reads like a parody of an outraged junior programmer at their Dunning–Kruger peak.

They've "won" so far because statically typed languages were cumbersome and unpleasant to use, but this is changing, and dynamic languages are learning some type tricks too.

The issue here is with Rust: its strengths are mostly irrelevant for the web and its weaknesses (particularly slow development compared to the competition because of having to pacify the type checker) are really important.

OP is painstakingly beating around the bush, but what they're getting at is that Rust + webdev = mismatch.

For a lot of "usual" webdev, any reasonably modern language will do. Even modern JavaScript will be fine. Something like C# would have been way better, though, but we cannot change that now.

However... software development has become a mess of slow technologies and abstractions one on top of the other.

Some people are working routinely in a text editor that is behind three operating systems: the VM/hypervisor, the usual operating system (Windows/macOS/Linux), and then a browser instance (which is like an operating system now). Then add all the drivers, libraries, frameworks etc. that go in between.

While I don't like that WebAssembly is yet another abstraction, at least it is a chance for a resurgence of systems programming languages and a return to proper software engineering...

Why can't you use C#? .NET Core (and the upcoming .NET 5) is pretty decent to work with. I'd still rather use Node for the most part, though.

I have been playing with Rust, but not sure about how good an idea it is for the front end yet though.

Well, you can compile the .NET runtime into Wasm, send it to the client and run C# on top of it, but then you are adding yet another layer.

I didn't have the impression the discussion was in a web context. In which case I'd lean toward straight JS/React or Rust/Yew.

Pretty sure the parent comment was talking about client side.

You can use C# on the front end now with Blazor. It's pretty new though so I don't know how good it is yet.

I don't totally agree about its strengths not being relevant. Memory safety is an extremely low bar that the web has generally managed to meet, but Rust has a lot of value elsewhere. The author talks about significant performance improvements, which would be very welcome on many sites, as well as improved error handling and data validation patterns.
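As a hedged illustration of the error-handling and validation patterns being referred to (all field names and rules below are invented for illustration, not taken from the article), Rust's `Result` forces the caller to handle the failure case at compile time:

```rust
// Hypothetical signup form; every name and rule here is illustrative.
#[derive(Debug)]
struct Signup {
    email: String,
    age: u8,
}

fn validate(email: &str, age_raw: &str) -> Result<Signup, String> {
    if !email.contains('@') {
        return Err(format!("invalid email: {}", email));
    }
    // str::parse surfaces bad input as an Err instead of a silent zero/NaN.
    let age: u8 = age_raw
        .parse()
        .map_err(|_| format!("age is not a number: {}", age_raw))?;
    if age < 13 {
        return Err("too young to sign up".to_string());
    }
    Ok(Signup { email: email.to_string(), age })
}

fn main() {
    // The caller must acknowledge both outcomes; ignoring the Err arm
    // is a compile-time warning/error, not a surprise in production.
    match validate("a@example.com", "30") {
        Ok(s) => println!("ok: {:?}", s),
        Err(e) => println!("rejected: {}", e),
    }
}
```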

Most things, thank goodness, are not SPAs, nor do they have to be.

The problem here is the concept of SPA itself - it's a complete hack which is plastered over with various tricks to make it halfway usable. This has been going on for years now.

Websites on the other hand are slow because of the tracking and the ads which load tons of JS. Remove that and the web will be blazing fast.

> Remove that and the web will be blazing fast.

With sufficient ad blocking and JS white listing, I can attest to this.

After removing ads and analytics, sites that rely on JavaScript to render significant amounts of content are the slowest.

If you have A LOT of content (e.g. a table with 10,000 rows containing rich data), it will be faster to use a JS virtualized list than to have all those 100k+ DOM elements computed at once.
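The core idea of a virtualized list is just windowing arithmetic: render only the rows intersecting the viewport. A minimal sketch (in Rust, the thread's topic language; the fixed row height and function name are assumptions for illustration — real libraries also handle variable heights and overscan):

```rust
// Given the scroll position, compute which rows need real DOM nodes.
// Returns a half-open range [first, last).
fn visible_range(
    scroll_top: usize,
    viewport_height: usize,
    row_height: usize,
    total_rows: usize,
) -> (usize, usize) {
    let first = scroll_top / row_height;
    // +1 covers a partially visible row at the bottom edge.
    let count = viewport_height / row_height + 1;
    let last = (first + count).min(total_rows);
    (first, last)
}

fn main() {
    // 10,000 rows of 20px each, a 600px viewport, scrolled to 4000px:
    let (first, last) = visible_range(4000, 600, 20, 10_000);
    // Only 31 rows get rendered instead of 10,000.
    println!("render rows {}..{}", first, last);
}
```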

I am currently writing a hobby project in Rust with Rocket and Diesel. I could bang out a working prototype very quickly with either Python or something like ASP.NET, but I picked Rust because I want to get better at it.

Many of the things you say are true - Rust libraries in general need some love and polish before they can be beginner-friendly, but some are getting there. The difference between Diesel and Rocket, for example, is quite stark. The former has only the barest minimum 'examples' and 'guide', if they may be called that, and I feel like I'm expected to read and understand its source code to become really proficient with it. It takes a lot of experimenting and trial and error to do anything beyond the basics. Rocket, on the other hand, has a very comprehensive guide with useful examples and, so far, has been enjoyable to work with.

That said, there are still a lot of "convenience" features missing. My current notable example is forms. As I'm doing it (I haven't looked into any addons to Rocket for this), I have to write out the HTML for the form myself, along with any Javascript I might require for validation, etc, then write the server-side methods for GET/POST, making sure to maintain the state myself. In this regard, something like Django's effortless ease to get a form on screen and store its data into a database really showcases where Rust (Rocket) still has a long way to go.
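To make the "write the plumbing yourself" point concrete, here is a rough, framework-agnostic sketch of the parsing/validation half of form handling; everything below (names, error messages, the lack of percent-decoding) is invented for illustration and is not Rocket's actual API, which automates the parsing step with a derive:

```rust
use std::collections::HashMap;

// A hand-rolled version of what a form helper would generate.
#[derive(Debug, PartialEq)]
struct CommentForm {
    author: String,
    body: String,
}

// Parse "key=value&key=value" pairs into a typed struct,
// reporting missing or invalid fields as errors.
fn parse_form(raw: &str) -> Result<CommentForm, String> {
    let fields: HashMap<&str, &str> = raw
        .split('&')
        .filter_map(|pair| pair.split_once('='))
        .collect();
    let author = fields.get("author").ok_or("missing field: author")?;
    let body = fields.get("body").ok_or("missing field: body")?;
    if body.is_empty() {
        return Err("body must not be empty".to_string());
    }
    Ok(CommentForm {
        author: author.to_string(),
        body: body.to_string(),
    })
}

fn main() {
    println!("{:?}", parse_form("author=ada&body=hello"));
}
```

Django generates roughly this, plus the HTML and the database write, from a single form class, which is the gap being described.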

Hopefully, with more people using it, the tools and libraries will improve and mature, particularly the documentation.

When all is said and done, I enjoy "slogging" through Rust a lot more than I did working with Django, even with the slower progress. Something about this language really speaks to me.

Exactly. The frameworks aren't there yet but I think Typescript is pretty much ideal for web dev. We're still using dynamic languages for web dev for mostly legacy reasons now.

Not sure if "ideal" is a good choice of word here. We don't know what's still coming. It surely has some flaws (many inherited from JS). There is stuff like ReasonML/Elm/PureScript that seems more ideal to me from a pure language perspective (not ecosystem, job/talent market, etc.).

Yeah sure there are many better languages than TS in an absolute sense but TS is the only one I see with the momentum to displace the current incumbents.

It's still a single-threaded runtime. There's one platform that's ideal for the web, BEAM, but unfortunately it doesn't have a statically typed language. Elixir and Erlang are both great languages, though.

I can't speak to whether or not the BEAM is ideal for the web, but I will say I've been watching the Gleam project very closely because having static typing with the BEAM is a dream of mine.

I think they have "won" in certain domains. Java and C are still widely used in loads of places, though less so in web dev.

> This really nails why languages like PHP and Ruby have won out over static typed

It really doesn't.

Languages like PHP and Ruby "have won out" over statically typed languages because the representatives of statically typed languages at the time were Java and C++, both of which were bad (they still are, but they were): verbose, difficult (and verbose) to leverage for type-safety, missing a bunch of tremendously useful pieces (type-safe enums to say nothing of full-blown sum types), nulls galore, …

As a result, the gain in expressiveness and terseness (even without trying to code-golf) of dynamically typed languages was huge, the loss in type-safety was very limited given the choice was "avoid leveraging the type system" or "write reams of code because the language is shit", and the fast increase in compute performances more than compensated for the language's loss in efficiency.

And that's before talking about the horror show that Java's web frameworks ecosystem was circa 2006.

> Languages that tend to work well with it are those that are highly flexible around the corner cases of these technologies, and allow you to quickly iterate through the process of gluing them together.

That's really complete hogwash outside of the client, which isn't what we're talking about here, since neither PHP nor Ruby runs there. On the server you have clear interfaces between "inside" and "outside", and there's no inflexibility to properly taking care of cleaning up your crap at the edges. Quite the opposite, really.

I think this is spot on. IMHO, everyone that uses Rails knows it's flawed. The problem is that there's not a better option out there that:

(1) Is "batteries included"

(2) Has the gem/engine ecosystem where basically every problem is already solved.

(3) Expressive code like `has_many :things`

If we could get those things + static typing + performance everyone would switch. But that doesn't exist.

That alternative in 2020 is Python 3.7+ with typing annotations.

Python is very "batteries included," and there is a large set of existing packages for most things. One can monkey-patch things and invent cute syntax, but with most Python it's rarely hard to reason about what's actually going on.

It's easy to bag on parts of python (eg some syntax and terrible distribution story), but I feel much less crazy (and suspicious of every punctuation mark) when working on a modern python app than when working on a modern ruby app, especially in the presence of rails.
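For the record, a minimal sketch of what those annotations buy you (the names are made up): a checker like mypy rejects the unsafe access below before anything runs.

```python
from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    """Return a username, or None when the id is unknown."""
    users = {1: "alice", 2: "bob"}
    return users.get(user_id)

name = find_user(1)
# mypy flags a bare `name.upper()` here: `name` may be None
if name is not None:
    print(name.upper())  # safe after the narrowing check
```

The point is that the annotations surface "forgot the None case" statically, instead of via a test run or a production traceback.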

.NET and Java have enough batteries, 20 years of libraries solving every problem, several expressive languages available, alongside AOT/JIT compilers.

.NET has the problem of being an MS product. It's only relatively recently that they've decided to be fully cross platform and open source. I'm not sure if it's all the way there yet, but it could be a viable option.

Java is fine as a programming language. Java as a web app more or less means Spring. And Spring means XML files and annotations which come with their own problems. Kind of an out of the pan and into the fire thing.

You can get away without using Spring and do quite fine, but there's definitely a risk that if you invest in the JVM ecosystem you'll get hired to work on Spring.

(alternatives like Vert.x, the less-annotation-heavy quarkus, etc).

For Java web dev I actually like Kotlin better - ktor is pretty nice.

Plenty of companies don't care one second that it is an MS product; in fact, it is a positive in their eyes regarding product support and tooling.

First of all, there is more to Java web development than JEE and Spring, which in any case cover many deployment scenarios and cross-system integrations that other ecosystems don't have a mature answer for.

XML is beautiful; there is yet to be a format that supports machine manipulation, IDE graphical tooling, comments, and schema validation as well as XML does.

Anyone that praises Rails will be right at home with XML and annotations magic.

because the representatives of statically typed languages at the time were Java and C++

And Pascal, Ada, Haskell, Eiffel, Standard ML, ...

Operative concept: representative.

None of the languages you mention had any sort of widespread visibility at that time, meaning they were not representative of the (overwhelming) majority's experience with statically typed languages.

As far as I'm aware, Pascal and Ada were still significant in the early 90s (i.e. when Python arrived). You're right that they were far less so when PHP entered the picture, but it's not as if they were dead (Borland Delphi and PHP 1.0 were released the same year).

> Pascal and Ada were still significant in the early 90s (ie when Python arrived).

That is not the timeline we're talking about here. Dynamically typed languages started taking off at the turn of the millennium and really exploded in the second half of the aughts. Java didn't even exist in the early 90s.

But it's not as if the programming community collectively just forgot about these languages (and, as mentioned, Delphi and PHP are in fact contemporaries). The question remains, why did new dynamic languages rise, but already existing static languages decline? That requires an explanation.

Yep Delphi 1.0 was launched in the mid-90s and had a good run for some time after that.

None of which really had mainstream visibility with the kind of people who actually made the decisions about what language your company would be using.

Yep and on the dynamic side there were also Erlang, Python, Lua, Lisp, etc.

I think you get it backwards. Things are disjointed, loosely coupled because highly dynamic languages have won. The flexibility of dynamic languages, which can be great when working around dynamic corner cases, does not force developers into fixed, well-defined contracts keeping everything loose and disjoint.

PHP is highly to blame. PHP, being a scripting language, is easy to deploy and keeps chugging along at all costs. It may do something nonsensical, but it will try to chug along to completion without aborting. It creates a system where it is easy to write and deploy something that usually works.

I completely disagree with this.

I work on a large C# platform that interops with a ton of vendor APIs, the majority of which are also written in typed languages. The disjointed, loosely coupled nature of web systems is what causes problems. In particular, knowing what a "correct input" is to web systems is very difficult, where correct means:

1) Passes the API's immediate validation

2) Passes validation inside API's longer running processes

3) Passes business rules (it might be correct types, lengths etc, but fails anyway because of an incorrect combination of API calls)

Some of our vendors use Elixir, some use PHP, many use C# or Java. I really don't notice any significant difference between them on that account. The issues are all down to the complexity of the software and business requirements and that the whole thing is a giant distributed system.

> knowing what a "correct input" is to web systems is very difficult

I cut my teeth programming an XML web service using soap and while I can't say that it was as drop-dead simple as a restful HTTP web api these days, between the wsdl and the uddi, it sure was convenient to be able to know as a client what methods were being exposed and what inputs they expected.

Sometimes I wish the W3C had not required XML for those web services, so that we could do that with JSON plus whatever add-ons would give it an enforceable schema like XML's. WSDLs were underrated.
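That enforceable-schema idea does exist for JSON today (JSON Schema). A stdlib-only Python sketch of the concept — the schema shape and field names here are illustrative, and a real project would use the actual JSON Schema spec with a library such as `jsonschema`:

```python
import json

# A machine-readable contract for a JSON payload, checked before the
# request is processed: the same role a WSDL played for SOAP inputs.
SCHEMA = {"name": str, "age": int}

def validate(payload: str, schema: dict) -> dict:
    data = json.loads(payload)
    for field, expected in schema.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected):
            raise ValueError(f"{field} must be {expected.__name__}")
    return data

validate('{"name": "ada", "age": 36}', SCHEMA)   # passes
# validate('{"name": "ada"}', SCHEMA)            # raises ValueError
```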

I agree. It was nice being able to grab the WSDL from say, a .NET web service, and run your maven script on your Java app to generate the interop boilerplate and just start coding against a type-safe interface.

I have seen horrible JavaScript, but I have also seen horrible type-safe code. It is always a compromise between errors, readability, and productivity.

Attaching text in form of a script anywhere in the DOM can be abused, but it is also insanely practical. I don't think Javascript is too horrible. I think the whole toolchain to get minimized and bundled JS is. That is why I am wary of TypeScript. Yes, I see its benefits, but I don't like cross-compiling if I can just not do it.

Granted, I am no web developer and my "projects" on the web are tiny. I completely understand someone who likes to take on the increased effort to use TypeScript, especially if their project isn't just making a text blink. If it reached a certain size, I would probably look into it too.

Webassembly looks interesting, but will take some time to establish itself. There are also disadvantages to that, since it could reduce the openness of the web. All the tracking we are subjected to could only be inspected by network traffic instead of looking into the source.

I agree with both of you in that I don't think the direction of the origin really matters.

The problem is when dealing with other teams/products/APIs it's a mess. Varying standards, varying strictness, corner cases everywhere. There are plenty of Java and ASP.NET APIs that are absolute nightmares to integrate with despite their strictly typed beginnings.

If you have a highly federated space, you control everything and thusly can do cool things like Rust, Lua, what have you. But, most engineers don't get such a luxury. Languages that can be rapidly morphed to accommodate all those situations achieve shipped solutions to business problems faster.

In the end, that makes money and that turns our world.

Regardless, here's the loosely related XKCD's Standards: https://xkcd.com/927/

Applying a schema doesn’t fix an architecture problem.

Describing those fixed, well-defined contracts on top of the architecturally absurd stack that is TCP/DNS/TLS/HTTP/Ajax/DOM/JS engine/server-side GraphQL/RPC/REST/database... doesn't buy you anything but slower iteration speed and an unwieldy schema.

These components don’t have the same impedance, the same flavour to their design. Necessarily their schema would be a mess.

Also, what stops something like

    function add(int a, int b) -> int {
        return a - b;  // typed, compiles, and still wrong
    }

Typing is overrated. It has productive use cases for sure, but only in specific scenarios. It is no panacea.

What would be nice is typing like typescript - optional typing. Where i can provide a typings binding separate from the code (in the way that i can bring my own tests without changing anything of production code if you’ve made an unholy mess of testing). Same argument for optional typing outside the main source code - odds on you’re going to misunderstand or misuse types, I’d like to use them where they make sense and ignore your steaming pile of types. Impossible in a language like haskell where types are inline rather than annotations attached to code.
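Python's stub files are one existing realization of exactly this: the annotations live in a separate `.pyi` file that the checker overlays on untouched production code. A hypothetical sketch (file names and contents invented):

```python
# legacy.py -- production code, shipped without any annotations
def total(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)

# legacy.pyi -- a separate "typings binding"; the runtime never loads
# this file, only the type checker does:
#
#     def total(prices: list[float], tax_rate: float) -> float: ...
```

Nothing in the runtime behavior changes, and if you disagree with upstream's types you can ship your own stubs instead.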

> Also, what stops something like [defining add as subtraction]

cmon... "it doesn't guard you against every logic error under the sun" isn't a great argument

There's no strong evidence that static typing helps reduce application logic bugs https://danluu.com/empirical-pl/

Under the same pressures it's pretty likely that the same teams that deploy dynamic language code to production and get an unexpected nil deploy statically typed code to production which unwraps an unexpected None and throws exceptions or panics.

>There's no strong evidence that static typing helps reduce application logic bugs https://danluu.com/empirical-pl/

There's no evidence proving the opposite either. We can't say anything beyond that the existing studies don't really show anything. I think it's pretty obvious that the limit of static typing reduces logic bugs (since it can actually be used to prove statements about the logic), but beyond that everything is kind of up in the air.

>Under the same pressures it's pretty likely that the same teams that deploy dynamic language code to production and get an unexpected nil deploy statically typed code to production which unwraps an unexpected None and throws exceptions or panics.

There's nothing to support that this is 'pretty likely'. As far as we know, it could very well be that statically typed languages push you in the direction of handling it the right way. Or hell, it could be that a team using a dynamic language is more cautious and conscious of edge cases.

I don't understand this reasoning at all. You have some form of contracts, whether you specify them or not. If it's too messy to formalize, that's still going to be an issue when you're working on it in an untyped language. At best you're just sweeping all those ugly edge cases under the rug.

>Also, what stops something like

having a more precise specification. E.g. adding a commutativity requirement would eliminate your example.

> What would be nice is typing like typescript - optional typing. Where i can provide a typings binding separate from the code (in the way that i can bring my own tests without changing anything of production code if you’ve made an unholy mess of testing).

A contract system like clojure's spec sounds close to what you're describing.
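The flavor of such a contract system can be approximated in a few lines of any dynamic language; this decorator is a hypothetical sketch of the idea, not spec's actual API:

```python
def contract(pre, post):
    """Attach runtime pre/post-conditions without editing the function."""
    def wrap(fn):
        def inner(*args):
            assert pre(*args), f"precondition failed for {fn.__name__}{args}"
            result = fn(*args)
            assert post(result), f"postcondition failed: {result!r}"
            return result
        return inner
    return wrap

@contract(pre=lambda a, b: a >= 0 and b >= 0, post=lambda r: r >= 0)
def add(a, b):
    return a + b

add(2, 3)     # passes both checks
# add(-1, 3)  # raises AssertionError: precondition failed
```

Like the separate-typings idea above, the contracts sit alongside the code rather than inside it.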

Having ridden the 90's .com startup wave with an in-house application server written in a mix of Apache plugins and Tcl, I learned the hard way to never again rely on anything that doesn't bring a JIT or AOT compiler to the party.

I’m interested in knowing more about your story. Would you please take the time to tell us more about it?

Not much to publicly tell.

Portuguese startup based on our own version of AOLserver, based on the same principles of Apache plugins + Tcl + C, basically Rails before it was even an idea.

We got acquired by another, bigger Portuguese company (Easyphone), which alongside other acquisitions became Altitude Software.

Our stack was used in some of their products serving multiple top-tier companies in Portugal, and we created an IDE for our tools, written in VB.

Supported all major UNIX flavours and Windows NT/2000.

Eventually scale problems happened and we were looking how to tackle them while avoiding rewriting everything in C, as MSFT partner we got invited to try out .NET pre-public announcement, so we decided it was a good opportunity to rewrite our product in this new .NET thing.

Some of the team members eventually took all these lessons and founded OutSystems.

What's the hard lesson? Performance? You have more power these days. The bottleneck won't be the interpreter.

Performance yes, there is always more power as long as the bank account is full, and even then it isn't enough when working at scale.

The number of stories of code being ported proves otherwise.

Note I am not pushing away dynamic languages, only those that don't have a JIT/AOT as part of their canonical implementation.

Common Lisp, Julia, JavaScript, PHP (7 and later) are all invited to the party.

Meanwhile maybe JRuby or PyPy will eventually get more community love, instead of being the black swans.

But PHP doesn't have a JIT...

JIT is coming to PHP in version 8.

Just In Time for 2020!

Java, .NET, Golang are actively used for web development. PHP is popular, especially for small projects, but it definitely did not win, web development is a contested area.

> PHP is popular, especially for small projects

Not especially for small projects, for projects of all sizes - huge swathes of the internet still run on PHP, and it's not limited to blogs built on Wordpress.

More than a few gigantic sites (Wikipedia and PornHub come to mind) are built with PHP. This is highly anecdotal but I still see the .php extension all over the place, sometimes in sites that perform really well BTW (again, Wikipedia is a good example here).

They are also evolving, though. I don't know about Ruby and PHP, but in Python, gradual typing via type annotations really makes projects much easier to maintain and develop these days. While "mypy" (the semi-official type checker tool) still has a long way to go in terms of library support, it works really well and helped me find many issues by analyzing the code, as opposed to running it and finding the issues via tests (or in production).

So I think typing has its merits and languages like Typescript really make development safer and code easier to understand, while still making it possible to write and interface with untyped code if necessary.
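A small sketch of that typed/untyped boundary (the function names are invented): validate the dynamic value once at the frontier, and everything past it is typed.

```python
from typing import Any

def untyped_library_call() -> Any:
    # stands in for a dependency that ships no type information
    return {"count": "3"}

def parse_count(raw: Any) -> int:
    # the frontier: check the untyped value once, return a typed one
    value = raw["count"]
    if not isinstance(value, str):
        raise TypeError("expected a string count")
    return int(value)

parse_count(untyped_library_call())  # -> 3
```

`Any` is the escape hatch both mypy and TypeScript rely on for interfacing with untyped code.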

> I don't know about Ruby

Ruby is moving in that direction with the addition of Sorbet[1] for gradual typing. I have heard some discussion that this typing could become a formal part of Ruby 3.0.

1. https://sorbet.org/

I'm not sure this is really true - the subtleties of HTTP are not best handled with dynamic languages, and the actual HTTP API is fairly simple for most web app development. The areas with the most complexity related to the web itself usually have to do with servers, concurrency, etc., precisely where these languages tend to fall apart.

> This really nails why languages like PHP and Ruby have won out over static typed ...

The real reason is lowering the bar of entry, and that explains why the web is horribly broken.

The "bootcamp webshit" meme exists for a reason. That's not gatekeeping - lowering the bar to entry below a certain level leads to drastic decrease in quality.

You realize the web started with just HTML at first right?

The web was always meant to be a platform anyone could easily build on, whether they were experts or someone's little old grandma who barely knew what a keyboard was.

Not only is it gatekeeping, it's historically wrong and betrays a flawed understanding of the platform. The web has always been about ease of use and there have always been people who went out of their way to teach others how to make good use of it.

Yes, the web is broken, but the web is broken because it's no longer about everyone running a little shoebox of a server and instead now we depend on giants who see us as cattle.

A lot of it is affordances. Every function signature is a user interface for your fellow programmer.

And if you are used to poorly-designed UI, you will expect poorly-designed UI from yourself.

Less testing, documentation, guarding against typos, better IDE support, better performance.

But there is a whole new generation of web developers who don't bother to learn algorithms or low level programming, and just churn out code with that hot new framework. That kind of programmers are also the ones that choose a technology because "it looks easier".

The world would be nice if management people understood not all programmers are equal, the 10x programmer is not a myth. But here on HN or reddit you see most people arguing that all programmers are similarly productive. And what we get for this is more electron apps.

> the 10x programmer is not a myth

You are right that the quality of programmer skill and productivity has high variance, but the way people talked about the "10x programmer" was a vague vision that people pasted their ideas and personal bugaboos onto. So people got into increasingly heated arguments and talked past each other. When we create social concepts, we need to strive for something like falsifiability -- something that lets you look at an example and say "Well actually no, that's not 'high-performing programmer' behavior -- for {{describable reason}}"

Categories matter. We should shape our categories for human happiness and human effectiveness, but categories matter.

I think it's really not that some programmers do 10x more, but some people with the same job title do a different job. Some engineers take responsibility for product or for ecosystem; some engineers tick off tasks on a list. (Both can be very valid, and bleeding for product quality at a company is definitely not inherently good.)

It's less because there are a few 10x programmers, and more because there are a whole hell of a lot of 0.1x programmers.

True. That was a vague notion. I think Joel Spolsky's "Hitting the high notes" summarized it better.

> Less testing


For example, functions only take objects of supported types. Many invariants are verifiable at compile time. Especially when you have generics, structural typing, and ADTs, the expressiveness is pretty good.

Depends what one considers application level web development, over here it has been pretty much Java and .NET, with occasional C and C++ written libraries, during the last 20 years.

This is probably true .... but watch out for Node with TypeScript. You get that quick iteration with a decent amount of type safety most of the time, plus a drop down to the 'any' type for libraries that haven't written type definitions. I also use the existence of type definitions as a guide to how mature the library is :-), so that's a win-win.

This is a mistaken viewpoint. Every function you can ever write is bounded by a type; thus every function, whether you write it in Ruby, PHP, or Rust, can be described by a type.

Every time you write logic in your code, your brain is aware that the logic is dealing with a specific type. Writing a type signature on top of that is just additional instructions to the compiler about the type you have already specified in your logic. It is a minor inconvenience.

There are many reasons why Ruby and PHP won out. One of the reasons is many people misunderstand the power and flexibility of types. Once they understand this, they will know there is really no trade off between statically typed and dynamically typed. Statically typed languages are infinitely better and the only downside is a minor inconvenience.

First off, note that there is only one function in the universe that can really take every single type in existence and that function is shown below:

  -- haskell
  identity :: x -> x
  identity x = x

  # python
  from typing import Any

  def identity(x: Any) -> Any:
      return x
This is the only untyped function in existence. Every other function in the universe must be typed; whether it is typed dynamically or statically is irrelevant, because either way your code has to specify the type, in logic or in the type signature. You can see this happening in the examples below...

The above examples have no logic to constrain the type. The minute you start to add any logic to your function, it immediately becomes bounded by a type. Let's say I simply want to add one in my identity function.

  -- haskell
  identity :: Int -> Int
  identity x = x + 1

  # python
  def identity(x: int) -> int:
      return x + 1
The very act of adding even the simplest logic binds your function to a type. In this case it bound my function parameter to an integer. Whether you use PHP or Ruby or Rust, there is a type signature that describes all functions.

Let's say I want to do garbage code like write a function that handles an Int or a String? That can be typed as well....

  -- haskell
  data IntOrString = Number Int | Characters String

  func :: IntOrString -> IntOrString
  func (Number n) = Number (n + 1)
  func (Characters n) = Characters (n ++ "1")

  # python
  from typing import Union

  IntOrString = Union[str, int]

  def func(n: IntOrString) -> IntOrString:
      if isinstance(n, int):
          return n + 1
      if isinstance(n, str):
          return n + "1"
      raise TypeError  # unreachable for IntOrString inputs

Let's say I want to handle every possible JSON API in existence? Well, it can be typed as well.

   -- haskell
   data JSON = Number Float | Characters String | Array [JSON] | Map [(String, JSON)]

   func :: JSON -> JSON
   func (Array x) = Array (x ++ [Number 1.0])
   func x = x

   # python
   from typing import Dict, List, Union

   JSON = Union[float, str, List["JSON"], Dict[str, "JSON"]]

   def func(x: JSON) -> JSON:
       if isinstance(x, list):
           return x + [1.0]
       return x

There really isn't any additional flexibility afforded to you by a dynamically typed language; there is just the slight overhead of translating the types you already specified dynamically into types that are specified statically.

One caveat to note here (and this is specific to Haskell) is the lack of interfaces over record types: I cannot specify a type that represents every record containing at least a property named x with a type of int.
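For what it's worth, some static systems do express that record interface via structural typing; Python's `typing.Protocol` (3.8+) is one sketch of it, with the class names here made up:

```python
from typing import Protocol

class HasX(Protocol):
    x: int  # matches any object carrying an int attribute `x`

class Point:
    def __init__(self) -> None:
        self.x = 1
        self.y = 2  # extra fields don't break the match

def read_x(record: HasX) -> int:
    return record.x

read_x(Point())  # type-checks structurally; no inheritance needed
```

TypeScript's object types and OCaml's row-polymorphic objects work the same way.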

this was historically true, but I suspect in the age of the evergreen browser this argument has been seriously dented (but not invalidated)

> The analogy that comes to mind is that Rust is a really nice sports car with a great engine. It handles like a dream and you can feel the powerful engine humming while driving it. With the right open road, it's a great time. You can really optimize code, make great abstractions and work with data easily.

I prefer a different analogy:

Rust is high-end automated industrial equipment for machining high-precision metal parts that will fit perfectly with each other and which you can use to build anything you want, with high confidence that it will work and last a long time. But if you're under pressure to build lots of small, partially documented, constantly changing parts every day, you might be better off using more flexible equipment and materials (balsa wood, bailing wire, masking tape, etc.) to get the job done, even if the result will be messier and more prone to breaking.

> with high confidence that it will work and last a long time

And we need more of our software to be like that. Now if we could just do something about the constant pressure to compromise.

Certifications, and more flexible laws for returning software when it doesn't fit the purpose, will do it.

>>But if you're under pressure to build lots of small, partially documented, constantly changing parts every day, you might be better off using more flexible equipment and materials (balsa wood, bailing wire, masking tape, etc.) to get the job done, even if the result will be messier and more prone to breaking.

Python & Deno/Node fit the bill for the latter category. Would you consider Julia in the former or latter category for ML use-cases?

For ML/scientific computing, it can fulfill both roles, I'd say. Julia looks about as flexible and easy to use as Python, R, and matlab for experimenting and iterating quickly with throwaway code, and superior to those languages in every other important dimension except one: it lacks Python's gigantic ecosystem.

Julia has a much larger ecosystem in terms of things like numerical linear algebra, scientific computing, and differentiable programming. How do you use block banded Jacobians inside of ODE solvers? Python's ML ecosystems just barely got non-stiff methods, so advanced accelerations are fairly far away, whereas these are things that have worked with Julia's ML solvers for a long time now.

Before responding, first let me say thank you for all the work you have done and continue to do. Also as I wrote above, I view Julia as superior to Python in every other way. :-)

By "ecosystem" I mean much more. PyPi currently offers around a quarter million Python packages, versus thousands for Julia. Github code and activity using Python is also currently around three orders of magnitude greater than for Julia. Python is currently the fourth most popular language on Stack Overflow (after JS, HTML, and SQL). It's a gigantic ecosystem that has evolved over three decades.

Yes true. I think it's a lot like the old Android vs iOS stuff though: counting the number of packages doesn't mean all that much because so many have been abandoned that Python probably has a much higher percentage of junk out there. That said, Python's ecosystem is still larger, but that doesn't mean it covers all domains well. I can name a bunch of holes that it has off of the top of my head, and same with Julia too, so it's not like either supersets each other. So I just think the "but Python has a bunch of packages" is a lot more nuanced than that: in many areas of ML or webdev it is much more developed, but when you get out of that realm it can get sparse in some ways.

It's not just packages and repos, but also installed base, developers, forums, discussions, corporate backers, etc.

Otherwise, I agree that not all domains are well covered by any language :-)

Something that I think is worth discussing is that julia has far more package developers per user than Python. That may seem like a weird thing to prop up, it seems to just suggest that Julia users more often find themselves needing to implement something themselves rather than use a pre-existing library, and that's definitely true.

However, Julia also makes the experience of growing from a package user to a package developer basically seamless. A big part of what makes this seamless is that almost all julia packages are written in pure julia and thus relatively easy to read and understand for a julia programmer. This coupled with the strong introspection capabilities makes it so that users who would never be writing their own packages in Python end up contributing to the Julia ecosystem.

Coupled to this is the fact that julia is a highly composable language meaning that it's very easy to combine functionality together from two separate packages that weren't designed to work together. This composability makes julia's fewer packages have far greater leverage and applicability than Python packages and also makes it so that when you do need to make your own package, you're less likely to be reinventing the wheel every time.

Consider the fact that all the big machine learning libraries in Python all have basically implemented their own entire language with their own special compiler, all the requisite datastructures, etc. In Julia, that's rarely necessary. Flux.jl is just built using native julia datastructures and julia's own (very hackable) compiler.

Great points. Agree with all of them.

Interestingly, I see the same trends with Rust -- high ratio of packages to users (already ~43K crates, despite having only 3% share of Stack Overflow last year), easy packaging (`cargo publish`), and high composability (e.g., via traits, type parameters, etc.).

An obvious question is, why does Rust have 10x more packages than Julia already, and almost a fifth as many as Python? I think it's because from day one Rust has appealed to a broader group of developers (anyone seeking better performance and/or concurrency, with memory safety and modern language features like algebraic data types, will consider Rust). Julia, in contrast, has always targeted the scientific computing community, which contains a much smaller group of developers who seem mostly content with existing tools.

I think Rust scratched a deep itch by targeting people who are unhappy with C++ and definitely started off targeting a bigger, less competitive market than Julia.

Julia has a harder marketing job than Rust. Julia needs to convince people who use $slow_dynamic_language and $fast_static_language together that they'd be better served by just using Julia for both jobs.

Even more importantly though, Julia needs to convince people that its premise is even possible. People have deep-seated biases that make them unaware that it's even possible for a dynamic, garbage-collected language to be fast.

Rust on the other hand 'just' has to convince people that it's a better ~1:1 replacement for C++. Most people who have written C++ deeply believe that a better language is possible and yearn for that language.

Python, R and Matlab users typically don't believe that it's possible for a language to do what julia does.

Furthermore, Julia did initially spend a lot of marketing effort on the scientific community, which is somewhat small, and is more composed of people who just see their language as a tool that only needs to be 'good enough', so they're less likely to want to switch than say systems programmers who spend all day faced with C++'s inadequacies.

I think an even bigger aspect of this is that C++ users are technical people who make technical decisions. If Rust is a better tool for them, they will switch. For a lot of scientific programmers, or even just a lot of general Python users, they don't necessarily choose Python because they know 20 languages and think Python is the best tool for the job. A lot of people use Python because... they use Python and it's what they were taught. That's a much harder audience to go to and say "would you change to this technically better language?". Most just say "I'm not really a programmer, I'm a researcher/scientist/etc. so I shouldn't spend time learning more programming", and that makes it fairly difficult.


Speaking from personal experience, I would add that some people use Python, not only because that's what they already know, but also because... everyone else in their field is using Python and its ecosystem. New advances in their field occur largely within the confines of that ecosystem. It's hard to leave the pack behind.

Large ecosystems have significant network effects that act as barriers to new entrants.

Yeah definitely. That's what I was trying to get at in the last paragraph, and you might be right that it's an even bigger aspect than the other things I mentioned.

Additional considerations here:

Rust invested in a package manager very early, and specifically designed things to make it easy to publish packages.

Rust has a smaller standard library, and so you often need packages to do things that you may not need in other languages, which encourages the use of packages, which I think encourages the publishing of them as well, not to mention that there are gaps that would get filled by packages.

It's still a significant upgrade over JavaScript. I'm developing a web app with the backend in Rust and the front-end in Vue.js/JavaScript. Sure, Rust was slower to develop, but it doesn't throw errors like 'val is undefined' that send me off debugging to check the value of val. You do that beforehand in Rust, and that avoids a whole class of problems.

JavaScript is cheaper to get started with but becomes way more expensive to maintain. My Rust code, on the other hand: once it works, it keeps working. If it compiles, it probably won't bug out down the road.

I'd switch to Rust / WebAssembly once the libraries are mature/stable/documented enough. The overhead cost is worth it in the long run.

> JavaScript is cheaper to get started but becomes way more expensive to maintain.

I think this assumption is a bit too broad. There is another side of this:

In JS you write less code than in Rust and there is less explicit coupling in a dynamic codebase. So changes tend to be smaller and faster as well.

For example if you pass a top-level data-structure X through A, B... and C but only C cares about some part Z. Then you change how Z is produced in A and consumed in C.

If you are dynamic you just do exactly that, which is actually just fine. In a static language you have to change all the B's. Or you take the time and write a sensible abstraction over Z or X so in the future you don't have to change the B's anymore, which is better, but ultimately less readable and more complex.
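A rough sketch of that "abstraction over Z" option in Rust (hypothetical a/b/c functions standing in for A, B, C): making the pass-through generic means a change to Z's type only touches the producer and the consumer, never the B's.

```rust
// Sketch: B is generic over the payload Z, so changing how Z is produced
// (in a) and consumed (in c) doesn't require touching b at all.
struct X<Z> {
    z: Z,
}

fn a() -> X<u32> {
    X { z: 42 } // the producer decides what Z is
}

fn b<Z>(x: X<Z>) -> X<Z> {
    x // pass-through; compiles unchanged whatever Z becomes
}

fn c(x: X<u32>) -> u32 {
    x.z // only the consumer cares about Z's concrete type
}

fn main() {
    println!("{}", c(b(a())));
}
```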

Stuff that is well understood and specified in advance suits a language like Rust better. You get all these runtime guarantees and performance. You get all this expressive (but explicit!) power to fine-tune abstractions and performance. It's great.

But the closer you get to a UI (especially GUI) and non-technical, direct(!) users, the more dynamic you want to be. Because in this context you are more of a human to computer translator: requirements get discovered via iterations, interactions need to be tested and evolve etc.

You can get that kind of reliability on the front end with Elm, Purescript etc but investing in such niche languages is obviously a nuanced decision.

You're referring to Rust's static type system. What do you make of TypeScript?

I've tried TypeScript and I didn't really like it. Plus, you can ignore the strict types if you'd like, which is a slippery slope if you are in a hurry to get something to market.

For backend web development I was somewhat disappointed to see that many of the new web frameworks (all async) allocate extensively on the heap (for example lots of Strings in their http request types.)

I mean it works but I don't understand why I would go to the bother of thinking about lifetimes when it seems performance would be similar to Kotlin / Swift / F#.

The short answer is that because async is not done yet in Rust.

The long answer probably includes GATs (generic associated types), and the even longer answer starts with the amazing work that Niko et al. are doing to reinvent the internals of the Rust type checker. (Basically - if I remember correctly - the compiler team is currently refactoring big parts of rustc to librarify the type checker and replace it with a Prolog-ish library called "chalk", which is able to prove more things, so it allows better handling of GATs, which in turn allows better handling of async, where you get "impl Future<Output = T>"-s everywhere.)

The ergonomics of async/await are also still very much a work in progress. With better error messages, people will be able to forgo Box<>-ing and figure out what to use instead, and only heap-allocate where they must.

It's possible to write low-level async code with hand-rolled, manually polled Futures, and push/manage as much as possible on the stack. But ... that takes time, and performance is already "good enough". (And/or there are probably bigger performance gains to be had in other areas, such as scheduling.)

And, finally, boxing helps with compile times. (Because it's basically trivial for the compiler to prove that a heap-allocated simple trait object behaves well, compared to a stack-allocated concrete, but usually highly complex (plus unnameable), type.)

Can I ask what you mean when you say that async isn't done? async/await and Futures are on stable now. There is no runtime in the core/std but that's by design, there is no short term intent from the Rust team to do that. Tokio and async-std are both fairly usable as well.

The team itself described the stabilization of async/await as being in an "MVP" state. Stuff that's been fixed since then:

* error messages (I think there may be more to do, but they're better and better all the time)

* size of tasks (still more to do, but they've shrunk a lot since async/await was first implemented)

* async/await on no_std (initial implementation used TLS to manage some state, that's since been fixed)

Stuff still to do:

* Async functions in traits (this has a library that lets you work around this with one allocation, having this requires GATs)

* More improvements to the stuff above

* other stuff I'm sure I'm forgetting

> There is no runtime in the core/std but that's by design, there is no short term intent from the Rust team to do that.

This is true but with an asterisk; https://github.com/rust-lang/rust/pull/65875 ended up being closed with "please write an RFC," rather than "no."

Huh! Thanks for chiming in and that's good to know!

Side note, I just want to say that I have mad respect for you always finding a way to stay so positive while continuing to be so heavily involved with the community, even with job changes and all of that stuff going on. You're an absolute role model for how to participate in community management and a godsend for Rust.

Thanks. It has taken a ton of work, frankly. I made a conscious decision to try and improve in this way a while back, and while I'm not perfect at it, I'm glad that I'm moving in that direction.

As a complete noob to Rust, I found that attempting to manage a memory pool that is handed out per request caused all kinds of borrow-checker headaches.

My bet is people choose to allocate heavily rather than figure out how to share memory correctly with Rust. I need to learn more rust to get a hang of it so I don't allocate so much.

Are there any rust folks who have a great "list of memory management models for rust"?

Rust does not yet have local custom allocators ala C++. (You can change the global allocator, but this doesn't help the "memory arena" use case.) The feature is very much on the roadmap, though.

While this is true, this doesn't inherently mean you can't do this pattern. https://crates.io/crates/typed_arena for example. It depends on exactly how allocations and your data structure are tied together.
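One common safe-Rust workaround (a sketch of the general pattern, not the typed_arena crate's API) is an index-based arena: hand out indices instead of references, so per-request pooling doesn't fight the borrow checker, and everything is freed at once when the arena drops.

```rust
// Sketch: a minimal index-based arena. Allocations go into one growing
// buffer; callers get indices, and the whole pool is freed in one drop.
struct Arena<T> {
    items: Vec<T>,
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { items: Vec::new() }
    }

    fn alloc(&mut self, value: T) -> usize {
        self.items.push(value);
        self.items.len() - 1 // hand out an index instead of a reference
    }

    fn get(&self, id: usize) -> &T {
        &self.items[id]
    }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.alloc("request state".to_string());
    let b = arena.alloc("another allocation".to_string());
    println!("{} / {}", arena.get(a), arena.get(b));
} // arena dropped here: one bulk free instead of many small ones
```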

It would be amazingly handy if someday it was possible to direct Rust to make all heap allocations under a certain block within an arena. At the moment it looks like you have to rewrite any third party code that makes heap allocations to use the arena explicitly at each allocation.

Yep, it would. There’s certainly a bunch of work to do here still.

I often write my code to allocate at first just to get a feel for the API and business logic and then go through it later to remove them. Given that all of those frameworks are still fairly new it wouldn't shock me to hear that that's the case.

I'd be hard pressed to believe a backend in Rust gives only the same perf as Kotlin/F#. Especially if you write the Rust with an async stack, you will likely come out significantly ahead of the other options: small binaries, short startup time, less memory usage, more throughput, slightly shorter request cycles, ...

The only places where the langs you mention probably win are learning curve (F# maybe not so much), a.k.a. getting language-illiterate people up to speed, and compile times.

If you allocate and free small memory chunks at high frequency in a non-GC language, then it's quite likely that similar code in a GC'ed language will come out faster. The point of manual memory management isn't that it is "automatically" faster than a GC, but that it gives you a lot of control to reduce the allocation frequency (e.g. by grouping many small items into few large allocations, or moving allocations out of the hot code path).

Also, async itself doesn't magically make code run faster (on the contrary), it just lets you (ideally) do something else instead of waiting for a blocking operation to complete, or at least give you the illusion of sequential control flow (unlike nested callbacks).
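A tiny sketch of the "group many small items into few large allocations" point from above: the two functions compute the same thing, but with very different allocation counts.

```rust
// Sketch of reducing allocation frequency - the control that manual
// memory management actually buys you.
fn sum_boxed(n: u64) -> u64 {
    // One heap allocation per item (plus the Vec's backing storage).
    let boxed: Vec<Box<u64>> = (0..n).map(Box::new).collect();
    boxed.iter().map(|b| **b).sum()
}

fn sum_grouped(n: u64) -> u64 {
    // A single (amortized) allocation holding all items contiguously.
    let grouped: Vec<u64> = (0..n).collect();
    grouped.iter().sum()
}

fn main() {
    // Same result, ~n heap allocations vs ~1.
    assert_eq!(sum_boxed(1000), sum_grouped(1000));
    println!("both sums: {}", sum_grouped(1000));
}
```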

Modern allocators (jemalloc, mimalloc) do pooling optimizations just as well as a GC. The point of a non-GC language is usually about pointing directly to the source data instead of copying, and having value types that fit nicely together in the same memory page instead of jumping around the heap. I know a JIT can do all that, but reliably? We'd need a Sufficiently Smart Compiler.

Whether to use an async stack or not is very application dependent. For something closer to an application server than a proxy server I’m unconvinced async would be superior to threads.

I feel that Rust should be competing with C/C++ here though. I want a modern language with the same speed and control of memory usage. What's a little frustrating (and don't get me wrong, I'm rooting for Rust to get there) is that it feels like it should be possible to offer this, at least for threaded web servers.

There are other interesting languages in this space that I really enjoy playing with. Both Nim and Zig generate native code and are really fun, albeit very different, languages to work with.

I'm not sure. These reasons sound reasonable, but in web apps most of the time is spent on IO handling anyway. As long as you're using async implementations in any language with performant "threads", I'm not sure there would be a large difference in response times unless computation was involved in the request path.

You forgot development times.

I broke it down into:

> a.k.a. getting language illiterate people up to speed, and compile times.

With equally skilled people, I don't think these languages yield different dev speed, apart from the damage done by compile times.

Rust never has to run a garbage collector, because it already figured out the lifetimes of all the values in your program statically, with relatively minor help from the programmer. This may make your code faster or more likely to be correct than code written in some of those languages.

It didn't figure them out at all, the programmer had to figure them out the hard way and then code them in so that the compiler understands them.

GC is figuring out things mostly correctly and the programmer can focus on other things, at least until they hit a performance problem :)

For the most part, lifetime is inferred from scope, so there isn't anything special to figure out; objects live exactly for as long as you can see and use them. This also includes such things as lock handles, or references to refcounted objects.
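A sketch of that scope-driven behaviour, using a hypothetical Guard type that logs its own drop so the destruction order is observable:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Sketch: lifetimes follow scope. A value's destructor runs exactly
// where it goes out of scope, with no annotations needed.
struct Guard {
    name: &'static str,
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Drop for Guard {
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.name);
    }
}

fn scopes() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _outer = Guard { name: "outer", log: log.clone() };
        {
            let _inner = Guard { name: "inner", log: log.clone() };
        } // _inner dropped here, deterministically
    } // _outer dropped here
    let result = log.borrow().clone();
    result
}

fn main() {
    // The inner scope ends first, so "inner" is logged before "outer".
    assert_eq!(scopes(), vec!["inner", "outer"]);
    println!("drop order: {:?}", scopes());
}
```

The same mechanism covers lock handles and refcounted references, which is what the comment above is getting at.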

Only in the most trivial cases. Tons of real Rust uses copying or RC/ARC or even unsafe to avoid dealing with complex lifetimes because it's... complex.

Copying, refcounting and locking work within the exact same semantics. They're among the most typical tools that are used to transmit or share data, so I'm not sure why it should be strange to see them used in most real programs?

That doesn't change the reality of allocation though. Programmers in languages like C++ and Rust choose when and where to allocate memory, and if there's one thing we know about performant code, it's that we shouldn't be allocating and deallocating repeatedly. Nothing about lifetimes helps or hurts this programming model. Lifetimes (Rust) and RAII (C++) are just semantic models to help rein in the cognitive load of memory allocation.

I don't think you've understood the nature of the borrow checker in Rust. It doesn't add any extra runtime behaviour, in fact you could compile Rust code without running these checks (though I don't think rustc provides you with a way to do so). Rust's memory management tools are no different from C++: you have stack allocated structures with destructors, which the standard library uses to implement heap-allocated smart pointers, shared pointers, mutex guards, etc, and you can hand out pointers to any of these. The only difference in rust is it checks how these pointers get handed out so you don't get use-after-free, data races, etc.

I doubt that you'll write a large Rust program without relying on refcounting for at least some of your objects. Refcounting is not fundamentally different from a GC.

Reference counting is a kind of garbage collection (taken broadly), but it's quite different from tracing GC.

Naive use of RC doesn't look much like tracing, but as soon as you need cycle-aware RC like CactusRef, the algorithm looks a lot like a tracing GC: https://github.com/artichoke/cactusref#cycle-detection

That's a fair point, I should distinguish heap allocation from reference counting. It still seems somewhat tricky to replicate the well proven arena allocation approach of Apache and Nginx which avoids memory fragmentation in current stable Rust though.

For general arena allocation, you can use something like the bumpalo crate: https://docs.rs/bumpalo/3.4.0/bumpalo/

Nice analogy. Here's something new: Rust is not just a sports car, but an eco-friendly electric one at that.

One topic OP hasn't touched upon is the disadvantage of "pushing client side to the server". JS is terrible at handling concurrent processes. The browsers have been able to make do because they typically care about just one user.

In the server though, it's terribly important to handle thousands of users concurrently. JS engines suck at it.

To put it in plain words, if you are into SPAs and want server side rendering, among all the other issues the one sure wall you'd be hitting is scaling issues.

JS will never be able to construct and send HTML over as efficiently as Rust could. This is going to be the reason why Rust(or a similar tech) will eventually win the web.

I wish speed mattered more, but JavaScript on Nodejs is performant enough for 95% of websites out there. Most websites are in the long tail, and never need to handle more than about 10 requests per second. For most websites, team velocity matters more than the AWS bill.

A clean static rendering system with caching, built on top of react / svelte / whatever performs well enough for almost everyone. And every year cpu costs drop, V8 gets faster and JS libraries get a little bit more mature.

I love rust, but I don’t see it displacing nodejs any time soon. Maybe when websites can be 100% wasm and the rust web framework ecosystem matures some more. Fingers crossed, but I’m not holding my breath.

Yep, I don't think Rust is quite there yet for web dev. I didn't use Rust for any frontend interactions. The web app is an SPA written in Svelte. I only used it for the core state update logic, which benefits from the typing and performance boost.

Yep! You found a really great open road to drive Rust. I'm jealous :D. If I had an app with as interesting requirements, I'd certainly consider your approach

Love the analogy!

I think there's value in mixing Rust with languages with mature libraries like JS/Python/etc to optimize performance critical code paths - basically for things we've been using C/C++ so far.

There's also a lot of progress in Rust community, I believe it'll get mature libraries very soon in future.

I use Rust for web not for performance, but for strict typing + smart compiler, safe refactoring, errors handling and many other language features - Rust is an amazing language even when you don't need performance.

You might enjoy this blog post I wrote: http://kyleprifogle.com/churn-based-programming/

It would be even more enjoyable if the text was black, had to use reader mode.

Disadvantage of retina screens, I didn't even realize. Thanks for pointing out, i'll fix.

The first exposure is the hardest part. It's not the kind of language that one can just pick up and use effectively without first understanding the fundamentals, especially if using an async runtime. The investment is worthwhile, though.

> webassembly is faster than javascript

Everyone says this, but I would dispute it as misleading in a lot of cases. I've been experimenting a lot with wasm lately. Yes, it is faster than javascript, but not by all that much.

It's the speed of generic 32 bit C. It leaves a lot to be desired in the way of performance. My crypto library, when compiled to web assembly, is maybe 2-3x the speed of the equivalent javascript code. Keep in mind this library is doing integer arithmetic, and fast integer arithmetic does not explicitly exist in javascript -- JS is at a _huge_ disadvantage here and is still producing comparable numbers to WASM.

This same library is maybe 15 or 16 times faster than JS when compiled natively, as it is able to utilize 128 bit arithmetic, SIMD, inline asm, and so on.

Maybe once WASM implementations are optimized more the situation will be different, but I am completely unimpressed with the speed of WASM at the moment.
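For a concrete feel of the gap being described: a 64x64 -> 128-bit widening multiply (a staple of field arithmetic in crypto code) is cheap natively via u128, but has to be lowered to narrower operations under wasm and emulated far more expensively in JS, which lacks native 64-bit integer arithmetic. A sketch:

```rust
// Sketch: one widening multiply, often a single instruction natively,
// but a pile of limb arithmetic once compiled to 32-bit targets or JS.
fn mul_wide(a: u64, b: u64) -> (u64, u64) {
    let wide = (a as u128) * (b as u128);
    ((wide >> 64) as u64, wide as u64) // (high, low) halves
}

fn main() {
    let (hi, lo) = mul_wide(u64::MAX, 2);
    // u64::MAX * 2 = 2^65 - 2, i.e. high word 1, low word u64::MAX - 1.
    assert_eq!((hi, lo), (1, u64::MAX - 1));
    println!("hi={} lo={}", hi, lo);
}
```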

I also have a WASM crypto library, focused on hashing algorithms: https://www.npmjs.com/package/hash-wasm#benchmark

I was able to achieve 10x-60x speedups compared to the performance of the most popular JS-only implementations.

You can make your own measurements here: https://csb-9b6mf.daninet.now.sh/

Yeah, hashing in WASM seems to be fine in terms of speed, though 60x faster does still sound surprising to me. Hashes with 32 bit words (e.g. sha256) can be optimized fairly well in javascript due to the SMI optimization in engines like v8. I should play around with hashing more.

I was in particular benchmarking ECC, which is much harder to optimize in JS (and in general).

Code is here:

JS: https://github.com/bcoin-org/bcrypto/tree/master/lib/js

C: https://github.com/bcoin-org/libtorsion

To benchmark:

    $ git clone https://github.com/bcoin-org/bcrypto
    $ cd bcrypto
    $ npm install
    $ node bench/ec.js -f 'secp256k1 verify' -B js

    $ git clone https://github.com/bcoin-org/libtorsion
    $ cd libtorsion
    $ cmake . && make
    $ ./torsion_bench
    $ make -f Makefile.wasi SDK=/path/to/wasi-sdk
    $ ./scripts/run-wasi.sh torsion_bench.wasm ecdsa

I don't think he's complaining about WASM speed vs JS at all.

He wants WASM to be closer to C performance. I.e. to move further along this line:


>> webassembly is faster than javascript

> Everyone says this, but I would dispute it as misleading in a lot of cases. I've been experimenting a lot with wasm lately. Yes, it is faster than javascript, but not by all that much.

I think even if it is faster in general, you might lose all that advantage as soon as you have to cross the WASM <-> JS boundary and have to create new object instances (and associated garbage) that you never would have needed to create if you had used only one language.

Therefore moving to WASM for performance reasons on a project which crosses the language boundaries very often due to browser API access doesn't seem too promising to me.

I am writing an app that needs worker threads both on the backend and also on the frontend (because of some heavy processing of large amounts of "objects") and my experience with TS so far is very poor. JS runtimes are just not suitable for heavy concurrent/parallel processing. Serialization/deserialization overhead between threads is probably (much) worse than it would be if the worker threads were in Rust.

It's not a matter of speed here. It's a matter of enabling certain types of programs which are borderline impossible with pure JS runtimes.

So I will probably move most of the logic to Rust.

Javascript runtimes do fine with concurrent operations, but obviously are not intended for parallelism.

On the WASM side: Does WASM support real threads yet? Otherwise moving to Rust wouldn't really help you? If it's just "WebWorker" like multiple runtimes, you might still pay serialization costs to move objects between workers.

No, JS runtimes don't do "fine" with concurrent operations, unless you are "waiting". If you are doing heavy processing, the whole service freezes. That's indeed the primary reason I need worker threads.

Erlang's runtime does "fine" with its preemptive concurrency model, JS runtimes are a joke in this regard.

Have you tried async generator functions?

> Yes, it is faster than javascript, but not by all that much. ... My crypto library, when compiled to web assembly, is maybe 2-3x the speed of the equivalent javascript code.

2-3x may not be the 15-16x you see in native code, but it's still a massive speedup over already optimized code, and is likely enough to make a bunch of applications that weren't quite feasible on the web now feasible.

I think the point is that only certain use cases (usually related to number crunching like crypto, but undoubtedly games too) may see substantial improvements, they still aren't close to "native" speed, and nearly all other use cases won't see much if any benefit, especially compared to the additional complexity of another language, compiling to wasm, etc.

Plus, things like competitive games and what I'll call "pretty" games have to squeeze out as much performance as possible, and no hitching is acceptable in competitive games, which IMO means WASM is still a no-go for those types of games (although games that don't have this requirement undoubtedly benefit).

Another consideration here is that you can use languages that offer control over data layout as a natural feature, which can matter a lot for good cache utilization etc. In many cases the data layout matters a lot for performance.

You can do this in JS too with TypedArray and whatnot but the key word here is 'natural.'

I've been working on a game engine as a side project with C++ and WASM -- and there are already many improvements over what I was getting with JS due to less GC, better data layout (the data layout thing is also esp. helpful for managing buffers that you drop into the GPU). I don't think it was about 'pure compute' as much as these things. C++ and Rust give you tools to manage deterministic resource utilization automatically which really helps.

A bonus is that the game runs with the same code on native desktop and mobile.
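The layout point can be made concrete with the classic struct-of-arrays shape, sketched here in Rust - the "natural" version of what TypedArrays emulate in JS. Each field lives contiguously, so a pass that touches only one field sweeps one cache-friendly buffer:

```rust
// Sketch: struct-of-arrays layout. Updating only the x positions walks
// a single contiguous buffer instead of striding over whole objects.
struct ParticlesSoA {
    xs: Vec<f32>,
    ys: Vec<f32>,
}

fn main() {
    let mut p = ParticlesSoA {
        xs: vec![0.0; 4],
        ys: vec![1.0; 4],
    };
    // One contiguous sweep over the x coordinates only.
    for x in p.xs.iter_mut() {
        *x += 0.5;
    }
    println!("xs = {:?}, ys = {:?}", p.xs, p.ys);
}
```

The same buffers can also be handed to a GPU upload directly, which is the "managing buffers" point in the comment above.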

Chrome already has experimental support for SIMD, have you tried that as well?

Otherwise, eventually I expect WebAssembly to match what Flash CrossBridge and PNaCl were capable of 10 years ago.

No, but I've been meaning to test this. I did notice it was available in node.js with --experimental-wasm-simd. I hope this proves me wrong about wasm, but I'll have to try it.

Note that Emscripten has an implementation of C++ intrinsics based on wasm simd: https://emscripten.org/docs/porting/simd.html

Have you tried with firefox? Last I heard they've got far and away the fastest wasm implementation.

No. I've just been building with the WASI SDK and running the resulting binary with a small node.js wrapper script. So, I've only tested v8's WASM implementation so far.

Does firefox have a headless mode, a standalone implementation, or some CLI tool I can use to run a WASM binary? Running stuff in the browser is cumbersome.

You can use jsvu to grab command-line shell binaries for spidermonkey (firefox's JS engine), v8, and jsc (safari's JS engine) to toy around with.

I think there should be a headless mode. For example, you can run unit tests using the gecko driver (?) without Firefox showing up, running it as a background process to perform the testing steps and report results.

On algorithmic code, idiomatic Rust + WASM is often about 10 to 40 times faster than idiomatic JS. The problem however is that each call between WASM and JS has a hefty cost of about 750ns. So your algorithm needs to be doing a significant amount of independent calculations before you will see these performance differences.

> each call between WASM and JS has a hefty cost of about 750ns

Why is this the case? Can we expect it to go away as browsers mature?

But 2-3x speedup over a well written Javascript version running in a modern Javascript engine is quite impressive, isn't it?

One thing that's often overlooked is that even though "idiomatic Javascript" using lots of objects and properties is fairly slow, it can be made fast (within 2x of native code compiled from 'generic' C code) by using the same tricks as asm.js (basically, use numbers and typed arrays for everything). But the resulting Javascript code will be much less readable and maintainable than cross-platform C code that's compiled to WASM.

> maybe 2-3x the speed of the equivalent javascript code.

That is an insane difference even if nowhere close to native code.

Some engineering fields would go crazy over a 20% gain... 200% is huge!

I agree it's still significant, but this is not how wasm is being touted. Look at Wikipedia: https://en.wikipedia.org/wiki/WebAssembly#History

They even say asm.js is supposed to deliver "near-native code execution speeds". In my experience, it is nowhere close to native speed. People should avoid this kind of deceptive marketing.

On that front, I agree. Wasm is being sold as the greatest innovation ever, when it is just another virtual ISA with lacking features and performance.

WebAssembly with Rust is sometimes bigger and "just as fast as JS".

But it's way more predictable, no more crazy 95th percentiles.

I'd be interested to see how current-day WASM stacks up against current-day Java. Don't suppose you've ported your crypto code to Java? Other than the maturity of the JIT compilers, are there reasons we should expect WASM to be any slower?

In some ways it should be faster because it isn't garbage-collected. But I agree that would be a much better benchmark for what should be possible.

Hi Nicolo,

Here are my two (or more) cents:

1 - Javascript is fast enough for your use case. No one will notice the difference in a board game website.

2 - AI should probably be implemented in python anyway. As ugly as python can be, you shouldn't fight the world; there are too many free advanced AI algo implementations in python out there.

3 - Regarding "Limitations of TypeScript" (strict typing / data validation / error handling): these are arguable claims to begin with, but even if you think Rust is better on these fronts, they are not important enough to justify a reimplementation, throwing away TypeScript's advantages and taking on the risk involved in a new language and toolset. Yes, I can see the appeal as a programmer of learning new stuff, but seriously, you should have much better reasons to rewrite existing code.

BTW, if I were to consider a rewrite, I would have gone with Scala or Python; both are slower on the web, but seriously, it will not amount to anything meaningful in your use case. Scala has better typing, and contextual abstraction is a killer feature for DSLs like game-mechanics specifications. Python is the lingua franca of AI, and PyPy has greenlets [1]! Which is much cooler than people seem to realize. Specifically, it should allow writing immutable redux-like command-pattern operations as regular code, without the weird switches everywhere.

BTW2, I've contributed to boardgame.io, please consider staying with open source. We can build something like boardgame-lab together.

[1] https://greenlet.readthedocs.io/en/latest/

Hey Amit, good to hear from you!

> Javascript is fast enough for your use case. No one will notice the difference in a board game website.

No, actually. Have you seen how long boardgame.io's MCTS bot takes to make a Tic-Tac-Toe move? Not the end of the world, but certainly in need of improvement.

Yes, but AI search algorithms like MCTS are slow in general. Even with C, it will be very slow when you want the AI to be smart and consider many actions. IMO, you should train using python libs like [1] and move it to the browser with something like [2] OR run AI in the server. YES definitely writing an AI lib for the browser is a great goal, and as a programmer, it is super interesting. Still, it is tough, and time-to-market is much more crucial, as the entire codebase will change once it will interact with actual people.

[1] https://github.com/datamllab/rlcard

[2] https://onnx.ai/

Other than solved games like tic-tac-toe, game-playing bots can always use more performance because if you can search more efficiently, your bot gets smarter. Sometimes supposedly more sophisticated algorithms end up making things worse because they slow down the search.

It's an unusual area where functionality (what answer you get) and performance can't be separated. This is still true on the server.

To be clear, I'm not talking about the server implementation only about implementation on the browser.

And of course, Rust and C are faster than javascript, and it will be noticeable in search algorithms, but IMO, it will not cause user loss. It is smarter to get to the stage where you have those users as fast as you can, to validate this need.

So I would not go into implementing new AI libs in rust as part of a turn-based game engine just for the performance gain--mainly because it is a tougher project than building a turn-based game engine!

I would search for existing, viable solutions. As I said before, this includes server AI and server trained models running on something like tensorflow.js. Additionally, I will also try to research browser libs that may use compiled webassembly and webGL. In any case, I think it's not wise to build your own for this purpose.

Please ignore him whole hearty.

Some people want to write profitable apps in the shortest amount of time possible, while others want to advance the state of the art in technology.

IMO we always need more people in the second group. And sometimes, you can hit the jackpot and do both things at the same time!

"Please ignore him whole hearty."

My advice is not about profit.

My "time-to-market is most important" advice (not mine, it is a very sound and well researched strategy) is about making a sustainable project:

A) most passion-only projects are abandoned at some point in order to pay for food and rent. (Yes, there are a few exceptions among an ocean of failed projects.)

B) The other point is that even if you can run this for years before showing traction, having users means getting feedback, which will render some of your efforts needless.

Moreover, it was only a small part of what I said. I do think that rust doesn't make sense for other reasons, and doing a rewrite in a real project needs better reasoning otherwise you would keep rewriting forever.

Moreover2, I think you're misinterpreting the original author's intent. I'm not sure if there is interest in building AI in rust as a goal for itself. Seems to me more oriented towards the result than about the love for rust.

For a lot of code, if you write JS like you write C (avoid allocations by avoiding the creation of objects, arrays or closures) you should get very comparable performance i.e. within 30% to 50% of C performance

You can indeed avoid most of the js overhead if you use a particular subset of the language. In fact, that's how asmjs started, but:

- it's not JavaScript anymore, you'll lose most idioms you're used to work with, and it's a nightmare to maintain because it's pretty low level.

- it won't be as fast as it could unless the runtime is aware of the fact that you are using that subset (that's why asmjs used some annotation to run on a special engine on Firefox, and part of the reason why browsers moved to wasm).

I don't know about JS being slow. While working on https://curvefever.pro, it was not too hard to make the game run at 4K, 60 FPS for a 6-player multiplayer game that renders dynamic geometry with collisions that change multiple times per tick.

JS is pretty fast if you use TypedArrays, are careful with WebGL calls and pool objects (reduce memory allocations).

[TypeScript] does not actually ensure that the data you are manipulating corresponds to the type that you have declared to represent it. For example, the data might contain additional fields or even incorrect values for declared types.

This is only a problem if you are mixing untyped code with typed code, isn't it? Like when you are parsing JSON, you need to do typechecking right then, rather than just using an "any" type. The only other situation I have run into this in practice with TypeScript is when I'm using a library that has slightly misdefined types.

> This is only a problem if you are mixing untyped code with typed code, isn't it?

I find it a bit strange that people talk about this as "only a problem", as though it was some weird niche edge case and not an ever-present issue. The written-in-plain-JS ecosystem completely dwarfs the written-in-TypeScript one; unless you're doing something rather trivial you're quite likely to end up depending on a library that barely had the shape of its objects in mind during development, with typings (if they're even present) that were contributed by someone that's almost definitely not the actual library author.

Of course, if you're competent enough you can correct typings and always remember to run runtime checks on data that doesn't come from you and so on. But it's too easy for a beginner to fall into traps like mishandling data that comes from a dependency (especially when it's buggy/ostensibly has typings) - in my opinion, there should be a first-party compiler/linter option to enforce checks on any `any`.
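A sketch of the kind of runtime check on foreign data being argued for here (the `User` shape is invented for illustration):

```typescript
interface User {
    id: number;
    name: string;
}

// Runtime type guard: narrows `unknown` to `User` only if the data actually checks out.
function isUser(data: unknown): data is User {
    return (
        typeof data === "object" &&
        data !== null &&
        typeof (data as { id?: unknown }).id === "number" &&
        typeof (data as { name?: unknown }).name === "string"
    );
}

const payload: unknown = JSON.parse('{"id": 1, "name": "Ada"}');
if (isUser(payload)) {
    // Inside this branch the compiler treats `payload` as User.
    console.log(payload.name);
}
```

Typing the parsed value as `unknown` rather than `any` is what forces the guard: the compiler refuses property access until the check has run.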

You don't have to depend on badly-written untyped third-party libraries, just because there are a lot of them out there. Many projects and companies will avoid doing so. This is especially reasonable if your comparison is switching to a language like Rust; there are probably fewer third-party libraries available in Rust overall than there are libraries with accurate TypeScript definitions available.

"Badly-written" is of course subjective, but as for untyped/loosely typed I think it's a bit difficult to claim that people are avoiding using them when (for example) the typings for an obviously popular library like Express are chock-full of `any` types[0]. Including several (like request bodies) that should instead be `unknown`. I'm sorry but it's rather naïve to expect a beginner to TypeScript, especially one that's coming from JS, to not trip up at all using typings like that and a compiler/language spec that implicitly casts things declared as `any`.

0. https://github.com/DefinitelyTyped/DefinitelyTyped/blob/mast...
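To illustrate the `any` vs `unknown` distinction the parent is drawing (the payload shape here is made up, not Express's actual types):

```typescript
// With `any`, the compiler silently accepts any property access and operation.
const bodyAsAny: any = JSON.parse('{"amount": "not a number"}');
const total1: number = bodyAsAny.amount * 2; // compiles fine, yields NaN at runtime

// With `unknown`, the same access is a compile error until the type is narrowed.
const bodyAsUnknown: unknown = JSON.parse('{"amount": "not a number"}');
// const bad = bodyAsUnknown.amount;  // error: 'bodyAsUnknown' is of type 'unknown'
let total2 = 0;
if (
    typeof bodyAsUnknown === "object" &&
    bodyAsUnknown !== null &&
    typeof (bodyAsUnknown as { amount?: unknown }).amount === "number"
) {
    total2 = (bodyAsUnknown as { amount: number }).amount * 2;
}
```

With `any`, the bug surfaces as `NaN` somewhere downstream; with `unknown`, the beginner is forced to confront the missing validation at the point of use.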

You still have to validate input whenever you interact with servers, the operating system, or the user.

No, nominal types are extremely useful to ensure the correctness of your software. You can define a function to receive a FirstName and LastName instead of passing strings, so you cannot accidentally mix up the parameters, for example. There are several techniques to approximate nominal typing in TypeScript (https://medium.com/better-programming/nominal-typescript-eee... for example) and there is an open issue https://github.com/Microsoft/Typescript/issues/202 but it's still not being considered for implementation as far as I remember.
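A sketch of the branding workaround from that linked article, applied to the FirstName/LastName example (the `__brand` field name is one common convention, not anything TypeScript mandates):

```typescript
// Phantom brand members make two structurally-identical string types incompatible.
type FirstName = string & { readonly __brand: "FirstName" };
type LastName = string & { readonly __brand: "LastName" };

// The casts are confined to these constructors; everywhere else the brands are enforced.
const firstName = (s: string): FirstName => s as FirstName;
const lastName = (s: string): LastName => s as LastName;

function greet(first: FirstName, last: LastName): string {
    return `Hello, ${first} ${last}`;
}

const f = firstName("Ada");
const l = lastName("Lovelace");
greet(f, l);    // OK
// greet(l, f); // compile error: 'LastName' is not assignable to 'FirstName'
```

The brand exists only at compile time; at runtime these are plain strings, so the technique costs nothing.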

That's correct. You have to insert validation code at all the entry points, which was the case for me.

Moving to Rust doesn't eliminate validation altogether, but you don't have to do any type-related validation, which is nice.

Not sure if this is the place to ask this, but if someone does not have experience working with JavaScript, they might have trouble reasoning about this code:


edit: complete code


    class Person {
        id: number;
        name: string;
        yearOfBirth: number;

        constructor(id: number, name: string, yearOfBirth: number) {
            this.id = id;
            this.name = name;
            if (yearOfBirth < 1900 || yearOfBirth > 2020) {
                throw new Error("I don't understand you. Go back to your time machine.");
            } else {
                this.yearOfBirth = yearOfBirth;
            }
        }

        getAge(): number {
            const currentDate: number = new Date().getUTCFullYear();
            return currentDate - this.yearOfBirth;
        }
    }

    class Dog {
        id: number;
        name: string;
        yearOfBirth: number;

        constructor(id: number, name: string, yearOfBirth: number) {
            this.id = id;
            this.name = name;
            if (yearOfBirth < 1947 || yearOfBirth > 2020) {
                throw new Error("I don't understand you. Go back to your time machine.");
            } else {
                this.yearOfBirth = yearOfBirth;
            }
        }

        getAge(): number {
            const currentDate: number = new Date().getUTCFullYear();
            return (currentDate - this.yearOfBirth) * 7;
        }
    }

const buzz: Person = new Person(1, `Buzz`, 1987);

const airbud: Dog = buzz;

console.log(`Buzz is ${buzz.getAge()} years old.`);

console.log(`Airbud is ${airbud.getAge()} years old in human years.`);


Unfortunately HN doesn't support commonmark's triple backtick code blocks. You'll need to use 4 spaces before each line of code.

Two spaces. Four will work, of course, it just wastes a bit of horizontal screen space.

What’s confusing? I must’ve missed it in my cursory scan.

Dog and Person are structurally the same so you can assign a person to a dog and vice versa. But that's just how structural typing works and as a user of TypeScript I haven't run into a case where this'd be an issue.

Haha oh. Yeah, assumed that the Dog definition wasn’t worthless. Indeed TS is structurally typed. And that’s nice!

I worked on a web based editor. A library would give us a range to highlight, in 1-based coordinates. The editor control was 0-based. As you can imagine it was easy to forget to translate back and forth in one path or another. In a strongly typed language I would simply define two Range types and the compiler would eliminate the mistake. I assumed Typescript could help me in the same way but it allowed the two types to be interchanged silently because they had the same structure. Perhaps I was holding it wrong?

Typescript has two hacks that help with mixing of similar data and introduce somewhat-nominal typing - branding and flavoring [1]. Also see smart constructors [2] for more functional approach.

[1] https://gist.github.com/dcolthorp/aa21cf87d847ae9942106435bf...

[2] https://dev.to/gcanti/functional-design-smart-constructors-1...
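Applied to the grandparent's 1-based vs 0-based range problem, the branding trick from [1] might look like this (names are mine):

```typescript
// Branded line numbers: both are plain `number` at runtime, but nominally distinct.
type ZeroBased = number & { readonly __brand: "ZeroBased" };
type OneBased = number & { readonly __brand: "OneBased" };

// The only sanctioned ways to cross the boundary, so the off-by-one lives in one place.
const toZeroBased = (n: OneBased): ZeroBased => (n - 1) as ZeroBased;
const toOneBased = (n: ZeroBased): OneBased => (n + 1) as OneBased;

// Stand-in for an editor API that expects 0-based coordinates.
function highlight(start: ZeroBased, end: ZeroBased): [number, number] {
    return [start, end];
}

const libraryStart = 5 as OneBased; // what the 1-based library hands us
const libraryEnd = 10 as OneBased;
highlight(toZeroBased(libraryStart), toZeroBased(libraryEnd)); // OK
// highlight(libraryStart, libraryEnd); // compile error: 'OneBased' is not 'ZeroBased'
```

Forgetting a translation now fails at compile time instead of silently highlighting the wrong line.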

That's the most they could simplify the code to make the point?

A person value assigned to a dog variable

No[1], TypeScript is unsound in many ways. Most of them are intentional trade-offs to catch as many bugs as possible while not being too restrictive/allowing most JavaScript code to be annotated.

[1]: https://codesandbox.io/s/te0pn?file=/index.ts

Edit: Apparently TypeScript playground links cannot be shared :( Edit2: Published on codesandbox.io, hopefully that works

> The only other situation I have run into this in practice with TypeScript is when I'm using a library that has slightly misdefined types.

Unfortunately there's no way to know which libraries have defined their types correctly, so you end up having to check the types you get back from every library you use.

When getting external data, it's always good to do a combination of casting and validating. There are a lot of good libraries to help you with this: https://github.com/moltar/typescript-runtime-type-benchmarks
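One hand-rolled variant of that validate-and-cast pattern, using a TypeScript assertion function instead of a library (the `Config` shape is made up for illustration):

```typescript
interface Config {
    port: number;
    host: string;
}

// Assertion function: throws on bad data, narrows the type at the call site on success.
function assertIsConfig(data: unknown): asserts data is Config {
    if (
        typeof data !== "object" ||
        data === null ||
        typeof (data as { port?: unknown }).port !== "number" ||
        typeof (data as { host?: unknown }).host !== "string"
    ) {
        throw new Error("invalid Config payload");
    }
}

const raw: unknown = JSON.parse('{"port": 8080, "host": "localhost"}');
assertIsConfig(raw);
// From here on, `raw` is statically typed as Config with no cast at the use site.
console.log(raw.host, raw.port);
```

The dedicated libraries in the linked benchmark generate this sort of check from a schema, which scales better than writing guards by hand for every type.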
