Hacker News
Moving from TypeScript to Rust / WebAssembly (nicolodavis.com)
500 points by nicolodavis on July 9, 2020 | 405 comments



Whenever I write Rust, I have a lot of fun, but I'm still not sold on it for most web dev.

The analogy that comes to mind is that Rust is a really nice sports car with a great engine. It handles like a dream and you can feel the powerful engine humming while driving it. With the right open road, it's a great time. You can really optimize code, make great abstractions and work with data easily.

Unfortunately, web dev generally isn't a wide open road. It's a lot of little turns and alleys and special cases. Driving Rust in web dev is a lot of accelerating only to screech to a halt. I wrote a backend API for a side project in Rust. And true to what I said, Rust handled like a dream. But I didn't need the power. I spent most of my time screeching to a halt, worrying about the plumbing between the various immature Rust libraries. And that's on the backend, which is far more mature than the frontend Rust libraries.

Judging by this post, OP managed to find a great open road in web dev to use Rust. I only wish I could find one as worthwhile.


This really nails why languages like PHP and Ruby have won out over statically typed, compiled ones for application-level web development. The web is a massive collection of disjointed, loosely coupled APIs that all (just barely) interoperate to provide a massive array of functionality. Languages that tend to work well with it are those that are highly flexible around the corner cases of these technologies, and allow you to quickly iterate through the process of gluing them together.


I've worked on several large Ruby codebases, and the thing is, the same qualities that make it easy to get a Ruby project up and working quickly make it a complete nightmare to maintain later. Its expressiveness and malleability mean that you can never really be sure that you understand how code is being used, and the complexity that emerges from a few years of that is incredibly overwhelming. Nowadays I would much rather slog through ten times as much Java boilerplate, because later I'd expect that I could still refactor code without fearing that the whole thing would collapse around me.


Exactly; I guess a lot of people just work on projects/jobs and then move on. Once you need to go back to systems you forgot about (things you wrote 5-10-15-20+ years ago), Ruby (in your example, and indeed in my experience) is a nightmare on speed. The (strange to me) idea people have that their code won't be around that long hits me in the face every time a client asks me to 'connect to something made by someone some time ago'. I go check it, and it's almost always something PHP/RoR/(lately) Node built by someone who left years ago, and no one has touched it since because it works. When you actually do touch it, of course, it is completely out of date and nothing current works, because with the flexibility of these languages and systems things change way too fast. Really unneeded API breaks in packages are crazy to me; coming from Java, I expect people to keep things backward compatible, but nope - just toss it out! Node/npm is the worst offender here, and PHP the least: it is actually a joy to work with in this kind of spelunking, as it is remarkably stable, including its (older) libraries. (And no, not everyone uses a framework with PHP 'nowadays'; I run into many PHP projects a few years old that are just plain PHP, more often than frameworks, and that actually makes it easier to jump in, as plain PHP from a decade ago still works after an update.) And then, when you do have it running on your laptop, it is usually full of these 'corner cases' (lazy input/output handling, I would call it) which don't age well and often hide bugs.

On the other hand, running into Java or C# projects is hardly ever an issue; there's a lot of boilerplate (many (request/response/DTO etc.) models/entities and layers), but it's readable immediately, and if it still runs, it means that all data is validated and I can trust it.


The dilemma of boilerplate vs dynamic languages is a thing of the past. Kotlin makes code even clearer than Java while having the sexiest syntax out there. It's 100% compatible with your Java code, so you can incrementally migrate starting now!


The dilemma was always a false choice. You don't need Kotlin to get rid of your boilerplate. Java is just as capable of being concise as any other language. Java's verbosity is a cultural artifact, not a technical one.


No, it is not as capable. The cultural verbosity is only a subset of the verbosity in Java. The cultural kind is great, as those long API names allow for standardized and maximal clarity. The kind induced by language syntax is accidental.

https://www.mediaan.com/mediaan-blog/kotlin-vs-java Things like state-of-the-art type inference, data classes, lambda syntax, no semicolons and a ton of other things allow for reduced syntactic noise.


Don't get me wrong. I use Java every day and quite enjoy it, but when I compare it to writing Go or Kotlin, Java does sometimes feel a little like metaprogramming. Especially if it's a Spring app, which is quite common.


This is obviously a matter of taste but I enjoy Java more when I stay away from Spring.


So, technically, you can get rid of a lot of the more egregious clutter with things like Project Lombok.

But I'd argue that, if you need to lean on compiler plugins to do it, it's sort of a rhetorical Pyrrhic victory.


And I would argue that adopting guest languages - with all the extra complexity they bring on board: extra IDE plugins, more layers to debug (they have to pretend to be Java to the JVM), FFI issues (calling coroutines from Java), and their own library ecosystems - is not worth the trouble beyond some short-term feel-good.

As long as the platform exists, its main language is the one holding the keys.


Can you describe this a little more? How did the verbosity enter the culture?


Kotlin has sealed its fate by turning into Android's main language.

On the JVM its novelty will fade out while Java adopts whatever features might be relevant to the millions of Java developers out there, just like it happened with all the others that came before.

And I remember the days back when Groovy was supposed to be the scripting language for JEE, and Spring was going full speed with it as Java alternative, and we even got strongly typed variant.


I've discovered in large, long-lived Java codebases that it's common to Greenspun expressiveness and malleability back into the language through heavy reliance on things like Spring, stringly typed mechanisms, and dynamic code generation.

The impact on maintainability is about what you'd expect. Maybe, on an abstract level, the software isn't as complex as what you can do in Ruby, but the verbosity of the language brings its own maintenance challenges. I can't speak to Ruby specifically, but, in practice, I haven't found that I'm all that much more afraid to change things in a large Python codebase than a large Java one.

I'm now coming to think that the real challenge is dynamism in general, regardless of whether it's built into the language or implemented as a library. So if Java has an advantage here, it's simply that it tends to discourage people from overdoing it by making it awkward to do.


What I find interesting about dynamism is that every templating library I've ever used has a tendency to become PHP over time. Even templating libs in PHP! It's fascinating to watch, since each new feature really makes sense - certain logic is trivial in the template that would be way more complicated elsewhere - and so the template lib is under constant pressure to add little things. Next thing you know, the complexity and feature set of the lib is its own beast to deal with.


Agree here; admittedly I have no prior Ruby experience, but I've inherited a codebase in Rails. There is so much magic that goes on implicitly. I end up jumping around everywhere to see why something unexpected is happening.


Have you used sorbet on these codebases? I'd be curious if there'd be a benefit here in terms of maintainability.


Sorbet is the only way I can function in anything larger than a 10 line script these days. I’ve started adding it to personal projects because it makes revisiting them later so much easier.


Without the LSP implementation out in the open, though, Sorbet is just a type checker. It hasn't made much sense for me to start integrating it into my personal projects just yet, since I only get the benefits when I run tc manually. Once Stripe opens up the LSP, however, I can see it gaining widespread adoption.


I haven't tried the open source release lately, but it looks like master has the LSP option.


Sorbet is relatively new, and I've been avoiding Ruby for the last two years.


> Nowadays I would much rather slog through ten times as much Java boilerplate, because later I’d expect that I could still refactor code without fearing that the whole thing would collapse around me

Sure, but if you don't reach market fast enough you'll have no reason to slog through anything, period. Can you imagine writing something like Facebook using Rust? It's an almost ludicrous proposition.

Static typing requires that you know what you're building at a relatively fine-grained level--how the pieces will fit together at the source code level, at least roughly. No amount of automated refactoring could ever match the flexibility of dynamic typing when you're just trying to bang out features, let alone explore the feature space, which almost by definition is what any novel solution is doing--exploring. That's why I love using embedded scripting, particularly Lua. You can gradually move components outside the dynamic environment as their place in the architecture becomes fixed and well understood.[1] So-called gradual typing doesn't really permit the same kind of flexibility because it's not the static typing, per se, that burdens development, but rather that static typing has the effect of forcing a more rigid hierarchy of higher-order abstractions and interface boundaries. What you want is the ability to slowly solidify the architecture from the outside to the inside, not the inside to the outside.

Microservices are another way to address the problem, at least in theory, but in practice they don't directly resolve the real dilemmas. At best they just multiply the number of environments in which you're faced with the problem, which can both help and hurt.

[1] Statically typed scripting languages seem rather pointless to me, and more a reflection of a fleeting fascination with REPL.


I think if you were starting today, you would build the app as a set of backend APIs connected to an array of front ends (apps, mobile web, desktop web, 3rd party, etc).

Building some or all of those backend APIs in Rust from day 1 would not be crazy, or less productive than any other language.

I think differences in proficiency drive bigger differences in productivity than inherent language differences.

So if you were starting today to build Facebook, using a language you are already effective with would matter more than which language you use (there are plenty of people out there for whom that could be Rust, or Haskell, or Ruby, or PHP, etc).


After working with a 300,000 LoC code base of PHP I would gladly rewrite Facebook in Rust. It allows effortless atomic commits. When I build something I make an MVP of a first version, refactor, make an MVP of the next version, refactor, and so on.

Rust lets me make sure my code still works through refactors and code changes. With PHP I could easily break something that would only cause an error when that code actually runs. That's right, there is no sanity check at all until the parser reads that code.
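As a tiny sketch of what that compile-time sanity check buys you during a refactor (the enum and names here are invented for illustration, not from the thread):

```rust
// Hypothetical domain type; in PHP the equivalent would be a string
// or constant, and a missed case would only surface at runtime.
#[derive(Debug, PartialEq)]
enum PaymentStatus {
    Pending,
    Completed,
    Refunded,
}

// If a refactor adds a variant to PaymentStatus, this match stops
// compiling until the new case is handled -- the break is caught at
// build time, not when a user happens to hit the code path.
fn label(status: &PaymentStatus) -> &'static str {
    match status {
        PaymentStatus::Pending => "pending",
        PaymentStatus::Completed => "completed",
        PaymentStatus::Refunded => "refunded",
    }
}

fn main() {
    assert_eq!(label(&PaymentStatus::Pending), "pending");
    println!("{}", label(&PaymentStatus::Completed));
}
```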


Facebook uses a bunch of Rust, incidentally. They also use a lot of statically typed languages, and have even made some of their own.


Exactly this.

I was involved in developing large PHP and Ruby codebases, and they're so hard to maintain.

These days I just use Spring Boot. It strikes a good balance between speed to market (PHP/Ruby), maintainability and performance (the next version should support GraalVM). Can't complain so far.


Won? Ruby fell off a cliff once it got to the point where people had to maintain that shit in production. I'm currently working on a large, mature RoR codebase and I'm switching jobs ASAP because it's incredibly painful to work with and feels like a dead end career-wise - and I like the gig otherwise: good product and a decent team - but the technology drains so much of my life energy on nonsense it's ridiculous. The language is built for unmaintainability: standard methods abbreviated to a single letter, 2 or 3 aliases for a single standard function (map/collect and such), Rails overriding the conventions of the language (the ! postfix). And then there's the architectural madness of fat models, fat controllers and mixins ("concerns") that completely break encapsulation - all actively killing code reuse.

PHP has long been a meme and is mostly legacy stuff or bottom tier work.

Dynamic languages are adding optional static typing because the value provided by tooling as projects scale is undeniable.

It took ES6/TypeScript to drag JS out of the dump it was in, with every library adding its own take on a class system - the maintainability jump from ES5 to TS is incomparable for anything larger than 1k LOC.


I say this a lot, but if you are this irritated by Rails, then you ought to see the 20+ year old legacy C/C++ some of us get to work with ;)


I started on C++ back in the VS6 days and wrote tons of it when I was into game dev and graphics programming, but I would not want to work on those systems and kinds of problems when I can get paid the same to work on higher-level stuff. The tooling quality and the slow iteration cycle when I'm really stuck on something lead to a lot of stress. I just want a low-stress dev environment where I focus on solving the problems I'm being paid to solve - not problems created by the tools, and then having to explain to stakeholders why something seemingly simple takes 3x the estimate. I like environments where I'm confident in my estimates - C# proved to be good in this regard, simply because the tooling is there even when I hit a wall, and it has a relatively consistent uniformity that makes it easier to navigate unknown territory.


Funny, after 10 years of Java and other high level languages I was very happy to go work on C++ at a lower level and have been pushing myself lower down the stack as I go.

C++ these days is a far more pleasant beast than the VS6 days, the tooling is pretty good and the language can be tamed into a pretty elegant beast with the right coding standards and discipline.


I agree. My team is just more productive in C#, so we stay there as much as possible. As you say, the ability to work on just solving the actual problem is very undervalued.


This is one nice thing about working at Google. We have C++ codebases that are 20 years old, but you'd never be able to tell because they are still continuously worked on and even if not there's a team of people at Google who constantly run company-wide refactorings and the like to modernize things.


This is, IMO, one of the most underrated aspects of the monorepo, that has such a valuable impact over time.

It only takes a handful of extremely passionate engineers (and I really mean a handful, like less than 2% of engineers in an org) to raise the quality-floor of the entire codebase.


For sanity, sustainability and fairness I would rather that read:

> It only takes a handful of paid engineers, full time assigned to the task


Anecdotally (I don't work at Google, and this might not apply there because it's such a large company): At the monorepo companies I've worked at, of the 2% passionate I mentioned, maybe half have naturally gravitated towards working FT on the infra teams, the other half make very occasional but astronomical contributions, while working full-time on an actual product.

What I mean to say is that with a monorepo, you leave the door open to essentially passion-driven contributions made by the other half of that 2%. In non-monorepo environments, the barrier to uniform adoption is too high for someone that isn't full-time assigned to the task to justify.

Some people are genuinely passionate about this stuff, and want to take on the task of migrating the entire code base, because it's a fun challenge and it'll improve their day-to-day work every day for the remainder of their employment. They're very rare, but they do exist, and with a monorepo those people are better-equipped to drag the company forward.


I don't get this - changing the code is not the problem - this is a political / marketing issue.

I have some news - passion is not enough, and far more than 2% of people are passionate about the things they work on - it's just that only 2% get lucky.

I can think of a few instances in my recent career at a large monorepo company where I have built something on the scale of "passionate, could improve our working lives".

One died because... well, I gave up. Two are used locally by my team and those I could persuade, whilst other solutions built by others for the same fix have gone on to be blessed officially (i.e. replaced by grassroots competition). And one was replaced by a fully mandated and funded project that spotted the need and just steamrollered over all the local fixes.

One is still outstanding and I think well worth pushing still.

But I don't dream of the big win where suddenly the Board says "why, without your glasses Miss Moneypenny you look ravishing".

I get paid, I work, and I try to make the world a bit better where I can. I am passionate about it. But I don't write blog posts about how passionate I am.

Maybe I should :-)


Ya, fb feels the same way fwiw. I'm very sold on the benefits of monorepos now.


I've got a large codebase I've been maintaining since 1999 that I've continued to evolve and that is still in use today. I've been able to steadily improve it without breaking much. IDK, maybe it's being > 50, but I can appreciate less "dynamic" environments. I do web dev in Vue with a Go back end, but I know the Vue part is going to be a rewrite in 5 years. Who wants to keep doing that shit? Not this 52-year-old guy.


Modern refactoring and static analysis tools really make it far easier to keep old code bases up to date.

As a hobby thing I'm taking an almost 30 year code base written in C and reworking it into C++17. I've become somehow somewhat adept at this transformation over the years :-)


Yeah, the problem is that the large majority of the world isn't Google, software isn't their main business, and their codebases are maintained by a continuous rotation of external contractors that are paid by the tickets that they get to implement.


Yep, and don't get me wrong. Stray out of the monorepo and into the wilds of various git hosted code bases at Google and you'll find code rotting in the fields, maintained to a far lower standard (IMHO).


I'll see your legacy C/C++ and raise you legacy Fortran 77 :)


We needn't fill the young Rubyist's head with nightmares that deep, friend :)


I think you would really appreciate Rust for backend web dev. Quite a few people who were once upon a time hardcore rubyists gravitated to Rust and became some of its greatest contributors: Steve K., Sean G., Carl L., Florian G., and many more.


I like Rust a lot - especially since I have a C++ background - unfortunately I don't see any opportunities to move to it ATM, but I haven't really tried hard enough - maybe I'll take a month off after my current project to write something substantial in it and try to find a Rust gig.


PHP is definitely not a meme, it's got a ton of actively developed frameworks, PHP 7 was a huge step forward for the language and PHP 8 is shaping up to be as well.

It's been used at every workplace I've been at, sometimes well, sometimes not so well, just like any language. No idea what the point of blanket statements like this is.


Especially when said statements are just wrong. Anecdotal evidence from individual companies does not allow you to make representative statements about the entire internet...

https://w3techs.com/technologies/details/pl-php claims ~79% of websites whose backend is known use PHP.

https://w3techs.com/technologies/details/pl-ruby shows Ruby has grown from 2.5% to 3.5% over the last year.


Which standard function in Ruby is a single letter?


b, to_s, to_i, to_a, to_c, to_h, to_r, etc.

Should have said methods, but you get what I mean. And I've encountered ambiguous and pointless abbreviations all over the place.


It's weird: when I was younger I used to love shortform syntax like that, as well as removing unnecessary punctuation.

But now that I'm older, I appreciate things being more descriptive and orderly, including strict use of semicolons, functions that say what they are doing (e.g. to_string), or being explicit about converting (e.g. static_cast<type>)

I think it's because I find that trying to make everything as succinct as possible ends up being too clever.


When I was younger I tended to write more code than I read. As I got older I transitioned to reading more code than I wrote. This correlated with a growing preference for code that self-documents.


I type fast enough that somewhat longer names don't hurt. Also tab completion has existed both in IDEs and in shells for many years now. But just like very nearly everyone else I fall into the "7 ± 2 items in working memory at once" capability range. Having to remember what a command/function/syntax element does instead of reading a word that says what it does takes away from that working memory.

Long-term memory is similar, while it's not bounded in the same way it takes time and effort to develop. I'd rather type `git new-branch` than have to remember `git checkout -b` and I certainly don't alias it to `git nb` or something: I can type `git n<TAB>` and let that complete it.


Well said. "Clean Code" goes into this a lot and it helped me realize why I was starting to become more and more bothered by looking back at the short variable names I wrote years ago on my projects.


When I was first learning Java I loathed the verbosity, but I was coding everything in vi. Nowadays, with good IDEs, I don't mind Java's verbosity or even the boilerplate.

I find myself more annoyed by dynamic languages, because the tooling just isn't at the same level. All sorts of 'hints' that I rely on in Java simply aren't there in other languages, and the IDE throws up its hands and is like '...? I guess this is right? Godspeed, sir', and I wind up having to go look up documentation rather than ctrl-clicking into underlying functions and code. It's really annoying, and it's made me more appreciative of statically typed languages.


Interesting, I'd never seen or used .b or .p that someone else mentioned. I agree those definitely should not be part of the standard library. Not sure about to_i etc.; we use those pretty often and it's never really been an issue.


p


I think you took the phrase too literally. I'm assuming by "won" they didn't mean that Ruby and PHP will dominate the web landscape for eternity. I'll just point out that dynamic programming languages have, for most of the web's history, been the primary tool for developers.


The meme here is that your comment reads like a parody of an outraged junior programmer at their Dunning–Kruger peak.


They've "won" so far because statically typed languages were cumbersome and unpleasant to use, but this is changing, and dynamic languages are learning some type tricks too.

The issue here is with Rust: its strengths are mostly irrelevant for the web and its weaknesses (particularly slow development compared to the competition because of having to pacify the type checker) are really important.

Op is painstakingly beating around the bush, but what they're getting at is that Rust + webdev = mismatch.


For a lot of "usual" webdev, any reasonably modern language will do. Even modern JavaScript will be fine. Something like C# would have been way better, though, but we cannot change that now.

However... software development has become a mess of slow technologies and abstractions one on top of the other.

Some people are routinely working in a text editor that sits behind three operating systems: the VM/hypervisor, the usual operating system (Windows/macOS/Linux) and then a browser instance (which is like an operating system now). Then add all the drivers, libraries, frameworks etc. that go in between.

While I don't like that WebAssembly is yet another abstraction, at least it is a chance for a resurgence of systems programming languages and a return to proper software engineering...


Why can't you use C#? .NET Core (and the upcoming .NET 5) is pretty decent to work with. I'd still rather use Node for the most part, though.

I have been playing with Rust, but not sure about how good an idea it is for the front end yet though.


Well, you can compile the .NET runtime into Wasm, send it to the client and run C# on top of it, but then you are adding yet another layer.


I didn't have the impression the discussion was in a web context. In which case I'd lean toward straight JS/React or Rust/Yew.


Pretty sure the parent comment was talking about client side.


You can use C# on the front end now with Blazor. It's pretty new though so I don't know how good it is yet.


I don't totally agree about its strengths not being relevant. Memory safety is an extremely low bar that the web has generally managed to meet, but Rust has a lot of value elsewhere. The author talks about significant performance improvements, which would be very welcome on many sites, as well as improved error handling and data validation patterns.
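As a hedged illustration of the error-handling and validation patterns the comment alludes to (the function and rules here are invented for the example, not taken from the article):

```rust
// Validation expressed through Result: the caller is forced by the
// type system to handle the failure case, rather than discovering a
// bad value deep inside the request handler.
fn parse_age(input: &str) -> Result<u8, String> {
    // `?` propagates the parse error after mapping it to a message.
    let age: u8 = input
        .trim()
        .parse()
        .map_err(|_| format!("'{}' is not a number between 0 and 255", input))?;
    if age > 130 {
        return Err(format!("{} is not a plausible age", age));
    }
    Ok(age)
}

fn main() {
    assert_eq!(parse_age(" 42 "), Ok(42));
    assert!(parse_age("abc").is_err());
    assert!(parse_age("200").is_err());
    println!("validation ok");
}
```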


Most things, thank goodness, are not SPAs, nor do they have to be.

The problem here is the concept of SPA itself - it's a complete hack which is plastered over with various tricks to make it halfway usable. This has been going on for years now.

Websites on the other hand are slow because of the tracking and the ads which load tons of JS. Remove that and the web will be blazing fast.


> Remove that and the web will be blazing fast.

With sufficient ad blocking and JS white listing, I can attest to this.

After removing ads and analytics, sites that rely on JavaScript to render significant amounts of content are the slowest.


If you have A LOT of content (e.g. a table with 10,000 rows containing rich data), it will be faster to use a JS virtualized list than to have all those 100k+ DOM elements computed at once.
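The windowing arithmetic behind a virtualized list is simple; here is a sketch (in Rust for brevity, with an assumed fixed row height) of the same calculation a JS list library performs on every scroll event:

```rust
// Only the rows intersecting the viewport are materialized, so a
// 10,000-row table costs a few dozen nodes instead of 100k+.
fn visible_range(
    scroll_top: usize, // pixels scrolled from the top
    viewport_h: usize, // viewport height in pixels
    row_h: usize,      // fixed row height in pixels
    total_rows: usize,
) -> (usize, usize) {
    let first = scroll_top / row_h;
    // +1 covers a partially visible row at the bottom edge.
    let count = viewport_h / row_h + 1;
    let last = (first + count).min(total_rows);
    (first, last)
}

fn main() {
    // 10,000 rows, 20px tall, 600px viewport, scrolled to 1,000px:
    let (first, last) = visible_range(1000, 600, 20, 10_000);
    assert_eq!((first, last), (50, 81)); // only 31 rows are rendered
}
```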


I am currently writing a hobby project in Rust with Rocket and Diesel. I could bang out a working prototype very quickly with either Python or something like ASP.NET, but I picked Rust because I want to get better at it.

Many of the things you say are true - Rust libraries in general need some love and polish before they can be beginner-friendly, but some are getting there. The difference between Diesel and Rocket, for example, is quite stark. The former has only the barest minimum of 'examples' and a 'guide', if they may be called that, and I feel like I'm expected to read and understand its source code to become really proficient with it. It takes a lot of experimenting and trial and error to do anything beyond the basics. Rocket, on the other hand, has a very comprehensive guide with useful examples and, so far, has been enjoyable to work with.

That said, there are still a lot of "convenience" features missing. My current notable example is forms. As I'm doing it (I haven't looked into any addons to Rocket for this), I have to write out the HTML for the form myself, along with any JavaScript I might require for validation, then write the server-side methods for GET/POST, making sure to maintain the state myself. In this regard, the effortless way Django gets a form on screen and stores its data in a database really showcases where Rust (Rocket) still has a long way to go.
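The kind of hand-rolled server-side validation being described might look roughly like this - a standalone sketch with invented field names, using a plain map of posted fields (a real Rocket handler would instead derive FromForm for a typed struct):

```rust
use std::collections::HashMap;

// Accumulate every validation failure rather than stopping at the
// first, so the form can redisplay all errors at once.
fn validate_signup(fields: &HashMap<&str, &str>) -> Result<(String, String), Vec<String>> {
    let mut errors = Vec::new();

    let name = fields.get("name").copied().unwrap_or("");
    if name.is_empty() {
        errors.push("name is required".to_string());
    }

    let email = fields.get("email").copied().unwrap_or("");
    if !email.contains('@') {
        errors.push("email looks invalid".to_string());
    }

    if errors.is_empty() {
        Ok((name.to_string(), email.to_string()))
    } else {
        Err(errors)
    }
}

fn main() {
    let mut form = HashMap::new();
    form.insert("name", "Ada");
    form.insert("email", "ada@example.com");
    assert!(validate_signup(&form).is_ok());

    form.insert("email", "not-an-email");
    assert!(validate_signup(&form).is_err());
}
```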

Hopefully, with more people using it, the tools and libraries will improve and mature, particularly the documentation.

When all is said and done, I enjoy "slogging" through Rust a lot more than I did working with Django, even with the slower progress. Something about this language really speaks to me.


Exactly. The frameworks aren't there yet but I think Typescript is pretty much ideal for web dev. We're still using dynamic languages for web dev for mostly legacy reasons now.


Not sure if "ideal" is a good choice of word here. We don't know what's still coming, and it surely has some flaws (many inherited from JS). There is stuff like ReasonML/Elm/PureScript that seems more ideal to me from a pure language perspective (setting aside ecosystem, job/talent market, etc.).


Yeah, sure, there are many better languages than TS in an absolute sense, but TS is the only one I see with the momentum to displace the current incumbents.


It's still a single-threaded runtime. There's one platform that's ideal for the web, BEAM, but unfortunately it doesn't have a strongly typed language. Elixir and Erlang are both great languages, though.


I can't speak to whether or not the BEAM is ideal for the web, but I will say I've been watching the Gleam project very closely because having static typing with the BEAM is a dream of mine.


I think they have "won" in certain domains. Java and C are still widely used in loads of places, though less so in web dev.


> This really nails why languages like PHP and Ruby have won out over static typed

It really doesn't.

Languages like PHP and Ruby "have won out" over statically typed languages because the representatives of statically typed languages at the time were Java and C++, both of which were bad (they still are, but they were): verbose, difficult (and verbose) to leverage for type-safety, missing a bunch of tremendously useful pieces (type-safe enums to say nothing of full-blown sum types), nulls galore, …

As a result, the gain in expressiveness and terseness (even without trying to code-golf) of dynamically typed languages was huge, the loss in type-safety was very limited given the choice was "avoid leveraging the type system" or "write reams of code because the language is shit", and the fast increase in compute performances more than compensated for the language's loss in efficiency.

And that's before talking about the horror show that Java's web frameworks ecosystem was circa 2006.

> Languages that tend to work well with it are those that are highly flexible around the corner cases of these technologies, and allow you to quickly iterate through the process of gluing them together.

That's really complete hogwash outside of the client, which isn't what we're talking about here since neither php nor ruby run there. On the server you have clear interfaces between "inside" and "outside", and there's no inflexibility to properly taking care of cleaning up your crap at the edges. Quite the opposite, really.


I think this is spot on. IMHO, everyone that uses Rails knows it's flawed. The problem is that there's not a better option out there that:

(1) Is "batteries included"

(2) Has the gem/engine ecosystem where basically every problem is already solved.

(3) Expressive code like has_many :things

If we could get those things + static typing + performance everyone would switch. But that doesn't exist.


That alternative in 2020 is Python 3.7+ with typing annotations.

Python is very "batteries included," and there is a large set of existing packages for most things. One can monkey-patch things and invent cute syntax, but with most Python it's rarely hard to reason about what's actually going on.

It's easy to bag on parts of python (eg some syntax and terrible distribution story), but I feel much less crazy (and suspicious of every punctuation mark) when working on a modern python app than when working on a modern ruby app, especially in the presence of rails.


.NET and Java have enough batteries, 20 years of libraries solving every problem, several expressive languages available, alongside AOT/JIT compilers.


.NET has the problem of being a MS product. It's only relatively recently that they've decided to be fully cross-platform and open source. I'm not sure if it's all the way there yet, but it could be a viable option.

Java is fine as a programming language. Java as a web app more or less means Spring. And Spring means XML files and annotations which come with their own problems. Kind of an out of the pan and into the fire thing.


You can get away without using Spring and do quite fine, but there's definitely a risk that if you invest in the JVM ecosystem you'll get hired to work on Spring.

(alternatives like Vert.x, the less-annotation-heavy quarkus, etc).

For Java web dev I actually like Kotlin better - ktor is pretty nice.


Plenty of companies don't care one second that it is a MS product, in fact it is a positive value on their eyes regarding product support and tooling.

First of all there is more to Java Web development than JEE and Spring, which in any case, other eco-systems don't have a mature answer for many of their deployment scenarios and cross system integrations.

XML is beautiful; there has yet to be a format that supports machine manipulation, IDE graphical tooling, comments, and schema validation as well as XML does.

Anyone that praises Rails will be right at home with XML and annotations magic.


because the representatives of statically typed languages at the time were Java and C++

And Pascal, Ada, Haskell, Eiffel, Standard ML, ...


Operative concept: representative.

None of the languages you mention had any sort of widespread visibility at that time, meaning they were not representative of the (overwhelming) majority experience with statically typed languages.


As far as I'm aware, Pascal and Ada were still significant in the early 90s (ie when Python arrived). You're right that they were far less so when PHP entered the picture, but it's not as if they were dead (Borland Delphi and PHP 1.0 were released the same year).


> Pascal and Ada were still significant in the early 90s (ie when Python arrived).

That is not the timeline we're talking about here. Dynamically typed languages started taking off at the turn of the millennium and really exploded in the second half of the aughts. Java didn't even exist in the early 90s.


But it's not as if the programming community collectively just forgot about these languages (and, as mentioned, Delphi and PHP are in fact contemporaries). The question remains, why did new dynamic languages rise, but already existing static languages decline? That requires an explanation.


Yep Delphi 1.0 was launched in the mid-90s and had a good run for some time after that.


None of which really had mainstream visibility with the kind of people who actually made the decisions about what language your company would be using.


Yep and on the dynamic side there were also Erlang, Python, Lua, Lisp, etc.


I think you get it backwards. Things are disjointed, loosely coupled because highly dynamic languages have won. The flexibility of dynamic languages, which can be great when working around dynamic corner cases, does not force developers into fixed, well-defined contracts keeping everything loose and disjoint.

PHP is highly to blame. PHP, being a scripting language, is easy to deploy and keeps chugging along at all costs. It may do something nonsensical, but it will try to chug along to completion without aborting. It creates a system where it is easy to write and deploy something that usually works.


I completely disagree with this.

I work on a large C# platform that interops with a ton of vendor API's, the majority of which are also written in typed languages. The disjointed, loosely coupled nature of web systems is what causes problems. In particular, knowing what a "correct input" is to web systems is very difficult, where correct means:

1) Passes the API's immediate validation

2) Passes validation inside API's longer running processes

3) Passes business rules (it might be correct types, lengths etc, but fails anyway because of an incorrect combination of API calls)

Some of our vendors use Elixir, some use PHP, many use C# or Java. I really don't notice any significant difference between them on that account. The issues are all down to the complexity of the software and business requirements and that the whole thing is a giant distributed system.


> knowing what a "correct input" is to web systems is very difficult

I cut my teeth programming an XML web service using SOAP, and while I can't say it was as drop-dead simple as a RESTful HTTP web API is these days, between the WSDL and the UDDI it sure was convenient to be able to know, as a client, what methods were being exposed and what inputs they expected.

Sometimes I wish the W3C had not required XML for those web services, so that we could do that with JSON plus whatever add-ons make it have an enforceable schema like XML. WSDLs were underrated.
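For what it's worth, the "JSON plus add-ons" idea the parent wishes for does exist in spirit today (JSON Schema, OpenAPI). A minimal hand-rolled sketch of the concept, with a made-up schema format and no external libraries:

```python
# Toy validator for the "enforceable schema for JSON" idea; the schema
# format below is invented for illustration, not real JSON Schema.
def validate(value, schema) -> bool:
    """Check a decoded JSON value against a tiny schema description."""
    if schema["type"] == "object":
        if not isinstance(value, dict):
            return False
        return all(
            key in value and validate(value[key], sub)
            for key, sub in schema["properties"].items()
        )
    expected = {"string": str, "integer": int}[schema["type"]]
    return isinstance(value, expected)

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
}
print(validate({"name": "Ada", "age": 36}, schema))    # True
print(validate({"name": "Ada", "age": "36"}, schema))  # False
```

A real system would of course layer this behind codegen, much like WSDL tooling did for SOAP clients.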


I agree. It was nice being able to grab the WSDL from say, a .NET web service, and run your maven script on your Java app to generate the interop boilerplate and just start coding against a type-safe interface.


I have seen horrible JavaScript, but I have also seen horrible type-safe examples. It is always a compromise between errors, readability and productivity.

Attaching text in form of a script anywhere in the DOM can be abused, but it is also insanely practical. I don't think Javascript is too horrible. I think the whole toolchain to get minimized and bundled JS is. That is why I am wary of TypeScript. Yes, I see its benefits, but I don't like cross-compiling if I can just not do it.

Given, I am no web developer and my "projects" on the web are tiny. I completely understand someone who takes the increased effort to use TypeScript, especially if your project isn't just making a text blink. If it reaches a certain size, I would probably look into it too.

Webassembly looks interesting, but will take some time to establish itself. There are also disadvantages to that, since it could reduce the openness of the web. All the tracking we are subjected to could only be inspected by network traffic instead of looking into the source.


I agree with both of you in that I don't think the direction of the origin really matters.

The problem is that when dealing with other teams/products/APIs it's a mess. Varying standards, varying strictness, corner cases everywhere. There are plenty of Java and ASP.NET APIs that are absolute nightmares to integrate with despite having strictly typed beginnings.

If you have a highly federated space, you control everything and thusly can do cool things like Rust, Lua, what have you. But, most engineers don't get such a luxury. Languages that can be rapidly morphed to accommodate all those situations achieve shipped solutions to business problems faster.

In the end, that makes money and that turns our world.

Regardless, here's the loosely related XKCD's Standards: https://xkcd.com/927/


Applying a schema doesn’t fix an architecture problem.

Describing those fixed, well-defined contracts on top of the architecturally absurd stack that is TCP/DNS/TLS/HTTP/Ajax/DOM/JS engine/server-side GraphQL/RPC/REST/database... doesn't buy you anything but slower iteration speed and an unwieldy schema.

These components don’t have the same impedance, the same flavour to their design. Necessarily their schema would be a mess.

Also, what stops something like

    function add(int a, int b) -> int {
        return a-b;
    }
Typing is overrated. It has productive use cases for sure, but only in specific scenarios. It is no panacea.

What would be nice is typing like TypeScript's - optional typing, where I can provide a typings binding separate from the code (in the way that I can bring my own tests without changing anything in production code if you've made an unholy mess of testing). The same argument applies to optional typing outside the main source code - odds are you're going to misunderstand or misuse types, so I'd like to use them where they make sense and ignore your steaming pile of types. That's impossible in a language like Haskell, where types are inline rather than annotations attached to code.


> Also, what stops something like [defining add as subtraction]

cmon... "it doesn't guard you against every logic error under the sun" isn't a great argument


There's no strong evidence that static typing helps reduce application logic bugs https://danluu.com/empirical-pl/

Under the same pressures it's pretty likely that the same teams that deploy dynamic language code to production and get an unexpected nil deploy statically typed code to production which unwraps an unexpected None and throws exceptions or panics.


>There's no strong evidence that static typing helps reduce application logic bugs https://danluu.com/empirical-pl/

There's no evidence proving the opposite either. We can't say anything beyond the fact that the existing studies don't really show anything. I think it's pretty obvious that, in the limit, static typing reduces logic bugs (since it can actually be used to prove statements about the logic), but beyond that everything is kind of up in the air.

>Under the same pressures it's pretty likely that the same teams that deploy dynamic language code to production and get an unexpected nil deploy statically typed code to production which unwraps an unexpected None and throws exceptions or panics.

There's nothing to support that this is 'pretty likely'. As far as we know, it could very well be that statically typed languages push you in the direction of handling it the right way. Or hell, it could be that a team using a dynamic language is more cautious and conscious of edge cases.


I don't understand this reasoning at all. You have some form of contracts, whether you specify them or not. If it's too messy to formalize, that's still going to be an issue when you're working on it in an untyped language. At best you're just sweeping all those ugly edge cases under the rug.

>Also, what stops something like

having a more precise specification. E.g. adding a commutativity requirement would eliminate your example.
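The commutativity point can be made concrete: encode the requirement as a property check and the buggy `add` from the example above fails it. A minimal sketch (the `commutes` helper and fixed test pairs are made up here; a real project might use a property-testing library like Hypothesis):

```python
def add(a: int, b: int) -> int:
    return a - b  # the buggy "add" from the earlier comment

def commutes(fn, pairs) -> bool:
    """Toy property check: fn(a, b) must equal fn(b, a) for every pair."""
    return all(fn(a, b) == fn(b, a) for a, b in pairs)

print(commutes(add, [(1, 2), (3, 5), (0, 7)]))  # False: 1 - 2 != 2 - 1
```

A dependently typed language could state the same property in the type itself; everywhere else, property tests are the pragmatic approximation.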


> What would be nice is typing like typescript - optional typing. Where i can provide a typings binding separate from the code (in the way that i can bring my own tests without changing anything of production code if you’ve made an unholy mess of testing).

A contract system like clojure's spec sounds close to what you're describing.


Having ridden the 90's .com startup wave with an in-house application server written in a mix of Apache plugins and Tcl, I learned the hard way to never again rely on anything that doesn't bring a JIT or AOT compiler to the party.


I’m interested in knowing more about your story. Would you please take the time to tell us more about it?


Not much to publicly tell.

A Portuguese startup built on our own version of AOLserver, following the same principles of Apache plugins + Tcl + C - basically Rails before it was even an idea.

We got acquired by a bigger Portuguese company (Easyphone), which alongside other acquisitions became Altitude Software.

Our stack was used in some of their products serving multiple top-tier companies in Portugal, and we created an IDE for our tools, written in VB.

Supported all major UNIX flavours and Windows NT/2000.

Eventually scale problems happened and we were looking how to tackle them while avoiding rewriting everything in C, as MSFT partner we got invited to try out .NET pre-public announcement, so we decided it was a good opportunity to rewrite our product in this new .NET thing.

Some of the team members eventually took all these lessons and founded OutSystems.


What's the hard lesson? Performance? You have more power these days. The bottleneck won't be the interpreter.


Performance yes, there is always more power as long as the bank account is full, and even then it isn't enough when working at scale.

The number of stories about porting code proves otherwise.

Note I am not pushing away dynamic languages, only those that don't have a JIT/AOT as part of their canonical implementation.

Common Lisp, Julia, JavaScript, PHP (7 and later) are all invited to the party.

Meanwhile maybe JRuby or PyPy will eventually get more community love, instead of being the black swans.


But PHP doesn't have a JIT...


JIT is coming to PHP in version 8.


Just In Time for 2020!


Java, .NET, Golang are actively used for web development. PHP is popular, especially for small projects, but it definitely did not win, web development is a contested area.


> PHP is popular, especially for small projects

Not especially for small projects, for projects of all sizes - huge swathes of the internet still run on PHP, and it's not limited to blogs built on Wordpress.

More than a few gigantic sites (Wikipedia and PornHub come to mind) are built with PHP. This is highly anecdotal but I still see the .php extension all over the place, sometimes in sites that perform really well BTW (again, Wikipedia is a good example here).


They are also evolving though. I don't know about Ruby and PHP but in Python gradual typing via type annotations really makes projects much easier to maintain and develop these days. While "mypy" (the semi-official type checker tool) still has a long way to go in terms of library support it works really well and helped me to find many issues by analyzing the code, as opposed to running it and finding the issues via tests (or in production).

So I think typing has its merits and languages like Typescript really make development safer and code easier to understand, while still making it possible to write and interface with untyped code if necessary.
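As a small illustration of what those annotations buy you in practice, here's a hypothetical example (names invented): mypy should flag a bare `.upper()` call here, because `find_user` can return `None`.

```python
from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    users = {1: "alice", 2: "bob"}
    return users.get(user_id)  # .get returns None on a missing key

name = find_user(1)
# Without the None check below, mypy reports something like:
#   error: Item "None" of "Optional[str]" has no attribute "upper"
print(name.upper() if name is not None else "unknown")
```

That's the class of bug you'd otherwise find in tests or production, caught by pure static analysis, while the code stays ordinary Python.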


> I don't know about Ruby

Ruby is moving in the direction with the addition of Sorbet[1] for gradual typing. I have heard some discussion that this typing could become a formal part of Ruby 3.0.

1. https://sorbet.org/


I'm not sure this is really true - the subtleties of HTTP are not best handled with dynamic languages, and the actual HTTP API is fairly simple for most web app development. The areas with the most complexity related to the web, itself, are usually do with servers, concurrency, etc - precisely where these languages tend to fall apart.


> This really nails why languages like PHP and Ruby have won out over static typed ...

The real reason is lowering the bar to entry, and that explains why the web is horribly broken.

The "bootcamp webshit" meme exists for a reason. That's not gatekeeping - lowering the bar to entry below a certain level leads to drastic decrease in quality.


You realize the web started with just HTML at first right?

The web was always meant to be a platform anyone could easily build on, whether they were experts or someone's little old grandma who barely knew what a keyboard was.

Not only is it gatekeeping, it's historically wrong and betrays a flawed understanding of the platform. The web has always been about ease of use and there have always been people who went out of their way to teach others how to make good use of it.

Yes, the web is broken, but the web is broken because it's no longer about everyone running a little shoebox of a server and instead now we depend on giants who see us as cattle.


A lot of it is affordances. Every function signature is a user interface for your fellow programmer.

And if you are used to poorly-designed UI, you will expect poorly-designed UI from yourself.


Less testing, documentation, guarding against typos, better IDE support, better performance.

But there is a whole new generation of web developers who don't bother to learn algorithms or low-level programming, and just churn out code with the hot new framework. Those are also the programmers who choose a technology because "it looks easier".

The world would be nice if management people understood not all programmers are equal, the 10x programmer is not a myth. But here on HN or reddit you see most people arguing that all programmers are similarly productive. And what we get for this is more electron apps.


> the 10x programmer is not a myth

You are right that quality of programmer skill and productivity has high variance, but the way people talked about the "10x programmer" was a vague vision that people pasted their ideas and personal bugaboos onto. So people got into increasingly-heated arguments and talked past each other. When we create social concepts, we need to strive for something like falsifiability -- something that lets you look at an example and say "Well actually no, that's not 'high-performing programmer' behavior -- for {{describable reason}}"

Categories matter. We should shape our categories for human happiness and human effectiveness, but categories matter.


I think it's really not that some programmers do 10x more, but some people with the same job title do a different job. Some engineers take responsibility for product or for ecosystem; some engineers tick off tasks on a list. (Both can be very valid, and bleeding for product quality at a company is definitely not inherently good.)


It's less that there are a few 10x programmers, and more that there are a whole hell of a lot of 0.1x programmers.


True. That was a vague notion. I think Joel Spolsky's "Hitting the high notes" summarized it better.


> Less testing

Why?


For example, functions only accept objects of supported types, and many invariants are verifiable at compile time. Especially when you have generics, structural typing and ADTs, the expressiveness is pretty good.


Depends what one considers application level web development, over here it has been pretty much Java and .NET, with occasional C and C++ written libraries, during the last 20 years.


This is probably true... but watch out for Node with TypeScript. You get that quick iteration, but with a decent amount of type safety most of the time, with a drop down to the 'any' type for libraries that haven't written type definitions. I also use the existence of type definitions as a guide to how mature the library is :-), so that's a win-win.


This is a mistaken viewpoint. Every function you can ever write is bounded by a type, so whether you use Ruby, PHP, or Rust, your functions can be described by a bounded type.

Every time you write logic in your code, your brain is aware that the logic is dealing with a specific type. Writing a type signature on top of that is just additional instructions to the compiler about the type you have already specified in your logic. It is a minor inconvenience.

There are many reasons why Ruby and PHP won out. One of the reasons is many people misunderstand the power and flexibility of types. Once they understand this, they will know there is really no trade off between statically typed and dynamically typed. Statically typed languages are infinitely better and the only downside is a minor inconvenience.

First off, note that there is only one function in the universe that can really take every single type in existence and that function is shown below:

  -- haskell
  identity :: x -> x
  identity x = x

  # python
  from typing import Any
  def identity(x: Any) -> Any:
      return x
This is the only untyped function in existence. Every other function in the universe must be typed; whether it is typed dynamically or statically is irrelevant, since either way your code will have to specify the type, in logic or in the type signature. You can see this happening in the examples below...

The above example has no logic to specify a type... The minute you add any logic to your function, it immediately becomes bounded by a type. Let's say I simply want to add one in my identity function.

  -- haskell
  identity :: Int -> Int
  identity x = x + 1

  # python
  def identity(x: int) -> int:
      return x + 1
The very act of adding even the simplest logic binds your function to a type. In this case it bound my function parameter to an integer. Whether you use PHP or Ruby or Rust, there is a type signature that describes every function.

Let's say I want to do garbage code like write a function that handles an Int or a String? That can be typed as well....

  --haskell
  data IntOrString = Number Int | Characters String
  func :: IntOrString -> IntOrString
  func (Number n) = Number (n + 1)
  func (Characters s) = Characters (s ++ "1")

  #python
  from typing import Union
  IntOrString = Union[str, int]
  def func(n: IntOrString) -> IntOrString:
      if isinstance(n, int):
         return n + 1
      if isinstance(n, str):
         return n + "1"

Let's say I want to handle every possible JSON api in existence? Well it can be typed as well.

   -- haskell
   data JSON = Number Float | Characters String | Array [JSON] | Map [(String, JSON)]
   func :: JSON -> JSON
   func (Array x) = Array (x ++ [Number 1.0])
   func x = x

   -- python
   from typing import Dict, List, Union
   JSON = Union[float, str, List["JSON"], Dict[str, "JSON"]]
   def func(x: JSON) -> JSON:
     if isinstance(x, list):
        return x + [1.0]
     else:
        return x

  
There really isn't any additional flexibility afforded to you by a dynamically typed language; the only cost of static typing is the slight overhead of translating the types you have already specified dynamically into types that are specified statically.

One caveat to note here (and this is specific to Haskell) is the lack of interfaces for record types: I cannot specify a type that represents every record containing at least a property named x with a type of Int.
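For contrast, Python's typing.Protocol (structural subtyping) can express roughly that constraint, "any object with a property x of type int". A minimal sketch with made-up names:

```python
from typing import Protocol

class HasX(Protocol):
    x: int  # "at least a property named x with a type of int"

def describe(record: HasX) -> str:
    return f"x = {record.x}"

class Point:  # never mentions HasX, but satisfies it structurally
    def __init__(self, x: int, y: int) -> None:
        self.x = x
        self.y = y

print(describe(Point(1, 2)))
```

A type checker accepts `describe(Point(1, 2))` because Point structurally matches HasX; Haskell would need an explicit typeclass instance (or a row-polymorphism extension) to say the same thing.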


this was historically true, but I suspect in the age of the evergreen browser this argument has been seriously dented (but not invalidated)


> The analogy that comes to mind is that Rust is a really nice sports car with a great engine. It handles like a dream and you can feel the powerful engine humming while driving it. With the right open road, it's a great time. You can really optimize code, make great abstractions and work with data easily.

I prefer a different analogy:

Rust is high-end automated industrial equipment for machining high-precision metal parts that will fit perfectly with each other and which you can use to build anything you want, with high confidence that it will work and last a long time. But if you're under pressure to build lots of small, partially documented, constantly changing parts every day, you might be better off using more flexible equipment and materials (balsa wood, bailing wire, masking tape, etc.) to get the job done, even if the result will be messier and more prone to breaking.


> with high confidence that it will work and last a long time

And we need more of our software to be like that. Now if we could just do something about the constant pressure to compromise.


Certifications, and more flexible laws for returning it when it doesn't fit the purpose, will do it.


>>But if you're under pressure to build lots of small, partially documented, constantly changing parts every day, you might be better off using more flexible equipment and materials (balsa wood, bailing wire, masking tape, etc.) to get the job done, even if the result will be messier and more prone to breaking.

Python & Deno/Node fit the bill for the latter category. Would you consider Julia in the former or latter category for ML use-cases?


For ML/scientific computing, it can fulfill both roles, I'd say. Julia looks about as flexible and easy to use as Python, R, and matlab for experimenting and iterating quickly with throwaway code, and superior to those languages in every other important dimension except one: it lacks Python's gigantic ecosystem.


Julia has a much larger ecosystem in terms of things like numerical linear algebra, scientific computing, and differentiable programming. How do you use block banded Jacobians inside of ODE solvers? Python's ML ecosystems just barely got non-stiff methods, so advanced accelerations are fairly far away, whereas these are things that have worked with Julia's ML solvers for a long time now.


Before responding, first let me say thank you for all the work you have done and continue to do. Also as I wrote above, I view Julia as superior to Python in every other way. :-)

By "ecosystem" I mean much more. PyPi currently offers around a quarter million Python packages, versus thousands for Julia. Github code and activity using Python is also currently around three orders of magnitude greater than for Julia. Python is currently the fourth most popular language on Stack Overflow (after JS, HTML, and SQL). It's a gigantic ecosystem that has evolved over three decades.


Yes true. I think it's a lot like the old Android vs iOS stuff though: counting the number of packages doesn't mean all that much because so many have been abandoned that Python probably has a much higher percentage of junk out there. That said, Python's ecosystem is still larger, but that doesn't mean it covers all domains well. I can name a bunch of holes that it has off of the top of my head, and same with Julia too, so it's not like either one supersets the other. So I just think the "but Python has a bunch of packages" argument is a lot more nuanced than that: in many areas of ML or webdev Python is much more developed, but when you get out of that realm it can get sparse in some ways.


It's not just packages and repos, but also installed base, developers, forums, discussions, corporate backers, etc.

Otherwise, I agree that not all domains are well covered by any language :-)


Something that I think is worth discussing is that julia has far more package developers per user than Python. That may seem like a weird thing to prop up, it seems to just suggest that Julia users more often find themselves needing to implement something themselves rather than use a pre-existing library, and that's definitely true.

However, Julia also makes the experience of growing from a package user to a package developer basically seamless. A big part of what makes this seamless is that almost all julia packages are written in pure julia and thus relatively easy to read and understand for a julia programmer. This coupled with the strong introspection capabilities makes it so that users who would never be writing their own packages in Python end up contributing to the Julia ecosystem.

Coupled to this is the fact that julia is a highly composable language meaning that it's very easy to combine functionality together from two separate packages that weren't designed to work together. This composability makes julia's fewer packages have far greater leverage and applicability than Python packages and also makes it so that when you do need to make your own package, you're less likely to be reinventing the wheel every time.

Consider the fact that all the big machine learning libraries in Python all have basically implemented their own entire language with their own special compiler, all the requisite datastructures, etc. In Julia, that's rarely necessary. Flux.jl is just built using native julia datastructures and julia's own (very hackable) compiler.


Great points. Agree with all of them.

Interestingly, I see the same trends with Rust -- high ratio of packages to users (already ~43K crates, despite having only 3% share of Stack Overflow last year), easy packaging (`cargo publish`), and high composability (e.g., via traits, type parameters, etc.).

An obvious question is, why does Rust have 10x more packages than Julia already, and almost a fifth as many as Python? I think it's because from day one Rust has appealed to a broader group of developers (anyone seeking better performance and/or concurrency, with memory safety and modern language features like algebraic data types, will consider Rust). Julia, in contrast, has always targeted the scientific computing community, which contains a much smaller group of developers who seem mostly content with existing tools.


I think Rust scratched a deep itch by targeting people who are unhappy with C++ and definitely started off targeting a bigger, less competitive market than Julia.

Julia has a harder marketing job than Rust. Julia needs to convince people who use $slow_dynamic_language and $fast_static_language together that they'd be better served by just using Julia for both jobs.

Even more importantly though, Julia needs to convince people that its premise is even possible. People have deep-seated biases that make them unaware that it's even possible for a dynamic, garbage-collected language to be fast.

Rust on the other hand 'just' has to convince people that it's a better ~1:1 replacement for C++. Most people who have written C++ deeply believe that a better language is possible and yearn for that language.

Python, R and Matlab users typically don't believe that it's possible for a language to do what julia does.

Furthermore, Julia did initially spend a lot of marketing effort on the scientific community, which is somewhat small, and is more composed of people who just see their language as a tool that only needs to be 'good enough', so they're less likely to want to switch than say systems programmers who spend all day faced with C++'s inadequacies.


I think an even bigger aspect of this is that C++ users are technical people who make technical decisions. If Rust is a better tool for them, they will switch. For a lot of scientific programmers, or even just a lot of general Python users, they don't necessarily choose Python because they know 20 languages and think Python is the best tool for the job. A lot of people use Python because... they use Python and it's what they were taught. That's a much harder audience to go to and say "would you change to this technically better language?". Most just say "I'm not really a programmer, I'm a researcher/scientist/etc. so I shouldn't spend time learning more programming", and that makes it fairly difficult.


Agree.

Speaking from personal experience, I would add that some people use Python, not only because that's what they already know, but also because... everyone else in their field is using Python and its ecosystem. New advances in their field occur largely within the confines of that ecosystem. It's hard to leave the pack behind.

Large ecosystems have significant network effects that act as barriers to new entrants.


Yeah definitely. That's what I was trying to get at in the last paragraph, and you might be right that it's an even bigger aspect than the other things I mentioned.


Additional considerations here:

Rust invested in a package manager very early, and specifically designed things to make it easy to publish packages.

Rust has a smaller standard library, and so you often need packages to do things that you may not need in other languages, which encourages the use of packages, which I think encourages the publishing of them as well, not to mention that there are gaps that would get filled by packages.


It's still a significant upgrade over JavaScript. I'm developing a web app with the backend in Rust and the frontend in Vue.js/JavaScript. Sure, Rust was slower to develop, but it doesn't throw errors like 'val is undefined' that send me off debugging to check the value of val. You do that beforehand in Rust, and that avoids a whole class of problems.

JavaScript is cheaper to get started with but becomes way more expensive to maintain. My Rust code on the other hand: once it works it keeps working. If it compiles, it probably won't bug out down the road.

I'd switch to Rust / WebAssembly once the libraries are mature/stable/documented enough. The overhead cost is worth it in the long run.


> JavaScript is cheaper to get started but becomes way more expensive to maintain.

I think this assumption is a bit too broad. There is another side to this:

In JS you write less code than in Rust and there is less explicit coupling in a dynamic codebase. So changes tend to be smaller and faster as well.

For example, say you pass a top-level data structure X through A, B... and C, but only C cares about some part Z. Then you change how Z is produced in A and consumed in C.

If you are dynamic you just do exactly that, which is actually just fine. In a static language you have to change all the B's. Or you take the time and write a sensible abstraction over Z or X so in the future you don't have to change the B's anymore, which is better, but ultimately less readable and more complex.
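A toy sketch of the static side of this (hypothetical functions, with Z as a plain u32): B never inspects Z, but because its signature names Z's type, changing Z ripples through B anyway.

```rust
// Hypothetical pipeline: A produces Z, B merely forwards it, C consumes it.
// Changing Z from u32 to, say, String forces edits in all three functions,
// even though b() never looks at the value.
fn a() -> u32 {
    42 // produces Z
}

fn b(z: u32) -> u32 {
    z // doesn't care about Z, just passes it along, yet names its type
}

fn c(z: u32) -> String {
    format!("C consumed {}", z)
}

fn main() {
    assert_eq!(c(b(a())), "C consumed 42");
}
```

Making b generic (`fn b<T>(z: T) -> T`) is exactly the "sensible abstraction" trade-off described above: B stops changing, at the cost of another layer of indirection.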

Stuff that is well understood and specified in advance suits a language like Rust better. You get all these runtime guarantees and performance. You get all this expressive (but explicit!) power to fine-tune abstractions and performance. It's great.

But the closer you get to a UI (especially GUI) and non-technical, direct(!) users, the more dynamic you want to be. Because in this context you are more of a human to computer translator: requirements get discovered via iterations, interactions need to be tested and evolve etc.


You can get that kind of reliability on the front end with Elm, Purescript etc but investing in such niche languages is obviously a nuanced decision.


You're referring to Rust's static type system. What do you make of TypeScript?


I've tried TypeScript and I didn't really like it. Plus, you can ignore the strict types if you'd like, which is a slippery slope if you are in a hurry to get something to market.


For backend web development I was somewhat disappointed to see that many of the new web frameworks (all async) allocate extensively on the heap (for example, lots of Strings in their HTTP request types).

I mean it works but I don't understand why I would go to the bother of thinking about lifetimes when it seems performance would be similar to Kotlin / Swift / F#.


The short answer is that async is not done yet in Rust.

The long answer probably includes GATs (generic associated types), and the even longer answer starts with the amazing work that Niko et al. do to reinvent the internals of the Rust type checker. (Basically - if I remember correctly - the compiler team is currently refactoring big parts of rustc to librarify the type checker and replace it with a PROLOG-ish library called "chalk", which is able to prove more things, so it allows better handling of GATs, which in turn allows better handling of async, where you get "impl Future<Output = T>"-s everywhere.)

But also in there somewhere is the fact that the ergonomics of async/.await are still very much in progress. With better error messages, people will be able to forgo Box<>-ing and figure out what to use instead, and only heap allocate where they must.

It's possible to write low-level async code with hand-rolled, manually polled Futures, and push/manage as much stuff on the stack as possible. But... that takes time, and performance is already "good enough". (And/or there are probably bigger gains to be had in other areas, such as scheduling.)

And, finally, boxing helps with compile times. (Because it's basically trivial for the compiler to prove that a heap-allocated simple trait object behaves well, compared to a stack-allocated concrete, but usually highly complex (plus unnameable), type.)


Can I ask what you mean when you say that async isn't done? async/await and Futures are on stable now. There is no runtime in the core/std but that's by design, there is no short term intent from the Rust team to do that. Tokio and async-std are both fairly usable as well.


The team itself described the stabilization of async/await as being in an "MVP" state. Stuff that's been fixed since then:

* error messages (I think there may be more to do, but they're better and better all the time)

* size of tasks (still more to do, but they've shrunk a lot since it was first implemented)

* async/await on no_std (initial implementation used TLS to manage some state, that's since been fixed)

Stuff still to do:

* Async functions in traits (this has a library that lets you work around this with one allocation, having this requires GATs)

* More improvements to the stuff above

* other stuff I'm sure I'm forgetting

> There is no runtime in the core/std but that's by design, there is no short term intent from the Rust team to do that.

This is true but with an asterisk; https://github.com/rust-lang/rust/pull/65875 ended up being closed with "please write an RFC," rather than "no."


Huh! Thanks for chiming in and that's good to know!

Side note, I just want to say that I have mad respect for you always finding a way to stay so positive while continuing to be so heavily involved with the community, even with job changes and all of that stuff going on. You're an absolute role model for how to participate in community management and a godsend for Rust.


Thanks. It has taken a ton of work, frankly. I made a conscious decision to try and improve in this way a while back, and while I'm not perfect at it, I'm glad that I'm moving in that direction.


As a complete noob to rust, I found attempting to manage a memory pool that is handed out per request to cause all kinds of borrow checking headaches.

My bet is people choose to allocate heavily rather than figure out how to share memory correctly with Rust. I need to learn more rust to get a hang of it so I don't allocate so much.

Are there any rust folks who have a great "list of memory management models for rust"?
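Not a canonical list, but one common model is to hand out indices into a pool instead of references, which sidesteps most borrow-checker fights at the cost of an indirection. A minimal hand-rolled sketch (crates like typed_arena or slotmap are more complete takes on this):

```rust
// Minimal index-based pool: handles are plain usizes, so they can be
// stored and copied freely without tying up a borrow of the whole pool.
struct Pool<T> {
    items: Vec<T>,
}

impl<T> Pool<T> {
    fn new() -> Self {
        Pool { items: Vec::new() }
    }

    // Hand out an index instead of a reference; a reference's lifetime
    // would otherwise keep the whole pool mutably borrowed.
    fn alloc(&mut self, value: T) -> usize {
        self.items.push(value);
        self.items.len() - 1
    }

    fn get(&self, handle: usize) -> &T {
        &self.items[handle]
    }
}

fn main() {
    let mut pool = Pool::new();
    let a = pool.alloc("first request".to_string());
    let b = pool.alloc("second request".to_string());
    // Handles stay valid even though the Vec may have reallocated.
    assert_eq!(pool.get(a), "first request");
    assert_eq!(pool.get(b), "second request");
}
```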


Rust does not yet have local custom allocators ala C++. (You can change the global allocator, but this doesn't help the "memory arena" use case.) The feature is very much on the roadmap, though.


While this is true, this doesn't inherently mean you can't do this pattern. https://crates.io/crates/typed_arena for example. It depends on exactly how allocations and your data structure are tied together.


It would be amazingly handy if someday it was possible to direct Rust to make all heap allocations under a certain block within an arena. At the moment it looks like you have to rewrite any third party code that makes heap allocations to use the arena explicitly at each allocation.


Yep, it would. There’s certainly a bunch of work to do here still.


I often write my code to allocate at first just to get a feel for the API and business logic and then go through it later to remove them. Given that all of those frameworks are still fairly new it wouldn't shock me to hear that that's the case.


I'd be hard pressed to believe that a backend in Rust gives merely the same perf as Kotlin/F#. Especially if you write the Rust with an async stack, you will likely come out significantly ahead of the other options: small binaries, short startup time, less memory usage, more throughput, slightly shorter request cycles, ...

The only places the langs you mention probably win are learning curve (F# maybe not so much), a.k.a. getting language-illiterate people up to speed, and compile times.


If you allocate and free small memory chunks at high frequency in a non-GC language, then it's quite likely that similar code in a GC'ed language will come out faster. The point of manual memory management isn't that it is "automatically" faster than a GC, but that it gives you a lot of control to reduce the allocation frequency (e.g. by grouping many small items into few large allocations, or moving allocations out of the hot code path).
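A toy illustration of that control, grouping many small allocations into one large one (no timing numbers claimed here; results depend on the allocator and machine):

```rust
use std::time::Instant;

// Many small heap allocations: one Box per item.
fn boxed_sum(n: u64) -> u64 {
    (0..n).map(Box::new).map(|b| *b).sum()
}

// The same work with a single large allocation holding all items
// contiguously, which also helps cache locality.
fn pooled_sum(n: u64) -> u64 {
    let items: Vec<u64> = (0..n).collect(); // one allocation
    items.iter().sum()
}

fn main() {
    let n = 1_000_000;

    let t = Instant::now();
    let a = boxed_sum(n);
    let boxed_time = t.elapsed();

    let t = Instant::now();
    let b = pooled_sum(n);
    let pooled_time = t.elapsed();

    assert_eq!(a, b); // same result, different allocation strategy
    println!("boxed: {:?}, pooled: {:?}", boxed_time, pooled_time);
}
```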

Also, async itself doesn't magically make code run faster (on the contrary), it just lets you (ideally) do something else instead of waiting for a blocking operation to complete, or at least give you the illusion of sequential control flow (unlike nested callbacks).


Modern allocators (jemalloc, mimalloc) do pooling optimization just as well as a GC. The point of a non-GC language is usually about pointing directly to the source data instead of copying, and having value types that fit nicely together in the same memory page instead of jumping around in heap space. I know a JIT can do all that, but reliably? We'd need a Sufficiently Smart Compiler.


Whether to use an async stack or not is very application dependent. For something closer to an application server than a proxy server I’m unconvinced async would be superior to threads.

I feel that Rust should be competing with C/C++ here though. I want a modern language with the same speed and control of memory usage. What's a little frustrating (and don't get me wrong, I'm rooting for Rust to get there) is that it feels like it should be possible to offer this, at least for threaded web servers.


There are other interesting languages in this space that I really enjoy playing with. Both Nim and Zig generate native code and are really fun, albeit very different, languages to work with.


I'm not sure. These reasons sound reasonable, but in webapps most of the time is spent on IO handling anyway. As long as you're using async implementations in any language with performant "threads", I'm not sure there would be a large difference in response times unless computation was involved in the request path.


You forgot development times.


I broke it down into:

> a.k.a. getting language illiterate people up to speed, and compile times.

With equally skilled people I don't think these languages yield different dev speeds, apart from the damage done by compile times.


Rust never has to run a garbage collector, because it already figured out the lifetimes of all the values in your program statically, with relatively minor help from the programmer. This may make your code faster or more likely to be correct than code written in some of those languages.


It didn't figure them out at all; the programmer had to figure them out the hard way and then code them in so that the compiler understands them.

GC is figuring out things mostly correctly and the programmer can focus on other things, at least until they hit a performance problem :)


For the most part, lifetime is inferred from scope, so there isn't anything special to figure out; objects live exactly for as long as you can see and use them. This also includes such things as lock handles, or references to refcounted objects.
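A small sketch of what that looks like in practice (hypothetical types, recording drops into a log so the order is visible): each value's destructor runs at the exact point its scope ends, innermost first, with no GC involved.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A value's destructor runs exactly when it goes out of scope; lock
// guards and refcounted handles rely on this same mechanism.
struct Noisy {
    name: &'static str,
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Drop for Noisy {
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.name);
    }
}

fn drop_order() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _outer = Noisy { name: "outer", log: Rc::clone(&log) };
        {
            let _inner = Noisy { name: "inner", log: Rc::clone(&log) };
        } // _inner dropped here, deterministically
    } // _outer dropped here
    Rc::try_unwrap(log).unwrap().into_inner()
}

fn main() {
    assert_eq!(drop_order(), vec!["inner", "outer"]);
}
```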


Only in the most trivial cases. Tons of real Rust uses copying or RC/ARC or even unsafe to avoid dealing with complex lifetimes because it's... complex.


Copying, refcounting and locking work within the exact same semantics. They're among the most typical tools that are used to transmit or share data, so I'm not sure why it should be strange to see them used in most real programs?


That doesn't change the reality of allocation though. Programmers in languages like C++ and Rust choose when and where to allocate memory, and if there's one thing we know about performant code, it's that we shouldn't be allocating and deallocating repeatedly. Nothing about lifetimes helps or hurts about this programming model. Lifetimes (Rust) and RAII (C++) are just semantic models to help rein in the cognitive load of memory allocation.


I don't think you've understood the nature of the borrow checker in Rust. It doesn't add any extra runtime behaviour, in fact you could compile Rust code without running these checks (though I don't think rustc provides you with a way to do so). Rust's memory management tools are no different from C++: you have stack allocated structures with destructors, which the standard library uses to implement heap-allocated smart pointers, shared pointers, mutex guards, etc, and you can hand out pointers to any of these. The only difference in rust is it checks how these pointers get handed out so you don't get use-after-free, data races, etc.


I doubt that you'll write a large Rust program without relying on refcounting at least for some of your objects. Refcounting is not fundamentally different than a GC.


Reference counting is a kind of garbage collection (taken broadly), but it's quite different from tracing GC.
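A quick sketch of one behavioral difference: Rc reclaims memory at a deterministic point, the moment the last handle goes away, rather than at some future collection pause.

```rust
use std::rc::Rc;

fn main() {
    // Refcounting frees at a deterministic point: when the count hits 0.
    let a = Rc::new(String::from("shared"));
    assert_eq!(Rc::strong_count(&a), 1);

    let b = Rc::clone(&a); // cheap: bumps the count, copies nothing
    assert_eq!(Rc::strong_count(&a), 2);

    drop(b); // the count drops immediately, not at a future GC pause
    assert_eq!(Rc::strong_count(&a), 1);

    // Unlike a tracing GC, plain Rc cannot reclaim reference cycles;
    // that's what std's Weak (or crates like cactusref) are for.
}
```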


Naive use of RC doesn't look like tracing much but as soon as you need cycle-aware RC like the CactusRef the algo looks a lot like tracing GC https://github.com/artichoke/cactusref#cycle-detection


That's a fair point, I should distinguish heap allocation from reference counting. It still seems somewhat tricky to replicate the well proven arena allocation approach of Apache and Nginx which avoids memory fragmentation in current stable Rust though.


For general areana allocation, you can use something like the Bumpalo crate: https://docs.rs/bumpalo/3.4.0/bumpalo/


Nice analogy. Here's something new: Rust is not just a sports car, but an eco-friendly electric one at that.

One topic OP hasn't touched upon is the disadvantage of "pushing client side to the server". JS is terrible at handling concurrent processes. The browsers have been able to make do because they typically care about just one user.

In the server though, it's terribly important to handle thousands of users concurrently. JS engines suck at it.

To put it in plain words, if you are into SPAs and want server side rendering, among all the other issues the one sure wall you'd be hitting is scaling issues.

JS will never be able to construct and send HTML over as efficiently as Rust could. This is going to be the reason why Rust(or a similar tech) will eventually win the web.


I wish speed mattered more, but JavaScript on Nodejs is performant enough for 95% of websites out there. Most websites are in the long tail, and never need to handle more than about 10 requests per second. For most websites, team velocity matters more than the AWS bill.

A clean static rendering system with caching, built on top of react / svelte / whatever performs well enough for almost everyone. And every year cpu costs drop, V8 gets faster and JS libraries get a little bit more mature.

I love rust, but I don’t see it displacing nodejs any time soon. Maybe when websites can be 100% wasm and the rust web framework ecosystem matures some more. Fingers crossed, but I’m not holding my breath.


Yep, I don't think Rust is quite there yet for web dev. I didn't use Rust for any frontend interactions. The web app is an SPA written in Svelte. I only used it for the core state update logic, which benefits from the typing and performance boost.


Yep! You found a really great open road to drive Rust on. I'm jealous :D. If I had an app with such interesting requirements, I'd certainly consider your approach.


Love the analogy!

I think there's value in mixing Rust with languages with mature libraries like JS/Python/etc to optimize performance critical code paths - basically for things we've been using C/C++ so far.

There's also a lot of progress in Rust community, I believe it'll get mature libraries very soon in future.


I use Rust for web not for performance, but for strict typing + smart compiler, safe refactoring, errors handling and many other language features - Rust is an amazing language even when you don't need performance.


You might enjoy this blog post I wrote: http://kyleprifogle.com/churn-based-programming/


It would be even more enjoyable if the text were black; I had to use reader mode.


Disadvantage of retina screens, I didn't even realize. Thanks for pointing it out, I'll fix it.


The first exposure is the hardest part. It's not the kind of language that one can just pick up and use effectively without first understanding the fundamentals, especially if using an async runtime. The investment is worthwhile, though.


> webassembly is faster than javascript

Everyone says this, but I would dispute it as misleading in a lot of cases. I've been experimenting a lot with wasm lately. Yes, it is faster than javascript, but not by all that much.

It's the speed of generic 32 bit C. It leaves a lot to be desired in the way of performance. My crypto library, when compiled to web assembly, is maybe 2-3x the speed of the equivalent javascript code. Keep in mind this library is doing integer arithmetic, and fast integer arithmetic does not explicitly exist in javascript -- JS is at a _huge_ disadvantage here and is still producing comparable numbers to WASM.

This same library is maybe 15 or 16 times faster than JS when compiled natively, as it is able to utilize 128 bit arithmetic, SIMD, inline asm, and so on.

Maybe once WASM implementations are optimized more the situation will be different, but I am completely unimpressed with the speed of WASM at the moment.


I also have a WASM crypto library, focused on hashing algorithms: https://www.npmjs.com/package/hash-wasm#benchmark

I was able to achieve 10x-60x speedups compared to the performance of the most popular JS-only implementations.

You can make your own measurements here: https://csb-9b6mf.daninet.now.sh/


Yeah, hashing in WASM seems to be fine in terms of speed, though 60x faster does still sound surprising to me. Hashes with 32 bit words (e.g. sha256) can be optimized fairly well in javascript due to the SMI optimization in engines like v8. I should play around with hashing more.

I was in particular benchmarking ECC, which is much harder to optimize in JS (and in general).

Code is here:

JS: https://github.com/bcoin-org/bcrypto/tree/master/lib/js

C: https://github.com/bcoin-org/libtorsion

To benchmark:

    $ git clone https://github.com/bcoin-org/bcrypto
    $ cd bcrypto
    $ npm install
    $ node bench/ec.js -f 'secp256k1 verify' -B js

    $ git clone https://github.com/bcoin-org/libtorsion
    $ cd libtorsion
    $ cmake . && make
    $ ./torsion_bench
    $ make -f Makefile.wasi SDK=/path/to/wasi-sdk
    $ ./scripts/run-wasi.sh torsion_bench.wasm ecdsa


I don't think he's complaining about WASM speed vs JS at all.

He wants WASM to be closer to C performance. I.e. to move further along this line:

    |JS|========>|WASM|=========>=========>========>|C|


>> webassembly is faster than javascript

> Everyone says this, but I would dispute it as misleading in a lot of cases. I've been experimenting a lot with wasm lately. Yes, it is faster than javascript, but not by all that much.

I think even if it is faster in general, you might lose all that advantage as soon as you have to cross the WASM <-> JS boundary and have to create new object instances (and associated garbage) that you never would have needed to create if you had used only one language.

Therefore moving to WASM for performance reasons on a project which crosses the language boundaries very often due to browser API access doesn't seem too promising to me.


I am writing an app that needs worker threads both on the backend and also on the frontend (because of some heavy processing of large amounts of "objects") and my experience with TS so far is very poor. JS runtimes are just not suitable for heavy concurrent/parallel processing. Serialization/deserialization overhead between threads is probably (much) worse than it would be if the worker threads were in Rust.

It's not a matter of speed here. It's a matter of enabling certain types of programs which are borderline impossible with pure JS runtimes.

So I will probably move most of the logic to Rust
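A rough sketch of what this looks like on the Rust side (a hypothetical function, std only): worker threads share one buffer through an Arc, so nothing is serialized or copied between them, unlike postMessage between JS workers.

```rust
use std::sync::Arc;
use std::thread;

// Worker threads share one large read-only buffer via Arc; cloning the
// Arc bumps a refcount and copies no data.
fn parallel_sum(data: Arc<Vec<u64>>, workers: usize) -> u64 {
    let chunk = data.len() / workers;
    let mut handles = Vec::new();
    for w in 0..workers {
        let data = Arc::clone(&data); // no serialization across threads
        handles.push(thread::spawn(move || {
            let start = w * chunk;
            // The last worker picks up any remainder.
            let end = if w == workers - 1 { data.len() } else { start + chunk };
            data[start..end].iter().sum::<u64>()
        }));
    }
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    let data = Arc::new((1..=1_000u64).collect::<Vec<_>>());
    assert_eq!(parallel_sum(data, 4), 500_500);
}
```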


Javascript runtimes do fine with concurrent operations, but obviously are not intended for parallelism.

On the WASM side: Does WASM support real threads yet? Otherwise moving to Rust wouldn't really help you? If it's just "WebWorker" like multiple runtimes, you might still pay serialization costs to move objects between workers.


No, JS runtimes don't do "fine" with concurrent operations, unless you are "waiting". If you are doing heavy processing, the whole service freezes. That's indeed the primary reason I need worker threads.

Erlang's runtime does "fine" with its preemptive concurrency model, JS runtimes are a joke in this regard.


Have you tried async generator functions?


> Yes, it is faster than javascript, but not by all that much. ... My crypto library, when compiled to web assembly, is maybe 2-3x the speed of the equivalent javascript code.

2-3x may not be the 15-16x you see in native code, but it's still a massive speedup over already-optimized code, and is likely enough to make a bunch of applications that weren't quite feasible on the web now feasible.


I think the point is that only certain use cases (usually related to number crunching like crypto, but undoubtedly games too) may see substantial improvements, they still aren't close to "native" speed, and nearly all other use cases won't see much if any benefit, especially compared to the additional complexity of another language, compiling to wasm, etc.

Plus, things like competitive games and what I'll call "pretty" games have to squeeze out as much performance as possible, and no hitching is acceptable in competitive games, which IMO means WASM is still a no-go for those types of games (although games that don't have this requirement undoubtedly benefit).


Another consideration here is that you can use languages that offer control over data layout as a natural feature, which can matter a lot for good cache utilization etc. In many cases the data layout matters a lot for performance.

You can do this in JS too with TypedArray and whatnot but the key word here is 'natural.'

I've been working on a game engine as a side project with C++ and WASM -- and there are already many improvements over what I was getting with JS due to less GC, better data layout (the data layout thing is also esp. helpful for managing buffers that you drop into the GPU). I don't think it was about 'pure compute' as much as these things. C++ and Rust give you tools to manage deterministic resource utilization automatically which really helps.

A bonus is that the game runs with the same code on native desktop and mobile.


Chrome already has experimental support for SIMD, have you tried that as well?

Otherwise, eventually I expect WebAssembly to match what Flash CrossBridge and PNaCl were capable of 10 years ago.


No, but I've been meaning to test this. I did notice it was available in node.js with --experimental-wasm-simd. I hope this proves me wrong about wasm, but I'll have to try it.


Note that Emscripten has an implementation of C++ intrinsics based on wasm simd: https://emscripten.org/docs/porting/simd.html


Have you tried with firefox? Last I heard they've got far and away the fastest wasm implementation.


No. I've just been building with the WASI SDK and running the resulting binary with a small node.js wrapper script. So, I've only tested v8's WASM implementation so far.

Does firefox have a headless mode, a standalone implementation, or some CLI tool I can use to run a WASM binary? Running stuff in the browser is cumbersome.


You can use jsvu to grab command-line shell binaries for spidermonkey (firefox's JS engine), v8, and jsc (safari's JS engine) to toy around with.


I think there should be a headless mode. For example, you can run unit tests using the gecko driver (?) without Firefox showing up, running it as a background process to perform the testing steps and report results.


On algorithmic code, idiomatic Rust + WASM is often about 10 to 40 times faster than idiomatic JS. The problem however is that each call between WASM and JS has a hefty cost of about 750ns. So your algorithm needs to be doing a significant amount of independent calculations before you will see these performance differences.


> each call between WASM and JS has a hefty cost of about 750ns

Why is this the case? Can we expect it to go away as browsers mature?


But 2-3x speedup over a well written Javascript version running in a modern Javascript engine is quite impressive, isn't it?

One thing that's often overlooked is that even though "idiomatic Javascript" using lots of objects and properties is fairly slow, it can be made fast (within 2x of native code compiled from 'generic' C code) by using the same tricks as asm.js (basically, use numbers and typed arrays for everything). But the resulting Javascript code will be much less readable and maintainable than cross-platform C code that's compiled to WASM.


> maybe 2-3x the speed of the equivalent javascript code.

That is an insane difference even if nowhere close to native code.

Some engineering fields would go crazy over a 20% gain... 200% is huge!


I agree it still is signficant, but this is not how wasm is being touted. Look at wikipedia: https://en.wikipedia.org/wiki/WebAssembly#History

They even say asm.js is supposed to offer "near-native code execution speeds". In my experience, it is nowhere close to native speed. People should avoid this kind of deceptive marketing.


On that front, I agree. Wasm is being sold as the greatest innovation ever, when it is just another virtual ISA with lacking features and performance.


WebAssembly with Rust is sometimes bigger and "just as fast as JS".

But it's way more predictable, no more crazy 95th percentiles.


I'd be interested to see how current-day WASM stacks up against current-day Java. Don't suppose you've ported your crypto code to Java? Other than the maturity of the JIT compilers, are there reasons we should expect WASM to be any slower?


In some ways it should be faster because it isn't garbage-collected. But I agree that would be a much better benchmark for what should be possible.


Hi Nicolo,

Here are my two (or more) cents:

1 - Javascript is fast enough for your use case. No one will notice the difference in a board game website.

2 - AI should probably be implemented in python anyway. As ugly as python can be, you shouldn't fight the world; there are too many free advanced AI algo implementations in python out there.

3 - Regarding "Limitations of TypeScript" (strict typing / data validation / error handling): these are arguable claims to begin with, but even if you think rust is better on these fronts, they are not important enough to justify a reimplementation, throwing away typescript's advantages and taking on the risk involved in a new language and toolset. Yes, I can see the appeal as a programmer to learn new stuff, but seriously, you should have much better reasons to rewrite existing code.

BTW, also, if I were to consider a rewrite, I would go with scala or python; both are slower on the web, but seriously, it will not amount to anything meaningful in your use case. Scala has better typing, and contextual abstraction is a killer feature for DSLs like game mechanics specification. Python is the lingua franca of AI, and pypy has greenlets [1]! Which is much cooler than people seem to realize. Specifically, it should allow writing immutable redux-like command pattern operations as regular code, without the weird switches everywhere.

BTW2, I've contributed to boardgame.io, please consider staying with open source. We can build something like boardgame-lab together.

[1] https://greenlet.readthedocs.io/en/latest/


Hey Amit, good to hear from you!

> Javascript is fast enough for your use case. No one will notice the difference in a board game website.

No, actually. Have you seen how long boardgame.io's MCTS bot takes to make a Tic-Tac-Toe move? Not the end of the world, but certainly in need of improvement.


Yes, but AI search algorithms like MCTS are slow in general. Even with C, it will be very slow when you want the AI to be smart and consider many actions. IMO, you should train using python libs like [1] and move it to the browser with something like [2], OR run the AI on the server. YES, writing an AI lib for the browser is definitely a great goal, and as a programmer it is super interesting. Still, it is tough, and time-to-market is much more crucial, as the entire codebase will change once it interacts with actual people.

[1] https://github.com/datamllab/rlcard

[2] https://onnx.ai/


Other than solved games like tic-tac-toe, game-playing bots can always use more performance because if you can search more efficiently, your bot gets smarter. Sometimes supposedly more sophisticated algorithms end up making things worse because they slow down the search.

It's an unusual area where functionality (what answer you get) and performance can't be separated. This is still true on the server.


To be clear, I'm not talking about the server implementation only about implementation on the browser.

And of course, Rust and C are faster than javascript, and it will be noticeable on search algorithms, but IMO, it will not cause user loss. It is smarter to get to the stage where you have those users as fast as you can to validate this need.

So I would not go into implementing new AI libs in rust as part of a turn-based game engine just for the performance gain--mainly because it is a tougher project than building a turn-based game engine!

I would search for existing, viable solutions. As I said before, this includes server AI and server trained models running on something like tensorflow.js. Additionally, I will also try to research browser libs that may use compiled webassembly and webGL. In any case, I think it's not wise to build your own for this purpose.


Please ignore him wholeheartedly.

Some people want to write profitable apps in the shortest amount of time possible, while others want to advance the state of the art in technology.

IMO we always need more people in the second group. And sometimes, you can hit the jackpot and do both things at the same time!


"Please ignore him whole hearty."

My advice is not about profit.

My "time-to-market is most important" advice (not mine, it is a very sound and well researched strategy) is about making a sustainable project:

A) Most passion-only projects are abandoned at some point in order to pay for food and rent. (Yes, there are a few exceptions among an ocean of failed projects.)

B) The other point is that even if you can run this for years before showing traction, having users means getting feedback, which will render some of your efforts needless.

Moreover, it was only a small part of what I said. I do think that rust doesn't make sense for other reasons, and doing a rewrite in a real project needs better reasoning otherwise you would keep rewriting forever.

Moreover2, I think you're misinterpreting the original author's intent. I'm not sure if there is interest in building AI in rust as a goal for itself. Seems to me more oriented towards the result than about the love for rust.


For a lot of code, if you write JS like you write C (avoid allocations by avoiding the creation of objects, arrays or closures) you should get very comparable performance i.e. within 30% to 50% of C performance


You can indeed avoid most of the js overhead if you use a particular subset of the language. In fact, that's how asmjs started, but:

- it's not JavaScript anymore, you'll lose most of the idioms you're used to working with, and it's a nightmare to maintain because it's pretty low level.

- it won't be as fast as it could unless the runtime is aware of the fact that you are using that subset (that's why asmjs used some annotation to run on a special engine on Firefox, and part of the reason why browsers moved to wasm).


I don't know about JS being slow; while working on https://curvefever.pro it was not too hard to make the game run at 4k, 60FPS for a 6-player multiplayer game that renders dynamic geometry with collisions that change multiple times per tick.

JS is pretty fast if you use TypedArrays, are careful with WebGL calls and pool objects (reduce memory allocations).


[TypeScript] does not actually ensure that the data you are manipulating corresponds to the type that you have declared to represent it. For example, the data might contain additional fields or even incorrect values for declared types.

This is only a problem if you are mixing untyped code with typed code, isn't it? Like when you are parsing JSON, you need to do typechecking right then, rather than just using an "any" type. The only other situation I have run into this in practice with TypeScript is when I'm using a library that has slightly misdefined types.


> This is only a problem if you are mixing untyped code with typed code, isn't it?

I find it a bit strange that people talk about this as "only a problem", as though it was some weird niche edge case and not an ever-present issue. The written-in-plain-JS ecosystem completely dwarfs the written-in-TypeScript one; unless you're doing something rather trivial you're quite likely to end up depending on a library that barely had the shape of its objects in mind during development, with typings (if they're even present) that were contributed by someone that's almost definitely not the actual library author.

Of course, if you're competent enough you can correct typings and always remember to run runtime checks on data that doesn't come from you and so on. But it's too easy for a beginner to fall into traps like mishandling data that comes from a dependency (especially when it's buggy/ostensibly has typings) - in my opinion, there should be a first-party compiler/linter option to enforce checks on any `any`.


You don't have to depend on badly-written untyped third-party libraries, just because there are a lot of them out there. Many projects and companies will avoid doing so. This is especially reasonable if your comparison is switching to a language like Rust; there are probably fewer third-party libraries available in Rust overall than there are libraries with accurate TypeScript definitions available.


"Badly-written" is of course subjective, but as for untyped/loosely typed I think it's a bit difficult to claim that people are avoiding using them when (for example) the typings for an obviously popular library like Express are chock-full of `any` types[0]. Including several (like request bodies) that should instead be `unknown`. I'm sorry but it's rather naïve to expect a beginner to TypeScript, especially one that's coming from JS, to not trip up at all using typings like that and a compiler/language spec that implicitly casts things declared as `any`.

0. https://github.com/DefinitelyTyped/DefinitelyTyped/blob/mast...


You still have to validate input whenever you interact with servers, the operating system, or the user.


No, nominal types are extremely useful to ensure the correctness of your software. You can define a function to receive a FirstName and LastName instead of passing strings so you cannot accidentally mix up the parameters for example. There are several techniques to approximate nominal typing in typescript (https://medium.com/better-programming/nominal-typescript-eee... for example) and there is an open issue https://github.com/Microsoft/Typescript/issues/202 but it’s still not being considered for implementation as far as I remember.
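For contrast, Rust is nominally typed out of the box, so the same idea is just the newtype pattern. A minimal sketch (the names `FirstName`/`LastName` are illustrative, not from any library):

```rust
// Newtype wrappers: same runtime representation as String, but distinct types.
pub struct FirstName(pub String);
pub struct LastName(pub String);

// Swapping the arguments is now a compile error, not a latent bug.
pub fn greet(first: &FirstName, last: &LastName) -> String {
    format!("Hello, {} {}!", first.0, last.0)
}
```

Calling `greet(&last, &first)` simply does not compile, which is exactly the guarantee a structural type system can't give you for two wrapped strings.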


That's correct. You have to insert validation code at all the entry points, which was the case for me.

Moving to Rust doesn't eliminate validation altogether, but you don't have to do any type related validation which is nice.
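A std-only sketch of what that remaining validation can look like (a hypothetical `Person` with a made-up range check): past the constructor, the compiler already guarantees the fields have the right types, so only the values need checking at the boundary:

```rust
pub struct Person {
    pub name: String,
    pub year_of_birth: u32,
}

impl Person {
    // The type system guarantees `year_of_birth` is a number everywhere,
    // so the only validation left is about the value itself.
    pub fn new(name: &str, year_of_birth: u32) -> Result<Person, String> {
        if !(1900..=2020).contains(&year_of_birth) {
            return Err(format!("implausible year of birth: {}", year_of_birth));
        }
        Ok(Person { name: name.to_string(), year_of_birth })
    }

    pub fn age(&self, current_year: u32) -> u32 {
        current_year - self.year_of_birth
    }
}
```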


Not sure if this is the place to ask this, but if someone does not have experience working with JavaScript, they might have trouble reasoning about this code:

https://codesandbox.io/s/is849

edit: complete code

```typescript

class Person {
    id: number;
    name: string;
    yearOfBirth: number;

    constructor(id: number, name: string, yearOfBirth: number) {
        this.id  = id;
        this.name = name;
        if (yearOfBirth < 1900 || yearOfBirth > 2020) {
            throw new Error("I don't understand you. Go back to your time machine.");
        } else {
            this.yearOfBirth = yearOfBirth;
        }
    }

    getAge(): number {
        const currentDate: number = new Date().getUTCFullYear();
        return currentDate - this.yearOfBirth;
    }
}

class Dog {
    id: number;
    name: string;
    yearOfBirth: number;

    constructor(id: number, name: string, yearOfBirth: number) {
        this.id  = id;
        this.name = name;
        if (yearOfBirth < 1947 || yearOfBirth > 2020) {
            throw new Error("I don't understand you. Go back to your time machine.");
        } else {
            this.yearOfBirth = yearOfBirth;
        }
    }

    getAge(): number {
        const currentDate: number = new Date().getUTCFullYear();
        return (currentDate - this.yearOfBirth) * 7;
    }
}

const buzz: Person = new Person(1, `Buzz`, 1987);

const airbud: Dog = buzz;

console.log(`Buzz is ${buzz.getAge()} years old.`);

console.log(`Airbud is ${airbud.getAge()} years old in human years.`);

```


Unfortunately HN doesn't support commonmark's triple backtick code blocks. You'll need to use 4 spaces before each line of code.


Two spaces. Four will work, of course, it just wastes a bit of horizontal screen space.


What’s confusing? I must’ve missed it in my cursory scan.


Dog and Person are structurally the same so you can assign a person to a dog and vice versa. But that's just how structural typing works and as a user of TypeScript I haven't run into a case where this'd be an issue.


Haha oh. Yeah, assumed that the Dog definition wasn’t worthless. Indeed TS is structurally typed. And that’s nice!


I worked on a web based editor. A library would give us a range to highlight, in 1-based coordinates. The editor control was 0-based. As you can imagine it was easy to forget to translate back and forth in one path or another. In a strongly typed language I would simply define two Range types and the compiler would eliminate the mistake. I assumed Typescript could help me in the same way but it allowed the two types to be interchanged silently because they had the same structure. Perhaps I was holding it wrong?


Typescript has two hacks that help with mixing of similar data and introduce somewhat-nominal typing - branding and flavoring [1]. Also see smart constructors [2] for more functional approach.

[1] https://gist.github.com/dcolthorp/aa21cf87d847ae9942106435bf...

[2] https://dev.to/gcanti/functional-design-smart-constructors-1...


That's the most they could simplify the code to make the point?


A person value assigned to a dog variable


No[1], TypeScript is unsound in many ways. Most of them are intentional trade-offs to catch as many bugs as possible while not being too restrictive/allowing most JavaScript code to be annotated.

[1]: https://codesandbox.io/s/te0pn?file=/index.ts

Edit: Apparently TypeScript playground links cannot be shared :( Edit2: Published on codesandbox.io, hopefully that works


> The only other situation I have run into this in practice with TypeScript is when I'm using a library that has slightly misdefined types.

Unfortunately there's no way to know which libraries have defined their types correctly, so you end up having to check the types you get back from every library you use.


When getting external data, it is always good to do a combination of casting and validating. There are a lot of good libraries to help you with this: https://github.com/moltar/typescript-runtime-type-benchmarks


I don’t know Rust and haven’t done JavaScript for years but isn’t JavaScript a bit more of a higher level language, which would make a developer more productive?

Rust being closer to the hardware should require more code, and effort, to accomplish the same task.


> isn’t JavaScript a bit more of a higher level language

Rust has zero-cost abstractions that feel like using Ruby and a rich type system that makes it incredibly expressive. Working in other languages feels like going back to assembly.

I wouldn't want to design state machines in any other language. Rust's enums make it feel smooth as butter. They're a killer app. (The whole ecosystem is. Cargo. Package naming. Traits. So much good stuff.)
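As a rough illustration of why enums fit state machines so well (the states and transitions here are invented for the example):

```rust
// Each state carries only the data that is valid in that state.
pub enum Connection {
    Disconnected,
    Connecting { attempts: u32 },
    Connected { session_id: u64 },
}

pub fn step(conn: Connection) -> Connection {
    // `match` is exhaustive: forgetting a state is a compile error.
    match conn {
        Connection::Disconnected => Connection::Connecting { attempts: 1 },
        Connection::Connecting { attempts } if attempts < 3 => {
            Connection::Connected { session_id: 42 }
        }
        Connection::Connecting { .. } => Connection::Disconnected,
        Connection::Connected { session_id } => Connection::Connected { session_id },
    }
}
```

Invalid states (a session id while disconnected, say) aren't merely discouraged; they're unrepresentable.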

Rust is my most productive language now, and I work in Python, Java, and Typescript codebases frequently.

Edit: I'm being downvoted for expressing preference? What's with all the Rust hate? Learn it instead of being a hater. It's a wonderful tool, and it's silly to dismiss because you think people are being hipsters or something. There's a reason people love it.


I'm a big Rust fan too. I have experience in all of the languages you've mentioned, and I'm most productive in Rust as well.

It's funny you mention enums, there was just another thread last week where I brought up how crazy it is that sum types (Rust enums) and pattern matching have been around since the 70s but have largely been limited to the FP languages. After extensively using Rust for ~3 years now, I don't ever want to use a language without sum types - you can write incredibly expressive and concise code with them.

The other major productivity boost for me is the ability to compose and chain Iterators and mix those with collection types.
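A small example of that composition (the function and data are made up), filtering and mapping lazily and collecting straight into a `HashMap`:

```rust
use std::collections::HashMap;

// A lazy pipeline that collects directly into a collection type:
// no manual loop, no intermediate Vec, no mutation.
pub fn word_lengths(words: &[&str]) -> HashMap<String, usize> {
    words
        .iter()
        .filter(|w| w.len() > 3)           // drop short words
        .map(|w| (w.to_string(), w.len())) // pair each word with its length
        .collect()                         // build the map in one step
}
```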


People often demand generics in Go. But I think that actual sum types would be a bigger boon (if I got to choose).

You can basically avoid interface{} in generic Go code with some copy and paste. But you can't really avoid it when you're trying to write code that would be cleaned up by an actual sum type.

Not to commit the predictable sin of comparing Rust to Go any time either is brought up. I just mean to +1 the idea that sum types should actually be a fundamental tool that we can reach for in every language. So weird that it's taking so long to go mainstream, so to speak.


I just implemented an iterator in rust yesterday. After having done it in c++ back in the day, I was blown away by how easy and powerful it is.
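For anyone who hasn't seen it: Rust's whole iterator protocol is a single required method on one trait. A toy countdown as a sketch:

```rust
pub struct Countdown {
    pub remaining: u32,
}

impl Iterator for Countdown {
    type Item = u32;

    // The entire iterator protocol is this one required method.
    fn next(&mut self) -> Option<u32> {
        if self.remaining == 0 {
            None
        } else {
            let current = self.remaining;
            self.remaining -= 1;
            Some(current)
        }
    }
}
```

Once `next` exists, every adapter (`map`, `filter`, `sum`, `collect`, ...) comes for free: `Countdown { remaining: 3 }.collect::<Vec<_>>()` yields `[3, 2, 1]`.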


I think a lot of the best things in Rust don't really have anything to do with low-level programming per se. And I find that for most applications, even with all of the extra goodness, the lack of a GC totally craters productivity. It's not because I'm fighting the borrow checker -- I got the hang of it pretty quick. But it means that every API is complicated by the need to think about lifetimes and ownership, having to think about different types of smart pointer, etc. It means you have to manage all of these silly little details which for most applications are pretty irrelevant.

This isn't a criticism of Rust; it's specifically designed for applications where those things do matter. But for the overwhelming majority of apps they don't, and I'd reach for a different tool.

I've found that once I get into a rhythm and I've been working on something in Rust for a bit, it doesn't feel that hard to deal with all that. And a few times I've thought to myself "hey maybe the GC isn't actually buying you all that much?" But then I go back to a GC'd language and watch just how much faster stuff gets done. It's not even close. I think it's one of these things where your brain doesn't notice the time that goes by when you're doing what is essentially mindless busywork.

Part of this is also having come from doing a fair bit of stuff in Haskell, Elm, and a bit of OCaml; the best "high-level" language features are inspired by that language family (including enums) and so it feels like a better control for the difference a GC makes vs. js and friends. It makes a big difference.


What's puzzling is why a language designed for memory safety and low-level control and performance is even being considered for web development where they had the former all along and they generally don't care about the latter. Or if they do they use Java, Go or throw a couple dozen more servers at the problem.


It's because it also has expressiveness features that Java and Go don't.

* Better handling of "null"-ness

* Sum types

* Stricter/different error-handling

* Move semantics, which can actually be nice for some APIs outside of any performance considerations

Kotlin checks a couple of these boxes, but then is also GC'd, so also gets rid of a lot of "noise" that would be in the equivalent Rust code.

For typical backend junk, I'd be Kotlin first, but I'd definitely consider Rust if performance (non-IO) was a concern.
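The first three of those points show up together in everyday code; a minimal sketch of `Option`, `Result`, and the `?` operator (the `parse_port` example is invented):

```rust
// No nulls and no exceptions: absence and failure are ordinary values.
pub fn parse_port(input: Option<&str>) -> Result<u16, String> {
    let raw = input.ok_or("missing port")?; // Option -> Result, early return on None
    let port: u16 = raw
        .parse()
        .map_err(|_| format!("not a valid port: {}", raw))?;
    if port == 0 {
        return Err("port 0 is reserved".to_string());
    }
    Ok(port)
}
```

The compiler forces every caller to confront both the missing and the malformed cases before it can touch the `u16`.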


With all the "Rust is as productive as X" and "If your data structure/logic won't work in Rust, it was bad design" bullshit the silent majority of us are accustomed to seeing on HN...


With Java you only need to throw more RAM at the server. The performance is perfectly acceptable compared to something like Python or Ruby. You can always squeeze out more performance with Rust but the primary benefit is the lack of a GC.


I don't understand your last sentence. What benefits does lack of a GC have other than more performance?


A bit offtopic for HN, but that's a really nice username. (Just looking at that, I can definitely see how you'd find those things puzzling!)


> But it means that every API is complicated by the need to think about lifetimes and ownership, having to think about different types of smart pointer, etc. It means you have to manage all of these silly little details which for most applications are pretty irrelevant.

Rust gives you the features to auto-manage these "silly little details", you simply have to opt-in to them with a bit of boilerplate. Stuff like Rc<RefCell<T>> and the like is there for a reason.


You still have to deal with the fact that existing APIs use a broader range of types; using Rc everywhere in your own code doesn't save you from dealing with library interfaces.

Also "stuff like" is kind of the problem; all of the smart pointer types have differences that matter, and none of them is general enough to cover all use cases. Arc<Mutex<Box<T>>> comes the closest to a "general" solution, but the boilerplate is rather a lot just for the sake of not having to think about this stuff, and you still can't use it if T isn't Send.

You're really picking a fight with the language if you insist on avoiding making these decisions, and in the end it will just slow you down even more than going with the grain. And to add insult to injury, if you write all your code like that it'll likely be slower than OCaml or Haskell anyway. Rust can't keep up with a good GC on allocation throughput; the performance advantages come from managing things yourself (and avoiding heavy allocation in the first place).

Rc and friends are useful, but they don't make the problem go away.

As I said, it's not that it's even really all that hard. But it is time consuming.


In a sense Rust could have tried to become a "tiered" language with 2 standard libraries: one low level that allows precise control on heap allocations (including custom allocators) and another higher level that relied on a GC. They could both be used at the same time.

There would also be a compiler attribute #![no_gc] that make it so that you have to provide your own implementation of the GC runtime to use GC types. (similar to how executors work for futures)


D tried that one. I don't know it very well, but I believe it turned out not to be a good idea.


When you're writing synchronous, single-threaded code like, say, a binary file parser or even an HTML scraper with blocking APIs, Rust is 1:1 with high level languages, and usually better.

Where Rust diverges is when you're doing async or multithreaded things with complicated lifetimes.

Getting things right from the start instead of solving them JIT as runtime issues pop up in production over the course of the year is Rust's wheelhouse. But even after using Rust for three years, I sometimes feel like I'm one requirement-change away from a problem I can't solve myself in Rust where I could solve it in a couple hours in another language.


> But even after using Rust for three years, I sometimes feel like I'm one requirement-change away from a problem I can't solve myself in Rust where I could solve it in a couple hours in another language.

Rust does have these sorts of issues wrt. managing highly generic graph-like data, possibly with cycles. That's where tracing GC actually shines, and where writing that whole portion separately in something like Go might be the right answer.


Yep.

When I'm writing Go, I spend the whole time yearning for the abstractions of Rust, becoming a little more bitter each time I copy and paste or have to manually reify interface{}, knowing my ceremonious verbose Go code is slower than the simpler compact Rust solution.

When I'm writing Rust, I'm loving life until I hit this sort of "intractable" [for me] problem, and I wonder if I should have used Go for this part.


That is true if you have to write your own. 99.99% of the time, if you need a graph, you use petgraph and move on with life.


Reference-counted graphs are usually quite fine.


You are probably being downvoted because of this sentence:

> Working in other languages feels like going back to assembly

This is not a preference, it is an exaggeration and an attack on all other languages.

It is also quite ironic given Rust is intended for low level programming.


I have seen this being asked many times in /r/rust and 90% of the answers disagree with you. A common theme is to explore in Python then rewrite in Rust.


Gamedev here, and I basically do exactly this: explore in GDScript, rewrite in C++.

(GDScript is pretty much python)


There's the hype and there's the reality. Some people don't appreciate the former much and reach for the down arrow...

Rust is nowhere near the productivity of Ruby or Python. If that's the case for you maybe your projects have some peculiarities or it's a personal thing.


This is complicated. I’m no expert with WASM, but I’m fairly familiar with Rust and have toyed around with Rust and WASM.

With Rust and WASM you pay an FFI cost between the two languages. There are tools that minimize this, but there's still a cost. Transferring data is limited to very primitive data types today, which adds a translation cost between Rust and JS. I expect this cost to shrink as WASM gains abilities to access the DOM and such, but it is overhead. This generally means that for many things Rust is not much faster than JS, but it is for very hot loops over CPU-bound computations.

As to productivity: this debate between languages will never end. JS, like Python and other interpreted languages, gives the impression that tasks are being accomplished, but until all code paths are tested, it is not obvious whether the code is correct.

Rust, like many other type-safe languages (and it's definitely on the stricter end of type safety), allows the compiler to detect usage errors at compile time, well before testing and production usage. Some of us consider this to be more "productive" as it reduces the overall maintenance of the program after it's released, but YMMV.


This is an important point.

Languages compiled to wasm have an FFI cost, and we will never fully remove it because it just uses different types than Web APIs do. This isn't a Rust problem or a C problem, it's just how wasm is.

Wasm also can't use the JS standard library, and usually ends up shipping some runtime support, like malloc or string handling, which increases download size.

Both of those limitations are why wasm won't make sense for the great majority of web dev work. But wasm shines for "engine" type code, like in this post - pure computation, without lots of links to the outside JS/DOM world.


Best comment in this thread. To the point and factual.


Rust can actually be a higher-level language. It has true generics, static algebraic data types, functional programming features, and so on, even as it stays close to the metal. It is closer to Haskell or OCaml, but more pragmatic.


In what ways is Rust more pragmatic than OCaml?


Does not really require you to drop to a lower-level language for performance, unless you really need to use assembly for something.

OCaml is great for the application level stuff, and also performs quite well in general. But for building fundamental parts of the stack in which perf is important (like the multi-threading runtime, the garbage collector, etc.) you need to drop down to C++, C, or Rust.

OCaml is definitely much nicer to use than C and C++. But I don't find it that much nicer to use than Rust TBH.


> OCaml is definitely much nicer to use than C and C++. But I don't find it that much nicer to use than Rust TBH.

I find OCaml quite a bit nicer to use than Rust, though in a lot of ways that's due to my preference for structuring code using modules and functors.

However, due to Cargo and the fact that Rust has good Windows support, I end up reaching for Rust much more often than I do for OCaml.


Modules is pretty much the only OCaml feature I miss in Rust. I can work around functors with some effort in Rust, and hopefully GATs will make this even simpler (although I don't expect GATs to give Rust functors).


A good package manager, multicore support, larger community, and so on.


Rust's type system is sound, unlike TypeScript's, and Rust provides a lot more performance by default. You can get good performance out of JavaScript, but it can be difficult, and the code can end up being difficult to maintain.

If you're trying to write very fast, very correct code in JavaScript/TypeScript, then Rust may be more productive.


Depends on what you're optimizing for. If you want memory efficiency and near-native speeds, then you need WebAssembly. And Rust is a fantastic source language to compile to WebAssembly because of its memory safety.


Mind explaining a bit more here why Rust is a particularly good language to compile to wasm? Why does memory safety help here?

Asking out of ignorance!


There isn't necessarily any inherent thing about the language itself that makes it better at WASM than others, it's more that it was one of the first languages that was ready for WASM.

- It's low level like C and C++ so it maps cleanly onto WASM

- In your average Rust project, all dependencies are already built from source, greatly increasing the likelihood all your dependencies can be built for WASM

- Already using LLVM as the compiler backend made WASM targeting a lot less work than for languages that need to do that work from scratch

- Probably most importantly, a lot of developers involved in core rustc development were motivated to get Rust working on WASM

At least that's my perception for how Rust became such a prominent language for WASM development early on


You forgot the most important thing: it doesn't need a garbage collector. This is important because wasm garbage collection is a work in progress.

The solution other languages targeting wasm typically use is bundling their own garbage collector in the compiled code, which of course adds a bit of code bloat. E.g. C# blazor wasm applications are not exactly small for this reason.

There was a message yesterday in the Kotlin slack about them starting work on a wasm compiler backend for Kotlin (they already have java, native, and js compilers). https://github.com/JetBrains/kotlin/tree/master/compiler/ir/... Interestingly they plan to depend on the wasm GC proposal instead of bundling their own: https://github.com/WebAssembly/gc/

So, things are improving on this front. But it's a big reason why Rust is particularly popular for wasm right now because they have no GC and lots of developer tools that are relatively mature because they've been working on this for a while.

IMHO, this will take another few years to fully mature, but inevitably lots of people are going to be writing web applications that involve little or no JavaScript. Kotlin is starting to look very solid for any kind of cross-platform Android, iOS, and web-based development (as well as server development, which is what I do). Swift would be another candidate, and there already is a wasm compiler for that as well.


Interesting, one of the big reasons I want to learn Rust is to "future proof" myself once JS loses its monopoly on the web. Maybe Kotlin will be a better one than Rust?


> - In your average Rust project, all dependencies are already built from source

That is only the case for open source code like crates.io, there is nothing that guarantees you will get the source code of a third-party, though.

I mention this because in the native world it is common to give customers precompiled libraries.

> - Already using LLVM as the compiler backend

Some people don't seem to know this, but all languages that target LLVM (including C, C++, Fortran, Ada, Julia, Swift and others) can be used in WebAssembly.


> all languages that target LLVM (including C, C++, Fortran, Ada, Julia, Swift and others) can be used in WebAssembly.

Having LLVM doesn't mean that webassembly Just Works, in the same way that having LLVM doesn't mean that all of its architectures Just Work. And even after getting past the "hello world" stage, there's a lot of other work to do to make it more than just a toy.

Let's take Ada, for example. https://blog.adacore.com/use-of-gnat-llvm-to-translate-ada-a... talks about how to use Ada to build stuff for wasm, but you need to include https://github.com/godunko/adawebpack/ to make things work well. Someone had to write that code.


No, that is if you want runtime support, etc.

If you just want to run some computational code (which is the case for most of the Wasm use today), it will Just Work, as you say.

In fact, that is how I sped up a webpage: I just wrote myself the minimal support needed to run the code that computed X, and that's it. I don't want the entire world or standard library for computational bits to work.


The example I pointed out was for extra stuff, sure, but even just to get a compiler to spit things out, work needs to be done. I don't know Ada's compiler well enough to point to where that work is, but here's the initial implementation of the asmjs and wasm targets in rustc, for example https://github.com/rust-lang/rust/pull/36339

This is just true of any architecture. LLVM is a toolkit, it isn't magic.


WASM doesn’t have a garbage collector built in. Languages that use a garbage collector have to also convert their language runtime into WASM which can dramatically increase the WASM file sizes and is more work to process by the browser. Since rust doesn’t have a garbage collector this should make its WASM files smaller and more performant


There aren’t very many good wasm options that will produce web-friendly binary sizes. The options are really c/c++, rust, and some other less-supported languages like Zig and Nim. Rust has a good mix of high-level constructs (making it more productive than C) and has a good ecosystem around it. Cross compiling is never easy, and the rust wasm story is arguably the best and most supported via wasm-bindgen.


From what I understand, this mainly means that a lot of work has been put into making this process possible, while for some other languages the workflow to wasm is lacking.


Wasm is not memory safe inside the sandbox, so the language should enforce it if you care about correctness and/or dislike the C/C++ memory bugs and debugging.


Mostly because WebAssembly does not include GC yet, so being a non-GC language helps.


ML-family languages are so much more productive that it probably outweighs the costs of manual memory management. Rust will always be less productive than Haskell or Scala, but a language with a decent type system is going to have a huge advantage over JavaScript.


...Haskell and Scala are not part of the ML family. OCaml and F# are languages belonging to the ML family.


By what definition? Haskell and Scala both have the complete ML featureset (in particular they have sum types and full pattern matching; they're also part of the small group of languages that has typeclasses, which were originally invented for Standard ML) and are on record as being heavily influenced by ML languages (via Miranda in the case of Haskell).


https://en.m.wikipedia.org/wiki/ML_(programming_language)

“Today there are several languages in the ML family; the three most prominent are Standard ML (SML), OCaml and F#. Ideas from ML have influenced numerous other languages, like Haskell, Cyclone, Nemerle, ATS,[citation needed] and Elm.[3]”

If you know F#, OCaml, Haskell, and Scala, you can see that the first two have an extremely similar syntax to ML, while for the last two the syntax is very different.


If you actually click down to [3] in your own link it says '"these languages" is referring to Haskell, OCaml, SML, and F#'.


An experienced Rust programmer will generally write better solutions in roughly the same amount of time as it takes to write a solution in NodeJS, where the work is familiar. There is absolutely more code to write for a language where you can have fine control over practically everything, as long as it's within the constraints governing the compiler. However, web development consists of well-trodden paths using familiar patterns of data access and manipulation. If a new project does not benefit from code re-use, the experienced Rust programmer is going to have to use more tools and techniques to accomplish something similar in functionality to that created in other languages, but with far more guarantees and control. Who cares about guarantees and control? The team that is going to have to share the burden of maintaining the code you write.

With the patterns and libraries used in just this example, you can write a very high performance, memory efficient web server that can handle a large volume of concurrent requests that involve database interactions: https://github.com/actix/examples/tree/master/async_pg

All of the work has been consolidated down to a single file for illustration purposes, and it would usually be spread across files. Does it seem like a ridiculous amount of additional work, compared to other languages? I am misleading you somewhat, as there is actually a lot more to write the moment we move beyond plain-vanilla workflows, but this example proves what is possible.


JavaScript's a sloppier language.


It’s not that bad once you get used to it.


Alon Zakai[1] gave a good talk[2] on the current state of WebAssembly, particularly on the current state of performance

1. https://twitter.com/kripken

2. https://youtu.be/4ZMY3QE5t9o


I'm not certain exactly how LLVM works so I am not sure this is the right question to ask but - How does Web Assembly relate/compare to LLVM? Does LLVM solve the problem of a single binary that can be run portably?


They're trying to solve different problems. LLVM is trying to take care of most of the language-independent bits of writing a compiler, while wasm is a format for delivering code to browsers, with an eye towards being a good target for languages like C and making use of the existing JIT compilers in modern browsers.

LLVM as a toolchain provides backends for many target machines, so as a compiler author you can just emit LLVM IR and (mostly) automatically be able to compile for x86, ARM, MIPS, etc. And they have a wasm backend too, so the technologies are complementary -- you can use LLVM to compile to wasm. LLVM will do a fair bit of optimization too. The intent with LLVM is not to provide a portable binary distribution format, but to allow different compiler frontends to all share the same compiler backend logic.

By contrast, with WASM the assumption is that by the time it gets to the browser the ahead-of-time compiler has already done a fair bit of optimization, so what a wasm compiler has to do is much simpler.

WASM is also designed for space efficiency, since the code will be shipped over the network, and safety -- LLVM has a C-like notion of undefined behavior (which the optimizer makes use of), but this would be wildly inappropriate for WASM due to security (and portability) concerns.


They are not comparable afaik. WASM is also more ambitious: along with WASI (the system interface), it aims to create a portable final executable format (compare to the JVM), whereas LLVM is an intermediate language and a set of tools.

When this transition period goes away, LLVM should hopefully compile to WASM binary as a target. The current path described in the video of WASM -> C -> clang -> llvm -> native-binary is a temporary workaround afaik.


WASM isn't "more ambitious". It is a spec for portable bytecode, nothing more, nothing less.

LLVM is one of the best toolchains for many languages.

They are completely different things.


Yes, I agree. Wording wasn't right. What I meant was that it is trying to define a universal target with runtime system interface and such. The only reason I drew a comparison was that both are trying to bring standardization, but in different contexts.


I spent the last 2 years making an online adaptation of a board game. I chose Typescript mainly because I was comfortable enough in this language and I don't regret this choice.

Typescript is, for me, the right balance between the strictness required to manage a mid-size codebase, and the looseness necessary to be productive (I definitely don't want to manage low-level things like memory when coding the gameplay of the board game).

On top of that, if your project is open-source, using JS/TS makes it more likely that someone can contribute to the project, compared to Rust, which has a higher learning curve.

Being able to easily share code between the client and the server is a big bonus too. The article points that with WebAssembly, it's possible to do it with any language, but I'm not sure it's stable enough to be used yet.

I'm thinking of making a library similar to boardgame.io (because I think there are some parts that could be done better), and I'll most probably keep Typescript for it.

Curious to see how using Rust for this will play out!


Keep in mind that WebAssembly is not a silver bullet for performance, and a carefully crafted JS application (written in an engine-friendly way) will perform roughly the same as, or even better than, its WA equivalent. I rewrote a chunk of my math-intensive app in AssemblyScript half a year ago - it took me about a month to fight through all the compiler bugs, I used every optimization possible (used floats everywhere, disabled array boundary checks, disabled GC for 70% of classes) and still ended up with a binary that was 30% slower than the original JS code. It was mostly AssemblyScript's GC, which was extremely slow (and probably still is), and with GC completely disabled (which is an unfair advantage for WA) performance was almost the same.


You can't judge the performance of WASM solely based on experiments you made using AssemblyScript (a TypeScript subset that compiles directly to WASM). As WASM is meant to be a low level compiler target, a proper comparison would be with binaries produced by the Rust or C compilers that don't rely on a specific GC implementation.


GC aside, I've inspected the assembly code (.wast files) for the top 10 hottest methods in my code, and it was pretty much perfect; Rust and C will probably end up with something similar. However, there were no performance improvements whatsoever: those methods performed similarly to their JS equivalents.

UPD: modern JS engines are extremely capable optimizers, and they can probably come up with machine code similar to what Rust or C will produce, given that you keep your code predictable and optimization-friendly.


That's pretty interesting. Could you share your project?

There are benchmarks comparing JavaScript and AssemblyScript: https://github.com/nischayv/as-benchmarks https://github.com/nischayv/as-benchmarks/issues/3#issuecomm...


I'm not the OP but I can confirm in my own project, we found about a 10x performance gap between AssemblyScript and TypeScript. In essence, we're working on a rewrite of DNAVisualization.org[1][2], a serverless web tool for the interactive inspection of raw DNA sequences. We hoped that WASM would give us a performance boost but have been generally disappointed with both the performance and the amount of complexity involved in getting the tooling to work.

We did do a benchmark[3] and, unless we made an error (likely, given that all of us are new to WASM), found that JS was much faster for our simple algorithms. WASM had the approximate performance of our original pure Python implementation[4], so not great.

[1]: https://dnavisualization.org [2]: https://academic.oup.com/nar/article/47/W1/W20/5512090 [3]: https://github.com/Lab41/dnaviz/tree/benchmarks/benchmarks/a.... [4]: https://github.com/Lab41/squiggle


I made PR which suggest some changes and fixes: https://github.com/Lab41/dnaviz/pull/21


As you can see, AssemblyScript is approx 4x-4.5x faster now


Unfortunately, no, it's a commercial closed-source project. I profiled both the JS and WA versions; the hottest methods took basically the same amount of time in both versions, except that the WA build additionally spent ~25% of all time in __retain or something like that.


Do you make intensive use of small objects like Vec3, Quaternion, etc.? AS doesn't have a scalar replacement optimization pass yet, which JavaScript engines definitely have. It will also get much better after tuples / records are implemented, which depend on the multi-value proposal for now. All this significantly reduces ARC / memory pressure. But my assumption is it could also be a wrong measurement: most people use benchmark.js, which also measures the js <-> wasm interop overhead, which is usually the main bottleneck.


> AS doesn't have a scalar replacement optimization pass yet, which JavaScript engines definitely have. It will also get much better after tuples / records are implemented, which depend on the multi-value proposal for now.

Yes, I do use a lot of small objects. That's interesting information; I'll keep an eye on the multi-value proposal, thank you!

> Most people use benchmark.js, which also measures the js <-> wasm interop overhead, which is usually the main bottleneck.

Nope, I loaded all the data into the WA module upon initialization and don't perform any additional synchronization between WA and JS afterwards (which is also kind of an unfair advantage for WA).


> Webassembly Is Faster Than Javascript

Just curious, how's the OP measuring this? Also, would be very interested in the stack - is OP using Yew/Seed/etc or server rendered pages with Rust?


I'm just using an SPA written in Svelte on the frontend. I only use Rust for the game state updates.

About measurements, I just whipped up a quick benchmark comparing the JavaScript state updates with the WASM version. You do pay a cost crossing the JS / WASM boundary, but the WASM version is faster overall for my application. I'm not particularly concerned with performance at the moment (it was just a nice to have).


Presumably the argument is 'webassembly optimizes better', which is true provided you aren't crossing runtime boundaries (between wasm/js or js/browser api) too frequently.


I will only nitpick on the part about TypeScript not being type-safe enough, where extra fields are possible or wrong types can be sent over the wire.

io-ts [0] completely solves this issue, to the point where I don't think it's less type-safe than Rust in any practical manner. I've been writing apps in TS this way for the past year and I have quite a large codebase in production. The errors you mention do not happen.

[0] - https://github.com/gcanti/io-ts
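The core idea is easy to show without the library: validate unknown data at the boundary and only then treat it as typed. This is a hand-rolled sketch of the pattern io-ts implements with composable codecs — the `User` type and `decodeUser` function below are hypothetical, not io-ts API:

```typescript
interface User {
  id: number;
  name: string;
}

// Check the shape at runtime; return a typed value or an Error.
function decodeUser(input: unknown): User | Error {
  if (typeof input !== "object" || input === null) {
    return new Error("not an object");
  }
  const o = input as Record<string, unknown>;
  if (typeof o.id !== "number") return new Error("id must be a number");
  if (typeof o.name !== "string") return new Error("name must be a string");
  // Rebuild the object so only the declared fields survive.
  return { id: o.id, name: o.name };
}

const ok = decodeUser(JSON.parse('{"id": 1, "name": "a", "extra": true}'));
const bad = decodeUser(JSON.parse('{"id": "1"}'));
console.log(ok instanceof Error, bad instanceof Error); // false true
```

io-ts gives you the same guarantee declaratively (one codec definition yields both the static type and the runtime check), which is what closes the gap the article complains about.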


I love things like this, but at the same time when I look at the API this library provides, it's not junior-friendly.

Even though I acknowledge it solves some problems I'd like to solve, if I put code using this library in front of a junior developer, they'd be paralysed, and at best, if they weren't, they'd write code with it that another junior wouldn't understand. That makes it difficult for me to justify introducing it.


I agree in principle, though in this particular case io-ts does not tend to produce incomprehensible code. Maybe if juniors use it, it would, but it's easy to encapsulate the IO codecs in their own modules where the functional style of io-ts/fp-ts does not leak out to other parts of the code and only types can get reused.


That's not an unreasonable concern but it does mean you're writing code targeting the lowest common denominator. That's not a decision free from negative consequences.


Thanks for the link! I'll look into this for my other TypeScript codebases.


One thing articles like this always miss, and what I'm most keen on, is how exactly the bridging works. Is there a standard Rust wasm crate that exposes the DOM? Do you pass some struct with function pointers? Is the Rust code exposed as a black box with no connection to the outside world and you just call a few entrypoint functions? Can you call Rust from JS and JS from Rust? If you go JS->Rust->JS->Rust->JS and the innermost JS function throws an exception, is all that properly propagated up the stack?


There are two low-level bridges: js-sys provides bindings to all the ECMAScript stuff, and web-sys provides bindings to all of the rest of a browser environment, including the DOM.

> Can you call Rust from JS and JS from Rust?

Yes.

I personally have written some JS code that returns a promise, wrapped in wasm that turned it into a Rust Future, then converted that Future into a promise that I've returned to Javascript.

> If you go JS->Rust->JS->Rust->JS and the innermost JS function throws an exception, is all that properly propagated up the stack?

Sort of. Rust doesn't use exceptions, but the binding converts a JavaScript exception to a Rust "result" type and vice versa at the boundary, so if you propagate it, it will get properly propagated.


The Rollup plugin linked to in the article produces JS functions that call into WASM code. JS objects are automatically converted to Rust structs via Serde and vice-versa.

I use TypeScript / Svelte for DOM interactions and only call into Rust code for the state updates (treating it like a black box).
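To make the "black box" pattern concrete, here's a minimal sketch of instantiating a wasm module and calling an exported function from TypeScript. The bytes are a tiny hand-assembled module exporting `add(a, b)` — a stand-in for the binary a Rust toolchain would actually produce:

```typescript
// A hand-assembled WebAssembly module exporting `add: (i32, i32) -> i32`.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

WebAssembly.instantiate(wasmBytes).then(({ instance }) => {
  // The JS side only sees opaque exported functions — the black box.
  const add = instance.exports.add as unknown as (a: number, b: number) => number;
  console.log(add(2, 3)); // 5
});
```

With wasm-bindgen / the Rollup plugin, the generated JS glue does this instantiation and the Serde conversions for you; the calling code looks like ordinary function calls.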


In general, I think the significant improvement in speed in the case of WebAssembly binaries will be largely due to SIMD support. But the performance gap between JS and WebAssembly is not at all a deciding factor unless you are creating high-quality games. There are a lot of online games with low to medium graphics quality, created using the 2D canvas or 2D/3D WebGL libraries, which run just fine on modern web browsers.

Most developers, at least in my experience, see just plain Array and JSON. If they get familiar with APIs like WeakRef (WeakSet/WeakMap) and ArrayBuffer along with DataView and the various typed arrays (Int8, Int16, Int32, Uint8, Uint16, Uint32, Uint8Clamped, and likewise for Float and BigInt), they can write GC-heap-friendly and more computation-focused code.
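As a small generic illustration of that point (not tied to any particular app): typed arrays keep numeric data unboxed in one buffer, and DataView gives explicit control over binary layout and endianness.

```typescript
// Float64Array stores unboxed doubles contiguously, so iterating over it
// creates no per-element garbage the way an Array of boxed objects can.
const samples = new Float64Array([1.5, 2.5, 3.0]);
let sum = 0;
for (let i = 0; i < samples.length; i++) {
  sum += samples[i];
}
console.log(sum); // 7

// DataView reads/writes mixed-type binary records with explicit endianness,
// e.g. a packed record of a u16 id followed by an f32 value.
const buf = new ArrayBuffer(6);
const view = new DataView(buf);
view.setUint16(0, 500, /* littleEndian */ true);
view.setFloat32(2, 1.25, true);
console.log(view.getUint16(0, true), view.getFloat32(2, true)); // 500 1.25
```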

Besides that, strictly speaking for standard web application development, the JS ecosystem of libraries and frameworks is miles ahead for handling various kinds of UX/UI journey scenarios, as are the developer tools in the browser for doing various kinds of performance inspection and profiling of JS code.

I wonder why JS engines (like V8, for example) rely so heavily on scalar instructions. There is a lot of scope to apply auto-vectorization and generate SSE, AVX, and NEON (for ARM) instructions. If they implement this, or at least provide some API like SIMD.js (the one they abandoned in the past), it will make WebAssembly close to redundant unless you hate JS or the other PLs that transpile to JS.


Rust is extremely poor for web apps and graphical interactive software in general. Having been using Beads (a competitor to TypeScript) for the last few months, I couldn't possibly build my food-ordering app quickly in Rust. There are hundreds of art assets to manage, and every screen has to automatically re-flow depending on the resolution of the device. Without a concept of DPI, the ability to measure text, and a layout system, all of which are built into the Beads language, it would require a very tedious programming effort. Most of the time coding a graphical interactive product is spent making it look nice on all the different-sized screens; Rust does nothing for this very time-consuming task.

Typescript may force you to use the awful CSS, but having some layout system is better than none. And we could also talk about event management, and sync between client and server, none of which Rust is particularly good at.


> For example, the data might contain additional fields or even incorrect values for declared types.

I understand 'incorrect values' (this is true, see https://github.com/Microsoft/TypeScript/issues/15480) .

But TypeScript definitely doesn't like adding additional fields. Right now I'm working in TypeScript; I added the new Fastify raw-body plugin and forgot to extend our custom Request type. The error is:

  Property 'rawBody' does not exist on type 'CustomRequest<DefaultQuery, DefaultParams, DefaultHeaders, any>'.ts(2339)
I.e., we can't just add a field.

Appreciate I may be wrong, or we have different definitions here, but I'd like to discuss.


Something like this is valid TypeScript I think?

  interface MyType {
      a: number;
  }

  function process(value: MyType) {
      console.log(value);
  }

  // `value` has an extra field `b`, but assigning from a variable bypasses
  // TypeScript's excess property check, so this compiles and runs fine.
  const value = { a: 1, b: 2 };

  process(value);


As someone who has (mostly) enjoyed working with Python and JS (CoffeeScript in the past, and ES6+ and TypeScript more recently), and also dabbled in Ruby/Rails, I've been intrigued by Rust for a while. Its promise of being close in speed to C/C++, but safer and more ergonomic, sounds great. I have not attempted to build anything with it yet, but I've looked at what (I think) is enough examples to be familiar with its constructs, and have seen some benchmarks.

For systems software or libraries that are meant to be hooked into from, say, a JS/TypeScript environment (see https://deno.land, a promising alternative to Node), it seems like Rust has a lot of potential.

But for building applications (which is what I'm focused on currently) it seems Rust has a "No OOP for you!" attitude that gets in the way of modeling your application domain quickly. Yes, you can use impl and trait as an alternative to classes. But it seems like you have to jump through a lot of hoops; define your structs, define a trait for default behavior and remember that &self is provided as an implicit first argument to the methods (this is similarly cumbersome in Python), then define an impl for each corresponding struct (don't forget about &self!). As opposed to just declaring a base class and overriding when you need to in the subclasses. I am definitely not saying classes are the only way to do OOP, but at least give developers the option of using them.

I have found Swift attractive for application development because it gives you structs for one-off (value-based) data types, classes when you need an easy way to do inheritance and polymorphism (and/or need reference-based data types), and extensions as a more abstract way to enhance structs and/or classes (and/or enums). It doesn't force you to use classes (and the community appears to think protocols are a better construct), but the language remains approachable, and you can always start with classes then refactor to protocols when the need arises.


Is Vapor the Rails-like go-to web framework for Swift? Is it viable for indie hackers, or is it more like Java, geared towards the enterprise? I find Swift interesting as a language because it seems well balanced, usable for low-level systems programming and higher-level applications. But protocol-first kinda indicates big design up front, something more geared towards enterprise use cases.


I don't have any statistics, but based on the amount of learning articles, books, etc. out there it seems like Vapor is the dominant Swift Web framework. Kitura looked promising, but IBM has stopped supporting it, so its future is not clear (the last commit to the main branch was in November 2019). Also, the two main Vapor devs are members of the Swift Server Work Group (with the other two being from Apple): https://swift.org/server/

I'm not sure how to answer your question regarding whether Vapor is viable for indie hackers, because I've only been skimming the docs and looking at small examples. But it seems to draw a lot of inspiration from Express.js and/or Ruby's Sinatra library. It doesn't seem to have the up-front complexity of something like Java's Spring Framework, although I understand you can use services and dependency injection if needed. So my first impression is it's easy to get started, but it enables you to refactor to something more complex if your application demands it.


Thanks for your answer. So Vapor for building APIs, not web monoliths, good to know. It kinda makes sense considering that most of the users will use mobile apps on the client side.

Personally I also find Elixir and the Phoenix framework very interesting.


No problem! Vapor does have its own template system if you don't want to build "just" an API: https://github.com/vapor/leaf

I have also looked at Elixir and Phoenix, but have not written any code in them. I have yet to really dive into anything outside the C-like languages (Python, JS, Java, Swift, Ruby, C, C++), and Elixir seems like a good reason to venture outside my comfort zone.

By the way, no pressure, but if you ever want to discuss Swift, maybe do some paired programming/studying, let me know. I'm still fairly new but I have experimented with building a Swift-to-CSS DSL that I would like to eventually spend more time on. If you want to email me, ragheed.almidani is the name, and protonmail.com is the domain.


> However, it [Typescript] does not actually ensure that the data you are manipulating corresponds to the type that you have declared to represent it. For example, the data might contain additional fields or even incorrect values for declared types.

This made me curious, as it sounds like something that could be added as a transpiler option in tsconfig.json. But it turns out TypeScript doesn't have support for "nominal" types, though there are some tricks one can leverage to achieve something similar:

https://medium.com/better-programming/nominal-typescript-eee...
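One common trick from that family is "branding" a structural type with a phantom field. A minimal sketch (the names here are illustrative, not taken from the linked article):

```typescript
// Structurally, UserId and OrderId are both strings, but the phantom
// `__brand` member makes them incompatible with each other and with
// plain strings at compile time. It has no runtime cost.
type UserId = string & { readonly __brand: "UserId" };
type OrderId = string & { readonly __brand: "OrderId" };

// The only way to obtain a UserId is through this constructor,
// which is where you'd put any runtime validation.
function userId(s: string): UserId {
  return s as UserId;
}

function fetchUser(id: UserId): string {
  return `user:${id}`;
}

console.log(fetchUser(userId("42"))); // user:42
// fetchUser("42");           // compile error: string is not UserId
// fetchUser("7" as OrderId); // compile error: wrong brand
```

The brand only exists in the type system, so it still doesn't validate data coming over the wire — for that you need runtime checks like io-ts mentioned elsewhere in the thread.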


Does anyone know the progress on exposing browser APIs (and hopefully real JS objects) to WASM directly? I think its use-cases will remain pretty narrow until then, but I haven't heard much noise about those developments lately.


I replaced Node.js with Rust ages ago.

It was a brutal learning curve but having the compiler cover everything I do is a massive advantage that never stops paying dividends.


I am actually converting a work project from TypeScript to Swift, as it makes writing code more fun, and then leveraging it either for the backend application or as part of a web application via WebAssembly. The main reason is that it's easy to share the code with different targets (win32, arm, WebAssembly, macOS), so it's highly reusable. And I like Swift more as a language compared to Rust.

Really enjoying it so far.


Completely off topic, but I'm a front end developer who dabbles with a range of languages and also enjoys trying out new languages, but I've never really used a 'back-end' language and I've been trying to find one that works for me. What do people recommend nowadays? (Only asking as I've seen people talking about Rails/PHP/Node, etc.).


Rust's type safety and error handling are helpful tools to have in your belt when gluing together the pieces needed for a web api. I have been working on a new product at my day job that has needed to change a lot as we learn and grow and I am happy with how Rust has been reliable in the face of that change.


Good idea.

TypeScript is too Java/C# for my taste. Rust feels more OCaml-ish.

It has more predictable performance than JavaScript and often it can be better.

If Rust gets a bit more ergonomic (via language or crates) it could be a good alternative and finally save us from the horrors of Electron/Slack/Bitbucket/etc.


I would like to get started with Rust & Game Development. Any resources for that?


https://arewegameyet.rs/

Rust is not really there yet for game development, in my opinion. Amethyst is probably the most advanced 3D engine at the moment, but they are having some issues with picking an ECS implementation, and it's still very young.

There are a few 2D engines, but they are also not particularly mature either. Coffee looks interesting, although I haven't tried it.

For something lower-level, there's gfx.rs (Amethyst uses this), which is quite impressive. gfx-hal is quite nice for abstracting over DirectX/Vulkan/OpenGL/Metal/etc.


There is a nice book that goes through teaching Rust, while doing a Tetris game with SDL.

https://www.packtpub.com/eu/application-development/rust-pro...



I haven't used it but there are some unofficial Rust bindings to the Godot game engine: https://github.com/godot-rust/godot-rust


i'm glad it worked for the author, and i am sure you can achieve better performance with Rust than with NodeJS, but i'd like to discuss some of the mentioned advantages:

in general, typescript is optimized for cooperating with other javascript code, to make it easy to drop it into a javascript-based project, and also, it does not really have its own "runtime". typescript is javascript plus types. in fact, if you want to convert from typescript to javascript, you can take a typescript file and just delete all the type definitions and you get a working javascript file (there are some exceptions to this, but it is generally true). this approach has its downsides of course; for example, when you need to interface with a javascript (not typescript) module, you can do so easily, but you have to know what types it expects and returns, otherwise you might get invalid structures in your code. there are things that can help you there (https://github.com/DefinitelyTyped/DefinitelyTyped), and it works quite well in practice, but there is no 100% guarantee that the types will be correct.

>> STRICT TYPING (in typescript) For example, the data might contain additional fields or even incorrect values for declared types.

- typescript is structurally typed most of the time, so if you have a function that needs an `{a:string, b:string}` and you send it an `{a:string, b:string, c:string}`, it will accept it. it's a different trade-off, sometimes better, sometimes worse, compared to nominal typing (https://en.wikipedia.org/wiki/Nominal_type_system).

- the part about "even incorrect values" should not happen in your own code. as i wrote above, i can imagine it happening when interfacing with other non-typescript code.

>> DATA VALIDATION: You have to write data validation code to ensure that you’re operating on correct data in TypeScript. You get this for free in Rust...

in typescript, if you want to read, let's say, JSON data, map it to a typescript structure, and return an error if it does not have the correct structure, you can use libraries like io-ts (https://github.com/gcanti/io-ts/blob/master/index.md) to make that happen. i do not know how you get this "for free" in Rust; i know there are libraries like Serde (https://serde.rs/) that do it.


The gist of this article applies to switching from JS/TS to Uno/Blazor as well, since they both support WebAssembly just as well, and they both get robust UI frameworks as a bonus.


@nicolodavis I would love to hear your thoughts and reasoning for not using the Go + wasm compile target. Not as a critique, but to understand your point of view.


No reason except that I've written Go in the past and wanted to learn a new language :)

Comparing the two languages, I would say that Rust's enums and pattern matching make it easier to work with for my specific use case.


I like Rust but I would much rather use Swift for something like that especially if SwiftUI like thing was part of the deal.


Note that Rust is only used for the state updates. All the UI stuff is TypeScript (using Svelte).


> For example, the data might contain additional fields or even incorrect values for declared types.

Can you give an example?


I guess AssemblyScript would have been a better option, given that it's a language subset designed for WebAssembly.


Except that doesn't address any of the other reasons they preferred Rust...


As long as you use a language with a good, static type system, I don't care so much what you use. TypeScript or Rust, for all I care you can transpile Haskell into JavaScript if you feel adventurous. But don't use a dynamically typed language as the source language for anything that is supposed to be more than a simple script.


> But don't use

This is just cargo-culting static typing: "If I avoid dynamic typing at all costs, the cargo crates will surely rain down!"

You'll find that picking a language is more of a business decision than the sort of technical static-vs-dynamic checkbox test you might use to pick the language of your next weekendware.

This kind of absolutist rule of thumb is more flame bait than morsel of wisdom.


My experience is similar (Milan, Italy.)

If a company is small and/or has to rely on freelancers it goes with dynamically typed languages (PHP, Node, Ruby, Python and Elixir) because that's what the bulk of freelancers use.

If they have many internal developers or use big consultancy firms they go with Java or C#.

Of course there are exceptions (I know a few of them), and my point of view may be biased because I mostly see projects and developers from dynamically typed languages.

But I know a few small companies with a code base in Java that had a hard time finding Java freelancers. Almost all Java developers are employees, unreachable for them.


This is the fallacy of the grey. Yes, there are nuances and special circumstances, but the overwhelming majority of the time, static typing is simply better. That's one of a small handful of clear consensuses from the last 20-30 years of programming language evolution.


It's not any better if business trade-offs make another decision better. You don't have "all things equal" trade-offs in the real world.

Want an easy concrete example? You and your cofounder have 10 years of Ruby experience and investors/customers who want a product yesterday. Or the deliverable that makes the most sense is a PHP script that users can drag into cPanel, because that's your customer base.

The tinkerer inside you always wants to try new things and convince you that some greener tech on the other side of the fence are going to make the difference. It's usually just procrastination from the actual hard stuff: building things people want, today.


> Want an easy concrete example? You and your cofounder have 10 years of Ruby experience and investors/customers who want a product yesterday. Or the deliverable that makes most sense is a PHP script that users can drag into CPanel because that's your customer base.

True but facile. Sure, for every imaginable technology, there is a conceivable scenario in which that technology is the right choice. Doesn't mean there's no such thing as a good or bad technology.

> The tinkerer inside you always wants to try new things and convince you that some greener tech on the other side of the fence are going to make the difference. It's usually just procrastination from the actual hard stuff: building things people want, today.

That's actually an argument for being more absolutist about the technical side: you're unlikely to be in the obscure circumstances where a dynamic language would actually be helpful, and even more unlikely to have that be more important than your business decision. So rather than carefully considering your circumstances and weighing up the tradeoffs, it's better to follow the consensus for what's a good general-purpose language (which these days means a statically typed language) and get on with building your business.


> It's not any better if business trade-offs make another decision better.

I'm sorry, but cleaning your windows with Windex instead of tap water is then "not any better if business trade-offs make tap water better". I am just not sure what you are arguing for: Static typing is better, except for when you don't have static typing?

Some folks here are in the position to choose their current, or shape a collective future landscape, and for those static typing is a consideration.

You mention PHP a few times, but I think this is actually a good example for something that has been pushed away in favor of better things. My impression is that at least the bigger software corporations have moved away from PHP if they ever used it to begin with. Even Facebook, one of PHP's biggest users, replaced it with their own so-named "Hack" eventually, which is apparently a PHP-derivate with "both dynamic typing and static typing" according to Wikipedia.


A business decision, what drivel. By that argument, no technical merit of anything ever matters, and yet many things of what’s considered viable in a “business” have changed all the time, some of them because of the push of engineers towards better things.

This is a forum for engineers (not only, I know), who collectively will at least have a large part in steering what languages are considered common in future.


I actually don't pay any attention to what HN considers technically valid. Collective opinion among HNers is often very skewed from the actual reality.

Here is a comment from 9 years ago. It sounds like parody to me, and it demonstrates how echo chambers eventually deviate from reality. HN can often be an echo chamber.

> A good programmer should wake up at 6 am in the morning get a solid 2.5hrs of coding done by 8:30 am, at 8:30 leave for work, work till 6 (it goes without saying that the lunchbreak must be spent trying to learn the Haskell or if you are feeling lazy answering questions on stackoverflow). Commute from 6 to 6:30 (it's a bonus if you listen to a technical podcast during this time and no stuff like TWIT does not count, perhaps audio lectures from the Advanced Algorithms course on MIT OCW). 6:30 to 7:00 time for supper and excellent time to catchup on r/programming and hackernews. 7-8:30pm is the time for relaxation by doing some recreational mathematics, doing problems from project Euler and that proof from The Art of Computer Programming excercises which you have been itching to get a go at! 8:30pm to 1 am code contribute to that open-source project, write patches for the Linux kernal and continue working on your startup.

> Anyone who does less programming that what is mentioned above cannot call himself a "good programmer", I would have serious reservations in calling that person even a mediocre programmer.

--

https://news.ycombinator.com/item?id=2664409


> Here is a comment from 9 years ago. It sounds like parody to me

It is a parody, isn't it?


Yes, it is. But it makes a valid point about HN and the elevation of technical prowess over other concerns. It points out the danger of using an echo chamber as a gauge for reality.

I don't take technical recommendations from HN users because the HN echo chamber overvalues technical virtuosity, so real-world pragmatic concerns get drowned out by other voices.


This is silly. There are systems which have had nine nines of uptime written in dynamically typed languages.


That it can be done doesn't mean it's the optimal approach.

I was a diehard fan of dynamic languages for a decade, but eventually I saw the light.

I still use dynamic languages daily, but the bigger the project the more I want proper static typing. None of that mypy stuff, the real deal.


You're just using the wrong dynamic programming language. Don't get me wrong: type checking, static or otherwise, is useful. But eventually you're going to have to throw in the towel and deal with truly hard problems like distributed systems, which in the end must be dynamic (you cannot safely assume that a node separated by time and space respects the same type system your node does).

I have a 20k LOC distributed system, 40k if you include all the libraries I've written for it, running in prod, written in a (typechecked) dynamic language, and it's fine. I had very few typing errors in dev, and one typing error that I pushed to prod. Even that had no discernible customer effect; I discovered it months later when I was checking over my logs to understand logical errors. Currently all of the reported errors are race conditions on startup (and sporadic network errors). But it's fine. The system tolerates it and restarts the failing modules instead of being pedantic, so instead of spending weeks debugging concurrent startup, I get a no-hassle, fast-booting system (which also means nonblocking bootup and thus higher customer availability during failure and restart). The other nice thing about dynamic systems is that, if they're not otherwise terrible, (logical) debugging is a breeze, because all types are inspectable and I don't have to worry about implementing observability hooks for every single datatype. I can just look at the data as it moves through my system. And that also means that telemetry and logging with structured metadata just "happens", and is highly composable with no hassle.

I guess my biggest problem with static typing absolutism is that it leads to the attitude that code can be correct. It can't. Even if it's provably correct (in the mathematical sense), if your axioms are violated by your system, then your system can wind up in an undefined state. You must plan for failure. And sometimes the most efficient/effective plan is "do nothing".


> if your axioms are violated by your system then your system can wind up in an undefined state.

Yes, and if your hardware is broken or a cosmic ray hits your system in just a bad way, then all the static type checking in the world won't help against that. But that's a different problem to solve, isn't it?

I'd postulate that for most applications, you'd usually be happy to have a software stack that's less likely to contain programming bugs in the first place. It's also true that you don't want faulty airbag software in your car to affect your steering, but don't you want non-faulty airbag software as well?


That you can't capture all constraints in a type system does not make a type system unnecessary.

You still derive value from how much of the correctness checking you can offload to the compiler.

I don't really get why programmers, who better than anyone understand the value of automating work with a computer, fail to grasp that same value applied to their own craft. Let the computer do more of the work of making sure your program is correct.


You're arguing with a strawman. I'm just saying static typechecking is usually sufficient in this day and age; your language itself doesn't have to be static. Actually, I'll go even further: it wasn't enough three years ago, when the Language Server Protocol wasn't a thing.


Can you explain that distinction, possibly with an example?


Unfortunately mypy was my exit point back to typed languages.

After seeing how much I got wrong in Python (but apparently worked) I realized I couldn’t really trust myself (even after a decade in Python!)


So? There are people who crossed the Atlantic in a kayak, does not mean it's an efficient use of your time. The difference is that the people in the kayak were aware of that.


Python folks with their strong + dynamic typing: (O.O')

I will say from experience that @anyfoo has a point. In the end, even webapp backends are clients and worker nodes within a much larger system infrastructure. Anything that handles data beyond pass-through entities ideally should be both strongly and statically typed.


Python has things that it calls "types" but they're not types according to the standard definition of the word (true types are associated with terms in a language, not with runtime values). And while a runtime error is far better than silent data corruption, it's a poor substitute for a build-time error.


There's another way to deal with it, which is to make crashes not matter. That's actually even better, I would argue, because you're already coding for robustness.


If you push Rust to client-side WASM, can it still perform DOM manipulations?


Yes, but it has to call out to JS to do so. It's implemented in a way that's forwards-compatible, so once WASM can manipulate the DOM natively, your code will just magically get faster.


That's right. For my use-case I decided to just stick with JS for UI stuff (using Svelte). I call into Rust code like a black box to update some state.


A little disappointed not to see https://www.assemblyscript.org/ included in the discussion, especially since it removes the "JavaScript is slower than WASM" argument.


Why would you be disappointed? The author wanted more from the type system and didn't care if they were going to switch syntax. AssemblyScript's biggest 'selling point' seems to be that it looks like TypeScript.


I'm not quite sure what you mean here, because AssemblyScript compiles to WASM, not JS.


I guess the argument was that you can write in JS (well, something that is close to TypeScript, which is close to JS anyway) and compile it to WASM, so the speed argument becomes invalid.


That is quite a move. Like moving from USSR to USA.


No substantial arguments in the article. Hacker News used to be one of the rare places where quality is valued. How can such a low-content, low-quality article get so high on the HN frontpage?


[flagged]


Please don't "rust wankers" here.


I hope this is a sign in your executive moderator suite at HN Intergalactic HQ, one you can point to occasionally while saying things like

https://www.youtube.com/watch?v=P8E7_u2qgjE


> Migrating a web app to a static typed system programming language is asinine.

Why?


Because it's redundant to have low-level functionality for building a web app.


Low-level functionality and typed languages aren't necessarily the same thing, though.


Have an upvote from me, your attitude put a smile on my face, which is unusual for a comment on HN.


If we're talking about Rust criticism, it's pretty average in terms of tone and substance.



