Hacker News
TypeScript as fast as Rust: TypeScript++ (zaplib.com)
280 points by janpaul123 on April 7, 2022 | hide | past | favorite | 113 comments



Related but more high level, I think we should have language sets instead of single languages with simpler interop and shared infra: https://gist.github.com/xixixao/8e363dbd3663b6729cd5b6d74dbb...


Cool idea! It seems like you are trying to extend the whole "optional types" idea of Typescript further to add optional low-level programming. One thing that came to mind: one of the major reasons why Typescript's optional/gradual typing system works so well is that the type system and the runtime logic are mostly orthogonal. All the types that are used and declared in Typescript are stripped away by the compiler and don't affect the Javascript logic. This means that if I import some other team's Typescript code, I can freely write Javascript on top without worrying.

Low-level operations like manual memory management are not like that. They are part of the runtime, and _do_ affect the business logic. It's a leaky abstraction. So if I import some other team's Rust code and I try to write RustScript on top, I might have to worry about some special edge cases where the argument I pass in isn't properly garbage collected or what not.


As far as I can tell, D already implements this idea. The Rust/RustGC duality is very similar to D's approach, which has manual memory management and an optional GC [1], and DMDScript [2] could be the scripting language.

[1] https://dlang.org/spec/garbage.html

[2] https://github.com/DigitalMars/DMDScript


D is the scripting language. The whole benefit of D for the company I work for is that we only have one language.


SIL is distinct from D, and has distinct goals.


If we did SIL in C++ there's a good chance it'd have something like the UE4 header preprocessor to deal with all the hooks/metaprogramming we were able to do directly in D.


Common Lisp can be and has been successfully used for high-performance numerics[0] as well as low-level systems code[1], all while being a high-level, garbage-collected language. (The latter property likely leads to better overall performance in large programs.)

Verification may be interesting for high-assurance (e.g. automotive/aerospace) applications. It's for this reason that a theorem prover[2] and an ML[3] have been implemented, and they interoperate freely with the rest of Common Lisp.

In other words: we are already where you want to be, and we are even less stratified than you propose we would need to be.

0. https://github.com/marcoheisig/Petalisp

1. https://github.com/froggey/mezzano/

2. https://www.cs.utexas.edu/users/moore/acl2/

3. https://github.com/coalton-lang/coalton


Is there an example formally verified Common Lisp project to look at? Either pedagogical (I'm impressed by Lisp but ultimately not particularly familiar with it) or impressive, like CompCert is for Coq, perhaps.


I don't know; I haven't paid close attention to such things. But pulling on the acl2 thread will probably lead somewhere.

Also maybe of interest, not quite the same but on vaguely parallel lines, minikanren and microkanren.


I’ve had similar notions.

For example, Rust used to have a garbage collector. A language set boundary would be a good way to add this back in without requiring a runtime at the lower levels.

Rust already has “levels”, such as no-std or core.

Similarly C# has a scripting variant: https://visualstudiomagazine.com/articles/2021/06/14/csharp-...

And it also supports AoT compilation to a single EXE, but there are a lot of limitations. The latest attempt is a WIP: https://github.com/dotnet/runtime/issues/61231

I would argue that libraries especially need highly restricted language subsets. E.g.: an image parser library should be “pure” in the sense of never being permitted to access system APIs. Image binary in, decoded bitmap out. Like a pure function, but at the module level.


There are various approaches to native compilation on .NET.

NGEN, which has been there since day one, but its main purpose was fast startup and it only allows for dynamic linking.

The Mono AOT one, used by Xamarin workloads for iOS and Android deployments.

The Sing# and System C# dialects used by Singularity and Midori respectively.

The MDIL used by Windows Phone 8 and 8.1, based on Singularity's Bartok compiler.

.NET Native introduced in Windows 10 for UWP workloads, replacing the MDIL based one, originally known as Project N.

The NativeAOT one you pointed to, which started as a research project on how to bring .NET Native back into mainline .NET.

Finally some alternative ones like Unity's IL2CPP or CosmOS.


Rust is getting support for local allocators/arenas as part of its core library. So it could become quite feasible to support tracing garbage collection within a specific arena, with outside "GC roots" being managed by the borrow checker. You'd be fine as long as nothing in there tried to link back to your "GC roots". Similarly, "leaf" allocations could also live outside the GC arena, provided that they did not try to link back into it. Either of these constraints seems like it could be checked at compile time.
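
A toy sketch of that shape (just an illustration of intra-arena links with an outside root; the `Arena`/`Node` names are invented, and it doesn't use the unstable allocator APIs):

```rust
// Toy arena: nodes link to each other only by index *inside* the arena,
// while the arena itself is an ordinary owned value, so any outside
// "root" handle into it is managed by the borrow checker as usual.
struct Node {
    value: i32,
    next: Option<usize>, // intra-arena link, never a pointer to the outside
}

pub struct Arena {
    nodes: Vec<Node>,
}

impl Arena {
    pub fn new() -> Self {
        Arena { nodes: Vec::new() }
    }

    /// Allocate a node in the arena; the returned index acts as a "GC root".
    pub fn alloc(&mut self, value: i32, next: Option<usize>) -> usize {
        self.nodes.push(Node { value, next });
        self.nodes.len() - 1
    }

    /// Trivial "tracing": walk the chain reachable from a root.
    pub fn sum_from(&self, root: usize) -> i32 {
        let mut total = 0;
        let mut cur = Some(root);
        while let Some(i) = cur {
            total += self.nodes[i].value;
            cur = self.nodes[i].next;
        }
        total
    }
}
```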


Would you say Ruby, Elixir, and Crystal are an example of such language sets?



Agreed, this is compelling!


I love this concept!


Yes!!


Love this


Not sure if you're familiar with or have heard of AssemblyScript [1]. It sounds like the goal you are trying to achieve, but with a saner approach: create a TypeScript-like language that compiles to WebAssembly. This means it gives you native access to SIMD, and eventually even threads! All while avoiding the overhead of learning a new language (well, there are still some gotchas, since it actually is a new language, but it's very close). No need to think about whether you need to optimize X function or manually manage memory somewhere, and you still get the awesome performance of Wasm.

[1]: https://www.assemblyscript.org/


Yep, but it doesn't really solve any of the problems that I mention in the article, unfortunately. The Typescript familiarity part is nice, but it doesn't really work well with existing code; you still have to set up Wasm-JS communication; it doesn't work with existing ArrayBuffers / multiple memories; it's garbage collected if you want to use more complex objects; etc. In many ways it's better to use Rust (which looks a bit like Typescript) than AssemblyScript.


Yeah, AssemblyScript is a key WASM language variant - I made a list of some of the other TypeScript-like implementations last year. Static TypeScript and TypeScriptCompiler do something similar in order to run inside web pages. At a quick glance, this proposal might get some ideas from the thinking done on BorrowScript.

https://github.com/orta/awesome-typescript-derived-languages...


Wow cool!


I've tried AS for a while but then just learned Rust. The wasm ecosystem is a lot more mature in Rust (probably the best of all languages), so I'm glad I've learned it. AS syntax looks like TS, but under the hood it's another language with a lot of open problems, like testing and sending data back and forth between wasm and JS. AS supports operator overloading, but only for values of the same type, like Vector * Vector but not BiVector * Vector, which Rust does support.


Hi! Author here. Really curious what you all think. It's a pretty crazy idea but maybe there's something to it! Hacker News is always a special place for ideas like this — you never know what you're gonna get ;) — so I'm looking forward to discussing this with y'all!


The TypeScript++ example looks like the type of code that JITs typically like. Python has Numba, where users affix a @jit annotation to functions to request that. It works pretty well in my experience.

However, at this point, I'm mostly of the opinion that languages need some notion of mechanical sympathy if they want to be used for high-performance work. JS engines probably have tens of millions of dollars of dev effort poured into them by now, yet they get soundly beaten by WASM, which is much younger. The conceptual foundation matters a lot.

That said, there's probably a nice space for an ergonomic, reasonably fast PL that sits between Rust and JS.


> That said, there's probably a nice space for an ergonomic, reasonably fast PL that sits between Rust and JS.

Other than not running natively in web browsers, isn't that language called Ocaml?


Is it still possible to use the ReasonML syntax for native binaries?


That was the main motivation behind the inventors of Dart. They got frustrated with all the hacks they had to put into V8 just because JavaScript was so mechanically unsympathetic. You can hardly blame them, because they originally came from a background of developing Smalltalk and Self VMs.


Isn't JS much more Smalltalk-ish than Dart? I don't have much experience with Dart, but it always seemed like a very OOP language to me (and not in the "message passing" meaning).


This is super interesting to me as a VM geek.

I'd written off Dart as "Google's Swift" for the most part, but this makes a lot more sense. Thanks for the comment.


> JS engines probably have tens of millions of dollars of dev effort poured into them by now

And JS engine JIT codegen modules are put to good use compiling wasm.


Good points. I agree that choosing the abstractions wisely is key here.


Your example really clicked with me, having written a substantial quantity of JS in a "memory managed" way to avoid GC, and avoiding memcpy by passing buffers in place of pointers.

My initial instinct is that putting such considerations behind a preprocessor syntax has the disadvantage of making them more opaque and magic, and that I personally would continue to do such things more explicitly - which I realise is an ironic thing to say about an untyped, interpreted language, it's probably just the little pseudo "grey beard" on my shoulder which occasionally whispers silly things about "building their own computer out of rocks and twigs".

I guess the point is that there's a balance to the cost vs benefit: making it more convenient (what you call ergonomics), making it more accessible to those that were less likely to manually use that technique, and perhaps resulting in an average of more performant code; but at the cost of more obscurity, and more "magic", perhaps resulting in less educated programmers.

I'm probably overthinking it. Ultimately this argument even applies to for loops, but it's worth being conscious of that cost - it's likely a good idea.


Yeah I hear you; and even allude to that in some parts of the article, like this part:

> Another idea to make Typescript–– a bit less restrictive and more ergonomic, would be to use reference-counting of objects, and then try to optimize most away using compile-time reference counting as pioneered by Lobster. This gives most of the advantages of manually managed memory, but makes the ownership model much easier to reason about. It is however slightly less performant, and more importantly, makes performance less predictable, since it becomes more reliant on compiler cleverness.

Even Rust can hide actual performance behind abstractions, and I've already been bitten by that a few times. Not sure what the right solution is here, but I agree that it's super important to think about this when designing a language like this.


> having written a substantial quantity of JS in a "memory managed" way to avoid GC, and avoiding memcpy by passing buffers in place of pointers.

What projects were you working on that required that? It sounds interesting


Mostly a variety of web based science education simulations. Usually boiling down to some kind of miniature specialist physics engine or procedural animation.

Beyond that, it's more of a personal philosophy than a requirement (I could easily have done all those things more carelessly, but the result wouldn't be so nice, some people's laps would be hotter, and some people would end up with a slow or jerky simulation). I like things to be efficient; I enjoy finding a balance between "efficient" and "minimal". That doesn't mean I avoid GC all over the place, adding unnecessary complexity and over-optimizing; it just means being considerate where it matters and "idiomatic" where it doesn't. In a physics engine (or any kind of a posteriori simulation) it matters, because it's running on an interval and will constantly cause perceptible GC hiccups if you aren't considerate with memory.


Would love to read a blog post about what this kind of code looks like!


I've done this before for plenty of tight-loop code. Nothing irks me more than staring at a profile and seeing "Major GC" multiple times in the middle of a big algorithm.

It's certainly not helped by the latest language fads just completely flooring the GC gas pedal; special mention should go to for...of creating a brand new object for each loop iteration. That's usually the first thing I rip out of tight loops.

A bit of care can make this stuff go 3x faster on PC, and I've gotten completely unusable experiences on mobile (most websites) up to 30-40fps.
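
For what it's worth, the kind of rewrite described above might look like this (a sketch; whether `for...of` actually allocates per iteration in practice depends on the engine's escape analysis):

```typescript
// Convenient form: `for...of` goes through the iterator protocol,
// which nominally creates an IteratorResult object per step.
function sumOf(xs: number[]): number {
  let s = 0;
  for (const x of xs) s += x;
  return s;
}

// Tight-loop form: a plain indexed loop, no iterator machinery at all.
function sumIndexed(xs: number[]): number {
  let s = 0;
  for (let i = 0; i < xs.length; i++) s += xs[i];
  return s;
}
```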


Do you have any resources you like for diving deeper into profiling and learning the nature of the JS GC?


Really interesting article.

I was doing some high-level research on a related question the other day: https://transitivebullsh.it/webassembly-research-9e231b80e6d...

I know you've responded to a few people on here that Assemblyscript doesn't address the main issues you're talking about in the article, but honestly it does have a lot of things going for it.

I'd love to see a more direct breakdown of this space that explains why prior works like AssemblyScript, NectarJS, Walt, etc. all fall short, and where an ideal solution would need to do better.

^^ this is the type of thing I'd love to chat about if you're interested btw


Nectarjs is interesting; I wasn't aware of that. How did it manage to avoid (for example) a garbage collector without making changes to the language?


Why not write Rust directly, and target WASM for client side environments?

I support efforts to tune existing tech stacks. But there is such a more direct path to performance, reliability, etc etc by dropping (alt)JS entirely now that anything can compile to WASM.


I agree; that is what we built the Zaplib framework for. But the reality is that there are tons and tons of existing JS/TS codebases out there, and it'd be nice to have an incremental speedup path for them too.



"Why can't we have a language that compiles to highly optimized JS?" is definitely something that has crossed my mind, especially in situations where I reached for faking structs with TypedArrays. As soon as that gets more complicated than something like an array of x,y,z points things get difficult (not to mention almost impossible to debug) really fast.

I suspect that if you could come up with an easy way to express a large tree really efficiently in a TypedArray in a relatively generic way, that you already will do better than most attempted solutions here.
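
As a sketch of the simple end of that spectrum (the flat x,y,z case; `PointBuffer` and `STRIDE` are invented names), faking a struct array over a `Float64Array` looks roughly like:

```typescript
// "Array of structs" faked with one flat Float64Array: 3 floats per point.
const STRIDE = 3; // x, y, z

class PointBuffer {
  private data: Float64Array;

  constructor(capacity: number) {
    this.data = new Float64Array(capacity * STRIDE);
  }

  set(i: number, x: number, y: number, z: number): void {
    const o = i * STRIDE;
    this.data[o] = x;
    this.data[o + 1] = y;
    this.data[o + 2] = z;
  }

  x(i: number): number {
    return this.data[i * STRIDE];
  }

  // Iterating touches contiguous memory and allocates no per-point objects.
  sumX(): number {
    let s = 0;
    for (let o = 0; o < this.data.length; o += STRIDE) s += this.data[o];
    return s;
  }
}
```

The hard part the comment alludes to is exactly what this sketch avoids: trees and variable-size records don't map onto a fixed stride nearly as cleanly.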


Love it, you should check the Typescript compiler used by MakeCode as well.

Maybe there are some ideas there.


The main feedback I have is that I wanted to read this near the end of my lunch break, because I've been interested for years in the idea that a reasonably-restricted subset of TypeScript (who needs all that dynamism) can almost definitely be compiled to machine code and be made super fast, faster than V8 -- so I was looking for a TLDR in this article and spent a couple minutes looking for it and then just reading whole paragraphs and still don't really know what you've got here.


dang imagine how much more leisurely you could've read the article if you didn't also take some of your lunch break to write this comment!


Good feedback; I should add a TL;DR. Basically this is a proposal to create a language that sits somewhere between Typescript and Rust, and which you can incrementally adopt if you already use Typescript.


You might want to reconsider the ++ naming convention. Many people capable of a critical appraisal of C++ wouldn't want an association with C++. The prejudice that your name triggers is that you don't understand what others think C++ got wrong, and you're likely to be making the same mistakes.

It doesn't look like you're making the same mistakes, but why trigger that prejudice?


These two lines are worth looking at because they showcase why I am not as fond of Rust as the people who would rewrite everything from the Linux kernel to the IMF in Rust if they could (which in itself is seriously off-putting, if my tone didn't tell you that):

    type Vec2 = { x: number, y: number };

    function avgLen(vecs: Vec2[]): number

versus:

    struct Vec2 { x: f64, y: f64 }

    fn avg_len(vecs: &[Vec2]) -> f64 {

More than two decades passed between the release of Perl and Rust, and you haven't learnt anything? Wow. Listen: code written in languages heavy with sigils and shorthand is hard to understand, read, and most importantly maintain. This adds a mental load which is -- as clearly visible from the above -- totally unnecessary. I have no idea why anyone would, in 2010, several years after the famous "memory is the new disk, disk is the new tape", do this to save on characters in the source code. The teletype is a distant memory.

Thanks for letting me vent a little.


I know nothing about rust, but what are you actually complaining about here? The only additional "sigil" in the rust example is the & for vector which I assume marks a reference. Languages that confuse objects with references are a pet peeve of mine, so good for rust to mark the difference. : vs -> and type[] vs [type] are just preferences, neither is obviously better. The Vec2 declaration in the first example actually has more sigils than the rust example (= and ;).

Regarding shorthands, just number would be unacceptable for 99% of the work I do; I'd want something that specifies signedness, integer vs float, and size. As a C++ programmer, having to write std::uint64_t gets old very quickly, and I'm happy that Rust uses shorter names for these.

I don't think that using a shorthand for something very common is an hindrance to comprehension. Do you think one should write "have not" instead of "haven't" in English?


> Do you think one should write "have not" instead of "haven't" in English?

Given how many people do this wrong? Yes, that sounds like a great idea.

I have to agree with OP that Rust often looks like sigil soup. The example isn’t great, but there is a _lot_ of noise when looking at Rust code. Of course they all have a meaning, but I feel sometimes that meaning could be more easily expressed as a verb than a sigil. At least it would make things more readable (until you get into generics I think Typescript scores pretty high on readability).


> Of course they all have a meaning, but I feel sometimes that meaning could be more easily expressed as a verb than a sigil.

Why not try to come up with an alternate syntax, expressing everything in English-like verbs and compiling into Rust proper? You could call it RUBOL. Then #[RUBOL] code fragments could even be supported within .rs files


Not a bad idea. I may see if that makes Rust more palatable.


> Wow. Listen: code written in languages heavy with sigils and shorthand are hard to understand, read and most importantly: maintain.

As someone who maintains a 145k line Rust codebase all on his own, I have to hard disagree on this one.

Once you get used to the syntax it just blends into the background and becomes a non-issue.

> This adds a mental load which is -- as clearly visible from above -- is totally unnecessary.

Totally unnecessary? Okay, so how would you do the following with the sigilless syntax (and disambiguate between them):

- Specify a unique reference? (`&T` is shared, `&mut T` is unique)

- Specify a lifetime? (`&'a T` is a reference to T with lifetime `a`)

- Specify a raw pointer? (`*const T` and `*mut T` are const/non-const raw pointers)

- Specify that the argument is moved into the function? (just the raw type `T`)

People have been trying to come up with a better syntax for Rust, and AFAIK so far nobody has been able to do it. There's always a tradeoff somewhere that ends up making matters worse than what we have now.
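
For readers less familiar with Rust, a minimal sketch of those cases in signatures (function names are invented; raw pointers are omitted since dereferencing them requires `unsafe`):

```rust
pub fn read(s: &String) -> usize {        // &T: shared borrow, read-only access
    s.len()
}

pub fn grow(s: &mut String) {             // &mut T: unique borrow, may mutate
    s.push('!');
}

pub fn first<'a>(s: &'a str) -> &'a str { // &'a T: result lives as long as `s`
    &s[..1]
}

pub fn consume(s: String) -> usize {      // T: `s` is moved in and dropped here
    s.len()
}
```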


> Specify a unique reference? (`&T` is shared, `&mut T` is unique)

shared T

shared mut T

> Specify a lifetime? (`&'a T` is a reference to T with lifetime `a`)

shared T lifetime a

> Specify a raw pointer? (`*const T` and `*mut T` are const/non-const raw pointers)

const point T

point mut T

> Specify that the argument is moved into the function? (just the raw type `T`)

T


While this may be better in retrospect, it doesn't mean it's how Rust should have been. Rust was designed to replace C++, so using more C++-like syntax was probably the right call to get people to consider it.

I do disagree on the syntax for the references though. I prefer that they be easier to type than the owned versions (e.g. &[u8] vs Vec<u8>), as this can promote the usually better choice (this is a weak argument, I acknowledge).

I do wish that history had chosen a different symbol for references though. But there's only so much we can do now


Interesting idea, but honestly that seems too far in the other direction. A midpoint between the two ends that optimizes both brevity and readability is probably ideal.


If you view Rust as a high-level language, then yes, there's probably a slight overload of sigils, operators and characters. However, Rust was not outright designed to replace languages like Python, Java or JavaScript, but systems-level languages like C++. Given that it has no runtime and a fair few compile-time guarantees, as a programmer you need to tell the compiler explicitly what something is and how it is owned. If you come from a language with a runtime and start using Rust as if it were just another high-level language, you will hit the borrow-checker wall at some point.

You could also write the function above as `fn avg_len(vecs: Vec<Vec2>) -> f64 {`, and I personally would write the TS one as `const avgLen = (vecs: Array<Vec2>): number`. However, there's a difference between the ownership of a `Vec` and `&[]`, something a language like TS doesn't need to worry about but Rust does.
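
To make the ownership difference concrete (a sketch; the function body is my guess at what `avg_len` computes, not the article's):

```rust
struct Vec2 { x: f64, y: f64 }

// Borrowed slice: the caller keeps its Vec (or array) and can reuse it after.
fn avg_len(vecs: &[Vec2]) -> f64 {
    let total: f64 = vecs.iter().map(|v| (v.x * v.x + v.y * v.y).sqrt()).sum();
    total / vecs.len() as f64
}

// Owned Vec: the argument is moved in and freed when the function returns.
fn avg_len_owned(vecs: Vec<Vec2>) -> f64 {
    avg_len(&vecs)
}
```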


Perl was a vastly more complex language than Rust. It was literally trying to build something with the complexity and ad-hoc features of natural language out of random symbols. It's not surprising that it didn't work very well.

The more complex the underlying semantics you're working with, the more verbose and less symbol-driven your language should be. You can see this clearly with things like COBOL, which tried to integrate every feature that might ever be needed for business programming within its base language - a staggering amount of complexity. English-like syntax kept that workable.


I don't mind venting at all, but don't agree.

> This adds a mental load which is -- as clearly visible from above -- is totally unnecessary

For numbers in Typescript you basically have a single number type (there is also that story about BigInt, but...) - this is as good as Rust's f64.

In Rust you have: u8, u16, u32, u64, u128, i8, i16, i32, i64, i128, f32, f64

Sure, you could have some kind of alias number = f64 - but why? Rust and Typescript have different uses.

For the web, number (f64) is good enough. If you do system programming then you need some options.


> Various articles have been written, comparing the performance of Javascript versus WebAssembly ("Wasm") in the browser.

Nit pick: Every word in this sentence in the article is a link. This way of linking multiple related resources is so hard to keep track of and navigate. I need to hover over or click each word to discover the resource.


It's not that bad in my opinion?

I just middle click to open each link in a new tab. I can read the new tab, close it, leave it open, whatever. The links I've already clicked will turn purple, meaning I can easily see which links have been visited already.

Is there something I'm missing?


The nitpick, I think, is that because every word is a separate link and the link style doesn't show the extent of the underline until you hover over it, it isn't easy to see, without hovering on every word individually, that it isn't just one link with longer text.


Nice work!

Note that Microsoft actually has a compiler from a Typescript subset into C++, as part of the MakeCode IoT education project.


Ah yes; I did see that but forgot to mention it: https://makecode.com/language


Should it be called a transpiler instead?


That is a fashion word from the JS community.

A compiler is a compiler, regardless of the target language.

Are you aware that originally C compilers did not generate machine code directly? Rather, they generated Assembly source and then called the Assembler on it.

C++ and Objective-C initially generated C.

Eiffel to this day generates either C or bytecode.

Nim and Haxe are other examples, and so forth.


Fashion word or not, its meaning is clear.


The only thing that is clear is that not enough people learn compilers properly.


I wonder if you've looked at AssemblyScript https://www.assemblyscript.org/


Yep, but it doesn't really solve any of the problems that I mention in the article, unfortunately.


Sorry for the nitpick, but in the TypeScript example, should it be

    for (const vec of vecs)
Instead of:

    for (const vec in vecs)
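
For context, the two behave differently (`in` iterates the index keys as strings, `of` iterates the values):

```typescript
const vecs = [10, 20, 30];

// for...in yields the index names: "0", "1", "2"
const keys: string[] = [];
for (const k in vecs) keys.push(k);

// for...of yields the values: 10, 20, 30
const vals: number[] = [];
for (const v of vecs) vals.push(v);
```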


That syntax of JS bugs me the most. The “in” keyword could be scrapped in favor of a .keys() method on the object, but alas, back-compat.

Mnemonic: of = objects fail, in = index names

Yeah, yeah, objects can be arrays and iterables, but you know what I mean. Best I could do! If you don't like it, just use the “in” mnemonic and Sherlock Holmes it.


Tnx; fixing!


We briefly toyed with the reverse idea: we wanted to share some code between web and native, and TypeScript / C++ already look very similar, so what if we invented a subset of TypeScript that could simply be translated into C++ (as long as you stick to static use and types)?

Worked OK for a while, but broke down real quick once we brought in external npm dependencies.

Now we're just embedding V8 in our native apps and running the TS code through that. Not as interesting, but lets us import any random npm package (which is both good and bad).


Are you using V8 heap snapshots to speed up startup time by any chance?

IME this can greatly improve cold starts.


Hah, cool! I agree that for your use case V8 makes more sense :)


I thought WASM was considered slower than JavaScript only for the use cases where heavy usage of DOM is necessary. Since that's not available through WASM it needs to make a call to a JS function to perform the DOM manipulation and that incurs extra overhead.

I think these cases will just never be good for WASM. DOM manipulation heavy Apps (which is a good chunk of JS code out there) may continue to be in JS and only the CPU intensive tasks would be computed in WASM.

Isn't that the best practice for WASM?


In http://dioxuslabs.com we share our strings (interned) across the JS/Rust boundary and only send primitives back and forth. All diffing stays in Rust with bump memory allocators managing allocations. The JS side is only there to do DOM manipulation. [1] It's about 3-4x faster than React.

I'd say a lot of the ergonomics are there, but not every pattern has been translated cleanly from React into Rust yet.

[1] https://github.com/DioxusLabs/dioxus/blob/master/packages/in...


Browsers are good enough that if you're careful about it, you can just use a WebWorker and write standard JS, with no need to deal with the build/compilation/compatibility headaches that adding a whole new language to your ecosystem entails. The nice part with this is that you can even have a fallback routine that runs the WebWorker code in the main thread, to support really old browsers.

I wouldn't recommend dropping to WASM until you've ironed out the ideal data structures/algorithms to use, and have spent some time in the devtools profiler. Hacking on ideas in JS is (IME) much faster than doing the same in a compiled language, and the built in profiling in browsers works far better for JS than any WASM (again IME). Only after you have an optimized algorithm, data structure, and implementation, and still find perf isn't where you want it would I recommend rewriting in a compiled language -- even then I'd wager having the optimized reference implementation around will make this much easier and faster than if you tried to start from 0 in the compiled language.

Bonus points if you can reuse tests and/or have a debug mode that runs both implementations in parallel and throws on any differences in output.
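
A rough sketch of that fallback pattern (the names are invented; the inline-Blob worker trick assumes the heavy function is fully self-contained):

```typescript
// The heavy routine is a pure, self-contained function so it can run
// either on the main thread or be serialized into a Worker.
function heavyWork(data: number[]): number {
  let s = 0;
  for (let i = 0; i < data.length; i++) s += data[i] * data[i];
  return s;
}

function runHeavy(data: number[]): Promise<number> {
  if (typeof Worker === "undefined") {
    // Fallback: no Worker support (really old browsers), run inline.
    return Promise.resolve(heavyWork(data));
  }
  // Build a worker from the function's own source via a Blob URL.
  const src = `onmessage = e => postMessage((${heavyWork.toString()})(e.data));`;
  const worker = new Worker(URL.createObjectURL(new Blob([src])));
  return new Promise((resolve) => {
    worker.onmessage = (e) => {
      resolve(e.data as number);
      worker.terminate();
    };
    worker.postMessage(data);
  });
}
```

Running both paths against the same tests (or in parallel, as suggested above) is then straightforward, since `heavyWork` is an ordinary function.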


But I prefer to write other languages, so a compilation target is perfect.

So it's either wasm or something like scala.js or fengari.io (lua on the web)

(Worry not, I am paid to write JS and TS these days... But my rate increased due to the pain it causes ;))


The overhead of going to JS from Wasm these days is very small, and the DOM is relatively slow. So doing DOM operations from Wasm is not much different than from JS; the vast majority of the time is spent in the actual DOM API calls.


Add to that the fact that complexity tends to exist in an exponential curve, and even though the DOM is relatively slow, it is still very fast.

So the number of projects where there is enough DOM interaction that WASM will be slow, but not enough that JS is still usable is very small.



A feature that Haxe offers could help here. Abstracts are compile-time-only types whose implementation is fully inlined. Meaning we could define an abstract over ArrayBuffer which has an iterator of abstracts over Float64Array which define the x and y property getters. Once compiled, the code will look very similar to the example given. It's one of the things I miss the most in TypeScript, coming from Haxe.

https://haxe.org/manual/types-abstract.html https://code.haxe.org/category/abstract-types/color.html


Sounds like BorrowScript, which is TypeScript syntax with a Rust borrow checker and Go-like coroutines. It's designed for wasm and web API targets. (It's not compatible with TypeScript though.)

https://github.com/alshdavid/BorrowScript


I've never written (or even read) JS code with ArrayBuffers. I wonder how much mileage you might get with better education and/or libs for using them? And perhaps a highly opinionated linter for things like optimal for-loop patterns?


Dumb question: Do you mean "Fast" as in speed of execution, or "Fast" as in compile times? (Or maybe you mean both? I haven't used Rust.)

Might be worth clarifying because I immediately got excited about a faster TypeScript compiler!


Yeah I meant speed of execution. Sorry for the letdown! But https://github.com/swc-project/swc is a very fast Typescript compiler written in Rust :)


Rust isn't known for quick compile time.


Sorry, are we saying that Typescript is as fast as Rust in general, or when Rust is compiled to WASM, or what? I am confused and don't know enough about the problem to figure it out myself.


In the end, wouldn't it all be rendered into JavaScript, thus there's a hard performance limit?


Kind of; if it's compiled to WASM instead of JS, it could be faster. And some years ago there was asm.js, a strict subset of JS that removed a lot of the dynamic nature of JS, allowing an engine to skip a lot of checks, make more assumptions, and therefore run it a lot faster. Probably made obsolete by WASM, though.
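
The asm.js trick can be illustrated in plain JS/TS: coercions like `x|0` and `+x` pin values to int32/double so the engine can skip dynamic type checks (real asm.js modules also carried a `"use asm"` pragma):

```typescript
// asm.js-style integer arithmetic: every value is forced to int32 with |0,
// so the result wraps like a machine integer instead of becoming a double.
function asmStyleAdd(a: number, b: number): number {
  a = a | 0;
  b = b | 0;
  return (a + b) | 0; // int32 wrap-around on overflow
}
```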


The older I get the less I care about "expressive power" and "speed of development". I don't view these things as an advantage most of the time, actually often the opposite. The price is always paid later on, and it is high.


Those things are very important for getting a product off the ground. You may have a jaded view from fixing what someone else wrote, but there's a non-trivial chance that in many of those instances no such app would exist to fix had it not been for the ability to rapidly develop and launch said app.

You also speak as if speed of development/expressivity are in direct opposition of maintainability and structure, which just isn't true.


I absolutely care about "speed of maintenance". There is no reason "speed of development" need be construed to mean banging out the initial garbage prototype, and only that. We can be tortoises rather than hares and still care about average speed long term.


I think we are more in agreement than your post suggests! I was mostly against the notion in the original post that dynamic/interpreted languages are the pinnacle of development speed (which is why I thought the article was referring to "speed of development" as slinging code). In reality, I care about the same average speed long term as you do and for that I'd take a statically typed language any day.


Glad to hear it!! :)

I agree untyped for more developer speed is a bad joke.


Years ago I wanted to use Typescript for native development; then Swift came along. Try Swift if that's what you're after.


If Apple would truly commit to making Swift applications run cross-platform (Windows, Mac, Linux, Android, iPhone, Web/wasm) in a performant way, they could completely take over and become the next stack for performance-intensive applications.


I like(d?) Swift and I do believe there were some projects to make it more usable for server-side applications, but it still seems to be limited to Apple applications.

But it has the potential to go beyond; it just needs marketing. It worked for Go (from a Google-internal language to replace C++ to a widely used, popular language), for Rust (from a Mozilla-internal project to solve their issues to a widely used and popular language), and even for Typescript (as a replacement for flowtype, coffeescript (?), atscript (that was a thing), and some others; again developed, sponsored, and promoted by a big organization (Microsoft) after being necessary for their large-scale web applications (Office)).

Anyway, Swift uses LLVM to compile, so OS-specific libraries aside, cross-compilation shouldn't be a problem - even to WASM.


They would need to catch up to Kotlin Native [0]. They fixed the differences to the JVM memory model in 1.6.20 [1].

[0] https://kotlinlang.org/lp/mobile/

[1] https://kotlinlang.org/docs/whatsnew1620.html#concurrent-imp...


> Toolchain integration can be daunting. You need to set up Rust in development builds, production builds, local testing, continuous integration, and so on.

Impressive that someone familiar with the usual JS development toolchains can write this and keep a straight face...


You didn't see my face when I wrote that ;)


What's so hard about JS toolchain?

    yarn create next-app --typescript
... and I am done


Now include installing node/npm initially and setting that app up for CI/CD


Sure:

    FROM node:16
Done (including dev env with VSCode dev containers).

And if you created a Next app as I suggested, your CI config is super-easy - you need to place 3 commands into your YAML and that's it:

    yarn lint
    yarn build
    yarn start


I feel like using docker is cheating as you could do that with Rust as well


But can I? I haven't tried recently, but when I tried to set up a dev container for Rust last year, I had to do the setup by hand on a raw Ubuntu image.

Anyway, cargo and rustup are cool tools; I don't want to criticize Rust. I just don't think the JS tooling is as outlandish as people make it seem.



For me it’s the same

        yarn create react-app my-app --template typescript




