It's easy to put up a prototype quickly. There are a ton of libraries for everything, and most come with copy-pasteable examples. Yes, copy-pasting is frowned upon, and with good reason -
but for prototyping, it's exactly what you want. The prototype you write will also work much better than your average dynamic-language prototype (except for Erlang).
With generators and async/await, the code became fairly pedestrian and un-convoluted. The only additional elements in the mix are the few await/yield keywords sprinkled around.
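As a rough sketch of what "pedestrian" means here - the control flow below reads top to bottom like synchronous code, and the `await` keywords are the only extra syntax. The `fetchUser` and `fetchPosts` functions are hypothetical stand-ins for real I/O:

```typescript
// Hypothetical async data sources; in real code these would hit a
// database or an HTTP API instead of returning canned values.
async function fetchUser(id: number): Promise<{ id: number; name: string }> {
  return { id, name: "user" + id };
}

async function fetchPosts(userId: number): Promise<string[]> {
  return ["post-a", "post-b"];
}

// Reads like plain sequential code; each `await` is a suspension point.
async function userSummary(id: number): Promise<string> {
  const user = await fetchUser(id);
  const posts = await fetchPosts(user.id);
  return `${user.name} has ${posts.length} posts`;
}
```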
The language is flexible enough that you can also write code in FP style (higher-order functions, combinators, etc.). For example, RxJS code looks pretty natural.
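To illustrate the FP style without pulling in a library, here is a hand-rolled `pipe` combinator (the names `pipe`, `double`, and `increment` are just for the example, not from any particular library):

```typescript
// A minimal function-composition combinator: pipe(f, g)(x) === g(f(x)).
const pipe = <T>(...fns: Array<(x: T) => T>) => (x: T): T =>
  fns.reduce((acc, fn) => fn(acc), x);

const double = (n: number) => n * 2;
const increment = (n: number) => n + 1;

// Combinators compose naturally into new functions.
const transform = pipe(double, increment);
```

Libraries like RxJS take the same idea further by composing operators over streams of values rather than single values.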
ES6 and above have all the modern bells and whistles: a decent module system, classes, short lambda syntax, template strings... As a result, code is pretty much as pleasant to write as Ruby or Python.
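A few of those features in one place - classes, arrow functions, and template strings (`Greeter` and the sample names are purely illustrative):

```typescript
// ES6 class with a template string in the method body.
class Greeter {
  name: string;
  constructor(name: string) {
    this.name = name;
  }
  greet(): string {
    return `Hello, ${this.name}!`;
  }
}

// Short lambda syntax with a higher-order array method.
const names = ["Ada", "Grace"];
const greetings = names.map(n => new Greeter(n).greet());
```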
TypeScript is where client/server code sharing starts to shine. Shared types enable end-to-end type-checking. You can arrange your code in such a way that data fetching is injectable, and use the same code with fetch on the client and direct method calls on the server (thanks to the same interface). If you have a very demanding client-side app, there will definitely be a lot of code reuse opportunities. (Nowadays I suppose this is also possible with Scala.js and bucklescript)
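A sketch of the injectable-data-fetching idea, under the assumption of a hypothetical `UserApi` interface shared between client and server. The shared code only sees the interface, so the server can pass a direct implementation while the client passes one backed by `fetch`:

```typescript
// Shared types, usable on both client and server.
interface User {
  id: number;
  name: string;
}

interface UserApi {
  getUser(id: number): Promise<User>;
}

// Server side: direct method calls, no HTTP round trip.
class DirectUserApi implements UserApi {
  async getUser(id: number): Promise<User> {
    // Would query the database directly here.
    return { id, name: "direct-" + id };
  }
}

// Client side: same interface, backed by fetch (sketched, not exercised here).
class HttpUserApi implements UserApi {
  async getUser(id: number): Promise<User> {
    const res = await fetch(`/api/users/${id}`);
    return res.json() as Promise<User>;
  }
}

// Shared code is written against the interface, so it runs on either side
// and is type-checked end to end.
async function describeUser(api: UserApi, id: number): Promise<string> {
  const user = await api.getUser(id);
  return `#${user.id}: ${user.name}`;
}
```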
The lack of threads is a mixed bag. Most people have already covered the disadvantages very well, so I'll mention some advantages. It's much easier to reason about stateful code, since you can treat each synchronous chunk of code as uninterruptible, and the possible interleaving points are always obvious (the await/yield keywords). There are also several techniques and libraries to work around the disadvantages (e.g. dnode for RPC to separate processes for intensive calculations), and while none of them are very convenient, they usually end up being what you have to do eventually anyway. Threads will only get you so far with servers running CPU-intensive jobs - soon you will need more than one machine, and then you can't take advantage of shared memory any longer.
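A small sketch of the "synchronous chunks are uninterruptible" point, using a toy account object. The two field updates below always happen atomically - no other callback can observe `balance` updated but `log` not yet written - because the only place other code can run is at the `await`:

```typescript
// Toy shared state; illustrative only.
const account = { balance: 100, log: [] as string[] };

async function withdraw(amount: number): Promise<void> {
  // This synchronous chunk can never be interrupted part-way through,
  // so balance and log always stay consistent with each other.
  account.balance -= amount;
  account.log.push(`withdrew ${amount}`);
  await Promise.resolve(); // the only point where other code may interleave
}
```

In a preemptively threaded runtime, the same two statements would need a lock to guarantee that invariant.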
If all else fails, you can probably throw more processes at it and put a load balancer in front. That's an interesting project, and AFAIK not yet solved (at least not in the node ecosystem). You would need a load balancer that is aware of the current state of the node processes - round-robin or random algorithms will be far from optimal here. I started some work on this here: https://github.com/spion/least-latency-balancer but I'm sure it's not the best approach and that it can be improved a lot. It might be a good idea to look at what the OCaml folks have - I'm pretty sure they've been dealing with a somewhat similar problem due to their lack of multicore support.
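One hypothetical shape for state-aware balancing (a sketch of the general idea, not necessarily how least-latency-balancer actually works): track a moving average of observed latency per worker and route each request to the currently fastest one.

```typescript
// Hypothetical worker record; avgLatencyMs is our estimate of its load.
interface Worker {
  id: string;
  avgLatencyMs: number;
}

// Route to the worker with the lowest estimated latency.
function pickWorker(workers: Worker[]): Worker {
  return workers.reduce((best, w) =>
    w.avgLatencyMs < best.avgLatencyMs ? w : best
  );
}

// Exponential moving average keeps the estimate responsive to recent load.
function recordLatency(w: Worker, observedMs: number, alpha = 0.2): void {
  w.avgLatencyMs = alpha * observedMs + (1 - alpha) * w.avgLatencyMs;
}
```

Round-robin would keep sending requests to a worker stuck in a long CPU-bound task; this scheme routes around it as soon as its observed latency climbs.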
Another way to attack this problem is to have better monitoring for long CPU-bound tasks, e.g. https://www.npmjs.com/package/long-task-detector
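The basic technique behind such monitors (this is a sketch of the general approach, not the actual long-task-detector API) is to schedule a timer at a fixed interval and measure how late it fires - large drift means something blocked the event loop:

```typescript
// Periodically check how late the timer fires; drift beyond the
// threshold indicates a long synchronous task blocked the event loop.
function watchEventLoop(
  thresholdMs: number,
  onBlocked: (lagMs: number) => void
): () => void {
  const intervalMs = 100;
  let last = Date.now();
  const timer = setInterval(() => {
    const now = Date.now();
    const lag = now - last - intervalMs; // how late this tick fired
    if (lag > thresholdMs) onBlocked(lag);
    last = now;
  }, intervalMs);
  return () => clearInterval(timer); // call to stop watching
}
```

In production you would report these events to your metrics system instead of just invoking a callback.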
But I will admit that in this regard, there are languages/runtimes where the situation is far better: Haskell, Go, Erlang to name a few.