
Does every language have to be suitable for every task? “Operating a business-logic-embedding CDN at scale” isn’t ever something Node claimed to be capable of. I wouldn’t expect any other dynamic runtime-garbage-collected language, e.g. Ruby or Python or PHP, to be suited to that use-case either.

Use the right tool for the job. The PHP website is running PHP, but it isn’t running a web server written in PHP. Web servers are system software; PHP isn’t for writing system software. Same thing applies here. Node does fine with business logic, but isn’t really architecturally suited to serving the “data layer” when you have control/data separation.

> The PHP website is running PHP, but it isn’t running a web server written in PHP.

Are you sure? I'm not familiar with the PHP.net architecture, and there may be fewer gains given how PHP has traditionally been tied to web servers as a module, but Rails apps (and those of any number of other dynamic-language frameworks) are actually web servers implemented in that language, with an optional separate web server such as NGINX or Apache in front to handle the stuff they aren't as good at (static file serving, etc.).

Now, that is a framework, and not the language proper, but I wouldn't be all that surprised to find python.org running on top of a Python framework.

Their mirroring page suggests they most likely use Apache.

"NOTE: Some of our maintainers prefer to use web servers other than Apache, such as Nginx. While this is permitted (as long as everything ultimately works as directed), we do not officially support these setups at this time"


Seems Python.org runs on the Django framework.


From the paper it appears to be an authorization service that decides what rights a particular user has. Not a web server or CDN. It mentions it being CPU-bound, though it isn't clear to me why it would be, or why JS wouldn't work well enough for that.

The paper makes it clear they were evaluating languages based upon efficiency.

The Rust implementation was more efficient than the JS one. A CPU-bound service is of course bottlenecked at the CPU, so it benefits directly from efficiency.

At scale, it makes sense to replace this with Rust. JavaScript did the job, but did not provide the same efficiency as Rust.

I'm not clear on why this is particularly CPU heavy, though: "the authorization service that determines whether a user is allowed to, say, publish a particular package"

Lots of users. Anything will be CPU-heavy if you give it enough work.

Except the things that end up being memory-bound instead, but the NPM database isn't large enough for that.

> Anything will be CPU-heavy if you give it enough work.

Not in a relative sense. If authorization is 5% of the work, scaling it leaves it at 5% of the work, and it's never a bottleneck. Authorization was being a significant bottleneck, not a tiny percent, and that is somewhat surprising.

Obviously authorisation will be a huge overhead compared to sendfile + nginx, right? Am I misunderstanding what npm does?

I mean, to use sendfile you need to open the file, and that does a permission check too...

No, I'm talking about private NPM, right? The perms on the file system are not equal to (or as costly as) the auth I need to access my private NPM repo.

Couldn't you implement the authorization logic in, say, Redis? Then the npm service is I/O-bound again, and everyone is doing the job they're optimized for.
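For what it's worth, package-publish permissions map naturally onto Redis sets, so each check becomes a single O(1) SISMEMBER round trip. A minimal sketch, with an invented key scheme (`acl:publish:<package>` holding allowed user IDs); the helper just builds the command so the shape is visible without a running Redis server:

```javascript
// Sketch: modeling "may user X publish package Y?" as Redis set membership.
// The key scheme and user IDs are illustrative, not npm's actual schema;
// with a real client this would be `await client.sIsMember(key, user)`.
function publishCheckCommand(user, pkg) {
  // One O(1) SISMEMBER round trip per authorization check.
  return ['SISMEMBER', `acl:publish:${pkg}`, user];
}

// Example: the command a real Redis client would send.
// publishCheckCommand('alice', 'left-pad')
//   -> ['SISMEMBER', 'acl:publish:left-pad', 'alice']
```

With the permission data living in Redis, the Node service itself does almost no computation per check, which is the "I/O-bound again" property being suggested.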

Authorization checks aren't that expensive. The overhead of using an external service might well make it take up more hardware overall.

It's not clear to me either. It makes sense to use Rust for CPU-heavy tasks, but a CRUD service that does authentication would be fine in Node.js, since all the low-level crypto is implemented in C. So I'm not sure exactly what they mean; tbh the paper is very light on details.

Some CPU-heavy operations, like parts of the crypto module, are not put in the threadpool. While they may be running C code, they will block the main thread while executing. See https://github.com/nodejs/node/issues/678

Perhaps the new worker threads alleviate this, but I'm not sure (it's still an experimental API).

My gut feeling is that the user keys are not randomly generated, but are actually encrypted strings that contain the permission values. Decrypting them with modern encryption algorithms (like ChaCha) is pretty CPU-intensive.

"JavaScript did the job", but not very well, apparently. They specifically say that they "were able to forget about the Rust service because it caused so few operational issues". Add to that the increased CPU and RAM efficiency that generally comes with a Rust implementation, and the Rust rewrite looks like a no-brainer.

Well, I'd say that if it's an authorization service, there will be cryptographic calculations, which are "heavy" on the CPU.

I don't think that's what it does. "the authorization service that determines whether a user is allowed to, say, publish a particular package"

Also, I would assume any crypto in Node is already written in C, with JS calling into it.

Users who suggest cryptography are getting downvoted (myself included). I am curious as to why. Is it perceived as spam, or as an attack on Node.js? Encrypted cookies and JWTs (JSON Web Tokens) rely on a similar strategy, so this is pretty standard, and it would be pretty secure as long as the encryption key is not unique. So this is definitely not criticism of the npm team, just theories as to why what one would assume is a database or memory bottleneck is being presented as a CPU bottleneck in this scenario.
