When we talk about sandboxing dependencies, we're talking about sandboxing at an API level, not an OS level -- in some languages (particularly memory-unsafe languages) that's difficult, but in general the intention isn't to put dependencies in a separate process; it's to restrict access to dangerous APIs like network requests.

Sandboxing might be something like, "I'm importing a function, and I'm going to define a scope, and within that scope, it will have access to these methods, and nothing else." Imagine the following pseudo-code in a fictional, statically typed, rigid language.

  import std from 'std';
  import disk from 'io';
  import request from 'http';

  //This dependency (and its sub-dependencies) can
  //only call methods in the std library, nothing else.
  //I can call special_sort anywhere I want and I *know*
  //it can't make network requests or access the disk.
  //All it can do is call a few core std libraries.
  import (std){ special_sort } from 'shady_special_sort';

  function save (data) {
    disk.write('output.log', data);
  }

  function safe_save (data) {
    if (!valid(data)) { return false; }
    save(data);
  }

  function main () {
    //An on-the-fly sandbox -- access to safe_save and request.
    (request, safe_save){
      save('my_malware_payload'); //compile-time error
      disk.write('output.log', 'my_malware_payload'); //compile-time error
      safe_save('my_malware_payload'); //allowed
    }
  }

We're not treating our dependencies or even our inline code as a service here -- we're not loading the code into a separate process or forcing ourselves to go through a connector to call into the API. We're just defining a static constraint that will stop our program from compiling if the code tries to do something we don't want; it's no different from a type check.
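
You can approximate the flavor of that today in TypeScript just by passing capabilities in as arguments and letting the type checker police the boundary. This is only a sketch with made-up names, and it assumes you also forbid the dependency from importing modules directly -- but the shape of the guarantee is the same:

  //Hypothetical capability type: the only thing the sandboxed
  //code can touch is what we explicitly hand it.
  type SafeSave = (data: string) => boolean;

  //The dependency's entry point only receives what we pass in;
  //with no `fs` import of its own, the commented-out write below
  //would be a compile-time error ("Cannot find name 'fs'").
  function shady_special_sort(items: number[], caps: { safe_save: SafeSave }): number[] {
    const sorted = [...items].sort((a, b) => a - b);
    caps.safe_save(JSON.stringify(sorted)); //allowed -- explicitly granted
    //fs.writeFileSync('output.log', 'my_malware_payload'); //compile-time error
    return sorted;
  }

  function safe_save(data: string): boolean {
    if (data.length === 0) { return false; } //stand-in for real validation
    console.log('would write to output.log:', data);
    return true;
  }

  shady_special_sort([3, 1, 2], { safe_save });

The obvious catch is that nothing in today's JS actually stops the dependency from importing `fs` on its own -- which is exactly why having the mechanism in the language is the interesting part.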

The difference between this and pure static analysis is that static analysis isn't built into the language and has to guess at intent. Static analysis says, "that looks shifty, let's alert someone." A language-level sandbox says, "I don't care about the intent, you have access to X and that's it."

Even in a dynamic language like JS, when people talk about stuff like the Realms proposal[0][1], they're talking about a system that's a lot closer to the above than to standalone services living in their own processes or threads.
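
For reference, the ShadowRealm iteration of that proposal looks roughly like the snippet below. Treat it as a sketch -- the API has shifted over the years and isn't shipped everywhere -- but the idea is that code evaluated inside the realm gets its own fresh built-ins and none of your objects, and only primitives and callables cross the boundary, so you decide exactly which capabilities get wired through.

  //Sketch of the ShadowRealm shape of the proposal.
  const realm = new ShadowRealm();

  //Evaluated against the realm's own globals, not ours.
  const double = realm.evaluate('(x) => x * 2');
  double(21); //42

  //Pulling a single binding from a module loaded *inside* the realm:
  //const special_sort = await realm.importValue('shady_special_sort', 'special_sort');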

This style of thinking about security lends itself particularly well to functional languages and functional coding styles, but there's no reason it can't work with more traditional class-based approaches as well -- you just have to be more careful about what you're passing around and which objects have access to what.

  class Dangerous () {
    unsafe_write (data) {
       //unvalidated disk access
    }
  }

  class Safe () {
    public Dangerous ref = new Dangerous();
    safe_write (data) {
       validate(data);
       ref.unsafe_write(data);
    }
  }

  function main () {
    Dangerous instance = new Dangerous();

    (instance){
      //I've just accidentally given my sandbox
      //access to `unsafe_write` because I left
      //a property public.
    }
  }

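For what it's worth, closing that particular hole in TypeScript-ish terms is just ordinary encapsulation -- keep the dangerous reference private so the only thing you can ever hand to a sandbox is the safe wrapper (again a sketch, names made up):

  class Dangerous {
    unsafe_write(data: string): void {
      //unvalidated disk access would live here
    }
  }

  class Safe {
    //Private, so handing out a Safe instance never exposes
    //unsafe_write directly.
    private readonly ref = new Dangerous();

    safe_write(data: string): boolean {
      if (data.length === 0) { return false; } //stand-in validation
      this.ref.unsafe_write(data);
      return true;
    }
  }

  const safe = new Safe();
  safe.safe_write('validated data');  //fine
  //safe.ref.unsafe_write('payload'); //compile-time error: 'ref' is private
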
Even with that concern, worrying about my own references is still way, way easier than worrying about an entire, separate codebase that I can't control.

[0]: https://github.com/tc39/proposal-realms

[1]: https://gist.github.com/dherman/7568885



