
So we need gpg signed packages :> And... all packages should be namespaced under the author who published them. And... I kind of want to say "once it's published, it's forever".

> And... I kind of want to say "once it's published, it's forever".

This is effectively the norm with more traditional, curated package managers. Say I release a piece of open source software, and some Linux distro adds it to their package manager. Under a typical open source license, I have no legal right to ask them to stop distributing it. They can just say "sorry, you licensed this code to us under X license and we're distributing it under those terms. Removing it would break our users' systems, so we won't do it."

The difference is that NPM is self-service - publishers add packages themselves, and NPM has chosen to also provide a self-service option to remove packages. I honestly wouldn't have a problem with them removing that option, and only allowing packages to be removed by contacting support with a good reason. (Accidental private info disclosure, copyright violation, severe security bug, etc.)

  I honestly wouldn't have a problem with them removing that option, and only
  allowing packages to be removed by contacting support with a good reason.
  (Accidental private info disclosure, copyright violation, severe security 
  bug, etc.)
Even Rust's Cargo won't allow you to revoke secrets [1]. I think this is the correct policy.

[1] http://doc.crates.io/crates-io.html#cargo-yank
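For reference, a yank only hides a version from new dependency resolution; the bits stay on the server. A sketch of the workflow (crate name and version are hypothetical):

```shell
# Yank a version: new projects can no longer resolve to it, but existing
# Cargo.lock files that already pin it keep working -- nothing is deleted.
cargo yank --vers 1.0.1 my-crate

# Yanks are reversible, which underlines that the artifact never went away:
cargo yank --vers 1.0.1 --undo my-crate
```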

Aside from secrets there is also sensitive data. If someone accidentally uploads some personal information, they need a way to remove it if, say, they receive a court order requiring its removal.

If they receive a court order, and there is no technical way to do that, then the court is out of luck. "A court might order it in the future" is not a design constraint on your decisions today.

Sure there's a technical way to do it: you unplug the server hosting it (or more likely, your hosting provider does that for you).

No court is going to shed any tears over the fact that this has wider consequences than if you'd been able to comply with a narrower takedown request.

This combined with the cost of hosting (I remember the ruby community freaking out over rubygems costs a couple years ago) makes me think maybe we're evolving towards decentralized dependency hosting. Something like Storj where users offset hosting fees with blockchain payments when dependencies are fetched.

The Go solution seems more reasonable and achievable: the host is part of the namespace. Instant decentralization.

There's nothing preventing decentralization with npm now; it's a matter of configuration. Tying the namespace to a host seems more like instant excessive coupling.

Tying namespaces to a hostname isn't really that controversial -- it's no different than email.

If you want to be your own provider then host your packages on your server(s) and tell your users to add npm.cooldev.me/packagename to their configuration.

If you don't want to host your own then you can choose from a few public providers like npmjs but then have to be subject to their guidelines, policies, and fees.

Throw in some automatic bittorrent support in the client to help offload costs and you've got something great.

npm already supports all of that except the bittorrent bit, with the proper configuration, and without requiring that idiosyncratic namespace convention. [0] I don't think bittorrent is actually relevant to most use cases. Most people complaining here just don't want their site to go down, so they should vendor or fork all their deps and run their own registry to support that. Downstream users of public modules can either go through npmjs or perform the same vendoring and forking work themselves.

[0] https://docs.npmjs.com/misc/registry
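Concretely, pointing the client at your own registry is a one-line configuration; the hostname below is hypothetical, and the server just has to speak the registry protocol documented at [0]:

```shell
# .npmrc -- per-user or per-project npm client configuration.
# Works with any replica of the public registry or a private one.
registry=https://registry.internal.example.com/
```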

There have been links to child porn in the Bitcoin blockchain. To date, this has not resulted in any courts preventing full nodes from running in the US.

This is why sites that don't allow package authors to "unpublish" have contact information, so that data deletion can be handled on a case-by-case basis.

I'm not sure how the court could force you to do something you can't possibly do...

"So what you're saying is, your computers cannot possibly not continue damaging the plaintiff's interests." "That's correct." "You're being honest with me." "Yes, your Honor." "Will the computers continue harming the plaintiff's interests if shut off?" "... That would be dreadfully inconvenient, your Honor." "Do you have a more convenient solution?" "No, your Honor." "You are hereby ordered to turn off your computers in 48 hours." "... You can't do that." "I can do a lot of things, including jailing you if you disobey my lawful authority. 48 hours."

Engineers often think that they are the first people in history to have thought "Hey, wouldn't it be easy to pull one over on the legal system?" This is, in fact, quite routine. The legal system interprets attempts to route around it as damage and responds to damage with overwhelming force.

What Patrick says is technically true. But before granting the "extraordinary remedy" of an injunction, U.S. courts would apply the traditional four-factor test, which includes assessing:

+ the balance of hardships between allowing the conduct in question to continue vs. issuing the injunction;

+ whether the damage being caused by the conduct in question could be satisfactorily remedied by a payment of money as opposed to a mandate or a prohibition; and

+ (importantly) the public interest.

See, e.g., the Supreme Court's discussion of the four-factor test in eBay v. MercExchange, 547 U.S. 388 (2006), https://scholar.google.com/scholar_case?case=481934433895457...

How about a blockchain-based NPM? Can't take all the computers down.

Legal, shmegal.

You can still be jailed for contempt of the order, though.

"I've found a clever workaround for court orders" doesn't work around that bit.

OK, now tell me how you can remove this file from BitTorrent (it's Fedora 18 KDE)


I'll wait

It's not about whether the removal is logistically possible, it's about whether a court can punish someone for failing to carry out the removal.

Even when the former is actually impossible, a court could still punish for the latter. "Ha ha ha I use technology to cleverly show how futile your orders are" is not the kind of thing you want to say to a court with broad contempt powers.

The court can't punish you for not being able to do the impossible. That's ludicrous. "We have shut down all of our servers, yes. We can't stop people from downloading this, no"

That's because all laws make sense and all people who enforce and judge them understand this.

Pay damages, then.

Pay damages because someone else uploaded something by accident and you can't fix it? It doesn't work like that.

It only doesn't work like that in the context of safe harbor laws.

If the safe harbor law protection doesn't apply, and the defendant is responsible for the illegal behavior, the defendant can absolutely be held legally liable and pay the legally-appropriate punishment.

Why should you pay the damages for something that's not on your server?

If the "forbidden" action was previous to proceedings and carried out in good faith by unknown parties, it would be very hard to sanction anyone.

Just live outside of the United States, and you'll be fine.

Yeah, there are literally no other courts outside the USA.

Yeah, because the US doesn't have treaties with most of the world...

Would that work if you did it before the court order?


that's why we'll Kurzweil ourselves into the computers that can't be shut down!

Or something like http://ipfs.io/

IPFS is cool, but it's still pretty far from being usable as a package management system... Some package management system could use it as a backend, though.

In fact, gx[0] is such a package management system.

[0]: https://github.com/whyrusleeping/gx

Yep, but someone should implement that first. Package repositories still have pretty much centralized control, and will for the foreseeable future.

npm already replicates to hundreds of other servers. Right now, it is practically infeasible to actually remove packages permanently.

That's why I'm looking into IPFS(https://ipfs.io) as part of my infrastructure. How that would look then, with IPFS...

> "So what you're saying is, your computers cannot possibly not continue damaging the plaintiff's interests." "That's correct."

> "You're being honest with me." "Yes, your Honor."

> "Will the computers continue harming the plaintiff's interests if shut off?" "No it wouldn't, your Honor.".....

And suddenly things like NPM can transfer the data to other machines, and those machines themselves can also provide to others. Deletions are impossible if people still want the content.

And IPFS guarantees that if a single node has the data, then any node can download it and also be part of the cloud that provides the data. Once it's out, it's impossible to retract.
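The "impossible to retract" property falls out of content addressing: the name of the data is a hash of its bytes, so any node holding an identical copy can answer requests for that name. A minimal illustration with plain SHA-256 (IPFS wraps this in a multihash, but the principle is the same; the file and its contents are made up):

```shell
# The same bytes hash to the same digest on every machine, so any
# holder of the bytes can serve requests for that address.
echo -n 'module.exports = s => " " + s' > leftpadish.js
sha256sum leftpadish.js   # this digest is the content's "name"
```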

> The legal system interprets attempts to route around it as damage and responds to damage with overwhelming force.

In other words, Hulk Hogan vs Gawker.

They would force the provider to facilitate the removal.

What if such a system was implemented using IPFS[0] (or similar) for storage?

[0] https://github.com/ipfs/ipfs

I'm surprised all package managers don't use an IPFS-like system that uses immutable state with mutable labels and namespaces. Now that IPFS exists, and provides distributed hosting, it's even easier.

As much as I agree, IPFS is still very much under construction and I don't think any known package managers got started after IPFS was reliable.

You can experiment with ipfs-backed git remotes though. That's already possible.

gx is a generic package manager on top of IPFS that uses git-style hooks for adding per-language support. It's already being used to manage dependencies on the go-ipfs project: https://github.com/whyrusleeping/gx

Bonus: there's also an IPFS git remote implementation! https://github.com/cryptix/git-remote-ipfs

Yes, the IPFS implementation might change, but not the content multihash addressing. Linking to data with those addresses is the generic 'package management' that solves all these problems (references to mutable data at mutable origins, circular dependencies, data caching, namespace conflicts). The specifics of resolving links will hopefully be something we don't think about much.

I've played around with ipfs.js for resolving links into eval'd js at runtime, and imagine an npm replacement would be pretty trivial. The IPFS peer-to-peer swarm seems stable to me, but you could also dump all your hash-named files into an S3 bucket or something as a fallback repo.

No signatures, not even (at the least!) an IPFS mirror as a backup option -- how can one trust NPM or the like?!

NPM doesn't even have immutable versions. Many would love to see this improved.

What do you mean by that? It used to be possible to republish a version (it broke our build when a dep was republished with a breaking change, that's how I learnt about it), but this was fixed some 2-3 years ago IIRC.

Somewhat related, I just coincidentally stumbled upon https://github.com/alexanderGugel/ied . "Upcomming Nix-inspired features", to paraphrase their README, could well prevent this debacle.

(And btw, We Nix users very much do hope to start using IPFS :).)

There’s already apt-get over Freenet:

Git has crypto for a reason. Every package manager must have it too.

There is already gx package manager: https://github.com/whyrusleeping/gx

Looks like the npm team will not be removing the ability to unpublish packages - see reply by core committer "othiym23" on https://github.com/npm/npm/pull/12017


There are a lot of problems with NuGet, but they got this right. I do wish there were a way to mark a package as deprecated, though. For ages, there was an unofficial jQuery package that was years out of date.

`npm help deprecate`
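For anyone unfamiliar, this is the command NuGet lacks: it attaches a warning to a package or version range without removing anything. A sketch (the package name and message are made up):

```shell
# Every subsequent `npm install` of a matching version prints this warning;
# the install still succeeds, so nothing breaks.
npm deprecate jquery-unofficial@"<2.0.0" "years out of date, use the official jquery package instead"
```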

Yep, unfortunately the same does not exist for NuGet.

Read the whole thread. Rather concerned by the final one. "Locking temporarily" to get away from the discussion?

That feels sort of like the online discussion equivalent of sticking your fingers in your ears and going "la la la I'm not listening".

I don't begrudge someone in their position ignoring a conversation to "take a break", but I would expect them to be capable of doing so without resorting to "suppressing" the ongoing group discussion.

Or it's a "we're discussing internally, and would rather not deal with the shit-show that Github issues becomes once the issue becomes politicized and rampant misinformation and misguided activism take over."[1] There will be plenty of time for people to froth at the mouth and complain that they chose one way or the other once they've made a clear decision, which as of the locking the thread to collaborators, they have not (the current thinking has been outlined, but they said they are thinking about it).

1: See the recent systemd efivarfs issue at https://github.com/systemd/systemd/issues/2402 and associated HN discussions, which was solved through a kernel fix. Pitchforks abound.

I suspect you're right, but honestly... his choice of language sounded much less like "we're thinking as a team", and much more like "you're all talking too loudly, you've given me a headache, so I'm going to shut you all up for a while".

You mean the response that says, verbatim: "I'm thinking about the points that have been made, and I'm sure that we as a team will consider them going forward"? Sure, he also says the behavior won't change for now, but that's the sane thing to do when the errors are rare, as changing something too quickly may introduce new bugs or unforeseen problems. Honestly, your interpretation of that comment is the exact reason why it's good to shut it down for a little while. The conversation gets so charged that even a "we need time to think about it" response is viewed negatively.

GPG isn't strictly necessary if you trust NPM's authentication system (of course, that's a big "if" for many folks).
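For those who don't want to extend that trust, the standard move is a detached signature over the published tarball. A sketch (file names are hypothetical; verification assumes the author's public key is already in your keyring):

```shell
# Author side: produce my-module-1.0.0.tgz.asc alongside the tarball
gpg --armor --detach-sign my-module-1.0.0.tgz

# Consumer side: verify the tarball against the author's public key
gpg --verify my-module-1.0.0.tgz.asc my-module-1.0.0.tgz
```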

Publish forever (barring requests to remove for legal compliance or whatever) is a good idea. Or at the very least, it should be a default option. And if you install a dependency that isn't "publish forever", you should get a warning.

This is what happens with Clojars. It is possible to remove packages, but it requires a manual email to the admins, along with an explanation, e.g. published internal lib accidentally. This prevents scenarios like this, but also cases where people want to 'clean up' things they no longer need, even though others are depending on them.

I think I'd just want to add that namespacing by author doesn't entirely fix the problem. In the rarer instances where there is a collision, we still have this issue of lawyers asserting trademarks.

"Would the real Slim Shady please stand up?"

We want multiple 'kik's and multiple Shady's simultaneously. So record the gpg sig of the author in package.json, and filter the name + semver against just their published modules when updating.

Depending on how unique you need to be:

  npm install <module> --save
  npm install <author>/<module> --save
  npm install <gpg>/<module> --save
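A sketch of what recording the publisher's key in the manifest could look like -- to be clear, `publisherKeys` is an invented field here, not something npm reads today, and the fingerprint is made up:

```shell
# Hypothetical manifest pinning a dependency to a publisher key fingerprint,
# so "kik" only resolves against that publisher's published modules.
cat > package.json <<'EOF'
{
  "dependencies": {
    "kik": "^1.0.0"
  },
  "publisherKeys": {
    "kik": "A1B2C3D4E5F60718"
  }
}
EOF
```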

On a side note, npm-search really sucks. It lacks a lot of fine-grained filtering. I'd love to be able to search by tags, or exclude results with an incompatible license, or even prioritize results by specified dependencies. npm-search needs love.

That's a good idea, but what if Kik lawyers come knocking on your door saying that you're breaking the law and this package cannot stay there forever, or for any moment longer?

Well it shouldn't be hard...oh wait. This is javascript. Good luck then. :)
