I've seen a lot of people criticise npm and their policies, but I've never come across a solution. npm has its flaws, and while there are abuses like the everything package, is-odd, and left-pad, there are also many useful packages like vue and sortable, without which development would be a huge pain.
So not asking rhetorically, if we had all the insight and knowledge we have now, how would you make it different?
I know it's five hours later and this question has already spawned dozens of responses, but it's worth thinking and speaking clearly if we're trying to arrive at a solution for something. We can start by saying exactly what we're talking about—how do you make what different? You mention npm "and their policies" but then switch gears and talk about "is-odd", which is not a policy issue; it's something else entirely.
If you want answers, state clearly what _specific_ problem you're trying to solve. Whatever the solution to it might be, vague and fuzzy questions—while magnets for chatter since they can stand in for whatever someone wants to read out of them—are not the way to get there.
(You could say that this is needlessly tedious because everyone already knows what we're talking about, but that this isn't true is exactly my position. It's certain that something like half the people reading, thinking, and writing have one thing in mind, while the other half are thinking of another—and the third half are thinking about something different from either of those. We're also programmers, so dealing with tedium and the constraints of having to be explicit should be second nature.)
> Whatever the solution to it might be, vague and fuzzy questions—while magnets for chatter since they can stand in for whatever someone wants to read out of them—are not the way to get there.
This is a great line. If HN had a quote of the month or something, this should be nominated for that.
You design a language for a purpose (which could be anything), then you develop and mature its features to better fit its use case.
html, a crappy, defective xml implementation, refuses to grow up. js, while great for little html tweaks, is not adopting any of the useful features found in popular npm packages; it was actively developed for 2 weeks. Ripping off its head (nodejs) gave us a poor sailor's jargon ~ but without the boats!
Therefore there is nothing wrong with npm; she is a fine ship. The harbor doesn't want to take its much-desired cargo, so it must sail the seven seas forever, mon capitaine!
HTML came before XML. Also, how is HTML crappy when it is used by millions of web sites and is one of the most successful technologies in the past 35 years?
Does this mean it is perfect? No. Is it "crappy"? Nope.
Also, while JavaScript had a rushed development cycle, it has grown over the past 20-30 years and you can clearly write some great programs in it. It also has some very good features. My favorite is that you can pass functions around as arguments. It got this before a lot of other mainstream languages.
The core of the problem is micro-dependencies. It seems that in the JavaScript ecosystem, developers have no awareness of the costs of complexity.
When you wonder whether to add a dependency, you should ask yourself: what are the upsides and downsides of adding it? One downside is always that by adding a dependency, you add a potential security problem, a potential point of breakage, and more complexity.
There are situations where these costs are well justified: if the dependency is stable, from a trustworthy source, and provides functionality that you cannot quickly implement yourself. But if you include a dependency that is effectively one line of code, the question answers itself: the cost of adding it is completely unreasonable. If your list of dependencies grows into the hundreds, you're doing something wrong.
Devs working with core developers to create more first-party packages would be a good start. I don't need 12 different implementations of sorting for Vue/React/[insert SPA framework of the month]. I just need one really good sorting library. With it, we can move to fewer overall dependencies on random packages.
There are two massive reasons why js got here, with a million packages for tiny things and a culture of using them: browser cross-compatibility requiring complicated workarounds for easy-seeming tasks, and the introduction of promises + async/await to node.js after the standard library already used callbacks.
When you combine those together, you end up with a situation where "normal" js code not from a library can't be trusted on the front end, because it won't work for x% of your users, and offers a clumsy API on the backend that you'd prefer be wrapped in a helper. Developers learnt that they should reach for a library to e.g. deal with localStorage (because on Safari in private mode the built-in calls throw an error instead of recording your data and discarding it after the tab closes) or make an HTTP request (because node doesn't support fetch yet and you don't want to use the complicated streams-and-callbacks API from the standard lib), and they propelled that culture forward until everyone was doing it.
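For example, a minimal sketch of the kind of localStorage wrapper people reached for (my own illustration, not any particular library):

// Defensive localStorage access: older Safari in private mode throws on
// setItem, so every call is wrapped and falls back to an in-memory store.
const memoryStore = new Map();

function safeSet(key, value) {
  try {
    localStorage.setItem(key, value);
  } catch (e) {
    memoryStore.set(key, value); // quota/security error: keep it for this tab only
  }
}

function safeGet(key) {
  try {
    const v = localStorage.getItem(key);
    return v !== null ? v : (memoryStore.has(key) ? memoryStore.get(key) : null);
  } catch (e) {
    return memoryStore.has(key) ? memoryStore.get(key) : null;
  }
}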
Modern JavaScript reminds me a lot of BASIC, Pascal and other 70s and 80s languages. Even C pre-ANSI.
We’ve been blessed in recent years that either languages are fully open source and come with a reference implementation, or a standards body governs the implementation detail. Sometimes even both.
Whereas JavaScript is really more a group of languages, each with their own implementation quirks.
ECMA was intended to bring some sanity to all of this. And it’s definitely better than it was in the JScript days of IE vs Netscape. But there isn’t anything (that I’m aware of) that defines What should be a part of the JavaScript standard library.
Wouldn’t it be great if there were a libc for the JS world? Something portable and standardised.
> I don't need 12 different implementations for sorting on Vue/React/[insert spa framework of the month].
This feels like a bit of a strawman, since sorting is already in the standard library and there aren’t in fact popular sorting packages for each framework (that would in fact be ridiculous).
If you want to start a real debate though, bring up date/time pickers.
There are multiple date picker, time picker and datetime picker packages for each framework, and there are debates with good points on all sides about whether the browser-provided pickers are sufficient, or whether this is an area where a level of customization is needed and what that level is keeps changing as people discover new ways of designing date/time pickers and new use cases arise that require different tradeoffs. It’s both really frustrating but also kind of understandable.
There are still so many basic things that aren't in the JS stdlib, though. A good example is Map - if you need to use a tuple of two values as a key, you're SOL because there's no way to customize key comparisons. Hopefully we'll get https://tc39.es/proposal-record-tuple/ eventually, but meanwhile languages ranging from C++ to Java to Python have had some sensible way to do this for over 20 years now.
const idx = [1,2]
const m = new Map
m.set(idx,"hi!")
console.log(m.get(idx)) // outputs "hi!"
console.log(m.get([1,2])) // outputs undefined
That last line has created a new array object, and Map is made to be fast, so it checks equality by reference. Ah, which is what you want to be able to change. I guess you would want to pass a comparator function to a new Map, so that it does a deep equal. That would be faster than what you have to do now:
const idx2 = String([1,2])
m.set(idx2, "yo")
console.log(m.get(idx2)) // yo
console.log(m.get(String([1,2]))) // yo
That is precisely a key comparison issue. That is why I spoke about a tuple of two values; tuples by definition don't have a meaningful identity, so reference comparison is utterly meaningless for them.
Stringification is a very crude hack, and it doesn't even work in all cases - e.g. not if a part of your key still needs to be compared by reference. Say, it's a tuple of two object references, describing the association between those two objects.
Either way, the point is that this is really basic stuff that has been around in mainstream PLs for literally many decades now (you could do this in Smalltalk in 1980!).
There is a trivial way to have custom key comparisons: write a function that returns the key you want. You can implement equals() using some kind of serialization, or using a lookup table of references - whatever you want!
Of course, Records and Tuples would greatly simplify the process.
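For instance, a minimal sketch of the key-function approach (my own illustration, not a standard API):

// A Map keyed through a user-supplied key function. toKey must return a
// primitive that is identical (===) for inputs you consider equal.
class KeyedMap {
  constructor(toKey) {
    this.toKey = toKey; // e.g. JSON.stringify for plain data
    this.inner = new Map();
  }
  set(key, value) { this.inner.set(this.toKey(key), value); return this; }
  get(key) { return this.inner.get(this.toKey(key)); }
  has(key) { return this.inner.has(this.toKey(key)); }
}

const km = new KeyedMap(JSON.stringify);
km.set([1, 2], "hi!");
console.log(km.get([1, 2])); // "hi!" - distinct arrays, same serialized key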
Writing a key comparison function is not a problem. The problem is that Map does not have any way to use such a thing; it always compares using a predefined equality algorithm that is by-reference for all aggregate data types.
I didn't say that you could pass a key comparison function directly into Map.
What I meant was that it's possible to emulate a custom key comparison predicate.
You just have to have a function that returns a value for each input that behaves the way you want under strict equality comparison.
Implementing an `equals` function that returns a boolean is more convenient, sure.
Serializing to strings or other primitives (for plain objects with only JSON-serializable values, that would be JSON.stringify) would of course be possible with object keys, too. And that's probably what you want for "record"-like objects, right?
And if you want better performance, or to compare non-primitive values, you'd have to do something more complex - that's what I meant by the lookup table.
But I imagine if you deep-compare large Record objects a lot, the performance wouldn't be any better, because the engine still has to do a deep comparison.
If I am not mistaken, Records/Tuples are in fact strictly limited to this case: they can only contain primitives and other Records/Tuples, not object references.
So basically there is no difference from having a function serialize() that just stringifies your plain object, maybe with a caching layer in between (WeakMap?).
OK, thinking about it, the proposal really would help to avoid footgun code where performance optimizations are lacking, and too many deep comparisons or too many serializations are performed.
> There are multiple date picker, time picker and datetime picker packages for each framework, and there are debates with good points on all sides about whether the browser-provided pickers are sufficient,
Safari (iOS and MacOS) still doesn't have full support for the date time picker, which is why there are so many alternatives.
That’s a really good way to stagnate imho. I’d rather have 10 sorting libraries that each specialize or make different trade-offs than one library that tries to do everything.
That said, you can still have a core set of “blessed” packages that serve the common needs.
I don't see how creating a definitive sorting library is stagnation compared to having 10 mediocre libraries that are all missing some sort of critical functionality.
Your argument seems to be "just write good code instead of bad code". My argument is "the best way for good code to exist is to enable and support multiple options". Because if you have only one option and it's bad then you're screwed with no recourse. C++ and Python have, imho, many horrible API designs and we're stuck with them forever. This is stagnation.
Rust has a good standard library and also a large community of libraries. Sometimes those community libraries get promoted to std because they're strictly better. Sometimes the std version of hashmap is slow because std insists on using a cryptographically secure hash when 99.99% of use cases would be better served with a less secure but faster hashing algorithm.
Like many things in life the ideal scenario is a benevolent dictator that only makes good choices. In practice the best way to get something good is to allow for multiple choices.
<insert parable of pottery class graded on quality vs quantity>
The problem with this argument is that JS also has many horrible API designs, which seem to get replaced by equally horrible API designs, just with faster churn.
Meanwhile, the horrible C++ and Python API designs at least offer the needed functionality, even if the code looks ugly.
Forgive me for nitpicking your Rust example, but you can define your own hashmap based on the standard hashmap with a different hash function (it's generic over its hasher). I have done it.
Right. I'd be very surprised if anyone looks at languages with strong standard libs and says "I wish they had the kind of sorting I can pull in from npm"
Because who is going to bother working on any packages if they risk rejection in the end? Have fun with your ______ package because it's never going to improve.
There are plenty of languages with vast standard libraries which also have 3rd-party libraries offering the same feature as something in the stdlib, but enhanced along a specific metric.
A well-designed JS standard library that also includes a set of protocols (interfaces) would make such a huge difference in QoL. It would also likely be the biggest contributor to reducing bundle sizes. The protocols (iterable, async iterable, etc.) would ensure that the rest of the ecosystem can also innovate and participate at a similar level of ergonomics by implementing them.
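To make the protocols point concrete, the existing iterable protocol already works this way; this is plain standard JS, nothing hypothetical:

// Any object implementing Symbol.iterator participates in for..of, spread,
// Array.from, destructuring, etc., without sharing a base class.
const range = {
  from: 1,
  to: 5,
  [Symbol.iterator]() {
    let current = this.from;
    const last = this.to;
    return {
      next: () => current <= last
        ? { value: current++, done: false }
        : { value: undefined, done: true }
    };
  }
};

console.log([...range]); // [1, 2, 3, 4, 5]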
While I agree here, you also have to remember that additions to the JavaScript standard increase the amount of time and effort needed for new browsers to enter the space.
The JavaScript standard (the web APIs, mainly) is already very complex, with Web Workers, Push Notifications, Media Streams, etc., so additions to it should be made cautiously -- once an API is implemented, it's there forever, so the bar for quality is much higher than that of some npm library.
A JS standard library would be a drop in the bucket compared to the size and complexity of the DOM libraries and implementing a usably performant JS engine.
Yes, it should be done carefully. There are also plenty of examples of how this can be done well, by experienced engineers. For example, the Dart standard library (https://dart.dev/libraries - core [1], collection [2] and async [3] in particular) is a very good model that would fit JS fairly well too (with some tweaks and removals)
> A JS standard library would be a drop in the bucket compared to the size and complexity of the DOM libraries and implementing a usably performant JS engine.
It's still a nonzero amount of complexity. I see a lot of "v8 is really hard to compete with" comments on here so this feels very pertinent to mention. You can't have it both ways.
> Yes, it should be done carefully. There are also plenty of examples of how this can be done well, by experienced engineers. For example, the Dart standard library (https://dart.dev/libraries - core [1], collection [2] and async [3] in particular) is a very good model that would fit JS fairly well too (with some tweaks and removals)
>
> [1]: https://api.dart.dev/stable/3.2.4/dart-core/dart-core-librar...
This one, at least, looks somewhat inspired by JavaScript.
There's a difference between features that need to be implemented as part of the engine such as Web Workers and those that can be implemented as a library, such as sorting; the latter can be shared between implementations much easier.
If that standard library were written in JS, a new browser (or rather, the new JS engine that is part of the browser) could just use an existing implementation (a reference implementation, maybe?) - no need to reinvent the wheel in every part of the browser.
> If that standard library were written in JS, a new browser (or rather, the new JS engine that is part of the browser) could just use an existing implementation
That sounds great, but I'm doubtful of the simplicity behind this approach.
If my understanding is correct, v8 has transitioned to C++[0] and Torque[1] code to implement the standard library, as opposed to running hard-coded JavaScript when setting up a new context.
I suspect this decision was made as a performance optimization, as there would obviously be a non-zero cost to parsing arbitrary JavaScript. Therefore, I doubt a JavaScript-based standard library would be an acceptable solution here.
> Hasn't everyone pretty much given up on making a new (standards compliant) browsers after Microsoft gave up?
There's plenty of competition, even if the current projects are in a beta (or even alpha) state. Consider the LadyBird browser developed by SerenityOS, or Servo.
I don't know, I think "batteries included" standard libraries got a bad reputation because Python's standard library is so full of crap, so lots of people thought the whole idea was fundamentally bad. But I think the correct conclusion is just that Python's standard library is bad.
Go has a big standard library too and it's mostly very well designed, useful and avoids fragmentation.
I think a similar thing happened with compiler warnings and C/C++. The language is error prone so people want warnings but a lot of the warnings don't have good solutions (e.g. signedness mismatches) so people tend to ignore them. Also they aren't easy to control, e.g. from third party dependencies.
So some people got the idea that warnings are fundamentally wrong and e.g. Go doesn't have them. But my experience of Rust warnings is that they are totally fine if done right.
Don't allow un-publishing package versions. If they are literally malware, they can be manually removed by npm admins. If a court orders a takedown due to copyright, that's also something npm admins can handle. If you want to be able to un-publish, then just publish on your own server (or github etc).
When analyzing dependencies for display in the npm web UI: as a package exceeds 40 direct or transitive dependencies, abort the analysis and highlight the package in red for having excessive dependencies.
If installing locally, you get what you get, don't install random or crazy packages, stick to well known high-quality minimal-dependencies packages. nodejs does include file reading and writing, http server, http client, json ... that will take you pretty far. Master the basics before getting too fancy. And remember, you don't need some company's client package just to make some http requests to their API.
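For what it's worth, a rough sketch of how you could enforce that 40-dependency threshold locally today, assuming npm 7+ (for `npm ls --all --json`) and a clean install:

// Count unique transitive dependencies of the current project and warn
// past a threshold. Assumes `npm ls` succeeds (no missing/extraneous deps).
const { execSync } = require("child_process");

const tree = JSON.parse(execSync("npm ls --all --json", { encoding: "utf8" }));
const seen = new Set();

function walk(deps = {}) {
  for (const [name, info] of Object.entries(deps)) {
    seen.add(name + "@" + (info.version || "?"));
    walk(info.dependencies);
  }
}
walk(tree.dependencies);

if (seen.size > 40) {
  console.warn(`Excessive dependencies: ${seen.size} (threshold 40)`);
}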
I’ve started thinking package management involves too much trust now. Ideally, though probably impractically, projects should check in their packages like they used to under /lib or /third_party, and be much more suspicious of new package dependencies.
Basically, you would need to start accepting that you are responsible for any dependencies you choose to include. Any upstream changes you would need to evaluate and bring in or patch yourself.
Definitely an impossible task given how broad and deep modern package dependencies are, but at least you’d start feeling the insanity of having all of them in the first place :P.
If NPM made some tweaks this might become trivial. Keep a node_modules/packagefiles with all the .zips that you commit to your source control. The expanded files can be kept out as they are now recoverable just using zip!
> Keep a node_modules/packagefiles with all the .zips that you commit to your source control. The expanded files can be kept out as they are now recoverable just using zip!
Wouldn’t the opposite be better? I’m not sure you could take advantage of the vast majority of files in the zip files being unchanged if you kept compressed archives.
Not sure what you mean, but typically you don’t need to track changes of libraries at that level. At least not in the context of a repo using those libraries. I am thinking of treating them like binary .dlls.
Source control is really good at compressing text files as they evolve over time, but isn’t optimized to handle binary assets. Since a single-line patch changes the entire zip archive, you’d risk growing the size of the repository based on the number of patches.
IIRC, the Maven crowd was criticizing npm's decisions from the get go because they chose to ignore many of the problems the Java community already solved a decade before.
One issue is npm will allow arbitrary code to execute as part of an install script for a package, which allows a class of attacks that aren't possible in the maven world.
Namespaces in Maven seem like they're clunky. The pseudo-DNS thing where the first section is an actual domain but the second is whatever you want is quite janky, as is not matching the namespace to the package. Plus domains are transferable themselves and it seems like a bad idea to use them as identifiers.
Not to say that npm shouldn't have had namespaces by default, but I think there's good reason not to blindly do everything the way the Java community did.
> Not to say that npm shouldn't have had namespaces by default, but I think there's good reason not to blindly do everything the way the Java community did.
I'm not saying that they should have blindly copied Java, but they should have had something.
The entire reason this is a big deal is that people don't know what their dependencies are. The left-pad incident wasn't a big deal because it was pulled, it was a big deal because no one could easily fix their builds and didn't even know they were depending on it, because it was a dependency of a dependency of a dependency.
While it's ridiculous to expect that people will audit every single dependency and sub-dependency, it's not ridiculous to expect tooling to do the same.
Packages should be given an overall quality rating (and honestly, that might be great for an ecosystem as large, diverse, and welcoming-to-beginners as JS/TS), where part of the score comes from the number of distinct dependencies/sub-dependencies -- a social package score, if you will. If a package causes the dependency graph to explode, give a warning before installing it.
Then, if you're NPM, you don't need all of these convoluted and exploitable policies around un-publishing.
Whether you're storing your own copy of a given dependency and whether you've done code review for it are orthogonal concepts. (You can check it in and perform the same amount of review that people do when deferring to `npm install` for late fetching, i.e. none.)
Conflating these two not-unrelated-but-still-distinct concepts is a big contributor to why the current state of the art is so fraught.
I'd be called insane if I suggested it.
I work with dotnet and I'd rather not vendor all the code in Newtonsoft.Json and manually review each line.
I mean where does it stop?
Why not have everyone in the world code review asp.net and dot net libraries for every single website project at that rate?
Rust’s Cargo vet offers an answer to that question.
You can import a list of audits from trusted auditors, which should cover all popular packages. Now you have to audit dependencies that aren’t well-known in the community, which really is the set of dependencies that you should take an extra look at. The big popular JSON libraries can be audited by either Microsoft or some of the other large projects that are using them.
You’d explicitly share your trust list in your audit file, and anything (updates or new packages) that isn’t trusted by you or one of your listed auditors is flagged for auditing.
Personally I would like it and the ecosystem to just cease to exist overnight. Nothing on earth has caused so much pain, misery, suffering and agony, apart from possibly PHP.
Our devops guys scream from the seething pain whenever they have to debug some pile of shit that decides it won't build unless all the runes are aligned precisely and all the RAM in the universe is available on the build runners. And pushing this to the developers results in importing more packages, thus adding to the burning tyre fire.
And after several hours of builds and 9000 layers of packages you wake up one morning and in that 50 meg chunk of javascript that is excreted from the process, someone managed to inject a "Slava Ukraini!" banner into your web app.
Over-reliance on third party dependencies is a choice. One could argue that it's unreasonable not to do it if you want to stay competitive but good luck changing human nature then. If there are shortcuts, they will be taken.
While this is true, when the standard library shipped by NodeJS lacks so many VERY BASIC features that every other language has, OF COURSE developers choosing (or being instructed) to use JS so that frontend and backend languages are synced are going to reach for packages to provide whatever functionality should already exist, like: "Jim".leftPad(4)
The one thing clear in JavaScript is that if some developers think there is a better way, they will develop it and use it.
That hasn't happened with NPM, because it's about as useful as it conceivably could be.
The criticisms really amount to nothing, and tend to come from developers who don't even write JavaScript.
It doesn't have to be as painful as C++ without package managers, but it should make every developer spend about 5-10 minutes of manual work to add each direct dependency, or one minute for each new dependency in the dependency closure.
When you use 'go get' to add a package to your Go project, it actually fetches the code through a Google proxy which saves a snapshot of the commit in question. Even if the original source goes away, they should have a copy of every version of the library ever fetched via their tool, and devs can continue to build existing stuff.
(If you don't want Google to see what packages you're fetching, you can also turn this off with the GOPROXY environment variable.)
I don't know that there's a solution because the fundamental cause of the problem is that Javascript has a huge dev base and everybody wants to have at least one active NPM package they maintain for their resume. Nobody ever asked me as a Perl programmer what CPAN packages I've created because very few Perl programmers made them, but hiring managers will look at Javascript devs' NPM footprint.
This problem is a symptom of "move fast, break things" mentality that pervades the JS (and, more broadly, the web) ecosystem. The result is an ecosystem that is specifically optimized for moving fast and breaking things - which is a lot easier when the stable core is tiny.
The solution is a proper deprecation mechanism with a grace period for migration. Restricting removal or allowing instant removal are the extremes that cause trouble.
Have smarter users. If your package breaks because it depends on trivial code which got deleted, you shouldn't have depended on that in the first place.
Preventing people from deleting their code -- always, or even just sometimes -- was never the right solution.
I know "have smarter users" sounds like a joke but a lot of the problem really is cultural. In most languages you would write a two line leftpad function, in the js world everyone will tell you you're doing it wrong and should use a library.
It's personal taste perhaps, but I don't see the appeal of package management in Golang. I find pip, NPM, RubyGems, NuGet, Cargo,… easier to work with. The go.mod syntax is what it is, and doing updates or fixing conflicts isn't easy.
Not having a registry is neat, but I'm also unsure of what is going to happen over time as dependencies may be moved or removed. You can see that with old Maven pom.xml where some dependencies do not resolve anymore.
> I'm also unsure of what is going to happen over time as dependencies may be moved or removed
That's what the Go module proxy is for. The authors can move or remove their repositories as much as they want, I as a dependent am not bothered by it. They would have to go through an official vetted process to get it removed from the proxy.
Oh like this. I’m not sure I would enjoy overloading my git repository and merge requests with the dependencies. I was thinking about having a proxy or forking all the dependencies.
The thing is though that those dependencies are part of your code. Seeing how they're changing through PRs and commits is actually a feature IMHO and not a bug.
Soft deletes. You can delete a package and it stops being advertised but a shadow copy of referenced versions are kept for anything that depends on it. NPM spews warnings when this happens.
Once the referencing packages are deleted or modified, the shadow versions can be dropped.
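A sketch of what such an unpublish handler could look like (hypothetical registry internals, purely to make the idea concrete):

// Hypothetical registry-side logic, not npm's actual code.
async function unpublish(pkg, version, registry) {
  const dependents = await registry.findDependents(pkg, version); // assumed helper
  if (dependents.length === 0) {
    await registry.remove(pkg, version); // nothing references it: hard delete
  } else {
    await registry.hide(pkg, version); // soft delete: stop advertising it
    await registry.keepShadowCopy(pkg, version, dependents); // warn on install
  }
}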
Programming is, fundamentally, the imposition of a chosen order upon the world. You can easily distinguish somebody who's new to programming by the lack of choosing or the lack of effective order, and I think it quite fair to call them "not yet a programmer" while they're in that phase.
Sure, theoretically web devs can be programmers. But in practice, choice is overwhelmed by happenstance, and/or order does not follow from the choice that is made. And this isn't just on random websites (after all, 90% of everything is crap), but even for core tooling.
This is a vague statement filled with poetic language that conveys very little useful information. I can't imagine trying to parse this as a non-native English speaker and extract any sort of meaningful information from this comment.
* if you just bash on the keyboard randomly until you get a result, or just copy-paste from StackOverflow, or just let an LLM spew something, that's not programming - there's no choice.
* if you do deliberately choose something, but your choice fails to produce a meaningful result, that's not programming - there's no order.
Ok...and how exactly does this invalidate client side programming?
> * if you just bash on the keyboard randomly until you get a result, or just copy-paste from StackOverflow, or just let an LLM spew something, that's not programming - there's no choice.
To add a more substantive example to the conversation: do you really think the developer of Photopea built an entire Photoshop clone in the browser by mashing keys on his keyboard randomly? You think there was no choice in the development of a project like that?
> * if you do deliberately choose something, but your choice fails to produce a meaningful result, that's not programming - there's no order.
Do you think the client side developers behind something like Google Docs have failed to "produce a meaningful result"?
You've come up with an interesting set of criteria, but you have nothing to apply it to. That's why your original comment was flagged, and referred to as posturing.
I’m blown away by the reception of this article. It’s wildly low quality, generated SEO spam.
> It was removed, but then reemerged under a different scope with over 33,000 sub-packages. It's like playing whack-a-mole with npm packages!
> This whole saga is more than just a digital prank. It highlights the ongoing challenges in package management within the npm ecosystem. For developers, it's a reminder of the cascading effects of dependencies and the importance of mindful package creation, maintenance, and consumption.
> As we navigate the open source world, incidents like the everything package remind us of the delicate balance between freedom and responsibility in open-source software.
The "delicate balance between X and Y" is an LLM tic[0]. Especially llama -based language models have a habit of ending any longer piece of text with a phrase like that.
Source: have done a bunch of AI-assisted writing to develop my own skills and the tics and specific turns of phrases really pop out to me.
Ironically, the most common place I read the tic of ending a piece of persuasive writing with a deliberate, unconnected conclusion that doesn't persuade and instead equivocates or states a trivialism ... is in student papers or similar rote work graded like assignments.
Could be that there's a lot of that out there such that it's heavily represented in training data. Could just be a person doing a not-great writing job.
"accidentally broke NPM and all I got was this sweet permanent banner all over my Github
(thats impossible to remove since they probably had to code it up last minute before removing the org/repo)"
When I was consulting for an R&D lab at eBay, we open sourced a bunch of our work in a GitHub org. It was sanctioned by eBay's OSPO; they even linked to it from their main open source directory.
7 years later, long after the team disbanded, someone in eBay's current legal team decided that the (now archival) org violated eBay's trademarks. For the last year+, every time I've opened GitHub, I've been met with the same undismissable banner.
Since the only choice they give you is to contact support, I did. Unfortunately, their support team is not responsive, and has a completely separate notifications system. It took an inordinately long time for them to respond. (I have poor reception here so I can't check, but I think it was months.) Since I'm not in the habit of checking GitHub Support for new messages, when they eventually replied, I missed it. I had to start a whole new ticket. That too was months ago, and I still haven't heard back.
So because I did some work for a skunkworks eBay team in 2015, the top 150px of my GitHub are unusable, and there's apparently nothing I can do about it until some call center decides to write me back.
You could probably code up a simple browser extension to hide the banner via CSS if it bothers you a lot. Still only a band-aid fix that shouldn't be necessary.
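Even a userscript would do; a minimal sketch (the selector is a guess, you'd have to inspect the actual banner element):

// Userscript sketch: ".js-notification-shelf" is an assumed selector for
// the banner; inspect the page to find the real one.
document.querySelectorAll(".js-notification-shelf")
  .forEach(el => el.style.display = "none");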
Just as a side note about the screenshot at the end: I think it's from this Socket thing, but a package that depends on literally everything on npm having a supply-chain security score of almost 50 really makes me wonder whether that score is just artificially inflated on every other package. Can you even reach a score below 47?
Founder of socket here. npm has since unpublished the chunk packages that the 'everything' package depends on (or perhaps made them private), so those packages are no longer being taken into account in the package score.
You're right that a package that depends on literally everything would absolutely have a score of 0 in our system.
I'm not saying that other package managers handle it better - if authors wilfully misrepresent the state of their software, it is indeed not the remit of the package manager to correct them. If you started down that road, you'd probably end up with a library of tests (executed in the package manager's registry) to guarantee a non-breaking change, and at that point you have to trust the package author that the tests are indeed accurate, which is basically equivalent to trusting them to write the correct `version` string (unless you auto-generate the tests, which is an interesting idea but probably impractical).
I'm saying that the fact that it is (apparently) the norm in JavaScript-world that authors will regularly publish breaking changes that are not advertised as such, and that that is just an acceptable everyday uncommentworthy inconvenience, is surprising to me. How do y'all get anything done if you can't even trust SemVer enough to automatically pull in minor/patch updates to dependencies!?
It's not common at all. It can happen, but it's very rare. And it's basically never intentional.
In my experience the most common cause of breaking changes is accidentally breaking on older versions of the runtime, because the project is only running tests on the last version or two. Aside from that, the only notable example I can think of in the last year was a pretty subtle bug in what was supposed to be a pure performance optimization in a query language [1]. I think these are pretty representative, and not meaningfully worse than the experience in other languages.
Huh. I have got the wrong impression, then, from various blogs/articles which suggest never relying on SemVer because it's regarded as as-good-as-useless. Thanks for setting me straight!
And on my team we pin exact versions and use semver to inform the level of scrutiny when we manually update packages. Probably hasn't prevented any issues, but it helps folks sleep at night knowing our code doesn't change unless we tell it to.
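For anyone wanting the same setup: npm supports exact pinning out of the box, either per install with `npm install --save-exact some-package`, or globally via a one-line .npmrc setting that records exact versions ("1.2.3") instead of ranges ("^1.2.3"):

save-exact=true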
I believe there is no process or tool that could reliably do so (see sibling comment[0]). Indeed, at some point you need to trust an author that what they are publishing is what they say they are publishing, and authors being fallible means that mistakes _might_ slip by.
What I'm surprised by is the apparent cultural norm that this is just a regular everyday occurrence which entirely erodes any faith in the meaning of SemVer. Sure, we cannot 100% trust SemVer (because humans are fallible) - but there is a world of difference between trusting it ~99.9% and 0%. The JavaScript community (from the outside! I could be wrong!) seems to have simply accepted the 0% situation, and all the extra toil that goes along with it, rather than trying to raise the bar of its contributors to be better.
I don’t think this is quite true. I can expect semver to work correctly in about 70% of all instances (working with JS/TS every day).
Biggest issues are authors that keep their libraries at 0.x forever (every minor change can be a breaking one) and the ones that release a new major version every other week.
The times I do a minor update and something breaks are generally regarded as a bug by authors too.
Pinning to a specific version doesn't protect against the author unpublishing that version.
The problem with the `*` bug is that it means you can stop anyone from unpublishing future versions of their package by simply creating a package that depends on it with a `*` identifier and publishing that to the registry.
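Concretely, blocking unpublication takes nothing more than publishing something like this (names hypothetical):

{
  "name": "pin-your-package",
  "version": "1.0.0",
  "dependencies": {
    "victim-package": "*"
  }
}

Since "*" matches every version, including ones that don't exist yet, every future release of victim-package is instantly "depended upon".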
The way unpublishing works is broken. It would be better if unpublish would just hide the version. Then it would not matter if someone unpublished something with dependencies.
The article is totally misleading: there is no storage space running out and no system resource exhaustion. BTW, the total size is around 30 MB, less than 50 at most.
The only real issue is that no one can unpublish the package, because npm has a policy that if another package depends on your package, you can't unpublish it.
> The "everything" package, with its 5 sub-packages and thousands of dependencies, has essentially locked down the ability for authors to unpublish their packages. This situation is due to npm's policy shift following the infamous "left-pad" incident in 2016, where a popular package left-pad was removed, grinding development to a halt across much of the developer world. In response, npm tightened its rules around unpublishing, specifically preventing the unpublishing of any package that is used by another package.
Has no one thought of that? It seems like it should have been obvious that such an absolute rule could be easily abused to troll the system at scale.
Not sure if it's a problem though, perhaps all unpublishing requests should be reviewed by someone at the registry (and granted only when it makes sense).
Go is a little different here- originally it was totally decentralized, there was no central registry, just URLs. So you could depend on everything, no effect on other packages. No rules on publishing or unpublishing either, because you just get to run your repo the way you want.
At some point, Russ Cox got the Fear about this, and now https://proxy.golang.org/ is an on by default, caching proxy in the middle. You can still delete your packages whenever you want to though.
NPM as a soulless entity is easy to bash, while the creator of the package is a popular tech influencer so naturally has the support of the masses. If you’re going to complain about NPM, describe how you would solve it in their shoes.
Switch to a "softer" interpretation of "*" that, rather than blocking unpublishing of every version, allows unpublishing as long as at least one version remains available.
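A sketch of that policy check (my own pseudo-implementation, not npm's actual rules; satisfies() as in the semver package):

// Allow unpublishing a version as long as every dependent's range is still
// satisfiable by some remaining version.
function canUnpublish(versionToRemove, allVersions, dependentRanges) {
  const remaining = allVersions.filter(v => v !== versionToRemove);
  return dependentRanges.every(range =>
    remaining.some(v => satisfies(v, range)) // semver.satisfies(version, range)
  );
}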
Most articles say the page includes a Skyrim meme, but no one says what the meme is and I can't find anything relating to Skyrim on everything.npm.lol. This is very confusing to me.
He didn't kill someone's puppy, he just published some interesting data that others' code struggled to cope with. He was irresponsible perhaps, but I don't think he foresaw doing any real damage, and I don't think he needs to be especially sorry for it.
I haven’t compared it with killing someone’s puppy. I just think that it would genuinely be helpful to explain a little bit of the rationale and any insight gained from the totally predictable but apparently “unforeseen issues” [1].
It’s the world of worse is better and they’re going for the widest possible area of effect. Should we crucify these guys? 100% not. Part of this is on npm’s design and implementation. Part of it is cultural.
But these guys owe the people who were needlessly “inconvenienced” a little more than just the word “apologize”. Not their firstborn, but some rationale which justifies it, or which reveals that they realise it was a bit pointless or stupid.
Perhaps NPM should apologize for shifting blame and failing to address the root cause.
The wildcard "any version of dependency" preventing unpublish is clearly flawed. The "everything" package folks had no malicious intentions, and nobody would benefit from a long-winded, ashamed apology. If not for NPM's flawed unpublish policy the everything team would've unpublished to resolve the issue.
Do you think he should be ashamed? Granted I may have overlooked something, but as far as I can tell it wasn’t an intentionally malicious act, it was a bit of a curious experiment. Seems rather inline with the HN values to me.
Shame is a spectrum. I don’t think he should flagellate himself until the end of time. I think that they should be a little bit embarrassed that they haven’t published what they believed the risk of the everything package was.
Upon rereading the article, I can see that the word “unintended” is actually not Patrick’s but the recap author’s word.
Beyond that, you seem to be ascribing benign intent. Reading it from the horse’s mouth [1], it doesn’t seem like they had any intent other than trying to find out if it could be done. In a world of worse is better, creating the largest possible area of effect for your experiment seems to be a pretty easy way to amp up the consequences of your actions, regardless of the risk.
Why does he even need to apologize? If anything, npm should apologize and thank him for revealing a huge issue in their unpublishing policies unmaliciously.
I think there's a point when you're trying to do something really stupid and hack around the defences (e.g. rate limits and package JSON file sizes) that it's no longer an accident.
Yeah. I’m honestly not sure where any of this “package chaos” actually exists. I mean… there’s incompetence everywhere in every language, so yes there. But I’ve yet to run into a friend of a friend who has a horror story about these dependencies.