EDIT: my favourite part: is-even depends on is-odd.
Utilities like that are a function of using a weakly typed language. "Just use TypeScript" is an alternative solution, but even in TS you should still check that the number is within the bounds of Number.isSafeInteger.
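A quick sketch of why that check matters even under TypeScript, whose number type covers all doubles:

    Number.isSafeInteger(2 ** 53 - 1); // true  (9007199254740991)
    Number.isSafeInteger(2 ** 53);     // false (integer precision is already lost)
    (2 ** 53 + 1) % 2;                 // 0 -- mathematically odd, but 2**53 + 1
                                       // rounds to 2**53 as a double, so parity lies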
"2.2 % 2 === 1" works just as well, so why is the isInteger check needed? Why is the isSafeInteger check needed? And "a % 2" is already NaN (and thus false), although an explicit error is arguably better than silently eating it.
They shouldn't. It's silly. You're right that there should be "math", but while those things aren't part of the main JS ecosystem, people write utility libraries and publish them.
There is: it's https://mathjs.org/. More people should use it.
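A small taste, based on the examples mathjs itself documents (the exact API surface may vary by version):

    const { evaluate, mod } = require('mathjs'); // npm install mathjs

    evaluate('12.7 cm to inch'); // 5 inch (unit-aware expression evaluation)
    evaluate('sqrt(-4)');        // 2i (complex numbers out of the box)
    mod(7, 2);                   // 1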
For more complex micro-libraries, perhaps they could be collated into a single, larger high quality library.
> I created this in 2014, when I was learning how to program.
Someone probably just did this for the fun of learning and somehow it got adopted elsewhere.
I... don't understand...
A lot of JS devs wouldn't bother to check those things, so a lot of code really is improved by this library. It's kind of a shame that so many developers just skip over validating their inputs. That's the real problem.
Because, indeed, no-one should be rewriting obvious code like isArray, leftPad, isEven and so on.
Yes, that might cause some overhead in some situations (when you need isOdd, but not leftPad). But I'm certain that overhead is minimal. In practice, however, we currently see dependency-trees where these "should-be-stdlib" modules like "leftPad" or "isArray" are repeated numerous times in one project.
Any stdlib could even be optimized by browsers and engines, turning that "overhead" into a net benefit.
isOdd('') => false
isOdd(null) => false
isOdd(2n) => TypeError: Cannot convert a BigInt value to a number
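Those results are consistent with the implementation running Math.abs on the argument before any checks. A minimal sketch of that shape, reconstructed from the behaviour above rather than from the verified published source:

    const isNumber = require('is-number');

    module.exports = function isOdd(value) {
      const n = Math.abs(value); // coerces '' and null to 0; throws on a BigInt
      if (!isNumber(n)) {
        throw new TypeError('expected a number');
      }
      if (!Number.isSafeInteger(n)) {
        throw new Error('value exceeds maximum safe integer');
      }
      return (n % 2) === 1;
    };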
You are not supposed to reinvent the wheel. Somebody has already gone through the trouble of implementing the is-even algorithm, and it would be a waste of time to re-write it for no purpose. You may think that it is a simple algorithm, but I doubt you could implement it better than the people who uploaded these files to npm. Notice that the algorithm has many obscure corner-cases that you are likely to miss on your first implementation. Fortunately, it is already written and packaged; just use it.
Why does he first use Math.abs on the parameter and then type check the result of that? I'd think if you do an argument type check, you'd do it before using it. Just to make it not throw on null? I don't see the sense in that...
I.e., `Math.abs("-27")` yields `27`.
It might be the case that Math.abs does other *-to-number conversions also, so maybe this method is trying to take advantage of that. (Math.abs is usually implemented as a native function so without digging more deeply it's not obvious to me what the actual implementation does.)
UPDATE: Curiously, `Math.abs(null)` yields `0`, so a null argument passed to is-even would yield `true` I guess.
Also, whatever the logic for that conversion is, it is not the same as the built-in `parseInt` or `parseFloat`: `parseInt("27 meters")` yields `27`, but `Math.abs("27 meters")` yields `NaN`.
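That conversion is the language's standard ToNumber coercion, the same one Number() applies, whereas parseInt parses greedily from the left:

    Number('-27');             // -27 -> Math.abs('-27') is 27
    Number(null);              // 0   -> Math.abs(null) is 0, hence is-even(null) === true
    Number('27 meters');       // NaN -> Math.abs('27 meters') is NaN
    parseInt('27 meters', 10); // 27  -> parseInt stops at the first non-digit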
I think you're getting some down votes as you missed a closing tag.
That's 120k instances of people or scripts doing `npm install is-even` or the equivalent.
* Lodash - a general purpose set of library/utility functions by the way - has ~26M weekly downloads.
* request - an HTTP client library - has ~19M weekly downloads.
Probably because it just does
I’d also say that implying JS developers inherently have a wider range of experience and knowledge compared to developers of coreutils is flat out absurd.
I was way too ambiguous, so I see how you understood it that way. I meant in the downward direction. There are more packages from more people in JS, so you end up with lots of first projects, small experiments, etc. So you have 100% of coreutils being decent to amazing, while it's probably closer to 1% with NPM. Both ways have their advantages, but it's easier to footgun with the highly inclusive NPM approach.
From the short dependents lists, it looks like neither is frequently used directly. Edits to one or two dependent projects would likely be enough to largely eliminate them from the ecosystem.
Well, Deno (https://en.wikipedia.org/wiki/Deno_(software)) has that as one of its explicit goals. As well as fixing a lot of the other grown idiosyncrasies.
Another thing is that anyone can roll most of these tools into standard libraries if the source license allows it. So maybe some "rollup" packages are in order. You would get more than you need, but if you set up any sort of tree-shaking (which has been the focus of many build systems lately), then you only bundle what you actually used in the end, as sketched below. One other thing that may be an issue: if you have two major libraries and each one is missing one function that you need, then you need both...
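Roughly what that looks like, using lodash-es as a stand-in for such a rollup package (any ES-module build with a bundler in production mode behaves similarly):

    // Named imports are statically analysable, so the bundler can drop
    // every export you didn't touch from the final bundle:
    import { padStart } from 'lodash-es';

    padStart('5', 3, '0'); // '005' -- the rest of lodash never ships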
Just doesn't have any further dependencies.
Then again I don’t think even a library the size of std would change this picture much. Most of the dependencies in npm are too specific I think.
It does, if all you care about is mainly DOM stuff. That use case is taken off the table once you adopt any framework such as React, Vue, Angular, etc.
So basically, if I don't agree with your way of doing things: easy, I will just create a new module and publish it.
Things get really messy when hierarchies of modules use different modules.
Having basic functionality built in may fix it in the long term, but it will bring a huge rift in the JS community.
If TypeScript had a reasonable standard library (at least the size of the Java/.NET libraries) and these were tree-shaken down when compiling, wouldn't that go a long way?
But now I strongly encourage every developer to look at .NET Core and ASP.NET for web development.
There's a slightly steep learning curve at the start, but it's well worth it. Especially if you are a lone developer deploying to a VM in the cloud, the cost savings due to the enhanced throughput alone are substantial.
Creating a web api is a breeze.
Standard websites with a few pages and some forms can take advantage of Razor Pages.
And Blazor is an awesome alternative to React/Angular/Vue etc.
Edit: Also, the full tech stack, including the developer tools, is open source, with proper licensing. And the open source dev tools experience is also very, very good. VS Code is an awesome IDE.
 - https://intercoolerjs.org
 - https://htmx.org/
 - https://github.com/turbolinks/turbolinks
But we can't just blame this on environmental factors; a lot of this feels very much self-inflicted. For example: node provides a file system utility, but webpack uses a different one. Maybe the fs utility can't solve all of their problems and they need an extended one. Instead of just using the standard file system and dealing with its quirks, we write a new, slightly better one and use that instead. Locally, it might make sense for webpack to do that, but when you do it in aggregate across the industry, you end up with the death-by-a-thousand-cuts situation we find ourselves in.
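To make the pattern concrete (graceful-fs is one real drop-in fs replacement; which wrapper webpack actually uses varies by version, so treat this as illustrative):

    // Node's built-in file system module:
    const fs = require('fs');
    fs.readFile('config.json', 'utf8', (err, data) => { /* ... */ });

    // The ecosystem's "slightly better fs": identical API, patched
    // behaviour (graceful-fs, for instance, queues retries on EMFILE):
    const gfs = require('graceful-fs');
    gfs.readFile('config.json', 'utf8', (err, data) => { /* ... */ });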
how? cultural expectations around transitive dep size?
I'm certainly making decisions about graph complexity when designing software in house (avoid circular import deps in code, be careful about call depth in service graph)
But I've given up doing this for 3rd party deps in JS, except maybe at the point where `npm install` starts to bog down and I'm like 'this one isn't for me'
Is it just the standard library falling short?
Which is, to be fair, exactly the reason you'd use a library like this.
There is a crucial nuance here:
Which is, to be fair, exactly the reason you'd use a library.
By which I mean to emphasize that libraries are indeed crucial. But nano-libraries, like nano-services, are an antipattern.
This could be a stdlib (ideally), a bundle (like Lodash) or even a numberUtils. Having a gazillion packages that have a ridiculous boilerplate-to-code ratio, contain just one (well evolved!) function, and so on, is doing more harm than good.
A library "like this" is harmful. If only because it gives people an argument, or just the idea that libraries are stupid and should be avoided.
Don’t use Babel if you don’t want a gazillion dependencies.
With most modern bundlers, bundle size is rarely the problem anymore. npm audit helps detect and fix security vulnerabilities in dependencies. And a build process breaking because of dependencies is rather a sign of a bad CI pipeline.
I am not saying it is great to have many dependencies, but it's really not THAT bad.
Sure, big projects (hopefully!) watch their dependencies, but auditing 13k packages yourself is simply impossible. And, given how much these numbers explode, it unfortunately seems like people simply add dependencies without doing their due diligence.
Besides, my guess is that the vast majority of developers do not verify their dependencies in the first place, no matter whether the number is 5 or 5000.
In the unlikely scenario that npm is gone forever, you can still get the dependencies from a previous build, a random developer laptop, or GitHub.
If you must build and deploy a version of your software while npm is down (unlikely, but it may happen), well, you may have to skip the CI and build from a developer laptop.
If you look at the eventual package-lock.json and filter it to just lines containing “resolved”, it’s only 756 lines, because it does plenty of deduplication. I don’t have the time to waste on installing it all myself to check, but I think it fails to deduplicate some that could theoretically be deduplicated, because of incompatible versions: I think that if you have dependencies a → x@1, b → x@2, c → x@1 and d → x@2, it’ll pick one of those versions of x (no idea how it chooses—first it encounters, perhaps?) to sit at the top-level node_modules, and any packages that need another version will install their own version, so that in this situation you might get node_modules/x (version 2, used by b and d), node_modules/a/node_modules/x (version 1) and node_modules/c/node_modules/x (version 1, again). I say this based upon very vague recollections of things I read and interacted with years ago, and the structure of package-lock.json; I may be wrong in the details of it.
This way of having multiple copies of the same version of the package is the difference between 756 and 691—there are 65 exact duplicates.
For example, you get debug-2.6.9 at the top level, and then within other dependencies, you get three copies of debug-3.2.6, and five of debug-4.1.1. That’s just one example. There are eight copies of four different versions of kind-of. After excluding these exact duplicates, there are then another 67 cases of multiple versions of the same package being installed (kind-of’s four versions is the most).
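For anyone who wants to tally this in their own project, a quick sketch (it assumes the older v1 lockfile format with a nested `dependencies` tree; npm 7+ writes a flat `packages` map instead):

    const fs = require('fs');

    const lock = JSON.parse(fs.readFileSync('package-lock.json', 'utf8'));
    const versions = new Map(); // package name -> set of resolved versions

    function walk(deps) {
      if (!deps) return;
      for (const [name, info] of Object.entries(deps)) {
        if (!versions.has(name)) versions.set(name, new Set());
        versions.get(name).add(info.version);
        walk(info.dependencies); // nested (un-deduplicated) copies live here
      }
    }

    walk(lock.dependencies);

    for (const [name, vs] of versions) {
      if (vs.size > 1) console.log(`${name}: ${[...vs].join(', ')}`);
    }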
A few days ago I looked at a case that was double the size of all this: https://news.ycombinator.com/item?id=23488713#23490055.
When you get duplicates with incompatible versions like this, it strongly implies unmaintained, or occasionally incorrectly maintained, software. If they all got their act together and simply updated to the latest version of all their dependencies, the number of packages you’d install would not exceed 624.
Look, it’s still a lot, and I scorn many of them as unnecessary and oft counterproductive frivolities, and there’s way too much overlap in many of them; but 13,000 is just a shock number that doesn’t represent what people expect it to represent, or match what they’re concerned about.
(Also this number doesn’t mean you’re taking code from over six hundred sources; some things are just split up into multiple packages because they genuinely are separate concerns; for example, there are 93 packages named @babel/*, indicating first-party code from Babel.)
Do you trust every employee working on the Intel CPU microcode? Every maintainer of the Linux kernel? The people who maintain the glibc? The developers of V8 and nodejs? Do you do the same for your database? Your cloud provider? The codebase of your business partners?
I would guess you don't, despite most of what I cited being highly critical in terms of security, and some being written in memory unsafe programming languages with tons of critical issues all the time.
But what will most likely happen: a maintainer will fix the issue. If the maintainer is in jail because they killed someone, well, someone else can still maintain it, or someone else will fork it and other maintainers will use the fork.
In practice, when an NPM dependency of your project has a security issue, all you have to do is accept a pull request from a robot on GitHub.
I know it's not perfect, but it's really not that bad.
We as an industry need to put work into reviewing, simplifying, and increasing visibility at all levels of the stack, especially firmware. We're building high and fast and while standing on the shoulders of giants is a great place to be, we need to make sure the giant is more than just a house of cards.
It's not a minor problem. It's a huge structural and operational problem that would nearly render the whole stack unsalvageable, if not for all the infrastructure and legacy.
Do you really care if some website uses a dependency from npm or some copy-pasted code? What does it change for you as a user?
And I am not even talking about HLS streaming or anything like that here, just pulling an mp4 file.