
I applaud this action, and while I'd like to point the finger at NPM, there's no other real way to fix historical package versions that depend on this.

It is worth pointing out the silly state of NPM packages: Who decided that an external dependency was necessary for a module that is 17 lines of code?

  module.exports = leftpad;
  
  function leftpad (str, len, ch) {
    str = String(str);
  
    var i = -1;
  
    if (!ch && ch !== 0) ch = ' ';
  
    len = len - str.length;
  
    while (++i < len) {
      str = ch + str;
    }
  
    return str;
  }
Developers: fewer dependencies are better, especially when they're so simple!

You know what's also awesome? The caret semver specifier[1]. You could install a new, broken version of a dependency that way, especially when other packages using peerDependencies rely on specific versions and you've used a caret range.

[1] https://github.com/lydell/line-numbers/pull/3/files




> Developers: fewer dependencies are better, especially when they're so simple!

No! The opposite of that. Lots of little µframeworks, defining composable and generic types, are much better than a giant monolith.

The Swift Package Manager is taking this approach, and I think it's great: https://github.com/apple/swift-package-manager#modules

The caret character doesn't appear anywhere in the semver spec, so whatever that does, it's non-standard: http://semver.org/

If your modules are small and well-defined, they probably won't need many versions anyways - they might just stay on 1.0.x forever. If you want to do something different, it might make more sense to just write another module.


Fewer dependencies are better. Part of the job is communicating everything you do in a software project: the reason and purpose behind every piece. For example, application code that will never be part of a public API does not need to be as complex and thoroughly considered as a generic library. When I write a 10-line function that already exists in some module, the most likely reason is that I'm writing it for my own purposes, without having to explain why I brought in a package to solve a very small problem.

I implemented KeyMirror from NPM once. It's a simple array->object transformation. It's been in production for months without issue. But I initially got guff from my boss for not using the package. If anything, the package is just an example proof-of-concept of an extremely simple idea. But carrying the bloat of another package alongside more relevant packages seems to matter more here than simply owning a simple piece of code like this.
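For reference, a minimal sketch of the kind of helper being described (the npm keymirror package itself takes an object of keys rather than an array; the function name here is just for illustration):

  // Mirror each key name into its value:
  // ['FOO', 'BAR'] -> { FOO: 'FOO', BAR: 'BAR' }
  function keyMirror (keys) {
    var result = {};
    for (var i = 0; i < keys.length; i++) {
      result[keys[i]] = keys[i];
    }
    return result;
  }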


The caret range is an NPM convention, not part of the semver spec itself. It's designed to work within semantic-versioning rules so that you get the latest version that includes bug fixes, but not breaking changes.

For example, ^1.3.2 will allow anything from 1.3.2 up to, but not including, 2.0.0. It also has special behaviour that makes it stricter for projects with a major version of 0. If your dependencies follow semver, then you'll get bug fixes and security updates without having to do anything or worry about breaking changes.
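For illustration, here's roughly how that plays out with the `semver` package (the same range logic npm uses); the version numbers are made up:

  // npm install semver
  var semver = require('semver');

  semver.satisfies('1.3.9', '^1.3.2'); // true  - patch bump allowed
  semver.satisfies('1.9.0', '^1.3.2'); // true  - minor bump allowed
  semver.satisfies('2.0.0', '^1.3.2'); // false - major bump excluded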

More info: https://nodesource.com/blog/semver-tilde-and-caret/


This dichotomy is silly. You should write as little code as possible to do the job required, and should use only the dependencies required. This might be none, or nearly everything, depending on what your app or library is supposed to do.


How do you read an article like the one this thread belongs to and come away with "Seems reasonable, I need more of that"?

Trivial dependencies are a code smell, a liability, and an operational headache.


...and I am in my little python world with "batteries included"...


And I in Java and .Net world, where it is more like "nuclear reactor included..."


...Only that, in Java, it's not exactly the nuclear reactor you want, which forces you to use someone else's solar power plant, and then they deprecate the original nuclear reactor (because it's known to leak uranium) in favor of a second nuclear reactor, which still doesn't work that well because it's the wrong polarity, so now you have three energy sources available in your project and if you plug your lamp into the wrong outlet everything explodes (see the Date/Calendar mess).


Having a multitude of small utilities like this is a great thing with many advantages.

It may seem simple to write leftpad, but if 1000 projects that need it all write their own version, there will be at least 2000 more software bugs out there in the wild because of it. If you think that's ridiculous, you're not being realistic about the huge disparity in skill levels of industry programmers, as well as the considerable rush under which many projects are done.

Also important is that every time I can use a publicly available utility instead of writing it myself, it's one less thing I have to think about. The benefit of making fewer decisions really adds up, saving a ton of mental capacity to focus on the more important parts of my project. Even the simplest methods require some thought to design.

I know there are disadvantages (such as what happened as the topic of this post), but there are also ways to mitigate them. As far as having many versions that all do the same thing, there are usually winners and losers over time. Because of this I believe that eventually the dependency graph shrinks overall.

Note that I wouldn't advocate creating utilities for things that are not very generalizable.


It's not one less thing to think about, which is exactly the point. If you install 1 dep and it has 100 deps, you now have 101 things to think about.


Or, looked at another way, it's 101 fewer things to think about. Besides, we're talking about a package that doesn't have any dependencies.


It doesn't have any dependencies yet -- what if leftpad decides it should depend on "pad", which depends on "stringutil" which depends on "unicodeutil" and "mathutil", etc.


I've never used npm, but doesn't it take at least as long to find, evaluate, and install a package like left-pad as it would to just write the function yourself when you find you need it?


No. I could find, evaluate and install that package quicker than I could write the code that carefully. And the second time I need it, it's just "remember, install". Also, it keeps my code small and focused.


Do you not read the code of packages you're including in your projects? I usually at least browse through the "beef" of the code for anything I consider adding as a dependency, and try to figure out what level of maintenance I can expect, the history of bugs, sometimes try to contact the authors, etc.

In short: it would take me a whole lot more time to evaluate whether or not to depend on something as trivial as leftpad than to write it myself. I'm pretty confident I can do that in a minute or two and I trust my collaborators to not break it.


Personally, no, but even if it did, what if a bug is found in the future? The community fixes the bug, not necessarily you!


The possibility of having bugs in code you don't control (which usually comes with a clause disclaiming warranties) is an argument for implementing it yourself, not against it. Don't forget how hard it is to get a maintainer to even agree on whether something is (1) a bug and (2) needs to be fixed.


The reality, however, is that if you take this point of view, you will spend your time reinventing the wheel, introducing bugs and wasting resources. That's how it works in real life.


If someone already wrote the base code, we can always fork it and fix a bug or add a feature ourselves if it runs contrary to what the original author desires.


Even getting a response just so you can know what the original author desires can take a long time, and there are no warranties or guarantees that you will get any response at all. To me, all the downsides that come with dependencies are not even close to worth it for saving 15 seconds.


Who cares what the original author desires?

If you fixed the bad behavior you're experiencing, and the original author's effort saved you hours or days of coding, what's the downside?

Perhaps I'm not arguing for 15 second long code changes. But other than typing a single if statement, what takes literally less than one minute to securely change in any partially-complex project?


Fair point. One does run the risk of having bugs in code out of one's control by using a package manager such as NPM, but one gains a swath of other programmers inspecting the modules for bugs. And in module repositories for interpreted languages, it's very much in your control to fix any bugs you might find, regardless of what the maintainer might say about it.


Would simply copying something into your project as small as 17 lines make for a good compromise?


No - then you won't get updates easily, and everyone reading your project would have to make sure your copy of the module hasn't diverged from the original before working with it, especially if it's a larger module with full documentation and a community of people who know how to work on it.


I've never seen so many programmers advocate copy/pasting code before... it's really surprising!

Something about javascript makes people crazy...


That's what I thought, but what concerns me with 100's or even 1000's of dependencies is managing them. Things like figuring out which ones have security issues without reading the issue tracker for each.


I'm curious since it strikes me as a hard problem to solve: How do you resolve having to deal with security issues with tens or hundreds of dependencies (and their dependencies)? How do you even know whether they have a security issue or a version bump is just a bug fix without digging into each one on a regular basis?


How do you know that you, as a lone developer, aren't writing insecure, unperformant, buggy code?


That's a fair point. But what would concern me, as a lone developer, is liability if you get hacked due to a known vulnerability in an npm module. If the company is looking for a head to roll and someone points out it was a known and resolved issue in later versions that could be a problem for me.

Does npm let you tag releases as security fixes? That would make automation to discover it possible.


NPM itself is clearly faulty, but I don't think the concept of outsourcing logic to dependencies is. If something is complex enough to have a legit security vulnerability, it's probably the sort of thing I don't really want to write myself. And yeah, that comes with the responsibility to stay up-to-date. But pretty sure my head would rightfully roll anyway if I wrote my own bootleg SSH protocol and got my company exploited.


> As far as having many versions that all do the same thing, there are usually winners and losers over time. Because of this I believe that eventually the dependency graph shrinks overall.

That's... very naive. No one goes back and rewrites perfectly working code just to change a library. If it works, don't touch it. Computers don't care, and if you rewrite it, you're introducing a bug. Also, there's plenty of new code to write! And oh yeah, you have a billion little libraries, all used by an engineering culture constantly distracted by the new shiny, so you're going to be stuck with libraries that haven't been updated.

You're gonna have a bad time.


Personally I'm going to use an installable module for something even that small, because I can, and it works.

The benefits from an install registry don't go away just because the module is very tiny...

Why would I spend my time re-inventing the wheel for every little thing I do? And if I'm not reinventing, then I'd be copy/pasting, which is much worse. At best that's a waste of time and effort to properly document the source, and at worst it's stealing or license violations.

I don't care if a module is a single line; if it does what I need it to and is well tested, then I'll use it. That might seem silly, but the fact is that it's pretty much no overhead, and no software is immune from bugs (even a 17-line function), so updates might be useful in the future.

Yeah, there is a chance that stuff like this can happen, but within an hour there were several alternatives to solve issues with installs; I'd say the system is working pretty well. Plus, with proper software development techniques (like vendoring your dependencies) this wouldn't even be a problem at all.


The overhead is in your management of your dependencies. The size of the module isn't the problem, it's the fact that you end up using so many of them (especially recursively).

Consider this specific case. This author moved all their modules from one hosted location to another. Now, if you want to use these modules from that author, you need to update the scripts and configs that install them (some package.json files in this case). In a better world, like the C or Python world, you might need to update one or two urls which point to a couple of this author's popular libraries (maybe one you use directly, and one used by one of your handful of direct dependencies).

In this crazy npm world, this author has 272 modules. Maybe 20 are in widespread use ... it's already a lot of work to figure that out. Maybe you use a couple directly, and your dependencies have private recursive sub-dependencies on additional copies or versions of these or other of this author's modules! Maybe you have to cut your own versions of some of your dependencies just to change their package.json to refer to the new URLs! Anyway, you probably have to sift through hundreds of your dependencies and sub-dependencies to see if any of them are included in these 272 moved modules.

I've seen npm dependency trees with over 2000 modules (not all unique of course). That's totally unmanageable. I think that's why privately versioned sub-dependencies is a big feature in nodejs: so you can try to ignore the problem of an unmanageable dependency tree. But if you need to make reliable software, at some point you need to manage your dependencies.


I agree that NPM needs to push namespacing much harder, as that would make the whole process much easier.

Also a "provides" field could go a long way into stopping issues like this. Allow packages to say that they provide a package in them that is compatible with another in these version ranges.

That would let "API compatible" packages be dropped in to replace even deeply nested packages easily, and would allow easy "bundling" in big libraries while still allowing easy creation and access to "micro libs".

I really believe that composing tons of small libraries is the way to go, but there needs to be better tooling to make it work. In my (admittedly not extremely experienced) opinion, bundling many small libs into one big package to make it manageable is a symptom of a problem, not its resolution.


This will become easier with rollup, webpack@2, and so on, which can effectively reduce the penalty of including large modules like lodash by tree-shaking out all of the parts you don't use. I would expect many more utility libraries to then be aggregated into a single package/repo and for users to simply pick and choose functions at will.
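For example (a minimal sketch, assuming a bundler that does ES-module tree-shaking and the lodash-es build):

  // Only padStart (and what it needs internally) should survive
  // tree-shaking; the rest of lodash-es is dropped from the bundle.
  import { padStart } from 'lodash-es';

  console.log(padStart('17', 4, '0')); // "0017"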


By this logic, every Stack Overflow snippet should be a module. I'm almost hesitant to suggest this since many people who read this will be capable of building such a thing.


I'm not saying that everything should be a module, but that well designed, well tested bits of code should be modules.

These 17 lines had 100% test coverage and were used by a stupidly large number of people (read: battle-tested), so why not use it?

As is pointed out elsewhere in this thread, echo.c is roughly the same size; does that mean it's not a worthy program?


"echo" is not versioned and delivered on its own. It's part of gnu coreutils (which contains ~ 100 utilities), or part of various BSD core distributions (more than 100 utilities, plus the kernel and libc), and also built-in to shells.


IMO that doesn't change anything.

The fact that in JS land it's a standalone module means you get more choice in what you need (no need to pull down 100 programs if you only need 1 or 2).


You have the same amount of choice. There's no reason that you have to use the other hundred pieces of the package. In the Unix world, there's nothing precluding you from deciding to use the FreeBSD version of tar but keeping the rest of the GNU utilities there.


I guess it's a philosophical difference.

But to be fair this would have the same outcome if left-pad were part of a library that included another 50+ libs (that he also wrote and published, and subsequently un-published today).


More choice, but now you need 50 different modules from 50 different authors to duplicate what would be in one good standard library, any of which could have bugs or be pulled out from under you for a multitude of reasons that are beyond your control.

Choice can be a bad thing too - when there are 10 different modules for doing a moderately complex thing, you have to figure out which one is best for your project, and whether it's still actively maintained, bugs are fixed, how do they feel about making breaking changes, etc.


>now you need 50 different modules from 50 different authors

Not necessarily; take a look at lodash and friends. There is nothing stopping bundling of tiny modules into big "libraries" to be used.

As for the rest, you need to do that validation anyway. But if it were bundled in a large library there is MUCH more code that you need to review.

With something like the "left-pad" module, it's a no-brainer to use the library. I know that it's under an EXTREMELY open license, the code is really small, and by vendoring your dependencies (you are vendoring your dependencies, right?) this whole issue would have been a 5-10 minute fix that only needed to happen the next time you wanted to upgrade your dependencies.


But also, if you're shipping things to users' browsers, please cut out all the stuff you don't use. I don't want to download 100 extra modules' worth of JS code because it was bundled.


Why not just put it into the core? Why should it even be a module at this point?


Because in JavaScript there are many different implementations of engines, and they run on all kinds of stuff. Adding something to the standard is not a small task, and it means that it's now extra code that needs to be installed on practically every single PC.

And that doesn't remove the need for a library like this (or your own implementation) for a long time because you can't rely on a brand new release to be available for everyone.


Realistically 17 lines of code is total overkill for this function. In many cases you could achieve the same thing more efficiently in a single line.


Feel free to show a smaller implementation that's more efficient.

I've seen several "one liners" in this thread already, and most of them either blow up when something that's not a string is passed in (regardless of how you view strict typing, js doesn't have it and this shouldn't happen), or are extremely slow comparatively (most of them creating and destroying an array every time they are called).

Plus this has 100% test coverage (even as trivial as it is, it still counts), and is "battle tested" (something like 2.5 million installs per month counts for something).

Sorry, but I'll stick with left-pad over a 20-seconds-of-thought one-liner.


> Feel free to show a smaller implementation that's more efficient.

How's this:

  function leftpad (str, len, ch) {
    ch = (len -= str.length) <= 0 ? '' : !ch && ch !== 0 ? ' ' : String(ch);

    while (--len > 0) ch += ch[0];
 
    return ch + String(str);
  }
No local variables, less manipulation of the input string, the string grows at the tail which is more efficient, and the code is much shorter.

(With a bit of work you can use the longer ch string that is built to reduce the number of string appends even more by appending multiple characters at once. Although probably not worth it for this function.)


No offense, but that code is much more difficult to understand. If your goal is to minimize the amount of lines, then you succeeded. If the goal is to produce both correct and readable code, then there's room for improvement.


> No offense, but that code is much more difficult to understand.

I strongly disagree. My code has no magic initializers (the -1 in the original) and a simple linear code path, with no branching. It's very easy to read and understand.

The ternary operator at the top is simply read from left to right, it's not complicated.

> If your goal is to minimize the amount of lines, then you succeeded.

My goal was to maximize efficiency. Often that means fewer lines, but that was not the overt goal. And in fact this version runs faster, and uses less memory.

> If the goal is to produce both correct and readable code, then there's room for improvement.

You think so?

Then now it's your turn - rewrite this (or the original) to make it as readable as possible. I think you will find that a: mine is more readable than the original, and b: you won't be able to (need to) change much except to lift the len initializer out of the ternary operator in the first line onto its own line.


If you are talking about a module that everyone just includes and expects to work, then I'd imagine the goal would be a combination of correct and efficient, not readable.


I wasn't planning on passing a non-string to my function. The type-checking only needs to happen because this is a needlessly generic function.


Okay, but then when you pass a string that's just `12` in and get `16` instead of ` 12`, don't blame JavaScript...


I didn't intend to, although since I was going to prepend a string to the start, that would be a rather surprising result.


Re-inventing the wheel in 17 lines of code is called programming.


Someone with more JS experience can chime in, but isn't this really inefficient in JavaScript? Wouldn't appending rather than prepending be better, due to the way strings and memory are handled? Or, at the bare minimum, create the left padding in the loop and tack str on after? Can you use ch.repeat(len) + str yet in Node, or if not, just use the same idea of doubling ch in size until len is satisfied?

    while (++i < len) {
      str = ch + str;
    }
And isn't this a bug?

leftpad("x", 2, '00') will return "00x"


Essentially all JS engines implement string concatenation lazily as ropes, so there isn't much difference.


Well, you can do it in O(log(n)) time instead of O(n) time. But n is unlikely to be large enough for this to even matter a little bit.
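For the curious, here's a minimal sketch of that doubling idea (not the actual left-pad implementation, and note it truncates multi-character pads to the exact target length):

  function leftpadDoubling (str, len, ch) {
    str = String(str);
    if (!ch && ch !== 0) ch = ' ';

    var need = len - str.length;
    if (need <= 0) return str;

    // Grow the pad by doubling it, so the number of string
    // concatenations is O(log n) rather than O(n).
    var pad = String(ch);
    while (pad.length < need) {
      pad += pad;
    }

    return pad.slice(0, need) + str;
  }

  leftpadDoubling('x', 5);      // "    x"
  leftpadDoubling('x', 5, '0'); // "0000x"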


"A little copying is better than a little dependency" - Rob Pike



In this case the npm ecosystem is providing more of a surrogate standard library. Imagine if there were no libc, for example, and so people had to reimplement all those functions; would you really want one package per function because of how "Unix philosophy" it would be?

This is where the JavaScript ecosystem is right now -- JS doesn't have the kind of robust standard library other languages take for granted, so you end up with a lot of these things that look like they should be stdlib functions, but are instead third-party packages the entire world has to depend on for a usable programming environment.


I don't understand the standard library argument at all. A standard library is always limited; there is always something that's not included - then what?

This has nothing to do with the JavaScript standard library but the insanity of NPM. There are modules that have even more dependents than `pad-left` despite being included in the standard library (e.g. https://www.npmjs.com/package/date-now, https://www.npmjs.com/package/is-positive, https://www.npmjs.com/package/is-lower-case)


The argument is that things like formatting a string -- including padding it -- should not be add-ons, they should be built into the core distribution you get as part of the language.


I didn't say it shouldn't be part of standard library, I am saying that there is always something that isn't part of the standard library. My point is: if this module was something that shouldn't be part of standard library, what argument would you then use?


I think you're assuming libc is more robust and useful than it actually is. Libc is extremely minimal (and is full of pitfalls as well).

JS has an extremely large and robust standard library in comparison.


Which is useful for binaries. For libraries, there tends to be a lot more aggregation. If this weren't the case, you'd see libc spread over dozens and dozens of libraries like libmalloc librandom libstring libarithmetic libsocket libprocess libfilesystem libafilesystem (hey, async ops have a different api) libtime, etc.


Why would this be a bad thing? I don't need a random number generator if I just want to allocate memory.


Because a lot of little libraries make namespaces more complicated, make security auditing more difficult, make troubleshooting more difficult (you have to start digging through the compatibility of a ton more libraries), and make loading slower, because you have to fopen() a ton more files and parse their contents. Add on top of that those little libraries needing other, probably redundant little libraries, and you can start to see how this can turn the structure of your code into mush really quickly.

At this point, optimizing runtimes to include only the necessary functions, and optimizing files to include only the necessary functions, are both pretty much solved problems. For example, azer has a random-color library and an rng library. Having both of those in something like azer-random means that someone automatically gets all the dependencies without having to make many requests to the server. This makes build times shorter and builds a lot easier.

Sometimes, in order to best optimize for small, you have to have some parts that are big. Libraries tend to be one of those things where a few good, big libraries lead to a smaller footprint than many tiny libraries.


Not to mention maintenance. Low-hanging fruit here: left-pad, depended on by fricking everyone, *is owned by one guy*. That is not how you build a serious utility library that people should actually be using in real products. (You also probably shouldn't license it under the WTFPL, but that's another thing). When you aggregate under a common banner, you get maintainers for free and your project might have a bus factor better than 1.


Some of the things you mention are true, but:

> makes security auditing more difficult

What? If you go all the way, you just review all dependencies too. And if they have a good API, it's actually much easier. For example if your only source of filesystem access is libfilesystem, you can quickly list all modules which have any permanent local state.

Splitting huge libraries into well designed categories would make a lot of reviews easier.

> Having both of those as azer-random or something means that someone automatically gets all the dependencies, without having to make many requests to the server.

Also disagree. One-off builds shouldn't make a real difference. Continuous builds should have both local mirror and local caches.


Yeah, but that's not the world of NPM. It's a clusterfuck of a maze of near-duplicate dependencies with no hierarchy or anything. There's no organization or thought. It's just a bunch of crap tossed together to encourage cargo cult programming.


Think about the test matrix.


Taking an idea to the logical extreme is an effective means of invalidating said idea. How many UNIX utilities are 17 silly lines long?

A bit of code duplication would go a long way towards bringing sanity to JS land.


Quite a few are pretty small.



Erm, that's not a very good example. You're pointing out some ancient source file from back when unix had no package management. These days echo.c is part of coreutils, a large package for which it's economical to manage dependencies at scale.

It's interesting to think about the distinction between promiscuous dependencies (as pioneered by Gemfile) and the Unix way. I like the latter and loathe the former, but maybe I'm wrong. Can someone think of a better example? Is there a modern dpkg/rpm/yum package with <50 LoC of payload?

Edit: Incidentally, I just checked and echo.c in coreutils 8.25 weighs in at 194 LoC (excluding comments, empty lines and just curly braces). And that doesn't even include other files that are needed to build echo. Back in 2013 I did a similar analysis for cat, and found that it required 36 thousand lines of code (just .c and .h files). It's my favorite example of the rot at Linux's core. There's many complaints you can make about Unix package management, but 'too many tiny dependencies' seems unlikely to be one of them.


What on earth is cat doing that it needs 36KLoC (with dependencies)?

I'm starting to see where http://suckless.org/philosophy and http://landley.net/aboriginal/ are coming from (watch Rob's talks, they're very opinionated but very enjoyable).


Yup! In fact the only other data point I have on cat is http://landley.net/aboriginal/history.html which complains about cat being over 800 lines long back in 2002 :)

I didn't dig too deeply in 2013, but I did notice that about 2/3rds of the LoC were in headers.


>"When I looked at the gnu implementation of the "cat" command and found out its source file was 833 lines of C code (just to implement _cat_), I decided the FSF sucked at this whole "software" thing. (Ok, I discovered that reading the gcc source at Rutgers back in 1993, but at the time I thought only GCC was a horrible bloated mass of conflicting #ifdefs, not everything the FSF had ever touched. Back then I didn't know that the "Cathedral" in the original Cathedral and the Bazaar paper was specifically referring to the GNU project.)"


The version of cat.c here: http://www.scs.stanford.edu/histar/src/pkg/cat/cat.c has 255 lines in the source, and brings in these headers:

  #include <sys/param.h>
  #include <sys/stat.h>
  #include <ctype.h>
  #include <err.h>
  #include <errno.h>
  #include <fcntl.h>
  #include <locale.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

which are needed for things like memory allocation, filesystem access, io, and so on.

One can imagine alternative implementations of cat that are full of #ifdefs to handle all the glorious varieties of unix that have ever been out there.


The main point in support of your argument is the fact that Unix utils are bundled in larger packages instead of shipping in single-command packages. Think of Fileutils, Shellutils, and Textutils... which got combined to form Coreutils! Ha! That leaves only Diffutils, for the time being anyway.


But none of Fileutils, Shellutils and Textutils was ever as tiny as many npm or gem modules.

I thought how commands are bundled into packages was the entirety of what we were discussing. That was my interpretation of larkinrichards's comment (way) up above. Packages are the unit of installation, not use. Packages are all we're arguing about.

My position: don't put words in God's mouth :) The unix way is commands that do one thing and do it well. But the unix way is silent on the right way to disseminate said commands. There you're on your own.


Underscore/lodash is a great bundle of functions that do simple things well. And it's a tiny enough library that there is really no need to split it into 270 modules.

I support packages of utility functions. Distributing them individually is a waste of resources when you have tree shaking.

I trust a dependency on lodash. I don't trust a dependency on a single 17 line function.


While I agree with you, tree shaking is relatively new in the JavaScript world, thanks to Webpack 2 and Rollup.js before that; previously, if you had a dependency you brought its whole lib into your project whether you used one method or all of them. So just including Lodash wasn't an option for people who cared about loading times for their users. A 17-line module was.


Closure Compiler's advanced mode (tree shaking) has been around since 2010.


Lodash has been modular since 2013.


Yes, let's blow up the entire concept that's worked fine for the ~5 years of node's existence because one dude did something extreme.


Woah woah. Hold on there. Lets not throw around strong words like "worked", "concept", "entire", "fine", "did" when discussing NPM.


This. I'm on Windows. Npm never worked for me, like not at all. Npm has cost me lots of wasted time, I know I should be thankful for this free product, but, but, Grrrrr...


Windows has always been a problem for Nodeland, thankfully Microsoft is working on making that better.


Is unpublishing a module extreme?


I wonder how long /usr/bin/true source is.



On Linux, it's 30ish lines, with half of those there to make false.c able to reuse some code. (I know, it's stupid.)

In OpenBSD, it's 3 LoC IIRC.


On at least one OS I worked with, it was 0 bytes long because /bin/sh on an empty file returns true. (I think that was IRIX 4.x.) OTOH, that's slower than the executable from a 3 LoC do-nothing program.


Let it suffice to say that Linux distributions (and some closed OSes) try very hard to prevent package fragmentation. Experience has shown many times that excessive fragmentation leads to proliferation of unmaintained libraries.


The solution (IMO) is between those: use lodash's padding functions. It's modular by default so you don't bring in the whole library, JDD is a performance junkie, and it's lodash, so it's not going to get unpublished.

If not, writing it yourself works too. Or advocate for a better string stdlib.
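For instance, a minimal sketch assuming lodash 4's per-method module layout:

  // Pull in only the one function; lodash publishes each method at its
  // own module path, so this doesn't drag in the whole library.
  var padStart = require('lodash/padStart');

  padStart('42', 5);      // "   42"
  padStart('42', 5, '0'); // "00042"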


I think you're mistaken about the caret semver specifier. Using the caret for versions less than 0.1.0 offers no flexibility at all.

For 0.0.x versions of packages the caret means "this version and this version only", so it won't break anything here...

Source: https://docs.npmjs.com/misc/semver#caret-ranges-123-025-004
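To illustrate with the `semver` package npm uses (made-up version numbers):

  var semver = require('semver');

  // For 0.0.x packages, a caret range pins the exact version.
  semver.satisfies('0.0.3', '^0.0.3'); // true
  semver.satisfies('0.0.4', '^0.0.3'); // false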


Fewer dependencies are definitely better, but that doesn't mean "write the code because we can't install it via npm".

A developer could come along, find this library, and hard-copy it into their source repo. The tested source code is there, and they don't have a dependency on npm.

This wouldn't work quite so well for large packages (chance of bugs is high and so patches are important), but for something like this? Just ditch npm.



> Developers: fewer dependencies are better, especially when they're so simple!

I tend to agree, but this is conflating the issue of having dependencies with delivery.

It's perfectly OK to build small and compose large, with some of the smaller constituents being external dependencies, but the problem here is that the delivery of the packages isn't immutable and static. When you publish a package to npm, you don't publish it along with a copy of all your dependencies (by default; there are mechanisms to do this, however). The external dependencies will be pulled in separately when people install your package. What you're suggesting could still be done with an external dependency, by making sure it's only external at development time but truly bundled along with your package at publish time. This obviously comes with other costs, like the inability to dedupe packages.


> Who decided that an external dependency was necessary for a module that is 17 lines of code?

This is an advantage of language concision. In CoffeeScript this isn't even a function; it's just a one-line idiom:

  (ch for [0...len]).join('')[str.length..] + str
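For readers who don't write CoffeeScript, a rough JavaScript equivalent (a sketch that assumes str is already a string and ch is a single character; the helper name is made up):

  function leftpadOneLiner (str, len, ch) {
    // Build len copies of ch, drop the first str.length of them,
    // and stick the original string on the end.
    return Array(len).fill(ch).join('').slice(str.length) + str;
  }

  leftpadOneLiner('42', 5, '0'); // "00042"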



