NPM and Left-Pad: Have We Forgotten How to Program? (haneycodes.net)
1725 points by df07 on Mar 23, 2016 | 854 comments



Holy moly-- is-positive-integer/index.js:

  var passAll = require('101/pass-all')
  var isPositive = require('is-positive')
  var isInteger = require('is-integer')

  module.exports = passAll(isPositive, isInteger)
I retract my previous statements that Javascript programmers are going down the same enterprise-y path that Java programmers went down a decade ago.

They've already taken it to an entirely different level of insanity.


This comes up a lot when people discuss anything related to npm modules. It's easy to simply dismiss these trivial one-line modules as "insanity" and move on, but there are actually plenty of good reasons why many prefer to work with multiple small modules in this manner. This GitHub comment by Sindre Sorhus (author of over 600 modules on npm) is my favorite writeup on the topic:

https://github.com/sindresorhus/ama/issues/10#issuecomment-1...

TL;DR: Small modules are easy to reason about, and encourage code reuse and sharing across the entire community. This allows these small modules to get a tremendous amount of real world testing under all sorts of use cases, which can uncover many corner cases that an alternative naive inlined solution would never have covered (until it shows up as a bug in production). The entire community benefits from the collective testing and improvements made to these modules.

I also wanted to add that widespread use of these small modules over inlining everything makes the new module-level tree-shaking algorithms (that have been gaining traction since the advent of ES6 modules) much more effective in reducing overall code size, which is an important consideration in production web applications.
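
As a rough sketch of what tree-shaking buys (module and function names hypothetical): with ES6 named imports, a bundler can statically determine which exports are reachable and drop the rest:

  // math-utils.js - a hypothetical utility module
  export function sin(x) { return Math.sin(x); }
  export function sinh(x) { return Math.sinh(x); }

  // app.js - a tree-shaking bundler can prove sinh is never imported and drop it
  import { sin } from './math-utils.js';
  console.log(sin(Math.PI / 2)); // 1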


Small modules are easy to reason about

Yes they are, in the same way that a book in which every page consists of a single word is easier to understand than one with more content per page.

By focusing on the small-scale complexity to such an extreme, you've managed to make the whole system much harder to understand, and understanding the big picture is vital to things like debugging and making systems which are efficient overall.

IMHO this hyperabstraction and hypermodularisation (I just made these terms up, but I think they should be used more) is a symptom of a community that has largely abandoned real thought and replaced it with dogmatic cargo-cult adherence to "best practices" which they think will somehow magically make their software awesome if taken to extremes. It's easy to see how advice like "keep functions short" and "don't implement anything yourself" could lead to such absurdity when taken to their logical conclusions. The same mentality with "more OOP is better" is what led to Enterprise Java.

Related article that explains this phenomenon in more detail: http://countercomplex.blogspot.ca/2014/08/the-resource-leak-...


Perhaps another reason for this is that Javascript is inherently unsafe. By relying on these small rigorously tested libraries they are avoiding the need to test their own code, and thus avoiding basic null check or conversion errors.

Other languages handle this with compilers, may be strongly typed, and/or have features that don't allow nulls. Javascript doesn't exactly have that luxury. So maybe it makes sense in Javascript, where it wouldn't in other languages (though one could argue this is a flaw in the language design).
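
To make that concrete, a few of the coercion quirks meant here (assumed illustrative snippets) that a strongly typed or null-safe language would reject at compile time:

  null > 0    // false
  null >= 0   // true - null coerces to 0 for >=, while > takes another path
  '10' > 9    // true - the string is silently coerced to a number
  1 + '1'     // '11' - + prefers string concatenation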

edit: to be clear I'm not really defending the practice, but trying to give a little perspective.


> By relying on these small rigorously tested libraries they are avoiding the need to test their own code, and thus avoiding basic null check or conversion errors.

In practice, many, if not most, of these one-line modules don't even work correctly, and it is difficult to get people to collaborate on fixing them instead of just replacing them with a new one-line module that works slightly better. The concept of complex collaboration to perfect one line of code is foreign to a generation of developers who would rather throw code at the world and never look back. It's made all the worse because tiny bugs in these one-liners can almost seem dangerous to fix when you aren't really sure how they are being used: maybe it actually is better to not fix anything and just have people rely on the replacement, better module :/


Odd, the ones I use all have tests, thousands of downloads, and active bug trackers. If I reinvented the wheel, or copy pasted, I wouldn't get those things.


Doing a for loop is not equal to re-inventing the wheel.

Words and phrases have lost their meaning nowadays.

By the JS community's standards, if I wanted code to capitalize the 3rd or sometimes the 4th letter of a string, those would be 2 different npm modules.

Maybe they didn't forget how to program, as the article implies; maybe they never knew how to program in the first place.


Writing your own solution, when you could use a common widely known solution, exactly matches both writing your own left pad and inventing your own wheel.

> By the JS community's standards if I wanted code to capitalize the 3rd or sometimes 4th letters of a string, they would be 2 different npm modules.

If there were, it would have been published and people would be depending on it. Instead, people use the stdlib first, then big lego bricks like underscore, then (when they don't want the bulk of big lego bricks) smaller lego bricks. A single capitaliseMap() might be in there, but it's a much more specific case than padding.


A 1-line-of-code module matches your needs exactly, but just for today. When you have 100 such modules which don't quite work and which you don't quite need after a month (quite a long time for a JS project to exist untouched), then your dependency hell shows its ugly head.

If you had years of experience in a wide array of technologies and languages, you'd be aware of that which is, and has been, obvious to the rest of us for the past 2-3 decades.


> A 1-line of code module matches your needs exactly just for today.

Nothing about function length determines utility. There are many one-liners that match many people's needs repeatedly.

> When you have 100 such modules which don't quite work

This is worrying. How are you picking the modules you use? 100x popular modules from npm - with git repos, unit tests, READMEs, and hundreds of other users - beat 100x functions implemented in-house for NIH (or more likely, 100x functions copy pasted from Stack Overflow).

> If you had years of experience in a wide array of technologies and languages

Please don't assume things about other people. It's very rude, and it makes you look bad when you're wrong.


Worry not my friend. I'll take the risk of being wrong. Up to now, it seems that there's consensus on the fact that the JS/frontend world is the worst offender when it comes to engineering robust software.

At least there are JS programmers that seem aware of it and agree that this has to change. So there's still hope I guess.


> JS/frontend world is the worst offender when it comes to engineering robust software

I completely agree. I still can't understand why the language that we depend on for increasingly large-scale front-end work still works like this:

  [1, 10, 2, 21].sort() // returns: [1, 10, 2, 21]

It's 2016, software engineering is purported to be a profession, and we still have the worst tools imaginable.
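
For reference, the default comparator stringifies its elements; passing a numeric comparator gives the expected order:

  [1, 10, 2, 21].sort()                                  // [1, 10, 2, 21] - compared as strings
  [1, 10, 2, 21].sort(function (a, b) { return a - b; }) // [1, 2, 10, 21]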


Yep. And very large software developers have created languages that compile to javascript ... it is madness but it solves their problems.


Try

  .sort(sorts.numerical)

using npm's 'sorts' module.


I think the only thing you highlighted is why weakly typed languages have issues with readability. Looks like it sorted it fine to me.


I think there are people who are aware of the problem, but that problem is people reinventing the wheel a thousand times, and relying on copypasta and needless work rather than package management.


But surely the really concerning part is that despite their very wide usage, these dependencies turned out in some cases to be poorly coded functions indeed. Throughout this whole thread are examples where these one or two line "modules" have poor runtime performance, miss key cases in the one single thing they claim to do, and in the very worst cases, those small "modules" themselves are depending on many more "modules".

Diseased turtles all the way down.


> Throughout this whole thread are examples where these one or two line "modules" have poor runtime performance, miss key cases in the one single thing they claim to do, and in the very worst cases, those small "modules" themselves are depending on many more "modules".

Where? I've only seen the exact opposite: folk who don't believe in package management write their own naive 'reinvent the wheel functions' with those issues. Nobody I can see is quoting actual npm modules.


The OP's example (is-positive-integer) has tests, and is only three lines long, and is on major release version 3.X, because despite all that, it had serious problems in versions 1 and 2.


"Serious problems in versions 1 and 2" suggests that it might be a good idea not to write your own version 1.

But to be explicit about the history: version one[0] had dependencies. Version two[1] had no dependencies, and a comment on github suggests that it might be wrong but I can't immediately see how. Version three[2] is a different implementation, and adds another export.

[0] https://github.com/tjmehta/is-positive-integer/commit/3003f9...

[1] https://github.com/tjmehta/is-positive-integer/commit/b47e90... - it should return false for 0, so I'm not sure what the commenter is getting at

[2] https://github.com/tjmehta/is-positive-integer/commit/3cdfe1...


Case in point: the left-pad package has quadratic complexity.
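
For context: the implementation as published built the result by prepending one character per loop iteration (roughly "str = ch + str" in a while loop), which copies the growing string on every pass - quadratic absent rope-style engine optimizations, which the replies below discuss. A linear sketch (hypothetical helper, not the npm module):

  // builds the whole pad once instead of copying str on every iteration
  function leftPadLinear(str, len, ch) {
    str = String(str);
    if (!ch && ch !== 0) ch = ' ';
    var pad = len - str.length;
    // new Array(n + 1).join(ch) repeats ch n times (String.prototype.repeat in ES6)
    return pad > 0 ? new Array(pad + 1).join(String(ch)) + str : str;
  }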


Depends on the JS implementation in question. And I think node uses one that optimizes string concatenation, possibly making that the fastest way to do it.


Yeah, as far as I know in modern JS engines += is actually faster than using an array.


[citation needed]

It's really nice how we once again depend on "I think some guys do it like this" for performance. What happened to standards?

What's next? SEO for code?


The equivalent in C would then be replacing

#include <math.h> and "-lm"

with

#include <sin.h> #include <cos.h> #include <tan.h> #include <sinh.h> …

and "-lsin -lcos -ltan -lsinh …"

Which is nuts no matter what language you are coding in.


The difference with C is that with a good linker, functions you don't call are removed from the resulting binary (or are maybe not even in the binary if you are dynamically linking to the c runtime).

With javascript however, if you import a hypothetical "math.js" instead of just "sin.js", "cos.js" or "tan.js", then you'll need to download and evaluate a whole bunch of javascript that you might not need.

I'm not defending javascript, because I dislike it and avoid it where possible, but I can understand some of the reasons why short, specific modules came about.


Webpack 2 (probably the closest thing to a JS linker) can do that same tree shaking.


What you describe is the "-flto" option in GCC, and I believe the equivalent in JS is done by the Google Closure Compiler.


He wasn't describing Link Time Optimization (-flto), but basic linker functionality: including only the functions that are required. C/C++ has a weird compilation model where each source file is translated to machine code separately, with placeholders for unknown function addresses. Thus it is trivial to take only the required functions.

-flto, on the other hand, reverses this process to allow interprocedural optimization across different translation units.


@TickleSteve below me: "without this, removal is only performed within the compilation-unit"

Not quite. When you link to a static library (.a) with lots of .o object files in it, only those object files will be linked that are actually used by your program.

I first learned about this when I looked at the source of dietlibc, and wondered about every function being in a separate file. That enables the aforementioned trivial optimization, even without -flto.


I saw that discussion years ago about the G++ String implementation. That was a big .o file.


Thanks for the correction. Didn't know that.


He's describing unused-function removal... for which "-flto" is the correct option to use (on GCC). Like you say, without this, removal is only performed within the compilation unit, but my statement was quite correct.

@majewsky: what you're describing is more akin to "-ffunction-sections" and "--gc-sections", which will split each function into a separate section and garbage-collect those sections.
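
A minimal sketch of those flags in use (GCC driving GNU ld; file names hypothetical):

  # compile each function and data object into its own section
  gcc -ffunction-sections -fdata-sections -c util.c main.c
  # then let the linker garbage-collect sections nothing references
  gcc -Wl,--gc-sections main.o util.o -o app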


> #include <sin.h> #include <cos.h> #include <tan.h> #include <sinh.h>

I've written worse - at least those cover multiple variations each (or overloads in C++ for float/double/std::complex?/...)

While I'm not a fan of the enforced Java route of 1 file = 1 class, I do tend towards 1 file ~ 1 (main, public) thing - which might be a single function with no overloads. #include <string_left_pad.h>? Better than #include <everything.h>, which I see far too often...

I don't have to figure out which grouping of things my coworkers decided to throw something into if I can just #include by name - whereas aggregation headers more often than not will trigger a full project file-system search.

Unnecessary #includes don't accumulate nearly so much when you don't have to audit 100 functions to see if any of them use features from said header.

I don't trigger a rebuild of everything when the only things that #include stuff actually need it.

Lots of benefits!

> and "-lsin -lcos -ltan -lsinh …"

Or sin.o, cos.o, tan.o, sinh.o... which suddenly sounds a lot more reasonable. I wouldn't bat an eye at string_left_pad.o either.

Sure, I want to use tools to simplify the use of those micro-modules by aggregating them into libraries for convenience. But that's not a knock against micro-modules - just their packaging and tooling. Rewind time back far enough and I would've been manually specifying .o files on the command line and kvetching too...


Java doesn't enforce 1 file = 1 class but rather 1 file = 1 public class, which is exactly what you asked for. You can put as many private classes in the file as you want.


This isn't accurate either. Java does not enforce 1 file = 1 public class but rather 1 file = 1 public top-level class.

For example this is totally legit:

  // ClassA.java
  public class ClassA {

    public static class ClassA_Inner_Public_Static {

    }

    public class ClassA_Inner_Public {

    }

  }

  // ClassB.java
  public class ClassB {

    ClassA classA = new ClassA();
    ClassA.ClassA_Inner_Public classA_Inner_Public = new ClassA().new ClassA_Inner_Public();
    ClassA.ClassA_Inner_Public_Static classA_Inner_Public_Static = new ClassA.ClassA_Inner_Public_Static();

  }


Not exactly - I didn't ask for enforcement, and the irritation usually arises from wanting the occasional secondary type - such as an enum - to be part of the public interface (thus forcing the type itself to be public, naturally) without either:

1) moving it into a separate file, for the simplest two value one liner

or

2) making it interior to the other class, with all the verbose lengthy names (and semantic implications) that this might entail.

I want clear and concise names, so I lean towards #1. And if an enumeration gets shared between more classes, or collects enough values and comments to merit it, sure, I'll move it into its own file. But forcing me to shove it in a separate file before I've even decided on the best name (meaning I need to rename the file repeatedly as well), and before I'm even certain I'll keep that specific enumeration, is just meaningless friction.


So have libmath depend on libsin, libcos, libtan and libsinh. People who want the kitchen-sink version can get it. People who want just one specific submodule can depend on that. What's not to like?


Having a full set of math functions isn't kitchen sink, it's the right level of granularity for a module. If I want math functions I really want them to all be written by the same author and kept in lock-step as a unit.

Why don't you want the whole module? Because it's bloat? Surely it's far less bloat than one submodule per function?


A lot of javascript is going to be deployed to a website.

Every line of code that isn't used is bytes you're sending to every user unnecessarily.

> Surely it's far less bloat than one submodule per function?

How is that bloat? It's the same LoC whether it's 1000 modules or 1 after it's been compiled/minified.


Bloat in the source code, build system, dependency management, overheads per submodule, complexity.

I'm not arguing that there shouldn't be a math module but that you shouldn't split it up into such (imho) crazy small parts.

Closure Compiler can remove any unused functions.


It's not bloat in the source code - if you don't care which pieces you depend on you declare a dependency on math, if you do care you list the specific pieces.

It shouldn't be (impactful) bloat or overhead in the build system or dependency management. That stuff should just handle it.

Most of the time you don't care and can just declare the dependency on math and let the closure compiler handle it. But sometimes it might be really important that a project only used cos and not sinh, in which case you want to enforce that.


Ooops. Upvoted you by mistake. Why would it be so important?


Maybe you're running on an embedded processor that can do sin/cos/tan with high performance but sinh performance is bad. Maybe part of your project needs to depend on an old version of sinh with a correctness bug so that you can reproduce existing calculations, so you don't want any other parts of the code depending on sinh. Maybe your legal team doesn't want you using sinh because they believe the implementation might infringe some patent.


If basic, bog-standard functionality was built into the standard library, then it's not bloat that you have to deal with. 500 kb or a meg or two for a decent, full-featured standard javascript library isn't going to even make a dent in the install sizes of web browsers, even on awful mobile devices.


In the compiled artifact there might not be a difference. But developers tend to ignore the issues outside their scope. So yes, for a developer it might not make much difference. For people who need to make sure that you can compile and deploy, it's much more complexity.


There's a cognitive load to every additional thing you need to know to use a module. At a certain point, added flexibility becomes more trouble than it's worth.


But there is no cognitive load to having hundreds of modules in a crazy tree of dependencies?


What's not to like? The fact that every one of those dependencies is an attack vector when one of those package's maintainers gets compromised / project hijacked / bought off by some black hat operator.

It's easier to keep an eye on a small number of trusted parties than 200 random strangers.

You thought this SNAFU was bad...


That has nothing to do with how granular packages are. npm is broken in that it allows such things. Look at this spectacular example of providing high security for users: https://www.npmjs.com/package/kik

Someone has to do this manually?! If the package is not popular, no one cares? What happens if I send them an email and provide the same package with the same API (not trademarked and MIT licensed) but break it a bit on every update?

No one knows.


when those packages are not under your control, it has everything to do with how granular they are and by extension how many you depend on and thus have to trust/verify.

when was the last time you rechecked the entire source of a package you depend on after updating it?


> What's not to like?

That depends; in the case of mathematics libraries they have to be bundled together because of functional dependencies. It's either that or ignoring DRY principles -- which if you do that, you're ignoring the entire point of what breaking things into submodules is intended to do.


Separate interface and implementation. cos and sin might well use common underlying code (by depending on the same module), but they should expose orthogonal interfaces.


Oh, but javascript is dynamically typed, so it doesn't matter if your sin-module works with a different kind of arbitrary-precision decimal number module than your geolocation module, your cosine module or your opengl-draw-triangle or opengl-draw-circle modules... /sarcasm


It's not sarcasm if it's true!


OT but apt to this community - That's what I thought, but after a lifetime of 'correcting' people, I looked up the definition of sarcasm. Turns out that the sarcastic-sounding truths I'd been deploying for decades were indeed by-the-book sarcasm rather than 'I can't believe it's not sarcasm'.

Key quote from brilliant word-talking-guy Henry Fowler: "Sarcasm does not necessarily involve irony [but it often does] ... The essence of sarcasm is the intention of giving pain by (ironical or other) bitter words."

So, we circle back to Node.js by way of pain and bitterness. Sounds about right.


> basic null check

What's funny to me is that leftpad(null, 6, ' ') outputs " null".


Am I right that len(" null") here is 5, so it's working incorrectly?


I think the joke here is that javascript has a long history of thinking that this:

    null
is the same as this:

    "null"
...Which it isn't.
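
More precisely, it's string concatenation doing the coercion, which is how the null leaks into the padded output (illustrative snippets; a dot filler avoids the whitespace-collapsing discussed below):

  'foo' + null        // 'foonull' - the + operator stringifies null
  String(null)        // 'null'
  '..' + String(null) // '..null' - the shape leftpad(null, 6, '.') would build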


Thought it was a typo on my part... but HN seems to be trimming running spaces.


HN isn't: your browser is. Runs of whitespace are collapsed in HTML to single spaces unless specifically styled otherwise. You can verify that they're there by viewing the page source.


I like to use non-breaking spaces for that kind of thing, although they still might collapse that? At least it doesn't wrap non-breaking spaces.


No, they wouldn't collapse that: non-breaking spaces aren't considered whitespace in HTML. The side effect of that is that if you copy some text with non-breaking spaces, you'll get non-breaking spaces, so something that looks as if it contains spaces won't necessarily behave as such. In HN, if you need to quote something where space is significant, you're best off formatting it as a preformatted block by indenting it:

    Here is    some    preformatted text.
It might break up the flow of the text, but if something like whitespace is significant, that's probably a good thing.


I think it's a very valid (and concerning) point. JavaScript is made in a way that makes one afraid to implement isPositive or isArray.


> Yes they are, in the same way that a book in which every page consists of a single word is easier to understand than one with more content per page.

A more apt metaphor would be that separating a book into many small paragraphs, each serving a single purpose, makes the book easier to understand. Regardless, the book metaphor misses a lot of the nuance.

Of course taking the approach to an extreme would be detrimental. However, a vast majority of these small modules that are actually widely used consist of functions that may seem trivial at first sight, but actually contain a lot of corner cases and special considerations that a naive inline implementation could miss.

Sure, there are also plenty of trivial one-line modules on npm that don't fit such a description, but those are simply side effects of the unprecedented popularity of the platform and its completely open nature, and shouldn't be used to infer any kind of general trend towards "hypermodularisation", because very few reasonable developers would ever import them into their projects.


> A more apt metaphor would be separating a book into many small paragraphs that each serves a single purpose makes the book easier to understand

No, that would not be an apt metaphor for the problem he is describing.

> Regardless, the book metaphor misses a lot of the nuance.

Not if you are trying to understand the point he is trying to make.

Is there any cost to creating modules, uploading them to npm and using them in other projects? Clearly there is, as is shown by the world-wide breakage from yesterday. This is probably only one of many such failure modes.

The point is, breaking everything into micro-modules can have benefits and costs. Ultimately, it is an engineering trade-off.

Now, it is possible that npm modules are at the right level of abstraction, even though other module systems in the world are not this granular. If this is due to the special nature of JavaScript then the point must be argued from that perspective.


> No, that would not be an apt metaphor for the problem he is describing.

That's because I don't agree that the problem he is describing exists, or at least not to the degree he's describing, which was full of hyperbole.

> Is there any cost to creating modules, uploading them to npm and using them in other projects ? Clearly there is, as is shown by the world-wide breakage from yesterday. This is probably only one of such failure modes.

This was a failure on npm's side: it included functionality that allows users to trivially remove packages that hundreds of other packages depend on, something most major package management systems have decided is a bad idea.

> The point is, breaking everything into micro-modules can have benefits and costs. Ultimately, it is an engineering trade-off.

Agreed. And I don't think nearly as many people are erring on the extreme side of this tradeoff to the degree that he's describing.


> This was a failure on npm's side by including a functionality that allows users to trivially remove packages from a package management system that is used by hundreds of other packages, something that most major package management systems have decided was a bad idea.

That is emphatically not the problem. The author of those modules could have just as easily modified the code instead of deleting it:

    function leftpad(str, len, ch) {
        // NPM won't let me delete modules, so here you go
        return "";
    }
Now you'd have an even harder time figuring out what happened to your code than you did when the module just disappeared. What you're asking for is basically for the repository to adopt a policy of "all maintainers must be given free rein to get their modules correct and keep them maintained, but they shouldn't have the ability to do anything that causes problems downstream", which is impossible and frankly silly.

The problem is the dependency, not the precise mechanism by which that dependency screws you when it goes bad.


You can't republish with the same version number, so you would be protected if you're pinned to a specific release.
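
For instance (version numbers illustrative), pinning means an exact version in package.json rather than a caret range:

  {
    "dependencies": {
      "left-pad": "1.0.0"
    }
  }

whereas "left-pad": "^1.0.0" would also accept any later 1.x release.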


At which point you don't get bug fixes. This is an old problem and there is no good solution.


You get bug fixes when you upgrade, deliberately.


This is what everyone should do ideally, but that means your solution is conscientiousness, and good luck spreading that across all your dependencies.


> for the problem he is describing.

Because it is a strawman.


> By focusing on the small-scale complexity to such an extreme, you've managed to make the whole system much harder to understand

Well put. I've noticed a curious blind spot in how people account for complexity: we count the code that 'does' things much more than the code that glues those things together. This distorts our thinking about system complexity. A cost/benefit analysis that doesn't consider all the costs isn't worth much.

An example is when people factor complex code into many small functions, then say it's much simpler because smaller functions are easier to understand. In fact this may or may not be true, and often isn't. To get a good answer you must consider the complexity you've added—in this case, that of the new function declarations and that of the calls to them—not just the complexity you've removed. But it's hard to consider what you don't see. Why is it so easy not to see things like this? I think it's our ideas about programming, especially our unquestioned assumptions.

The complexity of glue code goes from being overlooked to positively invisible when it gets moved into things like configuration files. Those are no longer seen as part of the system at all. But of course they should be.


I just saw this insanity a couple of days ago:

https://youtu.be/l1Efy4RB_kw?t=343

He refactored a Java application into 1400 classes!


And to add to the mess, until npm introduced deduplication, every dependency recursively included its own dependencies as subfolders, so you get a bajillion tiny dependencies over and over (and sometimes with different versions to boot).

Frankly some of the comments I'm seeing really do reinforce his point: people really don't know how to program. I suspect this is the flipside of Javascript being popular and accessible - people can churn out products without really knowing what they are doing...


Sorry for the nitpick, but "hyperabstraction"?

left-pad isn't even slightly abstract.


It's not.

The hyperabstraction is in how even tiny functions like this, isPositive, isArray, etc. are being abstracted and turned into individual modules.


Again, maybe I'm nitpicking, but turning something into modules doesn't seem like abstraction... you could call it encapsulation, or modularisation, or maybe extraction.

But abstraction? I don't see how that word is connected to what's happening. Maybe you can explain it.


    Small modules are easy to reason about
They really aren't. When I'm reading is-positive-integer(x) and wonder if 0 is positive, I need to hunt down the definition of positive through two packages and as many files. And it gets worse if both your code and one of your dependencies require 'is-positive-integer' and I have to also figure out which version each part of the code base is using.

If you had written (x > 0) I would have known immediately. It also wouldn't be doing the same thing as is-positive-integer(x), but how many calls to is-positive-integer are actually correct in all the corner cases that is-positive-integer covers?

And then there's the other problem with dependencies: you are trusting some unknown internet person to not push a minor version that breaks your build because you and Dr. is-positive-integer had different definitions of 'backwards compatibility'.


JS does not have an Integer type, so (x > 0) does not work. Now if you add all those checks necessary to see whether you're actually dealing with an integer here, you get to a point where you cannot be sure anymore whether there aren't any weird type coercion issues there and need to start writing tests. Suddenly your whole positive integer check took 1h to write and you still cannot be sure whether there's something you didn't think of. I'd rather use an external library for that that's properly unit tested.


A module doesn’t solve that kind of type confusion, though. Static typing does. The next best thing in JavaScript is passing consistent types of values to your functions, so just write

  function isPositiveInteger(x) {
      return x > 0 && Number.isInteger(x);
  }
and always pass it a number. (Of course, this shouldn’t even be a function, because JavaScript is confused about the definition of “integer” – maybe you really want `x >>> 0 === x` – and apparently is-positive-integer is confused about the definition of “positive”, so it’ll be clearer to just put things inline.)


> A module doesn’t solve that kind of type confusion, though. Static typing does.

That's the root of all the problems, though, isn't it? JavaScript is just a terrible language.

Actually, that's not really fair. It's a great little language for writing late-90s Dynamic HTML, maybe just a little too powerful. And that additional power enables people to build cathedrals of cruft in order to Get Things Done.

I don't like Scheme (Lisp is IMHO better for real-world, non-academic work), but I still wonder how much better things would have been if Eich had been allowed (or had chosen) to implement a small Scheme instead of JavaScript. The world of web software engineering would be so profoundly better.

Maybe WebAssembly will save us all.


There are already some quite advanced compilers that treat JavaScript itself as just a web assembly language, so you don't technically have to wait for WebAssembly. TypeScript (some type safety), PureScript (lots of type safety) and GHCJS (MOUNTAINS of type power) all work right now, though the latter is still experimental grade.

But I don't think an initial choice of implementing a Scheme would have helped. The idea of +0 vs 0 vs -0 could just as easily have happened in a Scheme, same too for the reliance on stringy types. Those are symptoms of a rushed design and an unwillingness to accept the temporary sharper pain of invalidating existing bad code to produce a new, better standard (the exact same tendency - to dig in deeper rather than risk the pain of turning back - is literally how spies managed to convince people to commit treason).

Then of course there's also the great risk that, just like Scheme not-in-the-browser, Scheme-in-the-browser might never have widely caught on.


> There are already some quite advanced compilers that treat JavaScript itself as just a web assembly language, you don't technically have to wait for WebAssembly.

Yeah, there's even a Common Lisp which compiles to JavaScript …

> The idea of +0 vs 0 vs -0 could just as easily have happened in a Scheme, same too for the reliance on stringy types.

I don't necessarily know about these specific examples: the Scheme standards have been quite clear about their numeric tower and equality standards.

I think your general point about the hackiness which was the web in the 90s, and the unwillingness to break stuff by fixing things holds, though. And of course it wasn't just the web: I recall that the Makefile syntax was realised to be a mistake weeks into its lifetime, but they didn't want to fix it for fear of inconveniencing a dozen users (details fuzzy).

> Then of course there's also the great risk that, just like Scheme not-in-the-browser, Scheme-in-the-browser might never have widely caught on.

I dunno — would a prototypal language have ever caught on were it not for the fact that JavaScript is deployed everywhere? I can imagine a world where everyone just used it, because it was what there was to use.

And honestly, as much as I dislike Scheme, it would have fit in _really_ well with JavaScript's original use case and feel. And if the world had had a chance to get used to a sane, homoiconic programming language then maybe it might have graduated to a mature, industrial-grade, sane, homoiconic language.

But alas it never happened.


It's easy to say things like this. It's a lot harder to actually suck it up and live with what happened. Try to make progress; remember, that's what all of the offenders in this story are trying to do, whether or not it is perceived that way by the community.


Only comment worth up-voting I could find.


I would swap the expressions on each side of the '&&' around. While JS doesn't care (as it'll apply '>' to just about anything, even in strict mode), this is perhaps a stronger statement of intent:

    return Number.isInteger(x) && x > 0;
Thus the more general assertion (is it an integer) guards the more specific one (is it greater than zero).


Another question is whether you'd want to include negative zero as a positive integer.


x >>> 0 changes a negative integer to an unsigned (positive) 32-bit integer. You can have integers greater than 32 bits. They're represented internally as floats. Bitwise operators make 32-bit assumptions, with the top bit as a sign bit, unless you do x >>> 0, in which case you can revert a sign bit overflow back to a positive integer. If you have a positive integer that's greater than 32 bits, x >>> 0 === x fails. The expression (84294967295 >>> 0 === 84294967295) is false.

Don't even get me started on negative zero.
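
A few REPL checks (assumed here, following the ECMAScript ToUint32 rules just described) make that concrete:

  Number.MAX_SAFE_INTEGER === Math.pow(2, 53) - 1  // true - doubles are exact up to 53 bits
  -1 >>> 0                    // 4294967295 - the sign bit reinterpreted as unsigned
  (Math.pow(2, 32) + 5) >>> 0 // 5 - the value is reduced modulo 2^32, no NaN involved
  84294967295 >>> 0           // 2690588671, so x >>> 0 === x fails for this x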


I thought all numbers in Javascript were represented as 64-bit floats. Therefore you can do correct integer math up to 53 bits without getting into too much weirdness (bigger than that and the weirdness is guaranteed).

I didn't know the >>> operator converts to 32-bit. How does it do that? Mask off the top bits? Or give you a NaN if it's bigger?

But (and someone please correct me if I'm wrong), at the end of the line it's still represented as a 64-bit float internally and accurately.


The module checks that the positive integer is less than the Number.MAX_SAFE_INTEGER constant, while yours would lead to unsafe comparisons if we trust its validity as a number.


The name of the function, though, isn’t `isPositiveSafeInteger`; it matches `Number.isInteger` instead.

Or maybe you don’t just want safe integers, maybe you want `x >>> 0 === x`, or `x > 0 && (x | 0) === x`, or…. This is what I meant about JavaScript being confused about the definition of “integer”.


    JS does not have an Integer type, so (x > 0) does not work
It depends on what the surrounding code actually does whether (x > 0) works or not. Depending on the version of is-positive-integer you are using, a positive instance of the class Number may return true, false or cause an error.

PS. never mind that the implementation of is-positive-integer is extremely inefficient.


>TL;DR: Small modules are easy to reason about, and encourage code reuse and sharing across the entire community.

How does that even remotely apply to the "is-positive-integer" test, and even more so to "is-positive" and "is-integer"?

What's next? is-bigger-than-5? word-starts-with-capital-letter? add-1-to-a-number?


Not defending javascript because I dislike it immensely, but with javascript being a dynamically typed language, and with the way it handles implicit conversions between types, and with trickiness around NaN, Infinity and null objects, there are sufficient edge cases that writing a correct function to test if some variable contains a positive integer is not as trivial as it is with more sane languages, and is likely to trip up even experienced developers if they miss one or more of those edge cases.

Having pre-written modules that account for all edge cases that have been battle-tested by hundreds, thousands, or even millions of users does have merit.

The main problem here is with the Javascript language, which makes code evolve to things like 'is-positive-integer'.
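
A few of the edge cases meant here (assumed illustrative snippets):

  NaN > 0               // false - yet typeof NaN is 'number'
  Infinity > 0          // true  - but Infinity is not an integer
  '5' > 0               // true  - strings are silently coerced
  Number.isInteger('5') // false - no coercion here, unlike x % 1 === 0 tricks
  new Number(5) > 0     // true  - an object wrapper, not a primitive number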


(That's the whole point of using a dynamic language: what '>' actually does depends on the surrounding code. If you want '>' to type-check, you should be using a type-safe language to begin with.)

The issue I have with this is readability. Type '>' and I know exactly what it does, I know what implicit conversions are involved, and how it would react to a null object. Type 'isPositiveInteger' and I need to check. I can not read your code fluently anymore.


What if sometimes I want the overloaded, type relevant '>', and sometimes I want to do '>' if the inputs are positive integers, and raise an exception otherwise?


If you want to build is-bigger-than-5 yourself, I recommend pulling in fivejs[1] as a dependency.

[1] https://github.com/jackdcrawford/five


five.js was missing something. I fixed it. https://github.com/jackdcrawford/five/pull/234


I laughed way too hard at this :)

Well played.


Did you add dist/five.js to commit by mistake?


Nope, that was on purpose.


No way!


Lucky for you, I just made yeonatural.js, a project template generator that can instantiate fully AMD-compatible <n>.js modules.

We're doing a GSoC for efficient templating of real numbers though.


This module is a joke, right? Right?


I would have bet on "yes" until a day ago.


Yes it's a joke. In fact it's a joke on this very matter.


Why is English the only language in which "Five" is capitalized?


Five would only be capitalized at the beginning of a sentence.

You wouldn't write: "I saw Five of my friends last night."


add-1-to-a-number

This seems to be a very common example in educational materials about how to create a function in some programming language. Perhaps they should all be prefaced with "NOTE: you should not actually make such a function, because its triviality means that every time you use it you will be increasing the complexity of your code", or a longer discussion on when abstraction in general is appropriate and when it is not.

Ditto for any other abstraction --- I've found there's too much of an "abstraction is good so use it whenever you can" theme in a lot of programming/software design books and not enough "why abstraction can be bad" sort of discussion, which then leads to the "you can never have too much" mentality and the resultant explosion of monstrous complexity. Apparently, trusting the reader to think about the negatives and exercise discretion didn't work so well...


> Apparently, trusting the reader to think about the negatives and exercise discretion didn't work so well...

That's the whole point. If you use is-positive-integer, you won't have to think about the negatives!


It doesn't.

Of course we have a number of packages that are truly trivial and should probably be inlined. Npm is a completely open platform. Anyone can upload any module they feel like. That doesn't mean anyone else will feel the need to use them.

I think you'll find that the vast majority of small modules that are widely used are ones that also cover obscure corner cases and do benefit from widespread use and testing.


You don't have to be deliberately used to be widely used, though. I bet there are tons of developers who would never dream of using left-pad or is-positive-integer, but nevertheless have copies of them (multiple copies?) on their computers due to transitive dependencies.


No, you see "is-bigger-than-5" clearly does 4 different things. You need these modules:

  module.exports = exec("===");

  module.exports = function bigger(a, b) { return a > b; };

  module.exports = () => "than";

  module.exports = (a) => a.search('ABCD...');

.. or something like that. These are all plugins of course.


Do you seriously think that "word starts with a capital letter" is an easy function to write?

I feel like you haven't spent enough time with Unicode.


  return string[0] === string[0].toUpperCase();

You're welcome!


WRONG. Typical junior developer mistake. Even if we disregard trivial problems such as non-letter input or empty input, this will only work with English and a few other languages. This will TOTALLY fail with e.g. Turkish input (I ı vs İ i).


Not really. (string[0] == string[0].toUpperCase() && string[0] != string[0].toLowerCase()) is the correct way to approach this problem. If toUpperCase() and toLowerCase() don't handle Turkish letters, then that's a BUG in those methods, which should be reported and someone should freaking take responsibility for them and fix them.

Adding another module with a method does not fix the original problem, it just creates more problems for other people to solve.


  > var isCapital = function(s) { return s[0] === s[0].toUpperCase(); };

  > isCapital("שלום");
  true

  > isCapital("1");
  true

  > isCapital('\uD83D\uDE00'); // Smiling emoji
  true

  > isCapital("\u279B"); // Right-facing arrow
  true

  > isCapital("\u061Casd"); // Bidirectional control character
  true

  > isCapital(" ");
  true


If "true" is not a valid answer, what would've been one? Similar code in C# returns the same. E.g. Console.WriteLine("שלום".ToUpperInvariant()=="שלום") returns true.


Hebrew doesn't have upper and lower case, so the question "is this Hebrew character capital" is meaningless. So, the function in question should not return just a boolean value; it should have a way to return a third option. (Whether it's nil, a C-style return code, an exception, an enum or something else is irrelevant here.)

Actually, it just means that if you're wondering "does this word start with a capital", you're asking the wrong question. Instead, you should be asking "is this word a valid generic name", or "is this word the start of a new sentence", and implement these semantic queries in a language-specific way.


You have final-forms in Hebrew, although probably not at the same level of support for checking as you'd get with a script like Arabic.


That's true, but I don't think that sofits should be viewed as capital/not-capital letters: they're not semantically altering the meaning of the word in wider context, like capital letters do.


This will give

  startsWithCapitalLetter("!") == true
which is not what you want.


TIL 1 is a capital letter.


  string[0] === string[0].toUpperCase() && string[0].toUpperCase() != string[0].toLowerCase();


  typeof string == "string" && string.length && string[0] == string[0].toUpperCase() && string[0].toUpperCase() != string[0].toLowerCase();


Man, Javascript is AWESOME.


/^\p{Lu}/.test(str) would be much more compact, but Unicode property escapes still aren't available in Javascript regular expressions. It doesn't look like they will be anytime soon. I guess they have some staging system and it's still at stage 0 (https://github.com/tc39/ecma262/blob/master/stage0.md), grouped with lookbehind assertions, named captures, and mode modifiers.


Not that I agree with micro modules (I would rather see a set of js core libs), but your code fails with empty strings.


That code fails in a whole host of cases, kind of proving that sometimes trivial "just one line" functions aren't actually that trivial to get right.

edit: fortunately there's an npm module you can include to fix it...


Maybe F1axs knows that at this particular spot the string will never be empty? There are two philosophies in libraries; one is to check for every possible error in the lib function, the other is to state the pre-conditions in the documentation.


> Not that I agree with micro modules (I would rather see a set of js core libs)

Why not both? If you just want one part of a module, you can just import the dependency directly, if you want a whole core lib, you can do that.

Some people really like underscore and import that. I use ES6/7 functions for the most part, but underscore has some useful things I can import on their own.


string[0] isn't necessarily one code point.


The three cases you state aren't nearly as hairy as determining whether something is a positive integer.

Someone might do something like ~~n to truncate a float. That works fine until you go above 32 bits. Math.trunc should be used instead. Someone might do something like n >>> 0 === n, and using any bitwise operators will always bake in the assumption that the number is 32 bits. Do you treat negative 0 as a negative integer? Detecting negative 0 is hairy. So, to avoid bad patterns, it makes sense.
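
Assumed illustrations of those three hazards:

  ~~4294967296.5           // 0 - bitwise ops wrap at 32 bits
  Math.trunc(4294967296.5) // 4294967296 - safe across the full double range
  Object.is(-0, 0)         // false - one way to detect negative zero
  1 / -0                   // -Infinity - the classic negative-zero test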

For is-bigger-than-5(x), x > 5 is not hairy. For add-1-to-a-number(n), n + 1 is not hairy.

For word-starts-with-capital-letter(word)? That one is actually pretty hairy. There are programmers that would write a regular expression /^[A-Z]/, or check the charCode of the first character being within the A-Z range, amongst other solutions. An appropriate solution, however, would be (typeof word == "string" && word.length && word[0] == word[0].toUpperCase() && word[0].toUpperCase() != word[0].toLowerCase()), because the toUpperCase and toLowerCase methods are unicode-aware, and presently you can't access unicode properties using Javascript's flavor of regular expressions.


People have started to point out the existence of modules in this very vein. And, of course, write them. With two such together, you can probably get is-maybe-5.

* https://news.ycombinator.com/item?id=11351905

* https://news.ycombinator.com/item?id=11359302


If those functions are reusable or complex enough (partly due to deficiencies in javascript the language, sure) then why not?


>add-1-to-a-number

i++


You'd have a good point, if all of those tiny but probably useful modules were given the use you're describing.

Discoverability though is so poor that most of those modules are most likely just used by the author and the author's co-workers.

If a typical npm user writes hundreds of packages, how the hell am I supposed to make use of them, when I can't even find them? Npm's search is horrendous, and is far from useful when trying to get to "the best/most supported module that does X" (assuming that random programmers rely on popularity to make their choice, which in itself is another problem...).


The isArray package has 72 dependent NPM packages, so it's certainly not undiscoverable. The leftpad package gets 5000 downloads a month; that's quite a bit of testing for edge cases, compared to the none that I would have gotten had I implemented this myself.

Intuitively, this thread wouldn't exist if your assertion were correct.


Edge cases for a padding function? Your comments make me think that the author's point is valid.


Edge case for left padding - when you provide an empty string as the padding filler. It can go into an infinite loop if not written correctly.
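
A sketch of that failure mode (hypothetical naive implementation, not the npm module):

  // never terminates when ch === '', since str.length stops growing
  function naivePad(str, len, ch) {
    while (str.length < len) str = ch + str;
    return str;
  }
  naivePad('7', 3, '0'); // '007'
  naivePad('7', 3, '');  // infinite loop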


Write a damn unit test for your padding function. We should share unit tests, not one-line libs. With shared unit tests, libs can be one-liners and as fine-grained as we like, since the tests are something your users don't have to download.


> We should share unit tests, not one-line libs.

Most of the small libs I've shared are mainly sharing unit tests.

Here's a function I've shared on npm:

  var kind = function(item) {
    var getPrototype = function(item) {
      return Object.prototype.toString.call(item).slice(8, -1);
    };
    var kind, Undefined;
    if (item === null ) {
      kind = 'null';
    } else {
      if ( item === Undefined ) {
        kind = 'undefined';
      } else {
        var prototype = getPrototype(item);
        if ( ( prototype === 'Number' ) && isNaN(item) ) {
          kind = 'NaN';
        } else {
          kind = prototype;
        }
      }
    }
    return kind;
  };
The tests:

  suite('kind', function(){
    test('shows number-like things as numbers', function(){
      assert(avkind(37) === 'Number');
      assert(avkind(3.14) === 'Number');
      assert(avkind(Math.LN2) === 'Number');
      assert(avkind(Infinity) === 'Number');
      assert(avkind(Number(1)) === 'Number');
      assert(avkind(new Number(1)) === 'Number');
    });
    test('shows NaN as NaN', function(){
      assert(avkind(NaN) === 'NaN');
    });
    test('Shows strings as strings', function(){
      assert(avkind('') === 'String');
      assert(avkind('bla') === 'String');
      assert(avkind(String("abc")) === 'String');
      assert(avkind(new String("abc")) === 'String');
    });
    test('shows booleans accurately', function(){
      assert(avkind(true) === 'Boolean');
      assert(avkind(false) === 'Boolean');
      assert(avkind(new Boolean(true)) === 'Boolean');
    });
    test('shows arrays accurately', function(){
      assert(avkind([1, 2, 4]) === 'Array');
      assert(avkind(new Array(1, 2, 3)) === 'Array');
    });
    test('shows objects accurately', function(){
      assert(avkind({a:1}) === 'Object');
      assert(avkind(new Object()) === 'Object');
    });
    test('shows dates accurately', function(){
      assert(avkind(new Date()) === 'Date');
    });
    test('loves Functions too', function(){
      assert(avkind(function(){}) === 'Function');
      assert(avkind(new Function("console.log(arguments)")) === 'Function');
      assert(avkind(Math.sin) === 'Function');
    });
    test('shows undefined accurately', function(){
      assert(avkind(undefined) === 'undefined');
    });
    test('shows null accurately', function(){
      assert(avkind(null) === 'null');
    });
  });


If you've called it with an empty string there's probably a bug in your calling code which you'll want to test for anyway.


And when a bug is found, instead of fixing it in one place once, you now have to hope that all of your deps and all of their deps update themselves appropriately.


Which is great if you are one person and you fix the bug in your own code. What you're ignoring is that if everyone writes their own version, then the same problem exists. That bug has to be fixed across every (buggy) implementation. A well-defined dependency system where it is easy to discover and update to a new version isn't a matter of hope.


Remember that download counts for package indexes are often misleading. Between scrapers, deployments and continuous-integration systems (all of which download a package -- in the case of CI, sometimes it downloads on every single test run), hitting the thousands-of-downloads level is not hard at all.


"Discoverability though is... poor" seems like an argument for improving discoverability, not against having small modules.


Doesn't apply to Javascript, but something like Haskell's Hoogle (stackage.org/hoogle) could help with a small-module approach. It lets you query for functions matching a certain type signature, like this: https://www.stackage.org/lts-5.9/hoogle?q=%5Ba%5D+-%3E+a+-%3...


Discoverability happens with standardization.

Why search through millions of unknown packages when a standard library would have it all right there?


Because the standard library is where modules go to die. Slaving the release cycle of a library to the release cycle of the language is bad for the health of that library, especially in the case of javascript where code often has to be written to run on old browsers with old versions of the standard library.

Now having a metalibrary that depends on a curated collection of popular/useful utilities is probably a good idea - but isn't that exactly what many of the libraries that broke due to depending on left-pad were?


I've wondered why the official committee doesn't have a recommended list of libraries for specific purposes.

That way I can get a curated list of great functions I can trust.

Essentially a standard library for all intents and purposes.


Yeah, I think the natural question here is: why doesn't TC39 round up all these basic packages and add them to the standard library? I've seen other languages criticized for having too large a standard library, but if this is the alternative...

left-pad was released under WTFPL, so in this particular case there'd be no legal barriers to it. (And I'd assume that, for any libraries with a restrictive enough license, it wouldn't be a hard sell on TC39's part -- if they put out an announcement that they were going to do that, I'd go panning for street cred, and I wouldn't be the only one.)

An alternative could be to pull all this stuff together into one big bucket of Legos and package it with a code analysis tool at the beginning of your build process to strip out everything you don't need from the bucket... but I'd guess that's either already been done or a stupid idea for reasons I don't know yet.


It's not about legal barriers; it's about the incredible amount of work needed to precisely specify everything, which is required for any web standard.


Can we ever achieve this?

We can't exactly search the code to find out what it does; otherwise you'd basically be reading all the code to discover its true behavior, which negates the usefulness of modules in the first place...

Any search will have to rely on developer-made documentation, and/or meta data. This is great in theory, but documentation is rarely well maintained, and/or someone changes the code but neglects to update the documentation.

This leaves us with the situation we have today. A search that somewhat works, kind of, and mostly you rely on googling the feature you need and looking for what seems to be the most popular choice.

I'm not sure how this situation can be made better, especially if we continue down a path of having all these "one-liner" libraries/modules that anyone can make and stick out there into the unsuspecting world. When I need a square root function, and my search returns 800 one-liner modules, how am I supposed to pick one that I know is reasonably well done, and does what it says it will do, without looking under the hood - you'll end up just picking whatever seems to be popular...


In most languages, things as common as square root would be part of a standard library which everyone uses, so there wouldn't be any question. That also means it would obviously be well-tested and vetted, because everyone depends on it. Perhaps the solution is avoiding one-liners and moving towards more comprehensive libraries, but that doesn't seem to be what npm devs want (not a JavaScript guy so I'm just observing from the outside).


> Any search will have to rely on developer-made documentation, and/or meta data. This is great in theory, but documentation is rarely well maintained, and/or someone changes the code but neglects to update the documentation.

This is why types are better than textual documentation - the system enforces that they're kept up to date.


We could have something similar with full dependent types a la Idris: you could write a proof and search for a function that satisfies it. If such a thing were popular and huge amounts of Idris code were written, you could write only proofs for your program and the Idris system could go download and piece together your program!

That would be very cool, but I'm not sure how much easier it would actually turn out to be. Also to do anything useful you'd probably have to restrict the language to be non-Turing-complete.


Similar idea but what if you were to write your tests first and then upload them to a site that pieces together the required modules to pass them and generates an API against the modules.


Because finding code that passes arbitrary tests is undecidable in the general case.

(Same reason the pseudo-Idris language would have to be non-Turing-complete)


I was clearly joking. I thought the ridiculousness of the idea made that clear.


You're totally right. Most developers don't develop this way, and that makes the benefits of this approach far less pronounced than they would be if they did. It also doesn't negate the benefits entirely, of course.


Another benefit of small well understood components is that they are easier to write tests for. Do these npm modules have good test coverage? Did the leftpad module have a unit test that would have caught the issue?


No, leftpad has four test cases and they are all obvious, given the leftpad API.

Would I write tests for my own leftpad implementation if I were not farming it out to a dependency? It's possible. More likely I would want to understand the intent behind using leftpad in my application or library, and have tests for "formatTicket" or whatever it is that I'm actually doing. But for all the talk about tiny modules that cover surprisingly tricky edge cases, this is not one of them.


which issue? I thought we were talking about the random removal of left-pad and a couple hundred others.


> This GitHub comment by Sindre Sorhus (author of over 600 modules on npm) is my favorite writeup on the topic:

https://github.com/sindresorhus/ama/issues/10#issuecomment-1....

If anything, looking at sindresorhus's activity feed (https://github.com/sindresorhus) perfectly supports the author's point. Maybe some people have so little to do that they can author or find a relevant package for every single ~10-line function they need to use in their code, and then spend countless commits bumping project versions and updating package.json files. I have no idea how they get work done, though...


I think the gist of this whole discussion (at least the OMG WHY?!?! part) can be easily explained by an excerpt from your example comment that sums up, in a nutshell, the all too pervasive mindset I've seen over the years:

"...LOC is pretty much irrelevant. It doesn't matter if the module is one line or hundreds. It's all about containing complexity. Think of node modules as lego blocks. You don't necessarily care about the details of how it's made. All you need to know is how to use the lego blocks to build your lego castle. By making small focused modules you can easily build large complex systems without having to know every single detail of how everything works..."

By LOC he's referencing 'ye ole Lines of Code paradigm, and trying to make the point that in the end it just doesn't measure up to the prime directive of "containing complexity".

... and that's where I beg to differ.

What I think is being completely overlooked here is the net cost of abstracting away all that complexity... It's performance.

Every extra line of code, extraneous function call, unnecessary abstraction (I'm looking at you, promise), every lazy inclusion of an entire library for the sake of a shortcut to document.getElementById -- these all add unnecessary costs, from the network down to the CPU.

And who pays these costs?

The end users of your slow, memory-hogging, less-than-ideal set of instructions to the CPU that you call an application... but hey, it's easier for you, so there's that.


> ye ole Lines of Code paradigm

Little-known fact: the *y in old signs is actually a 'þ', which is an Old English letter named 'thorn' and pronounced like modern 'th.' Thus, the old books & signs were really just saying, 'the.' Incidentally, þ is still used in modern Icelandic.

Completely off-topic, but I þought you might be interested.


holy shit, I had no idea. That's cool.

edit: and I double checked, because the internet :)

http://www.etymonline.com/index.php?term=ye

it's true! that's awesome.


I really þink ðat we ought to use þ and ð in modern English, but oddly enough no-one agrees wiþ me:-)


Promise was clearly necessary, because without it we had hundreds of other less principled ways to implement async control flow :)


We had, and still prefer (looking at npm stats) the async module. Eg:

    promise.then(function(){}).then(function(){}) 
is not substantially different from:

    async.waterfall([function(){}, function(){}])
Promise advocates kept pretending async didn't exist though, and everybody was using callback hell.


There's a lot of old cruft using async (my current workplace's codebase among it) - bluebird also has 4x the downloads of async.

async is pretty terrible though: you cannot pass the results from one step to another, i.e. from one function in async.series to the next, so developers tend to pollute the enclosing scope by defining variables above and filling them in inside each callback. That prevents writing clean, isolated, testable functions.
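To illustrate (a hypothetical sketch; `getUser` and `getOrders` are made-up async functions):

    // With async.series, results don't flow from one step to the next,
    // so state leaks into the enclosing scope:
    var user; // shared mutable state
    async.series([
        function (cb) {
            getUser(id, function (err, u) { user = u; cb(err); });
        },
        function (cb) {
            getOrders(user.id, cb); // depends on the side effect above
        }
    ], function (err, results) { /* ... */ });

    // With promises, each step's result is the next step's input:
    getUser(id)
        .then(function (user) { return getOrders(user.id); })
        .then(function (orders) { /* ... */ });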


You can return the former from a function and the result is still a promise. Promises are reusable. They also capture exceptions in their context, and those can be handled at any point in the promise chain.

So promises are way more composable than callback based solutions.


Nobody pretended that async didn't exist; we just knew it was the best "solution" that ignored the problem.

Which was: throwing away the entire language's compositional features, including the concept of input arguments and return values, results in... poor composability, of course.


I thought the problem was callback hell. I've sat through a bunch of promise talks and the problem was never 'JS should be more compositional'.


Yes, the non-composability of callbacks leads to callback hell. Or to reinventing every single thing, but for callbacks. Like array.map (which works with promises), or array.forEach (also works with promises), or every single synchronous function (they all work when passed to promises).


If you solve callback hell for a series of sequential functions by using an actual series of sequential functions, you don't have nested callbacks and you didn't require Promises or composition - just a basic understanding of data structures and first class functions.

It seems you're defining callback hell as 'whatever promises solves' rather than it's common definition of over-nesting.


Callback hell is a "loss of composition" problem. Most languages, including JavaScript, have things called expressions. Expressions rely on functions returning their output. They become useless when functions give their output through a callback passed as their input: they don't compose with that frankenstein.

Node-style callbacks are a prime example of how, in the quest for "simplicity" and hatred of abstractions, it's easy to build a monster/frankenstein like caolan's async. Node callbacks were a self-contradicting philosophy: they abandoned all that's already in the language in the quest to avoid an abstraction that was not already in the language :)
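A tiny illustration of the point (`priceOf` is made up):

    // When functions return values, ordinary expressions compose:
    var total = priceOf(a) + priceOf(b);

    // When they deliver results through callbacks, `+` no longer applies
    // and the plumbing must be written by hand:
    priceOf(a, function (err, x) {
        priceOf(b, function (err, y) {
            var total = x + y; // and error handling is on you, twice
        });
    });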


Hrm, it's always seemed natural (and not frankenstein) that a function would be input as much as a number or string is. 'My name is Steve, my age is 7, and here's what to do once the database has saved'.

That said, I do see the inconsistency you're talking about in how expressions return values but callbacks don't, and you've certainly put the point very well.


> Maybe some people have so little to do that they can author or find a relevant package for every single ~10 line function they need to use in their code and then spend countless commits bumping project-versions and updating package.json files. I have no idea how they get work done though...

Rather than copy paste someone else's unmaintained thing and handle all the bugs, or write their own code and unit tests, they use the same library as a few thousand other people?


If you don't know how to write a function to left-pad a string without copy & pasting, you have other problems..


What makes you think people who don't write their own left pad function can't write a left pad function?


How is that superior to copypasting from a stackoverflow answer?

If it's a popular issue, lots of people have had the same problem; some will be nice enough to add their edge cases and make the answer better, though most will not. The same goes for contributing to a package.

With a package you would be able to update when someone adds an edge case, but it might break your existent code and that edge case may be something that is not particular to your system.

If you don't want to get too deep in the issue, you can copy paste from SO, just the same you can just add a package.

If you want to understand the problem, you can read the answers, comments, etc. With the package you rely on reading code, I don't know how well those small packages are documented but I wouldn't count on it.

The only arguments that stand are code reuse and testability. But it's code reuse at the cost of the complexity the dependencies add, which IMO is not worth the time it'll take you to copy and paste some code from SO. Testability is cool, but with an endless spiral of dependencies that quite often use one or more of the different (task|package|build) (managers|tools) that the node ecosystem has, I find it hard to justify adding a dependency for something trivial.


How is it superior? Simple.

How do I systematically make sure that I have the latest version of every stackoverflow code snippet? If it's a new post, it may not have all the edge cases fixed yet. So now I have to check back on each of the X number of snippets I've copied.

In the npm approach, I can easily tell if there's a new version. For prod, I can lock to a specific version, but in my test environment, I can use ^ to get newer versions and test those before I put them in production.
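Concretely (hypothetical version numbers): the manifest you deploy pins exactly,

    "dependencies": { "left-pad": "1.0.2" }

while the test environment uses a caret range, which accepts any 1.x.y at or above 1.0.2 so new patches can be vetted before promotion:

    "dependencies": { "left-pad": "^1.0.2" }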

If the edge case of new version of a package breaks my code, I've learned that I'm missing a unit test. Plus, the question isn't whether this bad thing might happen on occasion, the question is whether this approach is, on balance, superior to cutting and pasting random code snippets into my code. I think the downside of the npm approach is less than the downside of the copypasting from stackoverflow approach.

And every moderately useful npm package I've looked at has very good to great documentation.


The simple rebuttal to that is that modules that are collections of small functions are easy to reason about as well, and don't have the downside of increasing the metadata-management-to-useful-function ratio nearly as much. Why have just an average (mean) function when it makes sense to provide a median and mode as well? Even then, you might find that there's a bunch of extra math operations that are useful, and you might want to include some of those.


With the advent of ES6 modules' selective imports and tree-shaking, that's definitely quickly becoming a better approach. With the old CommonJS modules, you need to be concerned about overall code size, which is where small, single-purpose modules excel, and why this approach has proliferated to this degree.


I've been reading about tree shaking. I'm not at my laptop at the moment, so I can't settle this question by testing it. I'll toss it to the community:

https://news.ycombinator.com/item?id=11349606

Basically, how does tree shaking deal with dynamic inclusions? Are dynamic inclusions simply not allowed? But in that case, what about eval? Is eval just not allowed to import anything?

I've been reading posts like these, but they are pretty unsatisfying regarding technical detail: https://medium.com/@Rich_Harris/tree-shaking-versus-dead-cod...

Maybe someone else was wondering the same thing, so I decided to post it here before wandering off to the ES6 spec to figure it out.


> Are dynamic inclusions simply not allowed?

Correct. One of the motivating factors for ES6 modules was to create a format that can be statically analyzed and optimized.

> Is eval just not allowed to import anything?

Correct.

See this excellent writeup for more details regarding ES6 modules: http://www.2ality.com/2014/09/es6-modules-final.html
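For example, part of what makes this possible is that ES6 module specifiers must be string literals (a sketch; 'my-utils' is a made-up module):

    // Statically analyzable: a bundler can prove `mean` is never used
    // and drop it from the output.
    import { padStart, mean } from 'my-utils';

    // Not allowed -- specifiers can't be computed at runtime:
    // var name = 'my-' + 'utils';
    // import { padStart } from name; // SyntaxError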


Does code size really matter to node.js? And how common was CommonJS (no pun intended) on the client before ES6? Also, doesn't CommonJS bundling add significant overhead when we're talking about 5-line function modules?


Quite common actually (see Browserify)! In fact, the increasingly widespread use of npm and CommonJS on the client is one of the factors that motivated the npm team to transition to flat, auto-deduplicated modules in version 3.


There is little reason to bundle node.js code. It's an optimization, and a dubious one. In my experience, speed of execution isn't impacted at all. I haven't tested the memory footprint, but it seems vanishingly unlikely that dead code elimination would have any substantial effect.

There's probably not any overhead in bundling, though. Not in speed or memory, at least. The overhead is in the complexity: the programmer now has one more place to check for faults, and debugging stack traces now point to a bundled version of code instead of the original sources.

The case where none of this is true is when require() ends up being called thousands of times. Startup time will be correspondingly slow, and bundling is the cure. But that should only be done as a last resort, not a preemptive measure.


That entirely depends on your bundler. Newer bundlers like http://rollupjs.org/ combine everything to avoid this.


Just sugar-coated Kool-Aid I'm hearing. Community benefits? First of all, I'm coding to get paid, and this recent madness proved that the JS ecosystem is semi-garbage. Back to the original question: are people really so unable to program that they need packages like left-pad or is-integer, which had their own dependencies? Before writing those cool toolchains (which would likely only work on a specific machine with a specific setup, for all the real-world testing the community does), can we at least pretend that we know the computer science basics?


People need modules like leftpad to tick the "maintains moderately popular open source project" checkbox. Instant hirability.

I don't want to claim that it would be a directly calculated career move, more like starting a blog: you admire a good blog, you want to be more like the blogger, you start your own one. On the dim chance that it will become both good and not abandoned after the third post. Nanomodules can be just like that, the air guitar of would-be rockstar coders. This is certainly not the only reason for their existence (and even the air-guitar aspect has a positive: low barrier of entry that might lead to more serious contribution), but the discussion would be incomplete if it was ignored.


Yep, I think this is a huge, HUGE part of it.

Step 1: Create a culture in which "having open source contributions" is a requirement to entering said culture.

Step 2: Remove all friction from introducing open source contributions into the culture.

Step 3: Watch the Cambrian explosion.

Step 4: (Two years later) Point to the Cambrian collapse and how the new hot thing will solve everything.

I don't know what sort of shit show Step 4 will turn into, but it will also definitely be the result of folks taking a simple and good rule of thumb (this time, it won't be "small and composable"), and implementing it without ever stopping to think what problems it solves and doesn't solve.


That scenario in the last paragraph is an uncomfortably convincing view of the future. Do you think it could be avoided by an additional step like the following?

Step 5: a mechanism for curated metamodules

This would not have changed much for the current situation, but it might help cultivate a "too important to play it fast and loose" mindset that would.


No, probably not. On the contrary, what I'm suggesting is that the solution of Step 5 you suggest would be "the hot new thing" of Step 4.

It's about unforeseen consequences of a solution to a problem. That's what I mean by "I don't know what it will look like", because it will definitely solve the problem this thread is dissecting, and the problem it introduces will be a whole new vector.

But its selling point (the way it solves our current problem) will be a simple idea that is taken to its logical extreme by people who don't think critically, and then it will be time to solve that problem.

That is, I see the underlying recurring problem as stemming from the cultural forces –– how we educate people, how we make ourselves hireable –– that enable even very smart people to be shortsighted and immune to learning from the past.


I would give those candidates the FizzBuzz problem just to see if they can actually code from scratch or `require('fizz-buzz-solver');`. I care more about their competence than what 1 or 10-liner packages they maintain.

On the contrary, I would certainly mark "maintains moderately popular open source project" as a calculated career move, just like how prostitutes would dress up right for the clientele. Sorry for the crude example, but I am not placing the blame on the devs here but on the absurd notion of employers considering a GitHub repo the 'in' thing before actual capability. Candidate - Brah I got 10 of those packages with 10mil downloads in total but I can't code for shit. Hiring manager - Since we are suckers who look at stats, you're hired!

The problem with this nanomodules approach is that it results in complacent and naive developers that eschew the basics and just publish random crap. Anything can make it to be a NPM package and be judged by...popularity? Since when is code turning into pop music :)?


I'd hire the people who use fizz-buzz-solver and spend their time doing original work rather than rewriting something for the sake of ego.


Here's where expectations should be set for both candidates and interviewers. The point of solving FizzBuzz problems is to open up angles for discussion. What would you do differently with this problem, at this scale, for this integration, etc.? If someone writes crude code, I'd ask if they follow style guides or conventions. If someone writes overly fancy and clever stuff, I'd ask if they ever have to maintain shit. There is not much value in talking if all candidates would write "import fizz-buzz-solver from 'fizz-buzz-solver';".

You see, the whole FizzBuzz is nothing more than a prop to allow me to find out what this candidate can actually do. Anyone can include a package; critical thinking differentiates the smart ones from the pack. Heh, if there were a package for any and everything, most devs would be flipping burgers instead of doing original work.

Believe me, most problems you think of as original aren't original. It's not about solving problems either; it's about repackaging the solutions into something to sell, or making it popular enough that you can make money off it.


if leftpad were in the node stdlib, would you still be making the same point about knowing computer science and using leftpad?

in python: https://docs.python.org/3.3/library/stdtypes.html#str.ljust


Yes, until people stopped publishing isInteger and isArray as `packages`. Plus, stdlib is standard library. Can you cite one for JS? NPM is not.


They do NOT encourage code re-use because the effort required to understand your need, refrain from writing code, and hunt down a module in npm, far outweighs the effort to just write the code and stick it in your in-house utils library.


Yeah but, why do you need your own in-house utils library? Node needs batteries included!


Because writing code is what programmers (should) do. If something is simple, then it is simple and won't take up time and energy. If you don't know how to do it, then learning how to do it is worthwhile -- especially if it turns out to be simple. If you make a mistake, then someone on the team must look at the code and understand the mistake. This means that the team learns that things that seem simple are sometimes complex and they learn how to recognise their mistakes. When there is a mistake, someone has to fix that mistake. They learn how to fix the problem. Other people on the team must review that fix and they also learn how to fix the problem.

There is always a balance between getting some extra productivity by using somebody else's code, and getting the knowledge, expertise and control of writing it yourself. Far too often I see young programmers choose something off the shelf so that they don't have to understand it. This is a recipe for disaster -- for your project, for your team and for yourself.

IMHO, as programmers, we should err on the side of writing code. Often it is obvious that you should use standard libraries, or a particular framework or or another reuse library. But where it is not really obvious, I would start with writing your own code and see where it gets you. You can always replace it later if it doesn't work out. I think you will find that you and your team will be miles ahead if you take this approach.


Never understood the fun in importing everything and chaining together someone else's code. Sometimes if I'm building something trivial, a bit of test code or something for fun, I'll write up all of it, even if I have to reference others' code. You learn so much more.


Well, it does encourage code re-use, but the issue is that the benefits of code re-use here are far outweighed by the downsides; you're better off just writing it yourself.


I think that there's a certain wishfulness bordering on naïveté to this pursuit. We tell ourselves that we are managing complexity by having small, well-tested units that are reused by an entire community. But software complexity continues to exist. We think we are mitigating the complexity of our own software. But the complexity really just shifts to the integration of components.

Anyone that has been around long enough has war stories about getting two relatively simple pieces of software working with each other. In my experience, integration problems are often the most difficult to deal with.


But this doesn't address the author's point that each module, each dependency is an additional point of failure.


There are definitely trade-offs involved when making decisions to inline vs import.

Strictly speaking, you're definitely right. Each dependency is an additional point of failure, but so is each additional line of code you inline.

The benefit of these small modules is that they're very thoroughly tested by the entire community. I'd say in most cases they will be much more robust than any solution an individual developer can spontaneously whip up.

Of course, these modules don't exist in a vacuum, and infrastructure failures such as this one do pose a point of failure you wouldn't have with inlined code, but I think in this particular case it had more to do with npm failing to follow the package-management best practice of not offering an unpublish feature.


>There are definitely trade-offs involved when making decisions to inline vs import.

If someone can't make the right trade-off regarding "should I import a module called isArray or is-positive-integer", then they should not be programming...


I think a lot of the confusion here is that in most languages "isArray" or "is-positive-integer" are simple functions or simply built into the language.

Dynamic typing and the diversity of devices and virtual machines mean that the ability to simply tell whether something is an Array can take multiple lines of code and a considerable amount of time to test.

Delegating this to the community is arguably the only sane choice, however strange that may be to someone coming from another environment.
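For example, the classic pre-`Array.isArray` check had to handle arrays created in another frame or realm, which fail `instanceof Array` because each frame has its own `Array` constructor. Hence the well-known toString trick:

    // Works across frames/realms, unlike `value instanceof Array`:
    function isArray(value) {
        return Object.prototype.toString.call(value) === '[object Array]';
    }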


Yep. Everyone mocking NPM and the javascript community for this ought to retire, they clearly don't understand how to program in this day and age. Or is that not what you meant?


author of over 600 modules and 1200 lines of javascript.


Wow, you were not kidding. I chose a random package of his [1], and indeed: a grand total of 1 LOC...

This is madness.

[1] https://github.com/sindresorhus/float-equal/blob/master/inde...


and it uses another 1LOC package. https://github.com/sindresorhus/number-epsilon/blob/master/i...

and no, the edge cases argument does not apply to either of them. Wow.


Would you like to memorise that number yourself to use in your app?


Memorise what? How about this for a dependency:

https://www.google.com.au/search?q=es6+epsilon


That module exists precisely because this is a new addition to the language, it's a polyfill for backwards compatibility.
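For reference, the polyfill essentially boils down to one constant, and float-equal to one comparison (a sketch, not the packages' exact sources):

    // Number.EPSILON is 2^-52: the gap between 1 and the next
    // representable double. Fall back for pre-ES6 engines:
    var EPSILON = Number.EPSILON || Math.pow(2, -52);

    function floatEqual(a, b) {
        return Math.abs(a - b) < EPSILON;
    }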


> Small modules are easy to reason about

Flip-side: it isn't easy to reason about a large and complicated graph of dependencies.


I'm not at all clear why this blog post is touted as evidence that the tiny modules approach is correct. I think it might be all the people after it congratulating him.

"It's all about containing complexity." - this completely ignores the complexity of maintaining dependencies. The more dependencies I have the more complicated my project is to maintain.

Dependencies are pieces of software managed by separate entities. They have bugs and need updates. It's hard to keep up to date.

When I update a piece of software I read the CHANGELOG, how am I expected to read the CHANGELOG for 1,000 packages?

Depending on a bigger package (handled by the same entities, who write one changelog, in the same form) is more straightforward.

I'm not saying this is wrong - but there's a balance here, and you must not ignore the complexity of increasing your number of dependencies. It does make things harder.


My problem with this, as an occasional JavaScript developer, is "discoverability" (as many others have mentioned). If I decide I need a left-pad function, and search on NPM, how do I choose which one is best? The one with the most downloads? Not always the best indicator of quality; perhaps it's just the oldest.

Not to mention the cognitive overhead of stopping programming, going to NPM, searching/finding/installing the module, then reading the documentation to understand its API. Isn't it simpler to `while (str.length < endLength) str = padChar + str;`? How can there be a bug in that "alternative naive inlined solution"? Either it works or it doesn't!
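Spelled out, that inline version is just:

    // Prepend padChar until the target length is reached.
    function leftPad(str, endLength, padChar) {
        while (str.length < endLength) {
            str = padChar + str;
        }
        return str;
    }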


I don't see how your linked comment brings more to the table than the basic arguments for code reuse.

But naturally, with any code reuse there's a benefit and a cost to each instance of internal or external reuse.

The benefits of external reuse include the ideal reliability you describe, as well as not having to create the code. The costs of external reuse include having your code tied not just to an external object but also to the individuals and organizations creating that object.

I think that means that unless someone takes their hundreds of modules from the same person or organization, and is capable of monitoring that person, they are incorporating a layer of risk into their code that they don't anticipate at all.


Percentage of module owners who you can't trust to not screw up their module: H

Risk of indirectly hosing a project with N module owners providing dependencies: 1-((1-H)^N)

Let's say H is very small, like 0.05% of module owners being the type who'd hose their own packages.

3 module owners: 0.15% chance your own project gets hosed

30 module owners: 1.49% chance your own project gets hosed

300 module owners: 13.93% chance your own project gets hosed
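(Reproducing the arithmetic in code:)

    // risk = 1 - (1 - H)^N
    function hoseRisk(h, n) {
        return 1 - Math.pow(1 - h, n);
    }

    hoseRisk(0.0005, 3);   // ~0.0015 -> 0.15%
    hoseRisk(0.0005, 30);  // ~0.0149 -> 1.49%
    hoseRisk(0.0005, 300); // ~0.1393 -> 13.93%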

Keep in mind it's not just your direct dependencies, but your entire dependency chain. And if you think a module owner might hose some modules but not others, maybe N is actually the number of modules, in which case 300 starts getting pretty attainable.

Upshot:

Not everyone is trustworthy enough to hang your project on. The more packages you include, the more risk you incur. And the more module owners you include, definitely more risk.

The micromodule ecosystem is wonderful for all the reasons described, but it's terrible for optimizing against dependency risk.

Takeaways:

Host your own packages. That makes you the module owner for the purposes of your dependency chain.

If you're not going to do that, don't use modules dynamically from module owners you don't trust directly with the success of your project.

I love collaborative ecosystems, but some people suck and some perfectly non-sucky people make sucky decisions, at least from your perspective. The ecosystem has to accommodate that. Trust is great...in moderation.


I agree with you, besides the tree-shaking (nice word btw). It's like NPM shaking the Christmas tree and then saying "Have fun cleaning up the floor". Remember that NPM is not like apt-get, where the packages are Managed for you by an expert. In NPM you have to manage the packages! And where you can't have NPM and build dependencies, like in production, maintenance now becomes much harder!


My problem is one of productivity. There's already a standard library, and if it's a language I've been using for a while, I probably remember most of it. I can pretty much go straight from my thought of what I want done to typing it out, much like typing English. If you force a 'cache miss' and force me out of my head and into a documentation search, well, that's going to have a significant effect on my performance. If the function is well-named, it has less of a cost in reading the code, but there's still a cost, because what if it's slightly different? I have a pretty good feel for the gotchas in much of the standard library of the languages I use. I have to stop and check what the gotchas of your function are.

Yes, at some point the complexity cost of gluing together the standard library functions to do something becomes greater than the lookup cost of finding a function that does what I want; but I am saying that adding more functions is not costless.


Small modules are also often the result of dealing with different JavaScript implementations over the years. I've recently seen a simpler version of left-pad that would not have worked on multiple Safari versions or in <IE6.

The derision is unwarranted, due to a failure in critical thinking from otherwise smart people.


It's interesting to me that people find this convincing. I find it to be complete insanity. People need their libraries, but putting everything in tiny buckets is just not working. Why aren't people working on good utility libraries instead?

There's even some guy calling for a "micro-lodash". To me, as a Python engineer, lodash [1] is already a tiny utility library.

I guess it's also about the fact that JS is a pretty bad language. That you need a one-line `isArray` dependency to `toString.call(arr) == '[object Array]'` is crazy.

[1] https://lodash.com/docs


There is a practical reason for tiny modules in client-side JS that doesn't exist with Python: page load times. If your base layer is going to have third-party dependencies, they better be tiny and do only and exactly what you need.

That said, lodash core is only 4KB, and lodash gives you the ability to build it with only the categories or exact functions you want, so I don't understand what the purpose of a "micro-lodash" would be.


There are tools that include only the stuff you actually use, which you can slot into your build process, though.


I get that author's point, but do you REALLY need a dependency module that tells you the type of something?

That's not a Lego block; that's an excuse.


When using a language where

    isArray(arr)
turns into

    toString.call(arr) == '[object Array]'
...then I guess that's more reasonable than if you use something sane.


The alternative to small modules is not no modules at all, it's libraries.


Why don't they put everything together and call it "commons" like in Java? (commons-lang, commons-io, etc.)


> author of over 600 modules on npm

Nope.


That argument sounds like an enterprise architect explaining why the AbstractReusableBeanFactoryGenerator is going to increase productivity.


It brings to mind an apt proverb: missing the forest for the trees.

Too many modules are not necessarily a good thing. They may appear to get rid of complexity, but in reality you will have to face that complexity some level above, and the sheer number of small modules will most probably add complexity of their own.


The reasoning makes sense for small modules that might change in the future, but as he says himself, most of his modules are finished and will never change. That makes many arguments in his post moot and the modules should probably be snippets instead that are implemented directly.


Author of "is-positive-integer" here. I will admit the implementation is pretty funny, but I move-out all single-purpose utils out of projects to modules for a bunch of reasons. DRY is the most obvious one, but one that may be less obvious is for better testing.

I move out modules so I can write really nice tests for the independent of the projects I am using them in. Also, I tend to write projects w/ 100% test coverage, breaking out utils allows me to test projects easier and faster.

Also note, the implementation of this module changed a few times today. With it being open source and having the collaboration of other engineers, we ended up with a very performant version, and discovered interesting quirks about "safe integers" in JS.


"DRY is the most obvious one, but one that may be less obvious is for better testing."

Isn't that why we just write functions? Turning simple functions into entire modules just adds an unnecessary level of abstraction that helps nobody.


Of course that's why we write functions. But we can't share a function with others and across projects... so we make it into a module to do that.

It's like (micro)crowdsourcing (the smallest components of) the standard library that JavaScript never had.

Some bit of logic could go from being DRY in one project, to DRY in all of my projects, ... to eventually be DRY in all projects. It's globally DRY.


Two points:

- Breaking out is-positive-integer hasn't reduced the number of paths to test. You have not gained anything, you've added overhead.

- 100% test coverage is rarely a good thing. It is required for safety-critical areas like avionics. I can guarantee that your JS code is not making it into any safety-critical environment!


>hasn't reduced the number of paths to test

But it has: it's now tested, and you don't need to write tests for it next time you want to use it.

>100% test coverage is rarely a good thing

Not sure what your argument is here. Sure, it may not be helpful but are you saying that one should strive for less than 100% coverage, because "it's rarely good"?


100% coverage is rarely worth the time, unless it's an engine controller or something else that needs the assurances.


If tjmehta likes to cover his open source code 100%, under whatever metric, by God let's encourage him in that and not start a discussion about the economic sense of it!


> 100% test coverage is rarely a good thing

Do you mean rarely useful, or actively harmful?


Both in a way.

What happens when you have a "100% test coverage" requirement is that people don't think about the tests; they simply write tests that force the code down every path, without thinking about whether it was intended to operate like that.

For example, if is-positive-integer contained a silly branch like `if (value == 23) return false`, a requirement for "100% test coverage" would simply result in someone creating a test for that condition instead of considering whether it was actually a fault.

100% test coverage != no faults.

What you have done by generating 100% test coverage is effectively 'frozen' your code and made it harder to implement any changes.


I would say that what's most harmful is using code coverage as the primary measure of quality of your tests. It's that mindset that puts coders in a mode where they write tests which upon failure mean nothing significant (unless it finds a runtime error). It's a type of fallacy. Instead of considering if your tests verify your real-world requirements, you feel like your job is done because you have reached 100% line coverage.

It's like following someone's car and congratulating the driver that he drove correctly, without considering if he reached the correct destination.


Just to pick at a nit with you, it's a little meaningless to say "100% test coverage", without specifying whether you're talking about line coverage, branch coverage, path coverage...

This is especially true for one-liner modules in js, where any test at all might let you claim 100% statement coverage, without accounting for branches or loops within method calls.


With how weird javascript can be about numbers, it actually didn't surprise me that your module existed.


Actually, that's a good reason to use trivial functions like the one described. Hopefully the author has discovered all of the quirks in JavaScript that might affect this. It will likely be a lot more tested than any version I would write.

As someone who spends 80% of my time on the back end, I often get bit on the ass by JavaScript's quirks when I need to add some front-end stuff.


That's interesting. JavaScript has enough quirks to warrant an is-positive-integer module.


See:

  isPositive = (x) => +x === x && x > 0
In which conditions does it return a wrong value? I haven't found any.
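For reference, how it behaves on a few awkward inputs:

    isPositive(5);        // true
    isPositive('5');      // false: +'5' === '5' fails (number vs. string)
    isPositive(NaN);      // false: NaN is never === to anything, itself included
    isPositive(Infinity); // true -- whether that counts as wrong depends on your spec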


It would be really, really great if this function was not in its own module, but was part of a larger library in which all such functions of related modules were captured, without the (cognitive and other) overhead of the separate packaging.


A fun r/programming subthread about this package: https://www.reddit.com/r/programming/comments/4bjss2/an_11_l...

Those are just the three top-level dependencies. The package has 9 recursive dependencies.

There's also a nice table explaining how a package called "is-positive" managed to reach version 3.1.0.


This is just baffling...


Similarly, the `average` package on NPM is one that I came across:

https://www.npmjs.com/package/average

    var average = require('average');
    var result = average([2, 5, 0, 1, 25, 7, 3, 0, 0, 10]);
 
    console.log('The average for all the values is:', result);
It's hard to not stare at that in complete disbelief; someone thought that it was worthwhile to create a package for determining the mean of an array of numbers.


You know what's worse? JavaScript numbers are all floating point, which means integers are only exact up to 53 bits. So you might think this library would try to address the issues that can cause, but nope: this is the average statement you'd write if you didn't know what a mantissa was and had never heard of big.js, bignumber.js, decimal.js, crunch.js or even strint (which represents integers as strings, because wtf not).


Not to mention the fact that naively adding many floating point numbers results in serious error accumulation, which is why things like the Kahan summation algorithm[1] exist.

[1] - https://en.wikipedia.org/wiki/Kahan_summation_algorithm
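A minimal sketch of the compensated loop:

    // Kahan summation: carry the rounding error forward instead of losing it.
    function kahanSum(values) {
        var sum = 0;
        var c = 0; // running compensation for lost low-order bits
        for (var i = 0; i < values.length; i++) {
            var y = values[i] - c;
            var t = sum + y;
            c = (t - sum) - y; // zero in exact arithmetic; the rounding error in floats
            sum = t;
        }
        return sum;
    }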


Every once in a while, I come to HN, and inevitably find a post about some new JavaScript technology on the front page, wherein I find some comment that gives me a new appreciation for the web app language I use at work, which fortunately is not JavaScript.


I'd be interested to know what language it is.


It must be one of: Elm, PureScript, GHCJS, Fay or Haste


What is the web app language you use at work? I assume you're talking about something that runs on the client -- a web browser -- so it probably compiles down to JavaScript, right? JS literally has no numeric type that isn't a float, so how does your language escape this fundamental limitation?


Not OP, but I used Google's Long.js (among other things) in a project[0] to get around the limitation of not being able to do what you can in Python:

    import fractions
    print str(fractions.Fraction.from_float(0.3333).limit_denominator(10))
Almost anything that compiles to JS would be better than by-hand JS; I'd love to use ClojureScript with re-frame at work...

[0] http://www.thejach.com/pseudo-public/webgl_coursera/assignme...


asm.js is proof that a compile-to-JS language doesn't necessarily have to discard the correctness of its number types. A statically typed language can do a lot. Integer values can be cast to int32 via `| 0`, and longs can be emulated with a pair of numbers (GWT does this).

The tradeoff is speed, especially for dynamically typed languages. Having to check your operand types in JS before doing work is a performance killer.
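The `| 0` trick in a nutshell (a sketch):

    // `x | 0` truncates to a 32-bit signed integer -- the hint asm.js-style
    // code gives the engine so values can stay in integer registers.
    function addInt32(a, b) {
        a = a | 0;
        b = b | 0;
        return (a + b) | 0; // wraps on overflow instead of widening to a double
    }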


And WebAssembly means that soon it will be practical to use any language on the client or server side.


It already is on the server-side. Feel free to use C; at least it would be performant.


It's easy to write code that does the wrong thing quickly in any language, and IME that tends to be the usual result of choosing C.


ok, you are aware that the vast majority of the world's software is written in C.... yes there are problems with it, but it is still suitable for the real-time, embedded, safety-critical software of the sort I've been making quite successfully for close to 30 years.

Don't fall into the 'C is the devil' trap, any tool can be misused.


Straight-up C is not at all suitable for safety-critical software. C plus various bolt-on tools for static analysis and the like can be usable, but is always going to be less effective (IMO) than a unified language where every tool is working according to the same rules.

There might be a few legitimate use cases for C, but I've seen people pick it for the wrong reason so often (and using C because "it would be performant" is entirely invalid IME).


You would have to argue with the overwhelming majority of safety-critical software that is and has been for decades, written in C...

Of course, static analysis is always used in combination with proper coding style... but that is just the normal (professional) C development environment.


> You would have to argue with the overwhelming majority of safety-critical software that is and has been for decades, written in C...

> Of course, static analysis is always used in combination with proper coding style... but that is just the normal (professional) C development environment.

>> Straight-up C is not at all suitable for safety-critical software. C plus various bolt-on tools for static analysis and the like can be usable, but is always going to be less effective (IMO) than a unified language where every tool is working according to the same rules.

Pretty sure you've just restated GP's point in your second paragraph.


You just have to use the right libraries:

https://github.com/Adaptv/ribs2


That's not true. In most JavaScript implementations, integers will be represented by real 32-bit integers.


How does that help? Besides they overflow to doubles if they get bigger than 2^31-1 anyway.

The problem with a naive average like this is that if you're averaging a bunch of big numbers there is a very high chance of overflow when you're summing, so even if all the numbers fit in 32 or 53 bits your calculation will be off.

If you're not averaging thousands of large numbers, why are you even using a package instead of nums.reduce((a,b)=>a+b) / nums.length anyway?


How does storing integers in 32-bit values help the issue of integers being truncated to 53 bits?


53 bits that cannot overflow (it only becomes less accurate) is not enough but 64 bits are, even with the risk of overflow?


Who said anything about 64 bits?


The complaint was that JavaScript doesn't have 64-bit integers but only 53.


Clearly 2^32+2^32 < 2^53, so there is no problem ;-)


Source? I find this very hard to believe. 1. How does the engine know if it should be an integer or not? 2. Why would they have a special case for this? It won't benefit accuracy, and treating them specially probably hurts performance more than just always keeping them as doubles.


As long as everything used is really an integer, and everyone you work with is careful to keep things as ints (ORing with 0 and such).


Well a bunch of standard libraries have sum defined, right?

JavaScript has suffered from a lack of a standard library for a while. Having a small package like this means that (in theory) everyone is using the same bug free version of summing instead of writing their own.

Honestly having JS start building a standard library at this point would be wonderful.


For now I suggest using lodash functions exported as modules.

For example: https://www.npmjs.com/package/lodash.mean

Or, in light of `left-pad`: https://www.npmjs.com/package/lodash.padstart

You get the benefit of a "small, focused" module but still rely on a large and stable project with a great maintainer that cares about performance and testability.
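For example:

    var padStart = require('lodash.padstart');

    padStart('5', 3, '0'); // => '005'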


There's already `Array.prototype.reduce()` in most browsers, which will do summing, string concatenation, and any other kind of aggregation:

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

If you need something more convenient I highly recommend just adding lodash and getting a whole bag of rock-solid awesome tools.


reduce isn't a safe way to add floating-point numbers unless your accumulator is much more precise than your array values. You'll end up with what they call "catastrophic cancellation".

And they won't let you have fixed-point in JavaScript!


Doesn't a for loop have the same problem?


If you write the loop yourself, you can add everything to a double instead of a float, or a long double instead of a double, if you have those.

Oh, if all your numbers are positive you can also sort the array and it will minimize the error accumulated, but I'm not sure what to do with mixed signs.


here is the source for that npm package

https://github.com/bytespider/average/blob/master/src/averag...

and here is the code copied out in full

    module.exports = function average(values) {
        for (var i = 0, sum = 0; i < values.length; ++i) {
            sum += values[i];
        }
        return sum / values.length;
    };

is this worth adding a dependency to your build for?


Sounds like the author was aiming for something to put on his resume: e.g., "Author of 25 libraries on NPM, some with more than 500K downloads." etc...


Well if I was interviewing the guy, my second or third question would be "tell me about these packages you published on NPM"

It's going to be damn hard to make an average function sound impressive.


I think you're seriously overestimating the quality of the average programmer candidate.

Someone who understands what NPM is, has written and published working code (even stupid code) is miles ahead of the curve already.

Most companies in the world, especially non-software companies, don't go for 5% hacker gurus. Attracting someone vaguely competent is already a blessing.


No, I don't think I am. I've conducted many dev interviews. Gurus can often be prima donnas, difficult to work with in a team, btw.

Interviewed enough "C++, 3 years" people who couldn't even describe a class, or give the most basic description of OO, to have no illusions of some of the standards out there. Similar for other languages. Similar for how much having a degree is worth. I used to be surprised such people tried it on, as there was no way they'd walk out of the hour looking anything but stupid, but it happened often enough to stop surprising. Of course I've also experienced code produced by such types, and worked with a couple.

If you're going to publish something, then make the world a better place, not pollute it with pointless stuff.

Spending more time publishing this package than most basically competent developers would spend writing the four lines is not going to mark you as "miles ahead of the curve" in my eyes. If you're an 18yo fresh out of school without a degree, seeking a trainee role, I'd look on it favourably.


In JavaScript, the API could be fairly complex. Thing is, JavaScript doesn't have arrays of numbers. So, what should the function do if the array passed in contains a null, an undefined, a string such as "7", "012", "008", or "2+3", a function, another array, a hash, etc?

I can easily see this grow into a function accepting options indicating such things as "null values are zeroes", "ignore null values", "evaluate string arguments", etc, or maybe into a factory that produces customized averaging functions.

For example, average([null,1,2,3],[4,5]) could either be 3 (the average of all numbers), 2.5 (sum of numbers divided by number of 'root' items), undefined (input format error), 3.25 (average of average of [1,2,3] and average of [4,5]), etc.

And what if it gets passed arrays of equal size? "average([1,2,3],[7,4,5])" could easily be interpreted as a vector average, and produce [4,3,4]

Silly? Yes, but if you want to write a robust library in JavaScript, you have to think about what to do with all these kinds of input.

And of course, there are the simple variants of harmonic mean, arithmetic mean, and geometric mean, and several others (https://en.m.wikipedia.org/wiki/Average)


...and this example is none of those things, and untouched in 2 years.

If an interviewee did respond with some or all of the points you raise he'd have turned a potential negative into a positive point. This instance is the simple arithmetic mean with no consideration of anything.

I stand by my original comment. Nine times out of ten an average is going to be resolutely unexceptional, one time in ten or less it's the core of a well-crafted solution with a decent reason for being.



IMO the behavior of "average" should be unspecified if it's passed anything but a single nonempty array of numbers. Making it even a tiny bit more robust is wasted work. Moreover, it's harmful because it encourages sloppy calling code.


The committee that designed SQL had a different opinion.


Right, that's a different philosophy. You can use JS as a dynamic language with DWIM qualities, like SQL. But I prefer to write JS code as if it were a typed language without nulls, and use tools to enforce that. I think that approach will win.


I'm of the mind most employers are looking for "have you done stuff" and "can you do stuff". Very rarely are they looking for anyone extraordinary, simply a tool that works.


I doubt something like 'wrote npm package that averages stuff' would be relevant without fluffing it up to impress the non-tech hiring manager. Still would ring alarm bells with devs though.


That plus perhaps some kind of dopamine kick these people get out of this whole charade... Sindre Sorhus keeps describing himself as an "Open Source addict", after all.


This sounds like a symptom of an inadequate standard library. I do expect to be able to call "average" on a list of numbers without writing it myself, but I expect that to be part of the language not a 3rd party package.


> I do expect to be able to call "average" on a list of numbers without writing it myself

Just out of interest, what kinds of functions would you expect to have to write yourself, if you're not happy about calculating the average of a list of numbers?


Alright, take a simple function like average. How might someone naively calculate the average?

    for n in list
        sum += n
    return sum / len(list)
Which will fail but will probably be caught in the code review. Then a cleverer developer might think to write

    l = len(list)
    for n in list
        sum += n / l
    return sum
Which will also fail but in more subtle ways, hopefully the senior dev will catch it in the code review.

Then they correct it to

    l = len(list)
    for n in list
        sum += n / l
        rem += n % l
        sum += rem / l
        rem = rem % l

    return sum
But this could be further optimized and might still be dangerous. This one might not be fixed at all in the code review.

The best algorithm might be Knuth's, which is fairly non-trivial and might not even be the fastest/most efficient given certain unique JS optimizations.

Do you want to do this for every 'trivial' function or would you rather import average where (in theory) someone knowledgeable has already done this work and benchmarks for you and you get the best performing algorithm basically for free?
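(For reference, the Knuth-style running mean is a short loop -- a sketch, not necessarily the optimal JS:)

    // Running mean: avoids building a huge intermediate sum.
    function average(values) {
        var mean = 0;
        for (var i = 0; i < values.length; i++) {
            mean += (values[i] - mean) / (i + 1);
        }
        return mean;
    }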


1. The algorithm in question does not do any of the latter 'better' approaches.

2. The pursuit of all edge-cases is a form of hubris. Your first version is fine in many cases (it is the algorithm used by the package referenced [edit:] and by both the Python and .NET standard libraries) and can be written inline.

3. At the point you are concerned with specific edge-cases, and need the 'correct' algorithm, you need to have found that algorithm, understood it, and make sure that the package you're importing implements it. It's even worse for a builtin function: there's no guarantee that a language implementation of average does so, or that it has the correct tradeoffs for your code. You'd need to test it, code review it, and so on. Do you do that for every trivial function?

4. If you're really suggesting that you can't be trusted to calculate an average correctly, then how can you have the slightest confidence in anything you do that is more complex?


>It's even worse for a builtin function... You'd need to test it, code review it, and so on

If you can't trust built-in functions to do what they say on the standard library documentation, you're either paranoid or made a wrong choice of language. (Or you have some very compelling reason to be working in a language that's still rough, like the early days of Swift, which is an entirely different game).

>If you're really suggesting that you can't be trusted to calculate an average correctly

It's not about what you can be trusted to do, but what's worth spending your time on.


Which languages do you trust to use Knuth's algorithm (or something similarly comprehensive and exhaustive) without checking?

Not going to check many, but a quick look, since I've got them handy: Python fails Spivak's criteria, as does .NET.


I don't care. I expect it to work for the simple case, and I expect not to spend time on it unless there is a good reason (like I have very large numbers and must be careful not to overflow).


You responded to a point I made in response to Spivak. If you don't care about his argument, then it might be worth responding elsewhere.


No reason to turn something trivial into rocket science.

Your examples don't mention the types of the variables involved. Depending on the type and the language, your first example may be (a lot) more correct than your third example -- e.g. with negative floating-point numbers.

But who says that a library implementation is a lot better than what you as a programmer can come up with?

In particular with all these "micro" libraries written by random people.

Are they actually smarter than you? Do they understand the "edge" cases as you do?

Most people don't bother with looking at the library source code they are importing. And I don't blame them because things are in general a mess and not only for JavaScript.

The solution, for me at least, is to import less and be more conservative with what you import.


Those that are unique to the problem I'm actually trying to solve, and not already in the standard libraries or well-maintained packages of mainstream languages.


Okay, but how long would you think it sensible to look for, test and assess a package for being 'well-maintained' before you'd consider it a better use of your time to average a list of numbers?

If you had to, say, add a list of numbers together, without averaging them, would your first thought be to go searching for a package, given that you know some languages have a 'sum' function? Some languages have an 'add' function (separate from the operator) - would you go looking for a package to supply an add function if you (say) needed to pass it to reduce?


>would your first thought be to go searching for a package

No, because if this isn't in the standard library (or a very simple one-liner from standard library functions like "fold +") then I don't want to be working in this language.

If I have to work in this language, and I'm allowed to bring in packages, I'd go look for the community's attempt at fixing the standard library, or at least a "math" package, particularly if there were lots of other basic math functions I'd have to do myself. If it's really just this, I'll probably paste a Stackoverflow answer.

Could I come up with it myself? Yes, but thinking through the pitfalls of the implementations of basic math operations is not a good use of time.

What would you do if your language didn't have exponentiation?


> If I have to work in this language, and I'm allowed to bring in packages, I'd go look for the community's attempt at fixing the standard library ... If it's really just this, I'll probably paste a Stackoverflow answer.

Sorry, can I get this straight, we're talking about sum() now, right? I'm genuinely amazed.

> thinking through the pitfalls of the implementations of basic math operations is not a good use of time

I suppose there's a trade-off: at some point it takes less time to find, check, and invest in a package than to write its contents. For lots of cases it is definitely faster to use a package. But we're talking about averaging a list of numbers here, or adding them, right? Doesn't it strike you as rather bizarre to be having this discussion over things you should be able to write trivially?

> What would you do if your language didn't have exponentiation?

It depends what I need it for and what kind of exponentiation. I'd use inline exponentiation by small positive integers. I'd be concerned if my team were writing pow(x, 2) + pow(y, 2), for example. If I needed fractional exponentiation (e.g. pow(k, 3.567)), then I know that's beyond the realm of something that could be implemented in a couple of lines. If the language didn't have it, I might write it, certainly, especially if it wasn't part of a bigger suite of functionality I need.
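
(For what it's worth, the small-positive-integer case really is only a few lines; a sketch via exponentiation-by-squaring, assuming a non-negative integer exponent:)

    function ipow(x, n) {
      // exponentiation by squaring; assumes n is a non-negative integer
      var result = 1;
      while (n > 0) {
        if (n & 1) { result *= x; }
        x *= x;
        n >>= 1;
      }
      return result;
    }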


I'd consider it a serious code smell for my codebase to be carrying around my personal attempt (or my team's attempt) at half of the Python standard library. When it looks like we're heading in that direction, let's just use Python!

Sure, if it's really just arithmetic on lists, I won't be happy about it, but I'll write it. I've yet to meet a standard library which is wonderfully complete except for that one thing. If the abstractions of the environment I'm working in are that poor, there's going to be a thousand other little "trivial" things that I'm now responsible for maintaining, and some of them will turn out to be less trivial than imagined.

For example, it is "trivial" to write linked lists in C, but tracking down the one-character typo that causes them to nondeterministically explode is IMO a distraction from the project, not a valid part of it.

And what about the next project in that language? Wouldn't it be nice to factor out all that stuff and bring it with me? Well, now I am using a community standard library, but it's the community of "just me." Why not take one with some internet recommendations and an active Github?

My employer fortunately runs its own package mirrors, though, so we don't run the risk of packages just disappearing on someone else's whim.

I suppose our difference in values is that I consider each line of in-house code as much of a liability as you consider each line of other people's code.

My reaction to the article is very simple:

>What concerns me here is that so many packages took on a dependency for a simple left padding string function, rather than taking 2 minutes to write such a basic function themselves.

"So many people" writing a basic function themselves is an exactly wrong outcome IMO.


[Edit: Parent comment has been rewritten, see second response, below].

That you think averaging a list of numbers qualifies as a 'feature' you might add to a codebase is rather the surprising thing about your response. I'm aware I'm coming over rather arsey, and that isn't intended, I'm just surprised to find someone actually defend the attitude, generally, that says a trivial calculation should be either a) provided, or you're not going to use the language, or b) put in a package to mean you don't have to spend 'the time' writing it, while at the same time saying you don't care what is in those implementations.

I mean, it doesn't matter to me, obviously. You can choose to have whatever criteria for taking a job you like, and I can have whatever criteria I like for my hires. I'm just surprised. Sorry if I've come over argumentative or grandstanding.


It should, but the ECMAScript spec doesn't have it, and the Node.js team prefers to keep the core libraries to a minimum and definitely does not want to modify prototypes. Thus we depend on userland to provide those features for us.


That's why somebody should fork Node and start building a batteries-included standard library.


It seems the C++ approach is quite good: you have a very stable (if a bit too stale) standard library, and you have Boost, which is a cutting-edge batteries-included extension to the stdlib curated by much the same people as in the standardization committee. It serves as a great _inclusive_ counterpart to the _exclusive_ stdlib and a sandbox for stdlib extension proposals.


I think the scarier thing there is this line: "0.0.1 is the latest of 2 releases"

I wonder if there will be a 3rd release with an updated averaging function.

Edit: to be fair, it was an optimization from reduce to a for loop [https://github.com/bytespider/average/commit/7d1e2baa8b8304d...]


I'm thinking that someone wanted to learn about building and publishing a package and the ecosystem so they made this computationally trivial thing as a practical exercise.

Pretty much every package management system gets cruft in it like this. Example: for a long time someone had uploaded a random Wordpress core install into Bower.


And 54 people found it a worthwhile package to install _this week alone_.


That could be one dependent module that got installed in 54 places. More likely, it was mostly various bots that install everything on some semi-regular schedule.


Let's say these are just prescient trolls. For our sanity.


While it demonstrates the problem of npm lacking namespaces (such that the word "average" is wasted on such a trivial implementation)...it doesn't seem anyone was using that library


npm has namespaces. https://docs.npmjs.com/misc/scope
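
(Scoped names take the form @scope/name; a hypothetical example, where @myorg and the package name are made up:)

    // after `npm install @myorg/average` (hypothetical scoped package):
    var average = require('@myorg/average');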


That package is, at best, average...

ducks

(Edit: typo)


Don't be .... Mean.


Yes, but it's MIT licensed. We also need average_GPL to go with it.


You bring up a good point with respect to copyright/licensing: at what point does code stop being copyrightable and become public domain simply due to pure triviality? There are not very many sane ways to write an averaging function, and the most straightforward one (add up all the values, then divide) would probably be present in tons of other codebases.


It's a fucking nightmare, is what it is.

I wanted to use a javascript tool that would make my life easier, and when I looked at the npm dependency tree it had 200+ dependencies in total.

If I used that javascript tool, I'd be trusting hundreds of strangers, many of whom had absolutely no clout on GitHub (low star counts, single-contributor projects), with my stuff.

And not just them, I'd be trusting that no one steals their github credentials and commits something harmful (again, these projects are not very popular).

It doesn't help that npm doesn't (AFAIK) implement code signing for packages, which would at least let me manage whom I choose to trust.


YES YES YES!

In all the debate about this, why is the trust-dependency-fuck-show not getting more attention?

Every dependency you take is another degree of trust in someone else not getting compromised then suddenly finding all sorts of horribleness making it into your production environment.

It beggars belief!


This is more a reflection of how bad the JS language is than anything. Real programming languages have means of standardizing the most common UX and simple patterns into a standard library. Javascript is a consortium hell that never gets updated sufficiently, and has no good standard library, so NPM basically replaces the standard library with a thousand micropackages.


Also, it is a lot easier to get it wrong in JS. Is it null? Is it undefined? Is it a bird? Is it a plane? No, it's a string! (But sometimes a number.) Good programming languages make it easy to write an is-negative check, e.g. isNegative = (<0), where the 0 and < implicitly make it take a Num and return a Bool, and this is type-checked at compile time.


Use the null object pattern.

If you're checking a value to see if it's set by testing for null/undefined, you're doing it wrong.

This is good advice for any language, not just JS.

Besides, using null as the default for an undefined value is a mistake. Null should indicate a non-value, not the absence of a value. Maybe one day the static OOP languages will die so devs have a chance to learn the difference.
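
(A minimal JS sketch of the pattern, with hypothetical names: the lookup returns a do-nothing guest object instead of null, so callers never have to branch on null/undefined:)

    // Hypothetical null object: same interface as a real user, safe defaults.
    var GUEST = { name: 'guest', canEdit: function () { return false; } };

    function findUser(users, name) {
      for (var i = 0; i < users.length; i++) {
        if (users[i].name === name) { return users[i]; }
      }
      return GUEST; // never null/undefined
    }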


My point is more that the compiler* can't detect these mistakes in JS, but in other languages it can. E.g. in Haskell you only allow null on types when you want it (you do this by using the Maybe type). Objects where you probe for values that could be undefined (or defined but null) don't exist as a concept; if you badly wanted that, you could use a dictionary. In Haskell, if you process a Maybe value, you have to deal with both the null and not-null cases, otherwise you get a compiler error.

All these checks seem annoying but we've all seen the bar chart of when you discover the bug vs. the cost of that bug. The compiler finding the bug is cheap. Haskell is a cheap language to program in, compared to JS!

* I forgot there is no compiler in JS, but let's say the linter, for argument's sake.


Maybe/Nothing is a perfect example of the null object pattern: either provide a sane default or a typed instance with empty values rather than checking for null/undefined. I.e. the 'null object' in 'null object pattern' doesn't mean using null to indicate the absence of a value.

Null isn't used in JS to mark undefined variables, that's what 'undefined' is for. Unlike static/OOP languages, null is specifically reserved for use cases where a null value is necessary. Which was the point of my comment.

If you try to access an undefined variable in 'strict mode' it throws an undefined runtime error.

JSLint does, in fact, check for undefined variables and/or globals that are accessed before they're defined in the local source (ie unspecified external globals).

So... There's that...

-----

Did you happen to notice how I didn't even remotely mention Haskell or anything related to functional programming? But, please, I'd love to hear for the thousandth time how Haskell's purity is going to cure the world of all its ills. As if the Haskell community hasn't been over-promising and failing to deliver production-ready applications for the past decade.

Unlike Haskell, JS source can be actively linted while you write it rather than requiring a separate compile/build step.

With ES6 on the client-side, modules can be hot reloaded as changes are made, for instant feedback. The obvious downside being, fewer coffee breaks that can be blamed on the compiler.

Have fun implementing trampolines once Haskell's 'recursion all the things' default approach to lazy-loading inevitably overflows the stack and crashes your app in production. Static type checking can't save you from logic errors. See also 'memory profiling' the phrase that shall not be uttered.

Source: http://steve-yegge.blogspot.com/2010/12/haskell-researchers-...


Purity won't cure the world of all its ills, but it may fix a bug or two before it hits... well, the developer's consciousness, let alone production.

I am genuinely interested in your point about trampolines; can you give an example? In Haskell, foldl' for example allows you to process a big finite list quite efficiently without trampolines, but yes, the name of the function is a bit sucky, I admit.

It's funny how you can criticize trampolines when imperative code like this:

    var sum = 0;
    for( var i = 0; i < elmt.length; i++ ){
        sum += parseInt( elmt[i], 10 ); //don't forget to add the base
    }

    var avg = sum/elmt.length;

    document.write( "The sum of all the elements is: " + sum + " The average is: " + avg );
is a tramp in disguise!


Yeah, there are a few shitty examples on npm. It's an open system and anyone can upload anything. The market speaks on how valuable those are. Cherry picking poor modules says nothing about the rest.

Plus, if you think that's too small, write your own broader module that does a bunch of stuff. If people find it valuable, they'll use it. If they find it more valuable than a bunch of smaller modules, you'll get 10,000 downloads and they'll get 10.

The module you roundly ridicule has had 86 downloads in the last month, 53 of which were today (at the time of this writing). I imagine most of those 53 were after you posted. So that's 40 downloads in a month, as compared to the express framework which has had 5,653,990 downloads in the last month.

The wailing and gnashing of teeth over this module is ridiculous.


DRY taken to the dogmatic extreme where everything is made up and the github stars are the only thing that matters.

This article touches on things that are wrong in the JavaScript culture. I always had this nagging feeling when working with npm; this article brings it to light. For what it's worth, I never felt this while writing Ruby, C#, or Go.

It's the -culture- that needs to change here, not the tools.


The recursive folder structure in npm-modules was the first indication. At least Java had a single tree with com.bigco.division.application.framework.library.submodule.NIHObject.java


That recursive node_modules or whatever it is called was what made me hate this whole npm thing, especially because it is not centralized somewhere on my computer. That means the same files repeated several times on my drive, just eating space. Being a Java developer, I don't understand why the approach was not more like Maven's.


npm does a decent enough job of deduping shared packages. The reason for the nesting is that different packages consume different versions of things. Java only lets you have a single version of a package; if the API changes, it can be a pain when your dependencies were written for different versions. That slows adoption, since you have to wait until everyone you depend on has updated first.
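
(Schematically, with hypothetical packages a and b that depend on different lodash majors, the nested layout looks something like this:)

    node_modules/
      a/                <- depends on lodash@^3.0.0
        node_modules/
          lodash/       <- 3.x copy
      b/                <- depends on lodash@^4.0.0
        node_modules/
          lodash/       <- 4.x copy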


Are you thinking of the new flat-tree npm? Because in my 1.3.10 installation, I see this:

    $ find node_modules/ -type f | wc -l
    56278
    $ find node_modules/ -type f | xargs md5sum | gawk '{print $1}' | sort | uniq | wc -l
    8545
That's a lot of duplicate files.


It's a long-standing issue, and it gets worse when there is node_modules inside node_modules inside node_modules... and so on, to the point that someone suggested renaming node_modules to n_m. https://github.com/nodejs/node-v0.x-archive/issues/6960#issu...


I think this is some sort of cargo cult UNIX minimalism - do one thing and one thing only.


No, it's a byproduct of programming-interview-via-GitHub-repo.


For some reason I misread that as "do one thing and do one thing ugly". Either way, it applies.

Lodash does this (and versioning, and clean code, and tests) really well though.


require the ultimate


Philistine! You should just be doing i things.


"This isn't your average everyday darkness.... this is Advanced Darkness." - S. Squarepants


You should then go dive into the passAll function:

https://github.com/tjmehta/101/blob/master/pass-all.js

It’s written in such a way that every time you call...

    passAll(f1, f2, ..., fn)(args..)
... there are something like 5 + 2n attribute accesses, 5 + 3n function calls, 3 + n new functions created, as well as some packing and unpacking of arguments, not including the actual application of the functions to the arguments that we care about. That’s in addition to the several functions defined in the dependent submodules, which you only have to pay for constructing once.

[From my quick eyeball count. These numbers could be a bit off.]


I'm more disappointed that it doesn't short-circuit the operation at all. It applies all the functions, THEN it determines whether all of them passed. Even worse, it uses `every` (which does short-circuit) to determine that all the functions are, indeed, functions, but apparently the ability to use that function to determine whether every predicate passes was missed.
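
(For illustration, a sketch of that missed short-circuiting version, reusing Array.prototype.every for the predicates themselves, with the same signature as the original passAll:)

    function passAll() {
      var fns = [].slice.call(arguments);
      return function () {
        var args = arguments;
        // every() stops at the first predicate that returns a falsy result
        return fns.every(function (fn) { return fn.apply(null, args); });
      };
    }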


It could have been a simple for loop with a break whenever one evaluates false. Blazing fast in every javascript engine, and easy to see all the code in one place.

Instead, it’s a slow tower of bloat, which you need to read 5 files to reason about.

Javascript implementation:

    function passAll() {
      var fns = [].slice.call(arguments);
      return function() {
        for (var i = 0, len = fns.length; i < len; i++) {
          if (!fns[i].apply(null, arguments)) { return false; }
        }
        return true;
      };
    }
Or Coffeescript (including the function check):

    passAll = (fns...) ->
        for fn in fns then if typeof fn isnt 'function'
            throw new TypeError 'all funcs should be functions'
        (args...) ->
            (return false if not fn args...) for fn in fns
            true


Lol, nice, I wrote "pass-any" first. I then copied the code and replaced "or" w/ "and" to create "pass-all".

I will probably have to go back and change this now that I know about it. In general though, not gonna lie, I am not very concerned about micro performance optimizations.


If it were being used only in places in the code where it was rarely called, it would be fine to create a handful of extra functions, make a few dozen function calls, etc. In my opinion, anything that aspires to be a library of basic infrastructure type code should try to have efficient and simple code at the expense of being slightly syntactically longer than the highly abstracted version.

Because if people start using it for such things as an "is positive integer" check (Which, to be fair, they should not be doing. Nobody should be constructing 2 new javascript functions every time they want to check if an object is a positive integer. But apparently they are...), then it could easily make its way into somebody’s inner loops.

The end result of this general culture (not picking on anyone in particular) is that trivial Javascript programs end up having their performance tank, because every tiny bit of useful real work gets weighed down by two orders of magnitude of incidental busywork creating and tearing down abstractions built on towers of other abstractions. Because CPUs aren’t very well optimized for this kind of workload, which involves lots of branching and cache misses, it’s worse still, probably more like 4–5 orders of magnitude slower than optimized code written in a low-level language.

Ultimately every idle facebook page and gmail page and news article in some background browser tab ends up sucking up noticeable amounts of my laptop’s CPU time and battery, even when they’re doing nothing at all for me as the user.


Yeah, I thought the dependency thing was a joke (I mean, to have a package for is positive integer is already a joke, but come on)

Really


I think it actually is not. That's from some years ago and my memory of it is fuzzy, but at that time it was surprisingly hard to check whether a variable is a positive integer (maybe it was a negative one, though, and that was harder?). You'd think it is just checking whether it is an integer and bigger than 0, or just checking whether it is bigger than 0. And it is. But getting that code to work reliably, regardless of whether it gets a string or a float or an undefined, with the JS type system of that time and in multiple browsers, even the crappy ones, took some time. There was one specific edge case involved.

Not that it was impossible, but I still remember having to search for it and being astonished that that was necessary.

Sure, that should not apply anymore.
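
(For comparison, today the check is nearly a one-liner, and even a defensive pre-ES2015 version is short; a sketch, assuming you want to reject strings, floats, NaN, and Infinity:)

    // ES2015+:
    var isPositiveInteger = function (x) {
      return Number.isInteger(x) && x > 0;
    };

    // Pre-ES2015 equivalent, without Number.isInteger:
    var isPositiveIntegerOld = function (x) {
      return typeof x === 'number' && isFinite(x) &&
        Math.floor(x) === x && x > 0;
    };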


Be that as it may, is-positive doesn't handle any edge cases https://github.com/kevva/is-positive/blob/master/index.js

That's half the problem with these stupid micromodules; they're not abstracting complexity, they're obfuscating simplicity.


Don't know. If that approach works reliably, I'd see some value in it. Maybe this does not need any more code if you do it that way. If it does not work reliably, then you are absolutely right and it is useless.


Yea. I don't think many argue against abstracting complexity into more easily understood orthogonal modules. But some of these modules aren't abstracting complexity. They are ludicrous. They are nonsense. They are a symptom of a very real problem with how JS is programmed, how people expect to program in JS, and the difficulty people have with decomposing problems.

So many people on this page have written about how these are well-tested, performant, and correct modules. But the modules aren't even correct in many cases, to say nothing of their incomplete coverage of edge cases, their slow performance, and their horrendous dependencies.


it's a module with exactly one dependent module -- by the same author. Daily WTF material but who cares beyond that?


Formerly 9 dependent modules... but who cares; maybe it was a homework assignment or a joke.

I don't use NPM so maybe it doesn't really matter aside from the level of abstraction being implemented being relatively ridiculous.

However, if my build system had to go out and grab build files for every X number of basic functions I need to use, grab Y number of dependencies for those functions, run X * Y number of tests for all those dependent packages, AND then also fell apart if someone threw a tantrum and removed any one of those packages basically shutting me down for a day... then I'd question every single thing about my decisions to use that technology.

[Quick Edit] Basically I'm saying "Get off my lawn ya kids!"


> I don't use NPM so maybe it doesn't really matter aside from the level of abstraction being implemented being relatively ridiculous.

Except, again, it's no more ridiculous than a random bit of code picked from anywhere, and isn't any more emblematic than that random bit because, again, it's not used anywhere.

> [Quick Edit] Basically I'm saying "Get off my lawn ya kids!"

tl;dr: going for buzzfeed headlines but for comments, got it. Good job, I guess?


Trying to keep the conversation light-hearted with a closing joke is more like it, since everybody takes these things so seriously. Sorry I pissed in your cheerios, good chap!


This implementation reads almost as parody, although I don't suspect that the author meant it as such. If you didn't have a sense of what lurked behind the abstraction, it would be kinda beautiful.


I can't decide what's crazier to me: that such a package exists, or that JavaScript is such a ridiculously bad language that an "is positive integer" function is non-trivial to write.


I end up spending most of my working life working on other people's code; rather than building new features, I end up debugging and fixing bad code. (I actually rather like it.)

The majority of code I have ever seen is awful (20 years across large and small companies), but that is why I am hired to fix awful code, so I am skewed. The number of times I have seen people implement something simple in a convoluted, error-prone way is unbelievable.

I know this seems ridiculous but when you see time and time again how people fail to do the simplest things it seems like a good idea.


I have had several arguments about JS, and I was shocked at how many developers consider this platform great. I am not sure why these extremely bad practices are defended by devs; what are they getting out of it? I am only hoping we are moving towards a more sensible development environment; there are many of them with better practices and saner libraries.


Sure, because just writing

  isPositive = (x) => +x === x && x > 0 
is boring. Need more requires to require the requires...

Keep in mind though that absolutely not all JS programmers are like that. Not everyone wants to be an aforementioned Dick-from-a-mountain.


> They've already taken it to an entirely different level of insanity.

Why is this insane? What alternatives would be better?


You create unit tested functions and expose them to client code with a factory pattern, how is that so ridiculous?


I have an extremely hard time believing this module is not a joke, ditto for many other NPM modules.


What I want to know is how many systems does the author of is-positive-integer have implicit root access to?

Sounds to me like publishing oneliners on NPM is a trivial way to build a botnet.


Seems like the answer is "none" by default, from https://docs.npmjs.com/misc/scripts#user

> If npm was invoked with root privileges, then it will change the uid to the user account or uid specified by the user config, which defaults to nobody. Set the unsafe-perm flag to run scripts with root privileges.


That's good, even if the nobody user is perfectly sufficient for a botnet. I was thinking more of the user running the node application.


I can't stop laughing. I think you have to admire the elegance of the concept as performance art though: this is cheap insanity. In fact, I've got to hand it to them, this is the most fun I've had looking at something programming related in a while. I recall the opening lines of SICP,

> I think that it's extraordinarily important that we in computer science keep fun in computing. When it started out, it was an awful lot of fun. Of course, the paying customers got shafted every now and then, and after a while we began to take their complaints seriously. We began to feel as if we really were responsible for the successful, error-free perfect use of these machines. I don't think we are. I think we're responsible for stretching them, setting them off in new directions, and keeping fun in the house. I hope the field of computer science never loses its sense of fun. Above all, I hope we don't become missionaries. Don't feel as if you're Bible salesmen. The world has too many of those already. What you know about computing other people will learn. Don't feel as if the key to successful computing is only in your hands. What's in your hands, I think and hope, is intelligence: the ability to see the machine as more than when you were first led up to it, that you can make it more.

Quoted in The Structure and Interpretation of Computer Programs by Hal Abelson, Gerald Jay Sussman and Julie Sussman (McGraw-Hill, 2nd edition, 1996).


It really is the fault of the language that something this fundamentally horrible is allowed to exist at all.

When people ask why Javascript is terrible, show them this.


The same could be said for your post. When people ask why commenting on message boards is terrible, show them this.

If you don't want to write modules this way, don't. Nothing about javascript requires that you even read articles about modules that you don't want to use. Or read articles and then follow up by posting on message boards about articles about modules you aren't going to use.

Bang some code out instead. Your opinion of javascript is about as valuable as the opinion of the person who wrote the modules.


When every major Node package depends on this, starting with Babel, then it actually does become something that's required.

There's no such thing as "don't use it if you don't like it!" in this case.

Javascript is a completely broken, shameful system that rightfully deserves all the scorn it's gotten. It really is the worst.

(Note that I actually write Python & VanillaJS, and avoid Node packages entirely for now.)
