The JavaScript language has improved a lot since node.js was released (Promises and soon async/await). However, at this point NPM has a lot of libraries written in the old callback style and it seems even new libraries are still doing this. There are libraries like bluebird and Q which can "promisify" a callback style library, but I think using these is a bad practice. It's something that works most of the time, but sometimes breaks when the callback code isn't what the promisify function expected. Also, the library writers are not writing the code with any guarantees that promisify will always work. So even if it works now, it could break in the future.
The whole promisify situation is a good example of general node.js culture. It's considered "good enough" and everyone just uses it. However, anyone who has done software development in more solid languages would feel very uneasy using something like this. It's not really the fault of the JS developers; the dynamic nature of the language offers no other choice. For example, when Java added lambdas, its static typing let it treat a certain kind of type (an interface with a single abstract method) as automatically convertible to a lambda. This allowed full backwards compatibility with any old library that conformed to this pattern (and many libraries such as Guava did use single-method interfaces as a kind of "ugly" lambda).
node.js has a convention for callbacks (function(err, result)), but unfortunately it's only a convention and there's no compiler to enforce it. So automatic promisification is not possible. That leads to the current situation. There are all these new language features in JS, but everyone is still sticking to the "lowest common denominator" of callbacks. There's no path forward for existing libraries. The only way is a complete rewrite of libraries in the new Promise/async/await style.
I write and prefer libraries written in error-first callback, continuation-passing style. I also prefer `require()` to the new `import`. Those choices have rather little to do with backwards compatibility, from my point of view. I think they are better choices, which just so happened to be the norm before ECMAScript began to change again. Opinions differ, and will differ, forever and ever, amen.
Maybe we can all agree that experience in other languages leaves ECMAScript with much to be desired. Whether the changes ES6 hath wrought and ES7 and friends portend bode ill or well is likewise up for debate. So are the one-trueness of futures, specialized async syntax, and static typing. Those aren't either-or kinds of debates as far as ECMAScript is concerned. ECMAScript could be more Scheme-like, more Java-like, more ML-like. I've forgotten to mention some other disappointed camp. Forgive me!
If specific judgments on those kinds of controversies unify a culture, there is no unified culture of Node.js, any more than there is a unified culture of C or Java or English punctuation. The only unity is that standards-compliant ECMAScript runtimes are everywhere and we all want to use them. As long as different tastes are at least sufficiently well specified, we've a fighting chance of automating adapters. So I'd argue promisifiers and depromisifiers might be the most important libraries of the moment, culturally speaking. But it's too much to ask of any broad and adaptable common platform or library to hide the fact that tastes and approaches differ.
I have a few reasons why node-style callbacks are absolutely the worst possible choice, so I'd like to compare.
Here are some:
1. Not values. Values can participate in expressions, can be passed as arguments to functions, can be returned from functions, and can be stored in variables, arrays, etc. There are no such values with node-style callbacks: the pending operation itself is nothing you can hold onto. You are throwing away half of the language's capabilities regarding values and expressions (see the sketch after this list).
2. No guarantees. A callback may be called multiple times, and it may be called synchronously or asynchronously. There is no way to know which will happen.
3. Bad semantics for thrown errors. It's assumed that thrown errors will simply crash the process. This is a dealbreaker in node because every uncaught thrown error provides a vector for denial-of-service attacks, and the thrown-error semantics of most libraries and the standard node library are pretty bad.
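To make points 1 and 2 concrete, here's a minimal sketch (`readConfig` is a made-up error-first function, faked with setTimeout):

```javascript
// Hypothetical error-first async function, faked for the example.
function readConfig(cb) {
  setTimeout(() => cb(null, { port: 8080 }), 10);
}

// Callback style: the pending operation is not a value. You cannot
// store it, pass it around, or combine it with others.
readConfig((err, config) => {
  if (err) return console.error(err);
  console.log(config.port);
});

// Promise style: the pending operation IS a value...
const pending = new Promise((resolve, reject) => {
  readConfig((err, config) => err ? reject(err) : resolve(config));
});

// ...so it participates in expressions, arrays, and function returns:
Promise.all([pending, pending.then(c => c.port)])
  .then(([config, port]) => console.log(config, port))
  .catch(console.error);

// And it settles exactly once: the second resolve below is ignored.
new Promise(resolve => { resolve(1); resolve(2); }).then(v => console.log(v)); // 1
```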
I know most people use Promises to escape callback hell, but you can do that with the async library as well, so maybe you just like writing `return cb(err);` more than `return Promise.reject(err);`.
Maybe you like that you are forced to deal with the err in the callback, while with promises it is easy to write code that just silently fails?
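A small illustration of that worry, with a hypothetical `failsAsync` helper: the callback puts `err` in your face on every call, while a promise chain missing its `.catch` lets the rejection slip out of normal control flow:

```javascript
// Hypothetical helper that always fails asynchronously.
function failsAsync(cb) {
  setTimeout(() => cb(new Error('boom')), 10);
}

// Callback: ignoring err is an explicit, visible choice.
failsAsync((err, result) => {
  if (err) return console.error('handled:', err.message);
  console.log(result);
});

// Promise: the .then never runs, and without the .catch below the
// error would vanish from normal control flow (until the runtime's
// unhandledRejection warning fires, at best).
Promise.reject(new Error('boom'))
  .then(result => console.log(result))
  .catch(err => console.error('handled:', err.message)); // easy to forget
```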
- They are simple (fully transparent)
- They are the simplest good enough solution for many problems
- Cleaner code than new Promise().then().then().then().catch() clutter
- Yes, I like nested indentation (good functions are short anyway; see the sketch after this list)
- They perform well
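For what it's worth, a sketch of the style this list defends, with hypothetical load_user/load_posts helpers stubbed so it runs:

```javascript
// Hypothetical async helpers, stubbed so the sketch runs as-is.
function load_user(id, cb) { setImmediate(() => cb(null, { id, name: 'ada' })); }
function load_posts(user_id, cb) { setImmediate(() => cb(null, [])); }

// Error-first callbacks, short function, shallow nesting.
function load_user_posts(user_id, cb) {
  load_user(user_id, (err, user) => {
    if (err) return cb(err);
    load_posts(user.id, (err, posts) => {
      if (err) return cb(err);
      cb(null, { user, posts });
    });
  });
}

load_user_posts(42, (err, result) => console.log(err || result));
```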
As a side note, I also prefer coffeescript, snake_case, CONSTANTS, and avoid using classes, prototypes, and this. I'm not very happy with the recent development of ECMAScript, too much complexity and unnecessary new concepts.
While - in general - you are quite right, most of the bigger libraries that I use in production offer interfaces for both styles. Mongoose for instance. You can easily use it with Promises or with the old-school callback style. Same for unit test libraries. And this trend will only continue. There is a lot of cool research being done in combining JS with hardcore FP (fantasy-land, ramda) and this will of course influence future developments. Not to forget the FRP community around React/Bacon etc...
I predict that in a year or two, the majority of devs will use Promises as if they always have been there.... And hey. Promises are Monads [1], so maybe we will also soon see a lot of Eithers and Maybes in Production code.
I gave a lecture about this at my company today, and people with no prior exposure to FP immediately loved it.
That's also one cool thing about the js/node community. We just adopt stuff and try it out. With microservices, this also poses no problem. If it doesn't work, do it differently in the next service.
[1] I know, I know. A+ Promises are violating some laws BUT the general idea holds. And there are full monadic promise libraries like Data.Task or Fluture available.
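For anyone curious what Eithers in production code might look like, here's a hand-rolled sketch of the idea (not the fantasy-land spec, and much simpler than Data.Task or Fluture): failure becomes a value you map over instead of an exception:

```javascript
// Minimal Either: Left carries a failure, Right carries a success.
const Left  = x => ({ map: _ => Left(x),     fold: (f, _) => f(x) });
const Right = x => ({ map: f => Right(f(x)), fold: (_, g) => g(x) });

const parseIntSafe = s => {
  const n = parseInt(s, 10);
  return Number.isNaN(n) ? Left(`not a number: ${s}`) : Right(n);
};

// Transformations only apply on the success path; no try/catch needed.
parseIntSafe('42').map(n => n * 2).fold(console.error, console.log); // 84
parseIntSafe('??').map(n => n * 2).fold(console.error, console.log); // not a number: ??
```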
Callbacks and events are significantly faster than promises, and I think most see promises as a wrapper around callbacks, with convenience functions for running and waiting on multiple concurrent async functions.
If you want promises, just embrace promisify. I get a lot of this fear about error-first callbacks being only a "convention", but everything follows this convention in practice. Anything that can't be promisified because of that will likely have a pull request or a more popular fork that can.
I don't think node core libraries should be promise-based, and you can't expect the whole ecosystem to switch to promises if node core won't. At some point you have to jump to callback-style to live on node.
I also don't really understand what this post has to do with the release.
There is a way to support both styles at the same time, and most libraries (that I'm aware of) use it. If a callback is given, call it with the result. If not, return a promise for the result. This doesn't require a rewrite of existing libraries, but merely a new minor version.
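A minimal sketch of that dual-style pattern, with a made-up `getUser` function standing in for a library API:

```javascript
// If a callback is given, use it; otherwise return a promise.
function getUser(id, cb) {
  const promise = new Promise((resolve, reject) => {
    // stand-in for the library's original async core
    setImmediate(() => id ? resolve({ id }) : reject(new Error('no id')));
  });
  if (typeof cb === 'function') {
    promise.then(user => cb(null, user), err => cb(err));
    return; // callback mode: no promise exposed
  }
  return promise; // promise mode
}

// Both styles work against the same function:
getUser(1, (err, user) => console.log('callback:', err || user));
getUser(1).then(user => console.log('promise:', user));
```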
I don't see how compile-time type checking would help here. When checking out a new library, I read the documentation, or maybe its test suite, and then use it accordingly. My tests ensure that I use it correctly, i.e. give it types it understands, give it the right data (the id of the correct user account, for example), and handle the outcome correctly (i.e. display it in the right format).
JavaScript has run-time type checking, which could be used for automatic promisification, but I agree that's a slippery slope, for many reasons.
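For reference, a naive promisify is only a few lines, and it shows exactly where the fragility lives: nothing verifies that the wrapped function's last argument really is an error-first callback:

```javascript
// Naive promisification, assuming the (err, result) convention holds.
function promisify(fn) {
  return function (...args) {
    return new Promise((resolve, reject) => {
      fn.call(this, ...args, (err, result) => {
        // Functions whose callbacks pass extra results, e.g.
        // child_process.exec's (err, stdout, stderr), silently lose data here.
        if (err) return reject(err);
        resolve(result);
      });
    });
  };
}

const fs = require('fs');
const readFile = promisify(fs.readFile);
readFile('a.txt', 'utf8').then(console.log, console.error);
```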
You don't know what you are talking about. Not _everyone_ is sticking to callbacks. Many (if not most) are now on promises or async/await.
Also you slipped in weasel language to disrespect JS developers. I have been coding for 30 years in everything from assembly to C++ to C# to OCaml to, for the last several years, mainly Node.js. Your comment was (reading between the lines) as close as you could get on Hacker News to spitting in my face.
Promisifying works fine. So do rewrites, as the language has evolved rapidly.
We do not actually have a big backwards compatibility problem.
I have to disagree with you regarding promisify. I think it's part of the solution, not the problem: it forces libraries to actually adhere to conventions, otherwise they are not promisifiable.
We need something like promises to replace node streams too. We need streams with well-defined semantics. Quick, what are the unpipe semantics for stream errors? Does a stream unpipe from its source? Will it unpipe synchronously or asynchronously? Can a stream emit an error synchronously when it's created? At the next tick of the event loop? What about at the end of the same tick? Are these semantics defined anywhere? Which versions of node streams adhere to them? All is fuzzy.
We need to replace node streams and all the EventEmitter-based stream-like stuff, badly.
Regarding types, you always have the option to add TypeScript into the mix.
I actually think using your promise library's promisify methods is best practice. Reason: there are multiple promise implementations with different methods available (e.g. the Bluebird library extends the Promises/A+ standard significantly), and thus if libraries shipped their own promises, a conversion from whichever promise library the library was using to the promise library which you are using would be necessary, and this would be considerably less efficient.
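In practice that looks something like this with Bluebird (assuming it's installed; per its docs, promisifyAll adds Async-suffixed variants of every method):

```javascript
const Promise = require('bluebird');
const fs = Promise.promisifyAll(require('fs')); // adds readFileAsync, etc.

fs.readFileAsync('config.json', 'utf8')
  .then(JSON.parse)
  .catch(err => console.error('read failed:', err.message));
```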
I have to agree. It's the wild west. You just have to be prepared for things to break, and if you're lucky, they break early and reliably enough that you don't have Heisenbugs. I'd say it's a bit of dynamic language culture, overall. Although, perhaps Javascript sees a particular flavor of it, with its first-class, always-variadic functions.
It's all good for prototyping, but man, once you're up in production, dynamism is a recipe for pain. Granted, statically typed languages crash too, but the big difference is that you can often eliminate the same problem throughout your program by getting the type right, as opposed to trying to TDD bugs one at a time as they emerge.
I think you nailed it when you said, "there's no compiler to enforce it". Actually, maybe a compiler isn't the key word, but automated checks is the idea. A type-checking compiler is just one solution, albeit a pretty powerful one. Type inference makes it much less onerous these days. But even staunch dynamic language devs have come around to embrace the use of linters. So while it's often seemed that static and dynamic language users would always be at odds, perhaps a convergence is slowly happening.
> It's something that works most of the time, but sometimes breaks when the callback code isn't what the promisify function expected.
That sounds like a limitation of promisify, not inherently of callbacks. There are many situations and styles where callbacks are perfectly adequate, and promises offer no benefit, so referring to it as the "old callback style" may be a bit premature.
> There are many situations and styles where callbacks are perfectly adequate, and promises offer no benefit
Promises are superior in general. I guess that has been discussed already[1], many times. Promises may indeed offer just an insignificant benefit sometimes (you can even call it just a different style), for example when you are first writing a monolithic prototype. In that worst case, you would simply make a giant chain of ".then" methods, and your gain is that you can get creative with the error handling if you want. However, the greatest benefit reveals itself later: the composability that comes with them will help you immensely when refactoring.
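A sketch of that composability claim, with hypothetical fetchUser/fetchPosts/render steps: because each intermediate promise is a value, refactoring is just re-arranging the chain:

```javascript
// Hypothetical steps, each a value-returning function.
const fetchUser  = id    => Promise.resolve({ id, name: 'ada' });
const fetchPosts = user  => Promise.resolve([{ by: user.name }]);
const render     = posts => posts.map(p => p.by).join(', ');

// The monolithic chain you write first...
fetchUser(1).then(fetchPosts).then(render).then(console.log);

// ...splits cleanly during a refactor, because the intermediate
// promise is a value you can name, return, and reuse.
const postsFor = id => fetchUser(id).then(fetchPosts);
postsFor(1).then(render).then(console.log);
```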
Promises are not superior in general. There is a significant performance hit that, in some problem areas, absolutely should not be ignored. Neither of those links claims that promises are superior in general. They provide additional, valuable features, but with tradeoffs.
Well, I stand corrected. Yes, there definitely is a performance penalty for promises.
Performance may be a tradeoff for the current engines, but I'm optimistic that it will be optimized away as it doesn't add too much semantics over the callbacks. Maybe error handling is tricky.
I would say they are superior to callbacks as object-oriented code is to procedural code - that also has some performance cost in many engines.
But you are saying tradeoffs - plural. I, as an experienced web developer, am sincerely curious what any other tradeoff could be.
That was exactly my point. Object-oriented code is conceptualized as a next step, but there are indeed cases where you would be better off procedural, and there are many people who just like procedural better. Maybe it's my poor English; would this be more clear: Promises are as superior to callbacks as object-oriented code is to procedural code. (Ok, now it's not very readable :) )
> [promisification is] something that works most of the time, but sometimes breaks when the callback code isn't what the promisify function expected
You're just saying that if you make a mistake and use it wrong it doesn't work. That applies to all code, not just promisify.
> Also, the library writers are not writing the code with any guarantees that promisify will always work. So even if it works now, it could break in the future.
That also applies to all code, not just promisify. If the author changes their API without a major version bump, that can break any code that relied on the old API.
Promisify is deterministic and works as specified, 100% of the time. If it doesn't work, that means you passed a function with the wrong signature.
Because it blocks anything else from happening. If on a server, it will prevent that process from handling any requests until the file has been read, and in a browser-like environment it will prevent the UI from updating and will prevent all UI interaction until the file has been read.
In a simple script you run from your shell, the middle one is fine.
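The options under discussion look roughly like this (a reconstruction; assumes a.txt exists):

```javascript
const fs = require('fs');

// Synchronous: blocks the whole process until the read finishes.
const dataSync = fs.readFileSync('a.txt', 'utf8');

// Callback: non-blocking, error-first.
fs.readFile('a.txt', 'utf8', (err, data) => {
  if (err) return console.error(err);
  console.log(data.length);
});

// Promise: non-blocking, wrapped by hand here.
new Promise((resolve, reject) =>
  fs.readFile('a.txt', 'utf8', (e, d) => e ? reject(e) : resolve(d))
).then(data => console.log(data.length), console.error);
```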
Thank you. I can see the advantage of Promises and async/await. But most of the time, I want to read the file right now, or define what should happen after it has been read. I cannot imagine any circumstance where I might want to read a file later, which I think is the only advantage of Promises over callbacks, EventEmitter, and Sync.
I think it's perfectly fine to wrap standard nodeJS modules into your own module, Promisify or what not. Almost all NodeJS modules do actually use standard modules in some way; it would be very naive to suggest that the standard library should start returning Promises, as it would break almost every NodeJS module ever created.
I also think that Promises only make sense when you expect a single result; it would not make sense to use Promises in NodeJS streams.
Because it makes points of potential interleaving obvious. Without await, you have no idea whether the call will block or not, and whether whatever shared state you accessed before the call will remain the same after.
With await, you know. If there is no await, the call will not block, and more importantly, shared state cannot be modified from outside of that block of code. If there is await, there are no guarantees and you need to take the necessary precautions.
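A sketch of that guarantee (needs async/await support, e.g. Node 7 with the --harmony-async-await flag):

```javascript
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

let counter = 0;

async function bump() {
  const before = counter; // no await between this read...
  counter = before + 1;   // ...and this write, so no lost update here
  await delay(10);        // interleaving point: other code may run now
  console.log(counter);   // counter may have changed during the await
}

bump(); bump(); // each increments once before awaiting, so both log 2
```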
I realize this follows semver, but bumping major releases without major changes is... not as fun? as the 'old days' where major versions of software meant something less incremental :-)
Kind of wish there was yet another number prefix to semver to signify this.
IMO until someone can figure out a way to objectively "quantify" the size of a change, it should stay with how semver has it.
The line of "major" is arbitrary, and changes per person. It's basically useless information to everyone but the person/people actually making the version change. Having it there can just lead to anger when something that you would consider "major" doesn't make the cut, or when the "major" bump doesn't have anything you care about.
Semver is in no way the end-all-be-all of versioning schemes, but at least it's pretty objective for the most part in the sense that it can prevent bikeshedding about version numbers, while still giving some useful information to the users.
That being said, I'd love a system that lets me view changelogs by the version "level" I want. So I can easily look up the major changes since v6.6, and someone else can check what has changed since v4.2, etc... With Semver a lot of my time is spent reviewing every changelog between my current version and the one I'm bumping to, and there's no reason it needs to be that way.
If I release 1.0, then add a couple "medium" size features and release as 1.1 a month or so later, I can keep doing this indefinitely with 1.2, 1.3, etc. Barring some truly big new thing or a fundamental change in functionality, it becomes very unclear when to release "2.0" or what even makes it different from the other 1.x releases.
On the other hand, if I do the exact same work of adding a couple features per month, but don't actually release, I could then make a big splash a year later with 2.0 that had 19 new features, and I think most people could readily agree that qualifies as a "major" release.
From a quality point of view, release early, release often is very useful: get feedback quickly, iterate in small chunks, minimize breakage.
However, from a marketing point of view, this is boring -- especially by the time you're on 1.19 and your features are largely quality-of-life or only affect a small segment of the market. A big 2.0 release that gets press releases and such is much more exciting.
Yes, that's exactly what it means for any project using [SemVer](http://semver.org/). It says nothing about how many new features you added or anything like that, only that you introduced breaking changes.
Make a marketing "name" and use that as a way of showing off what's new, while still keeping semver true to it's name.
In your example, version 1.3 could be "Saucy Gorilla", while 1.8 could be "Delicate Apricot", and when you feel it's change enough from the start, you can pick an arbitrary point and make a new name and blog post.
It would still have some of the bikeshedding that older "major.minor" version schemes have, but it could be not as bad because it is more obviously arbitrary.
I tried to make this very argument with a former employer. Who cares about numbers except for weird technical people? Just pick a cool marketing name and market the heck out of that. "But we've always marketed 2.0 or 3.0..." So what? I suppose there's a small psychological benefit in a monotonically increasing number, but it doesn't really tell you anything more than "Saucy Gorilla" in terms of what is in a release. It can help you order the releases if you need to know which release came first, but you can alphabetize your codenames or even just have a lookup table for those rare times that actually matters.
What I do is keep bundling things together in a dev branch for a while and wait until I have to introduce a breaking change, then release it all as a major version. Of course, that's only with smaller projects where a new feature might be 3-4h of work; I can't see Node.js holding back new features just for a marketing stunt.
It's actually been done (IMO), and done for several decades. In manufacturing, they have the 'Form, Fit or Function' "Rule" [0]. In essence, you have only a two-segment part number, and the 2nd part is considered non-essential, and only for deep traceability.
For software, I would apply the rule as such: 1) merge the major and minor segments, and the revision segment becomes equivalent, 2) then you follow the Form, Fit or Function rule and apply it to the core product, programs, and APIs.
I have software that is in production and has been for a while. I've locked down all the versions of libraries I've used (FFF works here) and everything is running smoothly.
However, a large customer requests a new feature, and I see that library X has a new version that has new features that will support development for the new customer feature.
With semver, I can see that this new version is, say, 2.3.0. If I was using 2.2.x, I have at least some assurance that the API I was using before hasn't changed, so the amount of work to upgrade is likely limited to implementing new features, and not converting and re-testing older code.
However, with the FFF rule, it seems that the new version would change its part number (due to form/fit (API?) or function changing), and all I know is that this version is new; I have no indication how much work it would be to upgrade, other than to assume the worst case that existing APIs have changed.
I could see why the manufacturing methods might make sense, especially in certified software... but in the wild west of web development, I have a feeling it would never catch on.
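That assurance is mechanical, which is the nice part. With npm's semver module (assumed installed), a caret range spells it out:

```javascript
const semver = require('semver');

console.log(semver.satisfies('2.3.0', '^2.2.0')); // true: additive, same API
console.log(semver.satisfies('3.0.0', '^2.2.0')); // false: breaking, opt in explicitly
```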
FFF would say that you release a new part number. But, it does not say how you name your part. So, you can essentially use a SemVer technique to indicate the same family of functionality with some sort of indication non-compatibility.
Part Name: MY SW 2.0
Predecessor Part Number: 150000-0001
Part Number: 200000-0001
And you have
Part Name: MY SW 2.0-IBM
Predecessor Part Number: 200000-0001
Part Number: 200001-0001
Say your added features to your IBM branch caused some regression. So now you have a second build that is compatible but required a bug fix. Now you have
Part Name: MY SW 2.0-IBM
Predecessor Part Number: 200001-0001
Part Number: 200001-0002
The biggest thing you should take away is that Enterprise Manufacturing and Inventory systems do not try to stuff all the knowledge about a product history into a single field, as SemVer attempts to.
> I'd love a system that lets me view changelogs by the version "level" I want.
Our MIS vendor offers this, and it's indispensable. Especially considering that each of their customers are on a different version at any given time. You select your current version, and any other version to compare it to, and it spits out a report of all the differences from the module level down to the object attribute level, all nicely separated into logical groups. Due to the complexity of the system, I couldn't imagine a successful upgrade without it.
The data is there for projects like node. Issues are tagged with things like SEMVER-MAJOR and SEMVER-MINOR, doc, etc...
It just needs a front-end to slap on there, and maybe a bit of standardization to make it easy to pull this data from not just Node's docs, but from others that adhere to it as well.
When I'm writing Javascript, I don't want to have to think about which version of NodeJS it is running on. I appreciate that major releases are boring to everyone except for people using C bindings.
It's only unstable because the version of V8 is marked as unstable. Newer versions of V8 have async/await as stable. TSC member jasnell has stated if things go smoothly, we can see async/await stable in a minor version release of v7.
Correct. async/await was first introduced in V8 v5.2 but put behind a testing flag. Node.js v7 includes the latest stable version of V8 (v5.4), which was promoted two weeks ago but still has it behind a flag. The current beta version of V8 (v5.5) now has async/await enabled by default. Shortly after V8 v5.5 goes stable, in about 4 weeks, Node.js will update to include it. However, it will most likely only be updated on the v7 branch and not the v6 LTS branch.
Also of note, chakra still has async/await behind a flag as well.
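For anyone wanting to try it on v7 today, opting in means passing the flag, e.g. `node --harmony-async-await app.js`, where app.js is something like this made-up example:

```javascript
// delay is a stand-in helper, not part of any library.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

async function main() {
  console.log('waiting...');
  await delay(100);
  console.log('done');
}

main().catch(console.error);
```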
I have read this comment which suggests async/await in Node 7 (V8 5.4) is buggy. Does anybody have more information about that?
> I'm glad to hear so much excitement about async/await. However, Node v7 is based on V8 54, whose async/await implementation has some bugs in it (including a nasty memory leak). I would not recommend using --harmony-async-await in production until upgrading to V8 55
There are a number of semver-major changes included in v7. That said, the goal for this release has been improved stability and performance over new features so the jump from v6 to v7 is fairly small.
I think gedy meant that it used to be that "v7.0 released!" meant that one could expect exciting, fun features to be present, and that the new version is worth taking a look at, and playing with.
With semver, it seems like a lot of "new features" are typically released in minor versions, since quite often they don't need to break compatibility in order to introduce features. So major versions are, to me, almost more of a cause for concern these days. My first thought is typically "Oh no, what part of my stack is going to break now? How much time will I spend tracking down the fix?"
> So major versions are, to me, almost more of a cause for concern these days. My first thought is typically "Oh no, what part of my stack is going to break now? How much time will I spend tracking down the fix?"
Isn't that exactly the point of semver?
And, assuming things will break at some point,* isn't that great? Now you know when to expect it.
Semver doesn't influence design decisions of a project's lifetime. It describes them.
* fair assumption, unless you're dealing with software which literally never breaks backwards compatibility.
Yeah, that's definitely the point of SemVer. The only point I'm (and presumably gedy is) making is that a major version no longer feels like Christmas morning, but rather akin to "see me in my office tomorrow morning." Okay, not quite that bad, but in the same vein.
SemVer is great and helpful and I wouldn't choose anything else currently, but it also lacks the builtin PR that old-school major versions seemed to have, where major version bumps usually meant you could get excited about exploring new major features. There's nothing special about a minor SemVer bump that says "new major features have been introduced." The spec only asserts that minor means new features.
That is, there's no obvious way to know that 1.1 introduced only one new method for checking status, while 1.2 introduced a new magic() method that finishes your work for you and makes all your dreams come true. :-P
Eh, no harm no foul. :-) I could have been more clear, if there was still room for misunderstanding, and it gave me the chance to mentally flesh out my thoughts a bit better as well. :-)
I think this is the dilemma of semver versus "market" versioning. If true semver is confusing, perhaps stick with the LTS releases which are named differently (one of the reasons being clear communication)?
The LTS versions are named after elements of the periodic table, starting at A and moving forward alphabetically. The first one was "Argon" (4.x), the second "Boron" (6.x).
There are several breaking changes, which make it a good use case for a major version bump. Also, the odd major numbers for node aren't LTS versions, so I imagine they'll be adding a ton more features before releasing v8 LTS.
Yeah but as backward compatibility breaking change, where would that number go? Presumably still at the front, because it's the most important number for version checking, and then it'll still feel like the number that reflects a "big change" =)
Anyone have a good summary of the language feature changes & APIs (either standards or standards track) that are in the version of V8 that ships with Node 7 vs Node 6?
Additionally, there's a lot of pretty impressive optimization work done by the v8 team. You can read more about that on their blog: http://v8project.blogspot.com
From what I read in slack:
Do not use async/await yet (with the --harmony-async-await flag) because it has memory leaks (V8 5.5 will fix this when it gets added).
As a not-eligible-for-LTS version, it's probably fine - you're not going to be using Node 7 in production anyway, as it's only ever a cutting-edge release, unless you don't care about stability, in which case memleaks are part and parcel.
node releases should ship with yarn instead of npm. Yarn is better all around - faster, deterministic, local caching, offline mode, better licensing terms.
Not unless you want a lot of people upset that their dependencies aren't installing: "Myles Borins (@thealphanerd) recently ran citgm with yarn, and shared the results. It was 25 minutes faster than npm, but 20 modules failed to install." ( https://nodesource.com/blog/the-definitive-guide-to-the-firs... )
Not everyone is down to switch such a core piece of the infrastructure at the first sign of new and shiny. I think yarn will succeed, but plenty of people thought the same about bower, ied, duo, etc.
Give it a moment to settle. The path forward might even be a merger instead of replacement.
node developers have overwhelmingly indicated that they want to switch to yarn and are willing to work through the short term problems until it becomes stable. npm wouldn't be a suitable caretaker for yarn.
I am sorry, but what? What is your source? From a new account[1] with only 3 comments overwhelmingly favoring yarn over npm it's difficult to trust this as it just seems like SPAM. I have been developing in Node over 3 years and trying new things constantly and no, I'm not really excited about yarn. Nor many Node developers I know.
Ah I see, Github stars. I am not sure they are a good indication though; many people (HN effect?) star something new and fancy when it comes out, while I think NPM can be considered to have "grown" organically with Node.js; which in my experience doesn't attract as many stars at all. When NPM came out the Node community was still in its infancy, with not so many people in the ecosystem. I haven't even starred NPM while I almost starred Yarn the other day.
Only "in additional to npm", and then really only once yarn is at parity. Right now yarn still has issues installing certain packages, and there is no yarn equivalent to "npm run ..." which is a pretty significant hold-up (projects that rely on npm scripts rather than make files or some other OS-encumbered approach cannot switch over at the moment).
It's in the works and slated for a later release, but makes it such that yarn is not a universally viable replacement for npm yet.
I just use JS scripts, because if node is not installed you aren't getting very far anyway.
Actions like "copy this file", "clear this subdirectory", and "pull these files over the network" are easy to write synchronously in node, are there are modules for things that aren't.
For more complex actions, there are modules like env-cmd or cross-env for setting environment variables on different platforms. If you had something really complex, you could check os.platform() and then call scripts written for each OS.
But you're right, you can't write a .sh shell script and expect it to magically work on Windows.
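A sketch of what such a JS build script can look like (build/ and dist/ are made-up paths):

```javascript
const fs = require('fs');
const os = require('os');

// "copy this file", synchronously; assumes build/app.js exists.
if (!fs.existsSync('dist')) fs.mkdirSync('dist');
fs.writeFileSync('dist/app.js', fs.readFileSync('build/app.js'));

// Platform-specific steps via os.platform():
if (os.platform() === 'win32') {
  console.log('run the Windows-specific step here');
} else {
  console.log('run the unix-specific step here');
}
```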
I'm not saying this shouldn't be done; I personally will be using yarn for all my future work. It might be a bit early, but regardless, the comment I linked to is about the process that has to happen for that kind of change to be seriously considered.
Well, maybe not before `yarn` actually installs the dependencies of all projects. It still has bugs where it doesn't install "random" transitive dependencies when using `yarn --prod`. Seems premature to make it part of node.