Io.js v3.0.0 (github.com/nodejs)
159 points by antouank on Aug 4, 2015 | hide | past | favorite | 49 comments


I thought they would merge back into node? Why are they releasing a 3.0 version? Doesn't a new major version break even more compatibility?


They will probably release a merged version in October, which will be named Node.js 4.0 and will be supported for 2.5 years.

https://medium.com/@nodesource/essential-steps-long-term-sup...


Ah, seems version skipping is all the rage nowadays. Having an LTS version will be welcome though. Having to go in there and upgrade node every few months is a mild inconvenience on my project.


It isn't skipping versions. io.js is strictly following semver and has introduced breaking changes (usually due to a V8 upgrade) in each major bump.


Ah, seems semantic versioning is all the rage nowadays.*

Fixed it!


Yeah, having your version numbers mean something? What'll those wacky JS hipsters come up with next?!


If that is true, that's an odd jump, seeing that Node's current version is 0.12.7.


It'll be version 4 because it'll be a continuation from version 3 of io.js


I understand that. But in my understanding a version number also reflects where a project stands in respect to features planned vs features implemented. Jumping from 0.x to 4.x is plain weird in this regard.


There are different philosophies of versioning, and version numbers may have different meanings depending on project/product.

io.js uses Semantic Versioning (semver), which has clear definitions of what version numbers mean; node.js previously has not. Going to 4.0 when the fork is remerged is consistent with the semantic versioning of io.js, and since node.js versioning had no clear semantics prior to the merge, it is not inconsistent with node.js versioning either. So it seems a reasonable numbering in context, even if it's odd when you look at it only from the node.js side of the fork.


Technically node.js has been using semver—the semver spec says `0.*` is pre-release and anything goes. Since node.js is still 0.12 it's covered under that part of the semver spec (not a super useful part of the spec, but it doesn't actually violate semver).


But in my understanding a version number also reflects where a project stands in respect to features planned vs features implemented.

Nope. Version numbers are completely arbitrary. Firefox is on version 39 and Chrome is on 46.0.2471.2 and Safari is on version 8.0.6 and Webkit is on version 10600.6.3.

Windows 3.1 was followed by Windows 95 was followed by Windows 2000 was followed by Windows 7 was followed by Windows 10?


It is worth noting that Windows also maintains an internal version (the NT version), visible through the `winver` command.

  Windows XP:    5.1
  Windows Vista: 6.0
  Windows 7:     6.1
  Windows 8:     6.2
  Windows 8.1:   6.3
  Windows 10:    10.0 <-- New!
Interestingly, they also bumped the internal version to 10.0 in Windows 10. It was actually 6.4 in the early preview releases, but soon became 10.0. I find it intriguing considering Microsoft's conservative stance on backward compatibility, but even Microsoft seems to have finally reached the point of refreshing a version number.


In Windows 8.1, the GetVersion(Ex) APIs were deprecated. That means that while you can still call the APIs, if your app does not specifically target Windows 8.1, you will get Windows 8 versioning (6.2.0.0).

Microsoft didn't want to relive the hassle of Vista's major version bump. Thanks to the deprecation of the GetVersion APIs, however, programs will think they are on Windows 8 unless they explicitly opt-in to new version behavior. This meant Windows 10 could actually use the 10.0 version number without breaking stuff again.


> I find it intriguing considering Microsoft's conservative notion on backward compatibility, but even Microsoft seems to have finally reached the point of refreshing a version number.

Alternatively, that bump could have been a request from a marketing manager.


Hell, TeX version numbers are just additional digits of pi (latest version is 3.14159265), and Slackware jumped version numbers from 4 to 7 because people kept thinking it lagged behind Debian's whole-number versioning (ultimately, Slackware's version numbers would have surpassed Debian's even without the artificial bump, but whatever). Arbitrary indeed.


Exactly. Their rationale is TeX can only become more perfect over time.


As a mathematician, I prefer the expression "converging towards perfection" :-)

More seriously: it might be an urban legend, but I was once told that upon Knuth's death, the TeX code will be frozen and all remaining bugs will be permanently declared 'features'.


But in my understanding a version number also reflects where a project stands in respect to features planned vs features implemented.

Nope. Version numbers are completely arbitrary.

These are both slightly less than correct (but not wrong). From a totally objective standpoint, version numbers are completely arbitrary. In this case, however, they are not: Semantic Versioning defines guidelines for when to bump a given part of the version number, and gives meaning to each part. io.js follows semantic versioning, and when it merges into Node.js it will carry the new versioning scheme with it, which pretty much everyone agrees is a far better approach than Node's current way of doing things. You can read more about it at http://semver.org.

Version numbers typically (in my experience, anyway) do not reflect future or planned features; they represent what exists in the codebase at that given moment. Not sure where you heard otherwise, but I'd be interested to hear about it. Also not sure why one would care about "features planned" in the current release of their software. Why tease me with what's to come when I can't even use it yet?
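To make the semver rules concrete, here is a minimal, hypothetical compatibility check (not from any real package; real-world tooling like the `semver` npm module handles ranges, pre-release tags, and more):

```javascript
// Minimal illustration of the semver rules discussed above: two versions
// are API-compatible only if they share a non-zero major version, and the
// 0.x range promises nothing at all.
function parse(version) {
  const [major, minor, patch] = version.split('.').map(Number);
  return { major, minor, patch };
}

function isCompatible(installed, required) {
  const a = parse(installed);
  const b = parse(required);
  if (a.major === 0 || b.major === 0) return false; // 0.x: anything goes
  return a.major === b.major;                       // same major = compatible
}

console.log(isCompatible('3.0.0', '3.2.1'));   // true  (same major)
console.log(isCompatible('2.5.0', '3.0.0'));   // false (major bump = breaking)
console.log(isCompatible('0.12.7', '0.12.7')); // false (0.x makes no promises)
```

This is why io.js racing from 1.0 to 3.0 in months is expected behavior: each V8 upgrade was a breaking change, so each one forced a major bump.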


semver is interesting because it's part of the GitHub Re-Architect Software Development Initiative.

When GitHub was new, the founders dumped dozens of utilities and methodologies for public-facing software development on the world (semver, README-first development, mustache (was that them?), an RPC system inspired by Erlang, and dozens more), and a lot of people glommed on to many of those as The Holy Way, but everything is still just made up. (This is where the "Science" part of Computer Science falls short.)

Much like how Heroku proffers 12-factor apps because it fits their own narrative nicely, and it's also useful outside of Heroku, but just because one way of doing something exists doesn't mean it's necessarily the only or best way.

And, as far as version numbers go, with C libraries we have the concept of a "library version" and an "ABI version" — you increase the library version with any change, but the ABI version only with breaking changes. It confuses some people to have one library with two version numbers, but it kinda makes sense too. It directly tells you the least common denominator of features the library can provide, regardless of bug fixes or new features added later.


you must not have been following the io.js saga so far...

first they forked node, then began using semver, and very quickly got to 2.x and now 3.x.

Merging with node just means now anyone using node can use whichever version they want, and future versions are officially part of node.


They're still operating separately for now, though under the "Node.js Foundation". I believe a merge back in is planned though I can't seem to find any specific dates.


The faster url parser is still not merged [1] (was supposed to be added to 2.0.0). All in all, I think even io.js is still very conservative compared to how node was pre-0.10

[1]: https://github.com/nodejs/io.js/issues/643


Thanks for the reminder - I saw this floating around months ago. @petkaantonov does fantastic work and I'm doing what I can to help get the PR merged.


Haha, no problem. It's been a bit frustrating watching that PR, as requirements went back and forth between complete backward compatibility and WHATWG compatibility a few times, and petka did everything to accommodate.

I just want to see node near the top of the benchmark lists again; it's never good publicity to be so close to PHP :)


Was hoping for the inclusion of ES6 fat arrows. V8 has a solid implementation now.


Should be included in the next major (which will get V8 4.5, I guess): https://github.com/nodejs/io.js/pull/2221#issuecomment-12625...


You'd have to check which version of V8 added fat-arrow support... I've been using babeljs for most of my stuff regardless of node/V8 version, as I want async/await support, which is a ways off still.


The next+1 branch has a fairly recent version of v8 4.5 that has fat arrows and other new language features shipped. My guess is that next+1 will become 4.0.0.


I'd love for there to be some sort of indication of moving towards using a lot of ES6 features in the Node.js API. Having the file system module returning Promises would be amazing, especially when async functions are standardised. What'll also be awesome is if the streams and HTTP module become Observables (async generators, an ES7 proposal).


> Status codes now all use the official IANA names as per RFC7231, e.g. http.STATUS_CODES[414] now returns 'URI Too Long' rather than 'Request-URI Too Large'

Nice. Using the names (from a reversed copy of http.STATUS_CODES) can make code that uses uncommon HTTP status codes a lot more readable.


PowerPC support is exciting. Can't wait to give this a whirl on some of my better hardware.


This can't be stable at this rate.


Do you have any reason to support this claim?


Have you tried io.js v3.0?

It may be stable itself, but it breaks many modules, which may be why the major version got bumped.


Have you tried reinstalling those modules? Often, modules have C++ bindings that need recompilation after a V8 version bump.

In any case, running `rm -rf node_modules` followed by `npm install` usually helps.


The future of yesterday, today.


[deleted]


That's why it's a major version bump, because it has breaking changes.


I'm hoping they will converge towards Go, with its fine-grained threading model (with shared state), as opposed to an asynchronous model which is effectively the cooperative-multitasking that we remember from Windows 3.1.

Asynchronous programming forgets that the CPU is also a resource that needs to be managed in a non-blocking way. Currently, in Node.js, long-running computations (e.g., computing a large prime number) totally block the event loop.

EDIT: For anybody saying that shared-state is a bad thing, read up on functional datastructures with structural sharing.
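For what it's worth, the usual workaround for the long-running-computation problem described above is to slice the work and yield back to the event loop between slices. A rough sketch (the slice size of 10,000 is arbitrary, and prime counting is just a stand-in for any CPU-heavy loop):

```javascript
// Break a long computation into slices so pending I/O callbacks can run
// between them, instead of monopolizing the event loop until done.
function countPrimesBelow(limit, done) {
  let count = 0;
  let n = 2;
  function slice() {
    const end = Math.min(n + 10000, limit);
    for (; n < end; n++) {
      let prime = true;
      for (let d = 2; d * d <= n; d++) {
        if (n % d === 0) { prime = false; break; }
      }
      if (prime) count++;
    }
    if (n < limit) setImmediate(slice); // yield to the event loop, then resume
    else done(count);
  }
  slice();
}

countPrimesBelow(100000, count => console.log(count)); // 9592
```

It works, but it is exactly the manual bookkeeping that a preemptive scheduler (Go, Erlang) does for you.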


Never say "shared state" and "threading" in the same sentence.

If anything, all languages need to move to an Erlang shared-nothing model. But, Erlang was doing all this back in the 80s, so people have to re-invent it worse in the 2010s to feel special. You can't just go around using things other people made (OPC—other people's code) that just work, you have to make it your own and start your own conferences and books and webinars and vine instaslacks.

Erlang is also preemptive (bumping reductions) so your "long running computation spinning on a thread" problem goes away too. Though, we can't say "goes away"—sounds too much like Go—how about it "erlangs away" the problem?

Decent write up on erlang scheduling at http://jlouisramblings.blogspot.com/2013/01/how-erlang-does-...

Also see decent writeup about shared-nothing at http://jlouisramblings.blogspot.com/2013/10/embrace-copying....

Erlang also has per-process (green thread) heap/GC (i.e. no 'stop the world' required), full load balancing over your CPUs, unbelievably useful pattern matching, atoms/symbols, full serialization of all native data types, built with network distribution in mind from the ground up, plus a syntax that doesn't look like somebody woke up from the 70s in 2010 and decided to write a new language just like their old favorite language. Any knocks against Erlang syntax are because it stands on the shoulders of giants, not because somebody made it in 2010 and ignored all programming language and usability research from the past 40 years.


Any benchmarks comparing Erlang, Go, C, Node performance for network applications (eg. max # connections, max requests/s, request latency, etc) that you can point to?


Benchmarks are interesting these days. Software can generally be made "fast enough," but there's no benchmark for maintainability or ease of coding.

What good is a Java program being 2x as fast as a Python program if the Java program requires 3,000 lines and two weeks to write while the Python program requires 300 lines and two days?

The idea for Go is "become the new Java but without that startup time penalty." The goal of Java was to be a "dumb enough" language where individual talent or skill or knowledge doesn't matter. So, you get a language not targeted at expressiveness or ease of thought or ease of programming, but at ease of using people as interchangeable cogs in a multi-billion dollar software development machine. Not really exciting to be a part of.

We've come full circle back to Averages (http://www.paulgraham.com/avg.html) — startups not using Go will fare much better than encumbered startups just due to the brain rot you're required to experience while using Go itself. (I mean, TABS? In 2015? Non-versioned web-placed dependencies? What is wrong with these people?)


>What good is a Java program being 2x as fast as a Python program if the Java program requires 3,000 lines and two weeks to write while the Python program requires 300 lines and two days?

I will stop you right there.

If there is a 2x difference it very well might be worth it to me if that means I only have to run half the machines I would with the slower solution. Run-time costs can dominate all other concerns for some problems.

Specifically, I am looking for Erlang vs. other (ie. C, Go, Node) performance commentary.


> If there is a 2x difference .. I only have to run half the machines

No, it doesn't work that way. CPU is almost never an issue, it's all about startup time, network latency -- that kind of stuff. So, unless you're mining bitcoins, 2x difference is negligible.


I guess I don't understand where Erlang would fit in your (my) stack.

If you want to handle millions->billions of external client connections, you may want to be able to pack as many on to a machine as possible to reduce costs (the difference between paying for 5,000 machines vs 10,000 machines). If you want these to be secure connections (which I do), there is significant CPU overhead in encrypting and decrypting all network traffic. Even if you aren't doing encryption work, there is still significant CPU work in managing large numbers of sockets. Implementations may also vary significantly in the amount of memory required to support the connection, affecting costs and machine count.


Throwing out the Windows 3.1 argument is an effort to throw mud at the wall and hope it sticks.

Node is meant for network programming, where letting the OS schedule which process can operate on the network data is the right thing to do. There are certainly problems with the lack of lightweight data sharing between threads/processes/coroutines, but for the most part it works very well.

There are things node shouldn't be used for due to that. So use another product.


It works well for servers which merely move data from A to B. For servers that actually do some useful computations, the programming model simply does not fit. In modern web applications, latency is very important, and that is the very thing you are sacrificing by choosing the cooperative multitasking model for your server.


Isn't that basically what I said?


You can use workers for threads, though that isn't shared state.



