That being said, I disagree with the main premise. The tooling is getting there, but it's just not as good as old-school IDEs for typed languages. Specifically, the code-insight tools don't really compare: for example, IntelliSense, refactoring, go to definition of a symbol, and find symbol references.
I programmed in typed languages for many years, and when I find myself back in an old-school IDE for a typed language like Eclipse or Visual Studio, a cognitive load is lifted off my shoulders.
I know that for front-end developers, and also a reasonable number of back-end web developers working in scripting languages, lightweight editors like Sublime, Atom, etc. are popular these days. Maybe that's driven the idea that "scripting/untyped/loosely typed languages can't have good code intelligence"?
The biggest issue I see is that the "dialect" you need to follow to enable IntelliSense isn't well defined. What seems to work for me:
1. explicitly define constructors
2. explicitly define the prototype
3. use JSDoc if you're using inheritance
4. initialize object literals (even if they get overwritten immediately)
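To make those conventions concrete, here's a minimal sketch; the Shape/Circle names are invented for illustration, not from any particular codebase:

```javascript
/**
 * 1. Explicitly defined constructor.
 * @constructor
 */
function Shape(name) {
  this.name = name;
  // 4. Initialize properties up front, even if they get overwritten
  // later, so the editor can infer the object's shape.
  this.area = 0;
}

// 2. Explicitly define methods on the prototype.
Shape.prototype.describe = function () {
  return this.name + ' with area ' + this.area;
};

/**
 * 3. Use JSDoc to declare the inheritance relationship.
 * @constructor
 * @extends Shape
 */
function Circle(radius) {
  Shape.call(this, 'circle');
  this.radius = radius;
  this.area = Math.PI * radius * radius;
}
Circle.prototype = Object.create(Shape.prototype);
Circle.prototype.constructor = Circle;
```

With this structure, most code-intelligence tools can resolve `new Circle(2).describe()` and offer completions on the inherited members.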
One idea I've had on this subject is that each layer of tooling multiplies or reshapes the capabilities of all the other layers.
This sounds great, but there are good reasons to choose limited tools. Sometimes a powerful tool can grow so powerful that it does the work of other tools, and suddenly your other tools are dependent on this one layer.
So we use vim to edit 10 KLOC files, but should your files be that big? Your editor can do braces and semicolons for you, but should you need those? If I have a fancy type system do I need tests?
I'm not specifically advocating for a crappy editor, but the rule of least power suggests that we might get ultimate flexibility out of weaker tools.
Honestly I don't buy this. Nobody sufficiently experienced to need the power of vim/emacs should also be so comfortable with live editing on a server that it drives their choice of editor.
I've used simple text editors and semi-intelligent editors that have basic code intelligence from a prebuilt library, and honestly I wouldn't go back to either of them.
The advantages of a good IDE are enough to make me overlook that it runs on Java of all things.
There is apparently less effort spent making sure those stubs are really accurate.
Does anyone (do JetBrains staff read HN?) know what process they use to generate the stubs?
It's amazing how many capabilities people miss out on.
My main experience of using WebStorm was on a client's legacy code base. Their policy was that WebStorm had to be used over any other tool.
None of the clever features like go to definition worked properly, and I ended up just dropping back to grepping for symbols project-wide.
Personally I couldn't see any features you couldn't get from using TernJS and ctags with a quality editor.
I found it a bit slow and the keyboard navigation a bit limited. (Not in the main editor so much, but in little things like navigating window splits, manipulating pairs, etc.)
My gf works in JS with WebStorm and was amazed by what I was doing in PHP.
That said, PHP is much more static than JS.
On the other hand I'm sometimes surprised by the things it can figure out from my code.
Then I also use Laravel's Eloquent outside Laravel, and models don't get autocomplete in that case either. E.g. if you do $user = User::find(1);, PHPStorm doesn't understand that $user is an instance of the User class unless you explicitly tell it. Apparently if you use the whole Laravel framework there is a CLI utility that adds the annotations for you and fixes this issue.
VS Code provides all of this (at least for Angular, which is what I use it for). I think JetBrains (WebStorm) supports those actions as well.
Are you sure this is true for everyone? I learned programming through Java, doing about 6 months of it in an IDE. But I have now been programming in pretty much pure JS with no IDE (just vim) for a much longer time, and I feel a big cognitive burden when I go back to heavy IDEs. I think our minds change over time and become used to whatever we're programming in.
These features are available for JS using add-ons such as JSDoc, TernJS, etc., but you're right: cross-file, these tools don't work as well as they do for Java.
If you need super-advanced cross-file refactor capabilities, chances are you're not using modern best practices for JS application design, including small modules with simple APIs & the open/closed principle (try to make APIs open for new features, but closed to breaking changes).
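As a rough sketch of that style (the formatter module here is invented for illustration): a small module that takes an options object stays open to new features without breaking existing callers:

```javascript
// A tiny module with a simple API. New options can be added later
// (open for extension) without changing existing option names or the
// return type (closed to breaking changes).
function createFormatter(options) {
  // Defaults mean old call sites keep working as the API grows.
  const opts = Object.assign({ prefix: '', suffix: '' }, options);
  return function format(value) {
    return opts.prefix + String(value) + opts.suffix;
  };
}

// Existing callers are unaffected when a new option is introduced:
const money = createFormatter({ prefix: '$' });
```

With modules this small, a cross-file rename touches one simple surface instead of rippling through a monolith.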
Annotations come with a cost, and JS provides many benefits that are not available in languages like Java.
As I said in the article, the JS runtime tooling makes intellisense look like a baby chew-toy.
I'm also surprised he didn't mention flowtype, which is by Facebook and is supported by Babel! If you're writing React, you're probably already using Babel, so using Flow is very low friction.
Furthermore, if you're interested in runtime assertions, you can use babel-plugin-typecheck, which will convert your flowtype annotations into assertions. If we're looking to be pragmatic, IMO, this is the optimal path to take. The library that Elliot is working on looks interesting, but even he recognizes on its page that it's still a work in progress.
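For reference, a Flow annotation can look like the sketch below (the `add` function is invented). I'm using Flow's comment syntax here so the snippet stays valid plain JS even without a build step; with the regular inline syntax, Babel strips the types at build time, and babel-plugin-typecheck can instead turn them into runtime assertions:

```javascript
// Flow's comment syntax: the checker reads the /*: ... */ annotations,
// but to a JS engine they're ordinary comments, so this file runs as-is.
function add(a /*: number */, b /*: number */) /*: number */ {
  return a + b;
}
```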
If you're writing a modern JS app and you wanna take advantage of all the ES6 goodies, you'll pick up babel. And if you're using babel, you can get jsx and flowtype for free. It's ridiculous to try and build a JS app without some sort of bundler, whether it's browserify or webpack. And if you're gonna use a bundler, pulling in a transpiler like babel is really easy and it lets you write much saner code.
Oh, and if you're already drinking the Facebook kool-aid with React, JSX, and flowtype, you'll wanna give Nuclide a shot. As I understand it, it has nice Flow support, with stuff like "click-to-define" and inline linting.
I'm using flowtype annotations with the babel plugin I mentioned above, but due to my usage of non-standard features, I can't successfully run flow itself over my codebase. I'll also note that my experience with nuclide when I last tried it was underwhelming.
This is essentially the same thing that happened to AMD, and one of the reasons that I advocated for users to choose Node-style modules and Browserify (and now Webpack) instead of AMD.
I consulted on several projects and gave exactly that advice. They didn't listen. Now they're stuck supporting a huge AMD legacy & Bower ecosystem and the authors of Bower are no longer supporting it.
That is the future that awaits those who build TypeScript + Flow-annotated applications. Fine if it's just a few modules here and there, but for large enterprise apps, you're building a mountain of technical debt that will eventually need to be paid.
Don't get me wrong, I am really excited that Flow and TypeScript are paving the way, and I blatantly stole some good ideas from them for rtype/rfx, but I'm also very wary about leading people down a road that I am pretty confident will dead-end within a few years.
Both projects still have great value though. We learned a lot from AMD, and it has influenced the new standards. I am confident that TypeScript and Flow will both influence whatever we get in the JS standard. I know the TypeScript team is already experimenting with additions to the runtime reflection specifications coming in future iterations of ECMAScript.
With regards to TypeScript: I don't think it is a "dead end", and I definitely think you are wrong about its future. TypeScript has been relatively careful about keeping "an eye to the horizon" and having some idea of the "forward compatibility" of its features. For instance, the module format in TypeScript was built before the ES2015 standard was finalized, but the final standard was surprisingly close to what TypeScript used; nothing necessarily required being rewritten, and it was very easy to clean up the old style incrementally if you did feel like being particular about using the final ES2015 syntax. I feel like the same can be said about TypeScript's type annotations. They are definitely informed by ES4 ("the lost version", which forked into ActionScript, sort of), and in turn, yes, the TypeScript team is certainly influencing the standards to possibly add type annotations to the ECMAScript standards ("this time for real"), or at least a Python 3-like approach where type annotations are safely ignored by browsers when left in the code.
At the very least, if ES-standardized type annotations are suitably different from TypeScript, there will presumably be TypeScript transpiled output to them (just as the old TypeScript module imports get output correctly when targeting ES2015), and you can even use that output to bootstrap your conversion to the new syntax (thanks to TypeScript's attempt to remain a pure superset of ES).
Documentation and community support online for Flow seem quite light compared to TypeScript - I get the sense it is mainly just used inside Facebook at the moment, and it was hard to find help for the problems I encountered. Even with Nuclide, the IDE support for Flow doesn't touch TypeScript's (I guess partially because TypeScript has definitions for the 3rd party libraries you use, which is usually quite a big part of JS programming). The plugins for Sublime and Atom for TypeScript are really impressive, as is Visual Studio Code - it's hard to imagine going back!
TypeScript also seems to offer a more complete "correctness" check of your code, I guess because it is compiling it - I found that Flow didn't pick up on typos in things like imports, which led to runtime errors, whereas TypeScript does. I also couldn't get Flow to validate my React prop types - I was probably doing something stupid here, but the lack of docs made it hard to know what.
The final straw for me with Flow was that at a certain point in the project, my Atom/Nuclide started using all my CPU with loads of flow-server processes, and again, I couldn't find a solution online, which made it useless to me.
Flow's lightweight nature is still appealing and I will keep an eye on it, but at the minute, for a commercial project especially, I think TypeScript is a better and more mature bet. Shame it doesn't support non-nullable types though!
The main cost of using TypeScript is the need to have definitions for third-party libraries (although it is possible to create very basic "skeleton" definitions, just listing exported members as type "any", which doesn't give you type safety but is a quick way to get things to compile). Interop with some "cutting edge" features, like requiring CSS modules in a React component, also requires some hacks. But I think the benefit of having the code compile is worth it - I've found that most of the time when a runtime error slips through, it's because I've used the "any" type, either in my code or for a library. I plan to write up my experiences at some point, as there's not a huge amount out there about using TS with the latest shiny stuff like CSS modules.
Also one other note on Nuclide - I had terrible problems with the version distributed through the Atom package manager - it would completely freeze my Atom and use 100% CPU and the only fix was to delete ~/.atom and start from scratch. Installing it from source fixed this, and even though I am now using Typescript, I am still using most of the Nuclide packages as it adds a few nice features (especially the inline lint/compile errors, not sure which plugin adds these but they appear on mouse over whereas the default Atom ones don't seem to). It does add ~10s to Atom start time, but otherwise doesn't seem to impact performance too much.
The article then goes on to not use the word "refactor" again. So tell me, why am I confused? Because I have some pretty expensive JS tooling and refactoring JS absolutely, 100% still sucks compared to a statically typed language with a good IDE.
Thankfully, I've been doing more JS, and less Java, in the last year. Despite the curly braces, it feels like going back to the golden age of the 80s sometimes, before the original sin of C/++/Java. I remember using dynamic languages and (in very small part at school) Lisp (as well as better static languages) before C and its spawn were shoved down our throats for the better part of 20 years.
(Unix is a great OS, but C is just a portable assembler, not a good high level language model)
It's a shame JS had to be dressed up to even look like Java/C++. We got functions, we got dynamic size arrays/lists, we got property lists with atoms, lists and functions as values. Hoo-boy!
Yes. Notice this is a list of only the trendiest of tooling and excludes many great and popular tools that aren't used by the cutting-edge react-webpack-flux-babel developer this post is directed towards.
The goal is the same as commercials with attractive people: associative advertising.
I gave a talk at the O'Reilly Fluent conference a while back discussing lots of the developer tooling that was available for JS a couple years ago. I wanted to gauge general awareness of the tools, so I conducted some informal audience hand-raise surveys.
I think there's a lot of value in a general ecosystem overview. What kinds of tools are there, what are the tools like? Are there interesting future trends to make note of?
I think raising awareness of the overall ecosystem is valuable.
React users have React.PropTypes, which is pretty cool, but too React-specific and also awkwardly verbose. I am working on rtype/rfx to fill a void in this department.
If there was anything I believe to be better, I'd happily link to that, instead. =)
For example I remember doing a Mandelbrot set generator in js about 3 years ago and it was a lot slower than C#. I'm talking orders of magnitude, methods that took 10 ms in C# would take 1000 ms in js.
But worse than that, it hit incredible performance problems when I had an array of more than 10,000 items and tried to use push/unshift or shift/pop.
I'm no language expert, and I never really dug into why, as it was just for fun. I'd be interested to hear other people's perspectives or a more technical explanation. I'm not really interested in those 'we tested these simple method speeds' comparisons; I'm more interested in practical experiences.
EDIT: Regarding tooling, I'm presently optimizing js in a client's site and the profiling in dev tools is still basic compared to professional tools in C# for example, so this guy needs to expand his experience if he thinks DevTools is 'good'.
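For what it's worth, the shift/unshift slowdown on large arrays is usually because Array.prototype.shift has to reindex every remaining element, so draining a big array that way is quadratic overall. A common workaround, sketched here with an invented helper, is to track a read index instead of mutating the front of the array:

```javascript
// shift() moves every remaining element down by one, so draining an
// n-element array with shift() does O(n^2) work in total. Advancing a
// read index leaves the array untouched and keeps the drain O(n).
function drainWithIndex(queue) {
  let head = 0;
  const out = [];
  while (head < queue.length) {
    out.push(queue[head++]);
  }
  return out;
}
```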
Another issue you may look at is that we don't have standardized support for SIMD yet (but it's coming soon). SIMD would help a lot for the Mandelbrot set. Try with one of the experimental SIMD implementations and see how that works for you.
You're right that I'm not familiar with the profiling features for C# tools, but DevTools' profile recording, flamegraphs, and memory profiling features are better than the tools I used in C++ and Java. =)
You can get whole games like Quake to compile and run in real time in JS and in high resolution, so the Mandelbrot set should pose no issue at all.
You need to take care to work with the JIT though, not sabotage it. Perhaps the code went about it in a non-optimal way?
For the Mandelbrot set, e.g., instead of an array of tons of objects you could allocate memory in a buffer and have JS manage it.
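A sketch of that buffer idea (the function and its coordinate bounds are invented for illustration): escape-time counts go into one flat typed array rather than per-pixel objects, and nothing allocates inside the hot loop, which is the kind of monomorphic numeric code JITs optimize well:

```javascript
// Compute Mandelbrot escape-time counts for a width x height grid over
// roughly [-2.5, 1] x [-1.25, 1.25], writing into a flat typed array.
function mandelbrot(width, height, maxIter) {
  const counts = new Uint16Array(width * height);
  for (let py = 0; py < height; py++) {
    const y0 = (py / height) * 2.5 - 1.25;
    for (let px = 0; px < width; px++) {
      const x0 = (px / width) * 3.5 - 2.5;
      let x = 0, y = 0, i = 0;
      // Iterate z -> z^2 + c until |z| > 2 or the iteration cap is hit.
      while (x * x + y * y <= 4 && i < maxIter) {
        const xt = x * x - y * y + x0;
        y = 2 * x * y + y0;
        x = xt;
        i++;
      }
      counts[py * width + px] = i;
    }
  }
  return counts;
}
```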
The equivalent Ruby, Python or PHP could easily run Quake on today's hardware.
Also, what do you mean by 'much'. Like orders of magnitude or 30%? Microbenchmarks are fairly useless.
I didn't say Quake 1. Actually all of Quake 1-4 has been ported to run under JS/WebGL. Besides, it's still amazing given that the rendering happens in high resolutions and gets good refresh rates.
Moore's law or not, Python and Ruby can't pull this off with the same speed.
>Also, what do you mean by 'much'. Like orders of magnitude or 30%?
For things that test the raw language (e.g. its interpreter and implementation, instead of directly calling to some C helper that does everything like e.g. Math.cos would do) it's usually an order of magnitude.
So where are the tools for refactoring, then? The article doesn't mention a single one.
If you have a refactoring problem, I suspect it's a symptom of a monolithic architecture problem. Solve that, instead. =)
Having a monolith is not a problem. In many cases it makes perfect sense. Making everything microservices because it's hot nowadays - that's a problem though.
I haven't gotten around to playing with Babel or ES6, and I want to. I haven't bothered to Google it yet, but anybody have experience with Babel + Closure compiler? Is it easy, or difficult and inconvenient, or not possible?
Google also has the Traceur Compiler which does transpilation of new JS features (ES6, ES7 proposals, and a couple of features that aren't in either) into older versions. 
See: https://github.com/google/closure-compiler/wiki/ECMAScript6#... (might be a bit out of date, since there have been releases since that document was last edited)
Poking around, Traceur looks like something to play with and get acquainted, but not for production code just yet?
It relies heavily on JSDoc, and I believe JSDoc has had its time, and is on its way out (being replaced by tools like Flow, TypeScript, rtype/rfx).
ES6 may help with this, but it will be a long time before ES6 is natively supported and ES6 class syntax is adopted among devs.
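For comparison, ES6 class syntax makes the constructor/prototype relationship explicit enough that editors can follow it without JSDoc hints (Animal/Dog are invented names, just for illustration):

```javascript
// The class form of the constructor/prototype pattern: `extends`
// states the inheritance directly, so tooling no longer needs
// @constructor/@extends annotations to resolve the hierarchy.
class Animal {
  constructor(name) {
    this.name = name;
  }
  speak() {
    return this.name + ' makes a sound';
  }
}

class Dog extends Animal {
  speak() {
    return this.name + ' barks';
  }
}
```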
Great stuff, thanks for sharing.
It also has some refactorings and even a console and plugins. What do you think it lacks to be an IDE?
Note that it's also in early stages atm.
I haven't used Visual Studio Code enough to be able to speak to it specifically, but editors like Atom and Sublime can be outfitted to basically be IDEs.
I can actually do more with Atom now than I could with WebStorm just a year ago.
Plus, IntelliJ and Eclipse BOTH implement ALL of their functionality as plugins. It's just that they come with a default set of plugins pre-installed.
If you don't count the plugins, it's barely even an editor (even syntax highlighting is a plugin).
And I agree with the last part as well. Properly configured Emacs and vim can be used as IDEs, however they can also be used as editors.