Then I read all of the docs in 15 minutes and realized that I might like this.
Then I used it and I know that I'll keep using it, unless there are any showstoppers.
Great job, Devon!
Why? Because Rollup.js makes it possible to structure code into modules in a way that is backwards compatible and future proof. Yeah, it takes some getting used to, and it is not yet as simple as it could be, because integrating libs that are not yet available in the module format takes some manual configuration, but once you know how to use it you won't want to go back.
I can highly recommend it, not because it is the hottest tool out there, but because I think it has a much higher long-term value than most JS libs.
Parcel seems to have similar functionality, but due to the wide range of other features it comes with, I think there is a much higher chance that it will be rendered obsolete in the near future. Rollup.js, on the other hand, _focuses_ on combining modules into a single bundle, which will continue to be required for the foreseeable future.
Actually, I still use Gulp to call Rollup.js, as it is more flexible and allows me to use features which are outside the scope of Rollup.js (e.g. automatic browser reload). My point is that the exact thing Rollup.js focuses on is very important for the long-term development of JS: modularization. Yeah, I know that's an upcoming part of the JS standard. But with Rollup.js you can use it today, as it has various adapters for legacy formats, and therefore you have a tool which allows you a smooth transition into the future ;-)
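For anyone curious what that looks like, here's a minimal sketch of driving Rollup from a Gulp task (the entry and output paths are made up, and the option names have shifted between Rollup versions, so check the docs for the one you're on):

    // gulpfile.js — minimal sketch: Gulp orchestrates, Rollup bundles
    const gulp = require('gulp');
    const { rollup } = require('rollup');

    gulp.task('bundle', async () => {
      // build the module graph starting from the (hypothetical) entry point
      const bundle = await rollup({ input: 'src/main.js' });
      // write one self-executing bundle for the browser; cjs/umd/es are
      // the "adapters for legacy formats" mentioned above
      await bundle.write({ file: 'dist/app.js', format: 'iife', name: 'app' });
    });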
1. "Ugh, OldTool™ v3.0.0 is overcomplicated, bloated, slow and unnecessarily configurable!"
2. "Introducing SuperTool™ v1.0.0, which just does the stuff you really need. No bloated configuration. No huge ecosystem of extensions. Simple, straightforward and fast."
3. "SuperTool™ v1.0.0 is great! But our setup really needs it to do something a bit different, so we've hacked in this extension."
...repeat for a while...
10. "Introducing SuperTool™ v2.0.0, which now has a more flexible API and configuration , allowing many former hacks to be done in a straightforward way."
11. "SuperTool™ v2.0.0 is great! But our setup really needs it to do something a bit different, so we've hacked in this extension."
20. "Introducing SuperTool™ v3.0.0, with a whole new, flexible, pipeline based API. Everything is a plugin! There's even a plugin to send email!"
21. "Ugh, SuperTool™ v3.0.0 is overcomplicated, bloated, slow and unnecessarily configurable!"
22. "Introducing HyperTool™ v1.0.0, which just does the stuff you really need. No bloated configuration. No huge ecosystem of extensions. Simple, straightforward and fast."
It's not necessarily bad, as most new tools are slightly better than their predecessors; it just takes some getting used to, and you have to learn how to structure your code base in a way that you do not have to completely refactor it every time a new build tool is adopted ;-)
If you don’t like a new tool, don’t use it. I can’t stand this kind of meritless complaint. If you’ve got a problem with the implementation of a new tool, that’s understandable. But to take issue with its existence? That’s just petty and a waste of everyone’s time.
If it takes away from their community and momentum, then yes.
Attention in FOSS is close to a zero-sum game.
Not to mention when the devs themselves get bored with tool X they've created and jump to creating some newer tool Y, dropping the whole community...
Given that you don’t pay for the product I don’t see anything wrong here. People work on what they want then change if they’re bored; you can’t do anything about that but pay them to continue supporting the tool you decided to use.
Things might not be wrong and still be a problem. E.g. your neighbor practicing tuba (badly) during the whole day, in perfectly legal hours...
In fact someone can gift stuff with the best intentions, and still cause a problem.
You can. You can encourage developers to contribute to a single project, rather than create fragmentation, which is exactly what's happening in this forum.
2) A fragmentation of development efforts across competing projects. Webpack has many contributors, but they'd have 1 more if the author of Parcel built parallelism into Webpack instead of a new bundler.
Second, even the premise of it being "unexpected" is not 100% true. Non-dev communities influence the development of projects all the time -- projects even have special voting boards for people to vote on what features they want next, etc.
Besides, it's very possible to not pay for a tool and still have a voice in its development. It just takes a development team that's willing to listen to its userbase.
Last, but not least, direct payment is not the only/ultimate influence a user can exert on a project. Some FOSS projects are made by companies that want to see them widely adopted, so user complaints (from non-coding users) matter very much -- things like Mongo, for example.
In my experience, what is more common is that people who would otherwise not contribute to an existing project for whatever reasons (e.g. complexity, bad experience w/ maintainer, etc) start contributing to a new project because these younger projects are generally more approachable.
People don't really jump ship from something they've invested time into unless it's very clear that something else is taking over.
This was also the case with jQuery/Sizzle and friends back in the day. There's js-joda/moment/date-fns, underscore/lodash/ramda/just, css modules/styled-component/styletron, etc etc etc.
Lack of competition tends to lead to stagnation, and since js has such a large (and growing) community, you're never really going to be at a situation where major libraries will run out of community support due to competition of equal caliber.
Read Coverity's account of building static analysis tools for C: https://cacm.acm.org/magazines/2010/2/69354-a-few-billion-li...
"The C language does not exist; neither does Java, C++, and C#. While a language may exist as an abstract idea, and even have a pile of paper (a standard) purporting to define it, a standard is not a compiler."
The immense fragmentation of C compilers makes it difficult to build static analyzers.
You're correct that the cost of fragmentation is seldom that there's a lack of resources around any single project. It's more that complementary products are much harder to build, because there's no standard for the product category.
Change != !Improvement
So I'm not seeing the point here.
In my imperfect memory, it was perfect.. (I couldn't go back though...)
No one is forcing you to use the latest bundler / tool / programming language.
Let others experiment.
You’ll be happy to reap the benefits of their work when something mature comes out of it.
Tell me that when you get turned down in a job interview because you don't know <JS framework of the year>.
This stuff comes up in interviews. Trying to avoid answering direct questions about a framework and answering with "Well I can learn it really quickly because I'm a good vanilla JS Dev" doesn't fly all the time. If you can't answer direct questions about a framework, they will pass on you if they get someone who can answer those questions.
It has nothing to do with marketing yourself.
There are tons of companies that are going to do certain things you aren't going to be a fan of. Isn't it better to find out they're not that into mature platforms and prefer the latest trend during the interview and not on the job?
(in the JS ecosystem)
1. It's really hard to find something that works
2. It's really hard to figure out all the dependencies, and configs, and plugins, and conventions, and ..., and ... for something that works
3. Once you have it figured out, each and every moving part, including the tool itself, will get updated with deprecations and breaking changes.
Except everyone is using them and you quickly become outdated, especially if you are looking for a job with heavy experience in Python, PHP, and ASP.NET web development. The market has already shifted to "3 years' worth of experience with Parcel".
I used to love writing code and building stuff from scratch. Nowadays it's exactly what you said. I spend more time configuring stuff and finding tools that work well together than actually writing the underlying code that makes the app or the website work.
In the time it takes you to boot up notepad.exe and create an HTML wireframe, create-react-app/electron/etc. has already set up an application that is modular, knows its dependencies and resources, is debuggable, can hot reload, etc. The time you would spend to get this manually ... it wouldn't even be fair to compare.
Most of the time you don't need any of those things: you don't need modularization, you don't need a dependency manager, you don't need a debuggable version (because you are working on plain versions without minifiers or module bundlers that compress your code), you don't need hot reload (because you aren't using a transpiler/compiler-to-JS), you don't need the hot-and-untested framework, and sometimes you don't even need source control.
The big difference between that and starting from scratch is that you choose what you want based on your needs, not because everyone or a framework tells you "this is the right path". The current state of web development encourages "lego developers" to rise and shine as if they were experts.
I might take 1 day on the feature, and then 3 more days just getting it past the testing gauntlet.
Node/JS in general has a pretty terrible track record for me with version hell.
However, this tool actually does useful work, based on a quick glance I took. So maybe don't discount it just yet.
But again, I share your frustration.
I've been configuring Webpack projects for almost 2 years now and I'd rather use Parcel tbh. My only problem is the lack of plugins such as loading .vue files.
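(For reference, this is roughly the Webpack-side setup being alluded to -- a sketch only; the exact shape varies between versions, and newer vue-loader releases also want a companion plugin registered:)

    // webpack.config.js — sketch of the .vue handling Parcel lacked a plugin for
    const { VueLoaderPlugin } = require('vue-loader');

    module.exports = {
      module: {
        rules: [
          { test: /\.vue$/, loader: 'vue-loader' },
        ],
      },
      plugins: [new VueLoaderPlugin()], // required by recent vue-loader versions
    };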
It would be nice, though.
This isn't reducing complexity, it's just taking all the junk you'd normally have in your bloated boilerplate and putting it directly into the build tool. I'm not convinced that's a good idea.
- Xcode, for iOS/Mac Development, is usually around 5GB (and no, even if you remove every single simulator, it's still over 100-200MB).
- Android Studio is roughly the same deal as Xcode.
- I've not used Windows so I can't say for sure, but I'd be surprised if the various tech there doesn't follow this pattern.
Xcode and the Android SDK each come from a single vendor who provides an integrated development stack and presumably vets the contents. Does anybody do that with these npm packages?
It seems like every time I run the du command in my node_modules folder, I find some crap that shouldn't be there. Earlier this year, I submitted a PR that reduced the size of the babel react preset install footprint by about a third, just by removing a totally unused dependency that nobody noticed. I don't think the contents of npm packages get nearly enough scrutiny and it scares the hell out of me.
It's crazy to me that a package that is downloaded from npm over 4 million times a month gets so little scrutiny that nobody but me noticed a 5MB dependency on all of lodash that wasn't even being used.
With various CI setups and some server-side rendering configurations, there are potentially scenarios where build tooling actually does run in environments with higher risks, though it's not as applicable in this specific case.
If I put the answer on the internet, then thousands of people benefit. Even if my work only saves you five minutes one time, it still may be worth the bother in the grand scheme of things.
Which is a long way around to my point: this group is curating a bunch of build related modules which means I don't have to pay as much attention. That's not automatically a win (especially if they do a poor job of keeping up, which happens a lot in this space), but it has potential at least to be pretty useful.
Well, npm may implement some kind of content-addressable storage to transparently deduplicate the dependency tree on disk. It's not that easy to do well.
As long as your bundle size is small, and the build times are fast, it's a clear improvement, at the expense of spending like 0.15% of even the cheapest 60GB SSD.
cries tears of joy
For example, if one of the JS modules in a library package says "import classes from './styles.scss'" and an application package imports that JS module, the application should somehow end up with the CSS output from styles.scss without having to configure Sass itself, and without having to include the entire stylesheet from the library package.
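Concretely, something like this (file and package names are hypothetical):

    // button.js — a module inside the library package
    import classes from './styles.scss'; // the bundler is expected to compile this Sass

    export function renderButton(label) {
      return `<button class="${classes.button}">${label}</button>`;
    }

    // app.js — the consuming application
    import { renderButton } from 'some-ui-library/button';
    // The app never configures Sass, yet its CSS output should include
    // (only) the styles that button.js actually pulled in.
    document.body.innerHTML = renderButton('Save');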
The only downside is that you either deliver one large bundle with mixed HTML/CSS/JS content to the browser or have to use some complicated configuration to split at least the CSS out into a separate file.
I have a few github issues open that haven't been touched in months!
I used brunch with a rails app. Breakfast gem. Not as fun to use as it seems, since doing anything out of the norm is almost impossible to find documentation on.
Honestly, Webpack was easier to use, simply because it is easier to find information on the web about it.
This just confused me for some reason but it's more me focusing on the wrong thing, haha
Thank you so much for this work! Not only are we happy to see a real contender in the bundling space, but we also look forward to working together and to learning from parcel in ways that can help make webpack more flexible and easy to use for those who want "zero configuration".
Welcome to the bundler family!
I love webpack to death, but there are just so many parts that keep getting me confused, especially things like "!" in require, and always copy-pasting the same boilerplate code over and over for Sass, ES6, etc. I know it may be more flexible, but sometimes you just want things to work and can't be bothered with too many details. Laravel did this with Elixir, where it wrote a wrapper just to do it (which lets you just get on with your work without having to learn another tool). This too seems to be focused on that. Great job!
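(For anyone who hasn't run into it, the "!" syntax means chaining loaders inline in the request itself -- roughly:)

    // Webpack inline loader chain, applied right-to-left to the file:
    const styles = require('style-loader!css-loader!sass-loader!./styles.scss');

    // ...versus the config boilerplate people copy-paste instead:
    // module: { rules: [{ test: /\.scss$/, use: ['style-loader', 'css-loader', 'sass-loader'] }] }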
After reading the docs though I couldn't find any mention of tree-shaking or other things that would shrink the final bundle size. Any word on that front?
Especially since Webpack's tree-shaking doesn't _really_ work for a lot of third party libraries:
I imagine Rollup might do a bit better on that front since it's the bundler that popularized tree-shaking in the first place. Can anyone with experience with Rollup comment on this?
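For anyone unfamiliar, a toy example of what tree-shaking means -- and why it tends to break down for third-party packages shipped as CommonJS, which can't be statically analysed the same way:

    // utils.js
    export function used()   { return 'kept'; }
    export function unused() { return 'dropped'; }

    // main.js
    import { used } from './utils.js';
    console.log(used());

    // A tree-shaking bundler sees statically that `unused` is never imported
    // and leaves it out of the bundle. With CommonJS (`const utils = require('./utils')`)
    // it generally can't tell what is used, so everything gets kept.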
Is this officially backed by one of the author's employers or any other organization? I've seen too many projects (including major parts of the bundler ecosystem: Browserify, Gulp) die early when a sole author is forced to move on. Beyond the technical advantages, if I knew a company had already invested resources, it would give me a solid business reason to switch.
However, my first reaction was "yet another build tool, huh?"
My next thought was "...I must be getting old."
Or if somebody has switched from webpack to parcel, can they give an idea of the performance improvement?
We've got a moderate sized app, and webpack rebuild (typescript, server-side react, client side js bundle, css export) is less than 3 seconds.
Can someone ELI5 what bundling is and what problem it solves?
The logic for web "applications" these days can be complicated and involve loading hundreds of such files. If loaded individually, this requires a lot of HTTP requests and results in a slow page load time.
Bundling is similar to a static linker in that it produces a single binary out of all its dependencies. This can be loaded with a single HTTP request, cached, zipped, whatever.
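A toy example of what the bundler actually does (hypothetical files):

    // math.js
    export function add(a, b) { return a + b; }

    // main.js — the entry point the bundler starts from
    import { add } from './math.js';
    document.body.textContent = add(2, 3);

    // The bundler follows the import graph from main.js and rewrites both
    // files into one bundle.js, so the page loads a single <script> with
    // one HTTP request instead of one request per module.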
1. Browsers don't support JS modules
2. It's more efficient for browsers to request a single file than a bunch of smaller modules. So even if you use something like browserify, which dynamically loads modules in the browser, you end up with much longer load times because of all the overhead with each HTTP request.
- you can log output
- perfectly capable of handling your unique build style
- no setup (pretty much)
- just call the command lines of other tools (like lessc, yui-compressor.jar, etc.) for compilation tasks
Talk about over-engineering. The thing is, the reason we don't have one tool to rule them all now is that no one has ever objectively argued why one is better than the others.
Question: what is the thing npm/grunt/gulp gives us that cannot be done with bash, if all the tools we used were command-line executables?
Try running `npx ember-cli` directly in your terminal.
I have a sense that webpack is the same; it's a bloated environment. I don't need an all-in-one solution for a build process. Just give me the tools; I'll use bash and call it a day. So I'm just asking: is there something I'm missing? What is the feature from these environments that I can't directly get by calling a script from bash? Why can't we just pull that script out into a command-line tool so that devs can use it to their liking: pipe its output to a folder, cat that file to something else, call another tool to minify the results, move all the contents to the dist folder.
Node just doesn't seem to give me any more power as a dev (build-wise), so why is it so important to the space?
Hats off to frontend engineers; the space has evolved into something respectable, but I feel like over-engineering is a pursuit of the community. It's like they keep making new tools to encapsulate more features and then post on HN saying "Look at this cool new way to do things that fixes that problem you really didn't have".
No one's webpack was bottlenecking their work; I'm guessing you can choose what to build and when, so you don't have to do a full build every time. No one is going to choose a build system because of its blazing speed. I would rather not have people work on these problems that don't really need to be solved.
This is the same as Java (JVM), .NET (CLR), and plenty of other languages. You can compile to a native assembly if you like (similar to C++ or Go) but it's not necessary when you have fast runtimes and JIT compilation.
Everything you listed for functionality is possible today - while also letting people write and ship code to multiple platforms quickly.
You seem to be looking for the ability to run JS tooling (builders, transpilers, minifiers) as common command line utilities. That seems to be entirely possible and I'm not sure why you're making the distinction. I can pipe input between all of them, include them in bash scripts, make files, whatever else you're used to.
There's no meaningful distinction between a "standalone command line tool" and a command line tool that is written in node. If I added webpack (for example) to an ubuntu package repository, you could do `apt-get install webpack`, and have a "standalone" command line utility without ever downloading it from npm. It would have to pull other ubuntu packages as dependencies, but that's standard practice for installing something these days.
Unless you're wondering about packaging it into linux. In which case I'm just as confused as you are.
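To make that concrete, a toy sketch -- a Node script with a shebang behaves like any other command-line tool and can sit in a normal shell pipeline (e.g. `cat app.js | ./strip-comments.js > dist/app.js`; the file name is made up):

    #!/usr/bin/env node
    // strip-comments.js — reads stdin, drops blank and //-comment lines, writes stdout
    const chunks = [];
    process.stdin.on('data', (chunk) => chunks.push(chunk));
    process.stdin.on('end', () => {
      const out = chunks.join('')
        .split('\n')
        .filter((line) => line.trim() && !line.trim().startsWith('//'))
        .join('\n');
      process.stdout.write(out + '\n');
    });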
I guess because everyone needs to do those steps, so why not have a single command that does it for you. Sure, you could write your own and have everyone maintain thousands of different random shell scripts all eventually doing the same thing, but why?