I really hope Bun at least becomes a big player. Node.js has been very sluggish implementing new features, to the point where most people just don't trust them much on this front anymore. Deno's direction is interesting but doesn't align well IMHO with devs' interests. Bun, in contrast:
> Out-of-the-box .env, .toml, and CSS support (no extra loaders required).
This makes a lot of sense. Node.js should be "sherlocking" (integrating into the core) the most popular/common features devs use. It's crazy to me that, after 10+ years of `.env` being a common abstraction to manage environment variables, you still need to install a separate library because Node.js doesn't know how to read the file itself. Same with YAML or TOML.
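To show how small the ask is, here's a rough sketch of what built-in support could look like; `parseEnvFile` is hypothetical, not an existing Node.js API:

    // minimal sketch of a built-in .env loader; parseEnvFile is
    // hypothetical, not a real Node.js API
    import { readFileSync } from "node:fs";

    function parseEnvFile(path = ".env") {
      for (const line of readFileSync(path, "utf8").split("\n")) {
        const trimmed = line.trim();
        if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks/comments
        const eq = trimmed.indexOf("=");
        if (eq === -1) continue;                           // ignore malformed lines
        const key = trimmed.slice(0, eq).trim();
        const value = trimmed.slice(eq + 1).trim().replace(/^["']|["']$/g, "");
        process.env[key] ??= value;                        // real env vars win over the file
      }
    }

    parseEnvFile(); // process.env now includes the file's entries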
Same with features like fetch(), which took five years from the issue being opened to being implemented in Node.js (and not for lack of collaborators, but for lack of willingness to merge it).
I'm happy, though, that Node.js is finally approving web features, and that the move to ESM has been finished, but they are moving so slowly that I can def see a focused small team (it might be too much for one person) overtaking Node.js.
I have zero experience designing systems like that but I think the sweet spot would be a standard library that's modular and detached from the core. You could pull it in after setting up the core, replace it or upgrade it in parts but it's still maintained by one entity with a consistent level of quality.
Sure, but I'd argue that value is marginal: 90%+ of people building websites just want a straightforward HTTP server that they can build on and customize with plain JS, and for some deeper cases, yes, that bolt-on system is useful as well.
The problem with stuff like introducing .env support out of the box is breaking backwards compatibility in a silent way, i.e. without any code changes, in a way that'd still run the same in dev and pass tests on CI, and then just affect prod.
I know this shouldn't ever happen, but you can well imagine plenty of legacy/badly configured setups where it would. More pertinently, where it would break no matter how loudly you warn about it in release notes etc.
When you're as mature and widely used a platform as Node, you just can't risk things like this, unfortunately, no matter how much more convenient it would be for the vast majority of users.
That would be easily solved by adding a Node.js version or range in your package.json that specifies where the program is supposed to run, and treating major versions as breaking. There's a balance to be had here; avoiding breakage at all costs is surely not balanced either, and it's starting to add a lot of cruft that will need to be maintained long-term.
The solution is not to "warn loudly" here; specify a Node.js version in your package.json, and your code will continue working on that version. Upgrade the Node.js engine version, and then it's on the person upgrading to make sure that nothing breaks. That's how virtually all platforms work (except the web itself, but that's not "versioned", so it's fair).
It's a bit more troublesome when changing core packages and considering dependencies, but the same could apply there.
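For reference, package.json already has an `engines` field for declaring exactly this; the gap is arguably that npm only warns on a mismatch by default (it errors only when engine-strict is set):

    {
      "name": "my-app",
      "engines": {
        "node": ">=18 <19"
      }
    }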
Apple breaks stuff all the time when updating their APIs. If you hold yourself to decisions made long in the past, then your platform will undoubtedly become crusty and stale. Maybe Node maintainers are OK with that, but it shouldn't be surprising when people go to greener pastures.
I do agree to an extent but the specific point I'm making here is about breaking things in a potentially silent, not obvious way.
Not sure what you're referencing in terms of Apple's breaking API changes, but if it's the sort of breaking change where previous code just doesn't compile or run on the new version, that's a different matter in my view: it's far easier to catch in dev or testing, rather than only affecting prod.
I explained some right in my comment: currently missing are things like native support for .env, .toml, and .yaml. Support for ESM and fetch() was missing for way too long until recently.
Currently I don't see big gaps between web and Node anymore, but I still find the APIs a bit messy: to work with files, for example, I have to import all three of `node:fs`, `node:fs/promises`, and `node:path`. It'd be nice if at least `node:fs/promises` ALSO included the non-promise APIs from `node:fs`, like `createReadStream()`, to avoid having to import another core module for that. Also, WebCrypto should define the variable `crypto` as a global, like `fetch()` and `URL` do, for compat with the browser.
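To make that concrete, here's a mundane file task today (file names are just placeholders):

    // four core imports for one small task
    import { createReadStream } from "node:fs";        // stream helpers live here
    import { readFile } from "node:fs/promises";       // promise APIs live here
    import { join } from "node:path";                  // path helpers live here
    import { webcrypto as crypto } from "node:crypto"; // not a global like fetch()

    const dir = "data";
    console.log(await readFile(join(dir, "config.json"), "utf8"));
    createReadStream(join(dir, "app.log")).pipe(process.stdout);
    console.log(crypto.randomUUID());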
Just some ideas/examples off the top of my head; not much research was put into this, so def take it with a pinch of salt.
.env would be nice, but I personally don't miss .yaml/.toml at all, so there's likely some subjectivity in these feature requests, and keeping them in user-space has value too (simple, rock-solid core).
Being able to run TypeScript code (without type checks) is in my opinion the biggest improvement in Deno. I don't care about their own formatting or linting, and I especially don't care about their LSP and opinionated handling of file URLs (requiring the .ts suffix everywhere). Also, WebGPU is nice, but it shouldn't be in the core, etc.
BTW: speaking of complexity, Deno takes 140 MB of memory; Bun and Node are both around 10 MB.
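(If you want to reproduce that kind of baseline number, the RSS reported from inside a near-empty script is a rough but serviceable proxy; exact figures vary by version and OS:)

    // Node/Bun: print resident set size from inside the process
    console.log(`rss: ${(process.memoryUsage().rss / 1024 / 1024).toFixed(1)} MB`);
    // Deno equivalent: console.log(Deno.memoryUsage().rss / 1024 / 1024);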
I think Bun has huge potential to replace Node, especially because it's written neither in C++ nor in Rust; both of those languages are extremely hard to master, and that limits contributions (and FUN) to some extent. Safety is important, but it seems to harm productivity a lot.
Those are the kind of things that you don't think about every day until you actually need them and go, why isn't there a `YAML.parse()` like `JSON.parse()` or similar? I not only have to import a separate library, but install it first? These things should def be in the core IMHO.
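The status quo, with js-yaml as one popular stand-in for that missing built-in:

    // today: YAML parsing means a third-party install first
    //   npm install js-yaml
    import { readFile } from "node:fs/promises";
    import yaml from "js-yaml";

    const config = yaml.load(await readFile("config.yml", "utf8"));
    console.log(config);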
This is probably naive, but I'd love to see one of the Node.js competitors, i.e. Bun or Deno, innovate on the "it takes ~forever to load node_modules every time I run a test/script/etc" problem.
I.e. Bun is improving "bun install" time for projects with large node_modules, and that's great, but what about, after it's on disk, having to parse+eval all of it from scratch every time I run a test?
As admitted, this is a very naive ask due to the interpreted/monkey-patching nature of the JS language, but I'll hand wave with "something something v8 isolate snapshots" + "if accomplishing this requires restrictions on what packages do during module load, deno is rebooting npm anyway..." and hope someone smarter than me can figure it out. :-)
Deno doesn't use node_modules so you won't have a problem there. You specify each dependency as an import URL (either in the code or in an import map) and it grabs them directly (also using a local cache directory). There's no need in the deno world to even use a package manager.
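For example (the pinned std version here is arbitrary):

    // Deno: a dependency is just a URL, fetched on first run and then
    // served from the local cache
    import { serve } from "https://deno.land/std@0.177.0/http/server.ts";

    serve((_req) => new Response("hello"), { port: 8080 });
    // run with: deno run --allow-net server.ts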
Ah yeah, you're right that Deno doesn't have a node_modules, but AFAIU it still downloads dependencies to "somewhere on disk" and then, every time your code runs, it re-evals all of them from scratch.
So, admittedly I was using "node_modules" as a shorthand for "the code that makes up my dependencies", and AFAIU Deno has not implemented this "use a V8 snapshot to cache preloaded/pre-eval'd dependencies" optimization.
You could maybe load bytecode, if that's a thing in JavaScript, to avoid parsing at least, but beyond that you'd have to make sure the environment is actually the same.
For example if I define an environment variable and some script deep inside the modules folder reads that variable and does something with it. You'd need to make sure the environment is identical before everything is loaded. There are other things as well like a script defining a global (perhaps for polyfill) and it needs to load before some other script.
The Nix package manager is probably a good place to look at how this should be done. However, it's all based on strict constraints and other functional principles that I don't think npm packages are anywhere near satisfying.
Bun and Deno both look really impressive and I hope development in this space continues at the massive pace it's happening right now. But I've tried using both on a small greenfield project recently (nothing fancy - just a static website with some custom build logic made by stitching together some templating engines and libraries) and I ended up reaching for Node again.
As mentioned in the article Bun still has a lot of incompatibilities with Node. For example, as far as I could tell scripts designed to be run with npx don't work right now. And I'm not sure what to make of binary lock files. Sure, it's more efficient but how do you know a `bun add` didn't change something it absolutely shouldn't have changed?
Deno has really fast TypeScript compilation and file watching built in which is awesome. I always loathed using tsc together with nodemon or some custom file watcher thing. And permissions are great. But the package index is more or less the most valuable asset of Node and Deno doesn't provide (and doesn't aim to provide) compatibility with most of it. Also, while I do like the idea of using URLs for dependency management, the way Deno does it with import maps and lock files feels extremely convoluted and too easy to mess up for me. NPM is more intuitive.
At the time it was the "tailwindcss" package IIRC. I also just tried it again on the latest iteration of that codebase and ran into some import problems, seemingly related to "markdown-it" (version 13.0.1). First I got "SyntaxError: Cannot use import statement outside a module"; when I changed the package.json file to add `"type": "module"`, as requested in the error message, I got a panic. If you want, I can open an issue for that.
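(The package.json change in question, for anyone hitting the same error:)

    {
      "type": "module"
    }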
Additionally, I'm currently using at least one function in "fs/promises" that doesn't seem to be supported yet ("opendir") and the "vm" module to evaluate some (trusted) JS.
Most of those issues can be worked around, I just decided to go with Node instead for the time being.
Some of Bun's server speed can be attributed to uWebSockets, which has node bindings [1]. Of course this is just a small detail, since Bun is a huge project.
Good to hear there are others successfully using uWebSockets.js! We are using it in a very early-stage project, not production yet. Can you share your experience using it?
Great experience with it. The examples in the repo, the issues and discussions sections, and the documentation are all very helpful. Alex and the other users are also quite hands-on, replying to each and every issue and discussion there.
We have some thin wrapper for uWebSockets.js that lets us do the following:
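(A heavily trimmed sketch in the same spirit, not the actual wrapper, which also covered db/cache/session/s3 helpers; it assumes the standard uWebSockets.js API of `uWS.App()`, `.get()`, and `.listen()`:)

    // minimal uWebSockets.js app with a middleware-style wrapper;
    // illustrative only
    const uWS = require('uWebSockets.js');

    const withAuth = (handler) => (res, req) => {
      // reject requests without an Authorization header
      if (!req.getHeader('authorization')) {
        res.writeStatus('401 Unauthorized').end();
        return;
      }
      handler(res, req);
    };

    uWS.App()
      .get('/health', (res, _req) => res.end('ok'))
      .get('/me', withAuth((res, _req) => res.end(JSON.stringify({ user: 'demo' }))))
      .listen(3000, (listenSocket) => {
        if (listenSocket) console.log('listening on :3000');
      });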
Got to say your code looks very clean - really impressed! Db access, cache access, sessions, s3 access, and server init with middleware and an endpoint interface - in a few short files with practically zero deps!
Honestly, if it were 2 months ago we would definitely have started with this, but unfortunately it is a little bit too late. Oh well.
Installing a gigabyte of npm packages (or whatever is said to be faster) a little bit faster, and a little bit worse, is not that interesting IMO.
I actually use Deno right now because I only use a handful of third-party dependencies in my project, and I can use Deno’s bundler instead of webpack (even though that’s not its intended use). I’d rather simplify away the stuff that’s slow and complicated rather than making it faster and more complicated (less correct, less compatible).
It's _mostly_ talking about the speed of the surrounding tools. They wrote a package manager that speaks npm (presumably in C++ or Zig instead of JS), which unsurprisingly completely dominates npm in performance.
The other stuff he benches is built into the runtime ... HTTP requests, copying files, and a webserver Bun ships with. Presumably the Bun team wrote these tools with performance in mind, but the article doesn't compare features. I'm skeptical the webserver in particular is at feature parity with the ones he benched against, which makes the numbers look pretty watery to me.
It does not address the runtimes of JavaScriptCore (WebKit/Bun) vs V8 (Node/Deno) which, as you pointed out, are probably very similar.
For the simplest benchmarks the performance disparity might come down to the difference between JavaScriptCore (which Bun uses) and V8 (which Node uses), but for anything non-trivial there's a significant amount of overhead introduced by the Node runtime.
Someone wrote a barebones V8 wrapper called Just JS, which is based on V8 just like Node, but it crushed Node in the TechEmpower benchmarks [1].
Bun is currently lacking workers, which at least for me is a dealbreaker. I'd also think it'd be interesting to compare against justjs when it comes to speed, since that is a very small wrapper around V8 (much smaller than node/deno) and manages to score extremely high on techempower benchmarks (top 25 on all, first/second spot on a few): https://just.billywhizz.io/blog/on-javascript-performance-01...
Considering the performance difference between node, deno and justjs I don't think that's right.
There is clearly a big difference between the various wrappers and how they handle different things. For example, some of them delegate most of the HTTP handling to a native lib, while IIRC Node does a lot of that in its JS stdlib.
FWIW I read it as the opposite. I think most of the stuff he benched was explicitly not benching V8 against JSC. This is all me reading between the lines .. maybe you know more about JS runtimes than I do and can confirm/deny my theories?
Package manager perf: Bun (presumably) wrote a C++ or Zig package manager that's integrated and speaks npm. I guess you could argue that this benches their fast one against npm's V8 runtime, but... that's a bit of a stretch for me.
Copying large files: I'd be surprised if any of the benched distributions rely on the javascript runtimes to copy files on disk.
HTTP Requests: Maybe distributions actually call into JSC/V8 for these .. I have no idea. I'd guess not, but this one sounds the most plausible case for "benching V8 against JSC".
What is up with Bun's binary lockfile? How does one check dependency versions and watch for security vulns? How can someone think a binary format for it is a good idea?
Honestly, I can sort of see it making sense. The only time I need to look into my lockfile is to see the exact version of something I'm pulling in, and generally I think I would be fine doing something like `lock-file-tool --show-me xxxx`. In Rust, there's a `cargo tree` command to see the tree of your exact resolved dependencies for when you want the whole thing expanded, so I generally don't use the lockfile for that anyhow. I certainly don't ever update my lockfile by hand, so there'd be no loss of usability in that regard.
That said, I'm not super convinced that performance would necessitate this. If parsing the text file were really that slow, you could instead have a separate binary file, created whenever the lockfile changes, that holds a serialized representation of the lockfile along with a checksum; then hash the lockfile before using the binary representation. I guess it's possible that if the tooling is brittle, or people try to edit their lockfile by hand, this might end up detecting an out-of-sync lockfile more often than not, but at that point I think the issue isn't really with the lockfile format.
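A sketch of that idea; the file names are placeholders, `parseTextLockfile` is a stub standing in for the real (slow) parser, and JSON stands in for a real binary encoding:

    // the text lockfile stays canonical; the "binary" form is a derived
    // cache keyed by a hash of the text
    import { createHash } from "node:crypto";
    import { readFileSync, writeFileSync, existsSync } from "node:fs";

    const parseTextLockfile = (text) => text.split("\n").filter(Boolean); // stub

    function loadLockfile(textPath = "lockfile.yaml", cachePath = "lockfile.cache") {
      const text = readFileSync(textPath, "utf8");
      const hash = createHash("sha256").update(text).digest("hex");
      if (existsSync(cachePath)) {
        const cached = JSON.parse(readFileSync(cachePath, "utf8"));
        if (cached.sourceHash === hash) return cached.deps; // fast path: cache in sync
      }
      const deps = parseTextLockfile(text);                 // slow path
      writeFileSync(cachePath, JSON.stringify({ sourceHash: hash, deps })); // re-sync
      return deps;
    }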
There's an option to output a yarn-compatible lockfile. In practice, I think this means you'd need a branch protection rule to disallow a change to the binary lockfile without updating the yarn lockfile. I'm not sure that complexity is worth the performance gain of the binary format, personally. I think Bun should have an option (maybe in bunrc) to always use the human-readable format, though that detracts from the "batteries included" nature a bit.
> Documentation is limited, but Bun’s Discord is very active and a great source of knowledge.
Discord is a black hole for information that search engines cannot index; it is not a replacement for documentation.
I hate Discord with such a bloody passion, and this is one of the biggest reasons. People think "just ask on discord" is appropriate as a means of documentation, it absolutely is not.
People choose the "fastest" language or framework, then add a bunch of lazy ORM queries without really knowing how to properly leverage a performant database, stitch together a bunch of "microservices", adding multiple HTTP trips across the wire, and by that time the performance of the core language is a rounding error.
I got a PHP gig recently (after working as a Go developer for some time), and this rings very true to me. In PHP you often have to make the most of your database, and this tends to result in a far more enjoyable codebase experience.
Note that nowadays Node is not just a production runtime. Anything related to tooling (compilation/checks/bundling etc.), for instance, will benefit from faster runtimes.
Also, having a slower runtime only marginally pushes people to code more efficiently (e.g. people wanting to use an ORM will do so either way).
Sure, until you're stuck on some inner loop with a poorly performing language.
Of course, if you're doing microservices correctly, it should be easy to separate the parts where performance is important and write them in a language optimized for that.
That is almost never done correctly; I have not seen it, for one. If it's crazy math calculations, file processing, image processing, sure, but that's not how microservices are used in 99% of cases. It's basically an anti-pattern.
Yes, I had to backtrack while parsing this sentence. Not having it all-capitalized would not have helped: "Will" is still the first word, so it would get capitalized regardless, and Bun is a proper name that would also get capitalized. It doesn't help that "bun" is a kind of bread, and is also part of the name of the "bo bun" dish.
There are some real obvious downsides to using this tool right now. Most importantly, the article lists these:
* Documentation is limited, but Bun’s Discord is very active and a great source of knowledge.
I'm not interested in becoming part of a community, I want to use the tool. Without official docs, I'll pass until the project has matured.
* Bun is not 100% compatible with Node yet. Not every npm package works. Express, for instance, is not yet functional.
I don't expect full bug-for-bug Node compatibility but if extremely popular packages such as Express don't work, I'm not sure if any of my projects will even run with this runtime.
Hopefully, this project will turn into a proper Node replacement, because the JavaScript ecosystem could use a big performance boost. It'll be a while before it becomes part of any pipeline I have a say in, though.
Yep. Doing documentation and support through a chat service designed for gamers, with terrible search (meaning the same questions get asked over and over again), where most users behave as if they were 15 years old because that is the prevailing social norm, means I'm going to pass.
I'm stuck with an open source project on Discord for a mix of developers, artists and casual users across a wide range of age groups. I'd love to migrate off but I honestly can't think of the right platform.
1. Lots of people like realtime chat. It's worth having something that fills that niche (the old "mailing list + IRC" combo used to work well)
2. Forum software is usually fairly awful. I don't want to install some 15 year old pile of PHP with bad UX.
3. A "Stack Overflow" style site requires a lot of moderation and either annoys users for being too messy or annoys users for being too strict
4. I actually still have some fondness for Google Groups but Google's brand is too tainted for most people.
5. Nobody under 30 seems to use email any more so mailing lists are out but something that syncs with email is a must (reply notifications etc)
I am also hoping for something free but very easy to install (SaaS with a free tier, one-click Docker, or similar), and ideally easy on resources (I guess $10/month for hosting is about our limit at the moment).
I'm not convinced it's better than IRC + proper mailing lists - but given that real users are stuck in awful mail clients (like web Gmail or Outlook), mailing lists aren't a great option any more.
Main thing I miss from Zulip is a proper WeeChat plugin (like wee-slack) - although the official CLI client isn't terrible, it's just not IRC-like.
Mutt/Pine (or newer clients like sup/notmuch) are faster, handle quoting/threading better, allow decent keyboard operation, and are workable with a high volume of email.
My biggest issue with Gmail (as someone who interacts with users of Gmail) is that it generally breaks quoting and hides this from Gmail users with its magic conversation view.
Outlook (web) does atrocious top-quoting, rendering it useless for mailing lists.
In general, the web interfaces don't work well with hundreds/thousands of mails IMNHO.
They also tend to lean needlessly on HTML formatting rather than plain text - with things like "my replies in blue" - rather than just proper quoting.
Ed: in general I've just given up on the idea that the average user has any hope of interacting well with email lists - which means such lists aren't useful for general discussion (but can still work for specialist groups like the Linux kernel etc.).
The conclusion being that one will need "something else" for a general audience.
Maybe. I think the bigger problem is that there's no easy fix/recommendation to give new users; "stop using the proprietary groupware that your company/organization pays for" isn't a great recommendation.
Gmail can't do basics properly, like replying in the middle of a quoted text. It shows you something relatively sensible, but sends a completely garbled mess to the recipient.
Mailing lists aren't great, and neither is nntp. But mailing lists (and nntp) work fine with proper clients and some discipline/netiquette.
Sadly, I don't think there is a reasonable way to introduce new users at this point. I'm not aware of any great GUI clients that do the right thing, and AFAIK neither Gmail nor O365/Exchange works in a reasonable way with open standards (checking when others are free, accepting/changing appointments, etc.), so users are herded towards the semi-proprietary clients (and interop across silos doesn't really work in a sensible way).
There are things like the D language forums that mix a decent web front-end and NNTP (and Mailman makes an effort for web+SMTP) - but I'm not aware of anything that really works great, with a low barrier to entry, and that reasonably lets users participate across more than a handful of threads.
I just checked out Flarum the other day - why is it more modern? I actually found Discourse is still much better. Flarum doesn't even have nested reply threads as far as I can tell; it's all flat and hard to read for any thread of replies.
I far prefer Discourse myself, and it is superior in useful features. "More modern" is mostly what I heard others saying in response to me recommending Discourse. Suppose it is a look&feel thing mostly, and personal preference. That said, you can customize Discourse quite extensively.
It's locked behind a proprietary service/account owned by a publicly traded corporation, Microsoft, with shareholder-value obligations. The other options listed in siblings can be self-hosted, are all open source, and some don't require an account, or at least support a form of decentralized identity.
Have you tried Outverse?[1] It's a new startup aiming to solve this problem, with support for forum spaces, threads and more. I've been using it for some time now and would definitely recommend it! Makes it easier to save threads, common questions, etc and has a super intuitive interface overall. I'm looking forward to using it for my open source project, and it solves a lot of the problems I've had with current platforms.
I think NNTP+IRC is good, and you can have bridging to other services if desired (which is probably a good idea too). You should also make logs of the IRC; the IRC channel for my project has public logs.
Discourse and Flarum are even worse, in my experience.
However, having chat services is no substitute for having documentation; you should have good documentation too, and you should not expect people to just ask questions.
How about GitHub discussions? One open source library I depend on uses it to good effect. You could keep the Discord for realtime chat for stuff that should be realtime. Of course, you’d probably have to make effort to push support questions to GitHub. People would probably still ask questions there.
I think Element (Matrix) could work for you. It can't do email notifications, but if you have the mobile client on your phone you can get push notifications.
Well - I am kinda saying "don't use Discord" - at least except for specific cases where its ephemeral nature isn't a liability. I personally wish I wasn't so tied to it.
Really? There's no way to do verbatim searches, and it's "very" generous in deciding what it thinks you meant to search for. It's almost useless for searching technical posts (or anything where you need to disambiguate similar words).
GitLab is charging for metered compute minutes, which is hardly an unreasonable bargain for users who have been getting them for free for two years. Slack's feature-gating is disconnected from any unit economics - indeed, they actually do _store_ the messages prior to the most recent 10,000, but they just refuse to index them. The cost of that indexing is relatively minimal and does not scale linearly with the number of users.
Doesn't matter what the technical justification is, it's a business move, not a tech move. It's about the customer's willingness to pay, and it looks like for Slack, companies are very willing to pay.
The primary use case that we’re talking about is that someone has a common error, in which case they can put that error message into Discord search and read conversation about previous errors. Discord search is perfectly adequate for that use case.
I don’t love that projects use discord now, but I don’t hate it either. Beyond the crappy search, what other reasons make discord a bad fit for technical forums? What’s your preferred solution?
My question was driven by the fact I don't have a preferred solution that would suit both technical and non-technical users.
But as for why Discord is a bad fit - it's completely opaque to search engines, it requires an invite, it has a fairly complex UI and it's basically "unstructured chat". Threads are an afterthought and don't come close to giving any proper structure.
I want chat for people that, you know, want to chat - but structured posts with topics and categories are needed for any sane, long term knowledge-base.
People on chat (this goes for IRC as well) will continually ask the same questions and you will continually have to answer them again because there's no structure.
I actually don't mind projects that use Discord as their primary online community, but goddamn, Discord's text search is NOT amazing. Yes, it searches text and is likely fine for gaming / casual online communities. But man, it provides none of the utility I would expect a modern chat app's search feature to have. Boolean operators, date ranges, wildcard matches, fuzzy matching, scoped searches (beyond just being able to specify a channel, like searching for a string in a user's posts in a specific channel), etc. The subpar search functionality is a huge weakness, especially for communities like OSS where being able to meaningfully surface past comments / threads / solutions provides enormous value.
In one sense, I completely agree with your central point: Bun isn't yet stable enough or mature enough for production, "Node drop in" usage.
But on the other hand, when I look at Bun I think it has heaps more going for it than Deno. The fundamental value prop of offering better performance (both runtime and developer experience) is huge, and something that would get me to want to use it. Contrarily, while I see a number of improvements in Deno, the vast majority of them seem to be "niceties" that solve some initial "setup" issues that can be painful in Node - but since I already have taken care of a lot of those Node issues in my own projects, there is not a ton that I see compelling in Deno.
Point being, if I were a betting man, I'd easily put all my chips on Bun vs. Deno. It's a "plan where the puck is headed" vs. a "plan where the puck is now" approach, and from that perspective I see a lot more value in Bun.
> …I already have taken care of a lot of those Node issues in my own projects, there is not a ton that I see compelling in Deno.
I felt this way too before using Deno for a while. So far I enjoy:
- Breaking from npm/node_modules support on purpose turns out to be a real plus for me. It's refreshing to have dependencies referenced by URL, and it's good to have them cached centrally by default (in $HOME/Library/Caches/deno on macOS, and $XDG_CACHE_HOME/deno or $HOME/.cache/deno on Linux, for example).
- `deno lint` and `deno fmt` make adopting and using JS/TS feel more like Rust/Go/other languages with good built-in ceremony-free tooling.
- Fresh is turning into a very nice Next.js/Astro alternative (https://fresh.deno.dev/ ) that I found very easy to learn and deploy, with great performance and developer experience out of the box.
Bun is interesting, but I wish it didn't embrace node_modules: perpetuating its use, instead of attempting to move the community on by recognising it for the mistake it was, feels sad to me. (See Ryan Dahl's "Design Mistakes in Node" PDF or talk for more: https://tinyclouds.org/jsconf2018.pdf and https://www.youtube.com/watch?v=M3BM9TB-8yA )
To your point, it would be refreshing - and appropriate - if more projects baked "documentation-first" into their purpose. For all intents and purposes docs are "marketing materials". When they're lacking, as you noted, it creates doubt. "Join us?" Huh. Join what?!?
This is one of those engineering things that happens year after year. We all understand the value of good docs. Yet it keeps happening. Why?
These days now that I'm fully in charge of building my own product, I often wonder if we place too much emphasis on centralized documentation sites that users have to deliberately seek out and visit vs in-context documentation snippets that show up as closely as possible to where users might actually need them, deeply integrated into the product and user journeys.
Users don't search for documentation for the sake of finding documentation. They search for documentation because they want to know how or if our product can solve a particular problem they have. My hypothesis is that the documentation discoverability problem is really just a symptom of the product discoverability problem, and that centralizing docs in 1 searchable website to make docs "discoverable" is only addressing the symptom, when that effort can be much better spent addressing the root cause by making the product itself more discoverable and deeply integrating useful documentation into it.
> centralizing docs in 1 searchable website to make docs "discoverable" is only addressing the symptom
a searchable docs website is the most important thing for me. having to "discover" an api by stepping through code and comments is a waste of time--only useful when you already know the basics, which requires documentation
Centralized documentation is valuable for several pieces:
* Easily get an overview of the entire API. I may just be scouting the library, so I want to understand what the API looks like.
* Examples as a starting point. How do I use your API?
* Makes your product more discoverable / approachable to potential users.
Think of your users like a funnel - how are they using the library? What are the common reasons you’re losing potential users? What are the common reasons you’re losing existing users? Users are also different so you have to analyze by cohort.
Now, can something better be done? Maybe. It takes a lot of work and would have to address the above issues, and I don't know if it would necessarily change the need for something centralized.
I read the documentation before downloading anything, to find out if the software will work for my problem. I’m not sure that “in context” helps with that?
That's what Kaseya is doing with their software: basically having contextual assistance for each feature as an "AI Buddy", where users can choose to listen/learn or just continue using the product, all integrated as one.
I think it's because documentation (and design) requires different skills than writing code. A great programmer is not necessarily a good writer or designer. There are rare gems, people who have a balance of such skills, but most of us struggle to write useful documentation, even the bare minimum necessary for others to get started - or any at all. The code, or the lead dev's brain, is the documentation.
A perfect example of this is documentation generated from docblock comments. Some projects only have such docs, and expect users to go from there. It's as if programmers envision their audience as another machine to program.
I feel similarly about business/marketing aspects of software projects. Many programmers seem to assume, "If you write it (the program), they will come."
Successful software is so much more than just the code, it usually involves a communal effort of various skills, especially human communication, including writing good documentation.
Because being a small project with limited resources requires devs to be as resource-efficient as possible. Docs, demos, and tuts are all super important, but they create additional maintenance overhead - if they are not kept up to date, they become worthless.
So for the early stages of a project it makes sense to keep the overhead low and work with a small group of focussed early adopters.
As projects grow, mature and become more stable, more investment into learning material becomes important. Doing it too early burns valuable resources.
I've learned over time that it's much easier to incrementally update existing docs than it is to add them to a large project from scratch - so every single one of my projects, no matter how small, now has documentation from day one.
As I add new features I update the docs to reflect the changes, trying to keep those documentation updates in the same commit as the tests and implementation.
Since I started doing this the quality of documentation I produce has gone up a ton, because I'm constantly exercising those muscles.
It's been a huge win for my coding quality and productivity too - I don't have to remember as much stuff because I can refer back to the docs, and documenting as I go along causes me to make much better design decisions.
Worth noting: I'm a native English speaker writing documentation in English, and I've been blogging frequently and writing online for over twenty years so I've accumulated a LOT of writing experience. So what's easy for me may not be easy for other people!
Good point, but writing and maintaining README.md files for your solo projects is nothing like writing proper docs for big projects with multiple contributors and users. That's a full-time job, isn't it?
Writing non-code is easy and fun for me - I used to do it for a living - but when I am pressed to deliver features, documentation takes a backseat. Also, I've never felt like documenting other people's code.
I find the same approach I take to READMEs for smaller projects scales up pretty well: any time I make a change to one of my larger projects I ensure that the documentation is updated as part of that change.
If I accept a PR from someone without documentation I'll follow up by adding the docs for it myself in the next commit. I think that's more reasonable than demanding people add documentation if it's not necessarily their core skill set.
Honestly, I'd love to experience working with a professional technical writer on this kind of thing, but that's not something that's happened at any point in my career to date!
I've been working on my open-source library for 9 years now, almost entirely by myself. For the majority of that time I kept documentation separate from code. Now, I do enjoy documenting stuff - I have diplomas in both Computing and Creative Writing - but my various approaches to documenting the project would start off with the best intentions and soon enough degenerate into an irrelevant mess, as I tweaked code, forgot to update related webpages and demos, etc.
In 2017 I stopped working on the project entirely: I'd coded up a new major version of the library but never released it because knowing I had to overhaul and re-document so many different pages, demos, etc ... like a dementor, it sucked all joy from me.
Then in 2019 I recoded the entire project from scratch. This time I thought about the documentation first, before I wrote a line of code. I decided the best approach was to generate the main documentation from inline comments. I did this for both core code, and for demo examples - the demos stopped being standalone afterthoughts and became instead my end-to-end testing suite. To present the documentation to any developers who might show an interest in the library, I coded up the library's website in a way so that the core documentation[1] and the demos[2], with easily accessible code, could be very easily copied over to the website whenever I rebuilt the library (which regenerates all the documentation). I also added a set of lessons to the website, and a set of "How do I" articles - both of which are an ongoing project.
The library's website doesn't have any functionality where users can ask questions - but that's what the GitHub issues and discussions pages[3] are for.
The system isn't anywhere near perfect (I still need to automate the demo testing, for example, and there's no CI for copying stuff from the repo over to the website, etc) - but, given the depressing messes I've managed to fall into in the past ... it's working really well for me!
The boring (and probably mostly correct) answer is that engineers don't like writing documentation so it is the last thing to get done, if it gets done at all. This is especially true of FOSS projects. You may be motivated to spend your free time hacking on something cool and interesting, but spending your free time writing documentation is another matter.
Because a poorly documented product that does something is much better than a well-documented product that does nothing.
Engineers have to decide where to spend their time, and taking time away from feature development when your early-stage product doesn't have many features is not a winning move.
Agree. But it’s not either/or. I invested easily hundreds of hours in docs before launching TinyBase.org - but I don’t think it made the subsequent community building challenge any easier.
True. Adding tho' that the original topic isn't a side project. It's a team bringing something to market and looking for participants.
And to your point, writing docs *is* something to consider when building the team. As is making sure the culture has a reward system if such behavior is important.
> This is one of engineering things that happens year over year. We all understand the value of good docs. Yet it keeps happening. Why?
Because doing anything about it is irrational?
Time is finite, and only a rounding-error number of projects (especially open-source ones) are going to be successful, with or without docs.
Only an irrational developer would think that they are going to hit the 1-in-a-thousand jackpot with their project.
Any time spent producing documentation is time not spent on adding features and fixing bugs.
Spending time on documentation over and above the bare necessity needed to get out a working product "just in case we win the jackpot" is simply irrational.
This is probably why you don't find many projects spending significant startup time on good, clear and comprehensive docs (as opposed to a README and an FAQ and nothing more): the ones that do as you suggest mostly die before even getting users.
TL;DR: don't treat your startup project as if you already have 5M users who depend on you. The odds are that you are never going to get to that point without a good product, and the better the product, the fewer the docs needed.
if you spend your time making sure your product works but don’t write thorough documentation, it can become a breakout hit even though some hacker news commenters may complain about a lack of good documentation
if you spend your time writing great documentation, the product will suffer and nobody will use it
> if you spend your time writing great documentation, the product will suffer and nobody will use it
This is an unfortunate perspective. And I could say the opposite is true. If you don’t spend time writing great documentation, your product will suffer and nobody will use it.
Documentation gets neglected because developers don’t feel like doing it. If you are building a product for developers, documentation is critical.
And it doesn't address the semver malware-injection bug demonstrated by the colors author. Funny, isn't it: any one of the thousands of npm package authors can inject malware into our computers, and nobody gives a shit.
Dependabot watches all the transitive dependencies in your lock files, for better and worse. For worse in that it's not a great developer experience to get a PR on a low-level transitive dependency; one of the largest complaints about Dependabot is that it often feels too low-level, not working at the dependency level you are working at. But Dependabot (and npm audit) still exist to audit all the low-level transitive dependencies in your lock files.
It's a work in progress; it's definitely not meant to be ready for production workloads yet. It's in beta. I'm assuming that if they do get to a full stable release, Node compatibility will be far better, Express will work, official docs will be improved, etc.
Another concern I have is that, as cool as Zig is, it is still pretty unstable, with breaking changes almost every release. So I'm not sure I would want to trust this with production code.
This just reinforces the power of the Java community to me. The servlet container is completely a matter of choice, built around standards, unlike the JS community, where it still seems to be Node or bust. It makes Java look light years ahead of JS.
Discord scares the crap out of me. I just don’t trust the app. I couldn’t figure out how to stop it from opening on boot. And it has access to my camera and mic for some reason? Managing identities across different communities seems leaky at best because I think I have to change my name after joining. I’m sure a lot of this is my own ignorance of the tool but I have no interest in learning a modern gamer chat service. None of it is going to stay the same in 10 years, or even 10 weeks. That’s all a hard pass.
This says nothing of what a terrible choice it is for a knowledge base, especially long term. The only way you choose Discord is if you aren’t thinking beyond the “next release”. That’s a huge red flag in a fundamental framework project like this.
Sure. For now. But I have to carefully police the features of every release before I start the application.
It’s a minefield. People will get hurt. Those of us who have been around will be entitled to say we told you so. But I hope users move away before they get hurt, which will happen.
This is a proprietary application and protocol. It will come back and hurt you. Get out while you can.
> Documentation is limited, but Bun’s Discord is very active and a great source of knowledge.
:-1: I'm loath to use Discord's search interface to look for information. I'm sure the Bun team doesn't see this as an end goal, but I wish forums were still a thing. At least they're mostly indexed.
> Choosing proprietary tools and services for your free software project ultimately sends a message to downstream developers and users of your project that freedom of all users—developers included—is not a priority.
I see your point, and I don't use GH when I can avoid it, but those are very different in my mind. GH, for all its flaws, is a value-added hosting service on top of git. It interoperates freely with any other Git host and you can clone/pull at any time. Discord is in an entirely different category: a completely proprietary reimplementation of IRC, with rent-seeking features bolted on.
A friend and I recently started a dev collective (https://pico.sh) to work on projects targeting the smol web. We decided that many popular communication and collaboration tools would not fit into our stack. Things like GitHub and discord were out of the question pretty quickly.
Instead we opted for Sourcehut and irc and have been pretty happy with the results.
How is that first issue a blocker for you? It arguably makes the platform worse to have more gamification and social features.
Anyway, GitLab vs. GitHub is a myopic lens for the quote I posted. All proprietary software choices have consequences, and in this case we're focusing on the parent's issues with Discord vs. open alternatives.
In this case, you're not supposed to search. You're supposed to ask your question, hoping that once somebody gets annoyed enough to repeat the answer over and over, this question gets deemed worthy of documenting.
This was my experience working with Unreal Engine. I couldn't figure out what to do next until I could get somebody to take a look at my problem, and that would take multiple postings on Discord to get somebody's attention. Somebody had the gall to get upset at my question because he was in the middle of getting help on his.
This trend of support via Discord, or communities that exist only on Discord, needs to stop. It just creates toxicity, and those with malicious/narcissistic personalities dominate.
Rarely do fast response times create a sane knowledge-based community; they only agitate, and lots of noise is created as a result.
Not really. Know your history first. Node split to io.js due to team disagreements and fragmentation. io.js made an immense amount of progress, adopted semver, and had all the momentum. At the time, Node was stagnant. The choice to merge was rational and good. Deno is a separate project, and while has a node compat mode, is meant to be a TS-first ecosystem with many different rules and methodologies. Bun is an experiment to see if it's possible to build node greenfield today. Only the brave will use it in prod, and only the wild mustangs of management will allow it.
Try not to give into FUD. The JS universe is fine.
That's a great interview. He cites one of his motivations for wanting to make the dev cycle faster: "if it is slow you get distracted and read Hacker News"…
IMHO Deno has been approaching it "wrong", so it'd never take over Node.js. Deno has 3-4 big differences from Node: URL imports/no npm, explicit permissions, web compat, and TypeScript.
As both a dev and a package publisher, those are either not an advantage or a straight disadvantage. npm, for all the downsides it has (and it does have them), has IMHO been a huge net positive for the JS/web community. Explicit permissions in the land of 100 dependencies are just a non-starter. Web compat is def the biggest advantage for me as a dev, but Node.js has finally woken up and is catching up. I don't use TS, so I won't comment.
IMHO the main advantage of Deno, like io.js back then actually, has been to make Node.js stop being sluggish and actually accept PRs/new challenging features.
I had completely given up on Node and JavaScript in general until I had a chance encounter with Typescript recently. Seriously looking at deno specifically for the built in TS support. Will never touch basic JS again.
From Deno's early days, it seemed their goal was to make something fairly different to Node. Holding up "had it replaced Node yet" as a yardstick seems misguided.
Agreed. The guy just wanted to make something he felt was better. I've never seen a mission statement that said the goals were to replace Node. It's a separate beast.
"No native Windows support" should be written in all caps at the beginning of the article. This is a dealbreaker for most developers working on Windows, and this project will never take off until it treats Windows as a first-class citizen. WSL helps but not everyone uses it (probably most people don't use it for regular development environment).
Don't believe me? Just wait and see how this turns out in three years.
Fair point, but I sometimes wonder if there are even any web developers left who use Windows as their main platform. The one thing that has probably changed most in the last 15 years is that Windows isn't the center of the (coding) universe anymore. And (from what I can tell), Windows support with node.js isn't perfect either, otherwise tools like rimraf wouldn't be needed.
>if there are even any web developers left who use Windows as their main platform
There's quite a lot of developers at stodgy Fortune 500 non-tech companies that are sort of forced to use Windows for development. Either explicitly, or though poor enterprise support (vpn connectivity, local admin restrictions, difficult path to purchase a Mac, etc).
It's one reason things like Gitbash, WSL, Docker Desktop, etc, are very popular.
> There's quite a lot of developers at stodgy Fortune 500 non-tech companies that are sort of forced to use Windows for development.
In that sort of company it’d take 2 years to get approval to install it anyway. If you submit the paperwork now there’ll probably be a windows port by the time it’s approved.
Those numbers are all developers overall. In the Django developers survey (i.e. web only), Windows is just above 10%. The most used platform is Linux (42%), Mac 32%, Windows with WSL (i.e. Linux in a VM) 17%, and straight up Windows is a paltry 12%. In other web frameworks the numbers are likely worse, since Django has a pretty good Windows support.
In case you are seeking anecdotal evidence: does your company employ offshore devs? Try asking them what they use. Outside of the few countries where most people can afford Macs, things are different.
I wonder if respondents were allowed to check more than one item on that list when answering the question? If so, there is likely a lot of double counting. Large, overarching surveys are quite difficult to create and interpret. The Django survey I quoted earlier was a worldwide survey for what it's worth.
> I wonder if respondents were allowed to check more than one item
Yes, you are right, they definitely were allowed to check more than one item; the total sum of all options comes to more than 100%, so that must have been the case.
But even then, it's hard to overlook the fact that the first three options in the list are related to web development, and with quite high percentages.
I'm coming out of the game development 'social circle' which is still a Windows fortress, but even there the wall is slowly cracking ;) In general I notice more and more that 'new peeps' are not automatically familiar with Windows, but instead started to tinker with programming on Linux.
We are talking about web development, while the survey you linked is about all software developers. No one argues that Windows is king in software development world in general, but for web development I would argue Linux/Mac duo is way more popular.
Any data to support this claim? I'd argue the opposite: Especially in web dev, Windows is king since most people, including semi-professionals, do some sort of web dev.
Health care, logistics, law, finance, retail - there are a number of major industries that all still use Windows as primary. I have a good friend working in tech for Walmart, and another for a major hedge fund; both are Windows shops.
It's naive to assume that Windows isn't prolific. There's probably no good way to quantify numbers, but it's a major player still and likely always will be. WSL certainly helped keep that a reality.
> And (from what I can tell), Windows support with node.js isn't perfect either, otherwise tools like rimraf wouldn't be needed.
I don't see how rimraf is related in any way. If you want to remove folders by calling a shell command (with full knowledge of all the potential security and portability risks), your code is going to be platform-dependent - it's not Node's responsibility to reimplement bash and/or GNU coreutils to make it magically work. Therefore, you need a Node re-implementation of the same functionality, either in the standard library or as a package.
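(That re-implementation does exist in the standard library nowadays; `fs.rm` with the recursive option landed in Node 14.14, which is why new code reaches for rimraf less often:)

    // portable recursive delete, built into Node since v14.14
    import { rm } from "node:fs/promises";

    await rm("build", { recursive: true, force: true });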
It might not seem like it, but the vast majority of developers exist outside of the Hacker News sphere, and I have no doubt that Windows is by far the most popular dev platform. WSL2 is making it a lot easier to use Linux, though.
> Fair point, but I sometimes wonder if there are even any web developers left who use Windows as their main platform.
I kind of have to as of late, because due to corporate security policy I get around a minute of internet access from WSL before the antivirus steps in and blocks it.
I've been working in web dev for a decade and have yet to see a single dev using Linux. I have seen several working on Macs, from designers to QA.
Granted, I'm a bit biased, since I work in the .NET stack. But still.
Same here. Lots of .NET software around in my country, and most of it isn't on .NET Core yet (and having done it once, migrating away from .NET Framework isn't always easy... though it did get easier with .NET 5)
Mainly subtle differences around command-line tools and the file system. There are UNIX command-line tool collections for Windows (like BusyBox) that run in vanilla cmd.exe or PowerShell, but they don't quite fit in (for instance, in the way string escaping and command-line argument parsing work). You can use bash in a MinGW 'UNIX emulator', but then you can just as well switch to WSL. And those little things propagate and amplify upward into toolchains and workflows.
It's possible to create command line tools that work across Windows and UNIX-oids, and I appreciate this, but it's a lot of additional work (even 'cross-platform' solutions like Python don't fully wrap this stuff, even though they do their best).
FWIW, I have been remarkably impressed with Rust for this. The stdlib and package ecosystem are unusually good at building abstractions that work across *nix and Windows, and so the average command line tool written in Rust usually has good Windows support.
Of course, the additional work hasn't really gone away - it's just been relegated to libraries unusually effectively :)
Sometimes the target OS for deployment matters. Obvious case I run into: a deploy broke because a developer using Windows didn't realize he had some MiXeD case file name with the wrong capital letter. The Linux OS on the production system noticed. Sometimes it's the availability of native modules. They tend to exist for Linux, because it's the target for deployment, and sometimes not for Windows.
The file names, yes; missing modules are caught at development time. "We can't use PackageX because it doesn't exist for Windows and Joe won't be able to run the project anymore." A VM first, and WSL2 now, solved much of those problems, which leaves us at why bother with Windows in this kind of business, though.
* How easy it is to get my server's backend running on my local machine as a dev environment
The last one is most relevant to this conversation. If I’m working on a bun-based project, not being able to run a local dev copy on Windows easily is a nontrivial obstacle.
Lots of little gotchas if you don't at least do a fair amount of testing on Windows, since that's what most of your end users (usually) will be on.
Things like "www-authenticate: Negotiate" in an SSO environment, people pasting rich text into web forms, handling environments with private certificate authorities, and so on.
I swapped between Windows and Linux (Mint/Pop) for a while because quite a lot of JS ecosystem things were problematic on Windows. WSL2 has solved most of those problems though.
I have bad luck with Macs: getting dev tools properly installed seems as hard as on Windows, and everything falls apart a few times a year. The OS also seems to break itself and crawl to a halt sometimes (>1 minute to open Settings). Linux distros were stable only if I stayed on the happy path (single monitor, integrated graphics, don't try to sleep/hibernate) with close-to-default config. Windows can seemingly tolerate a lot more fiddling, at least since 7.
I hate it, but where I work WSL is not allowed and Docker requires continuously renewed exceptions. InfoSec/IT says they can't see what we (or malware) are doing inside.
I think "WSL is our Windows strategy" is perfectly acceptable for "bleeding edge"/modern things like this. I've been using WSL as my full time web development environment for years now. Thanks to VSCode's great remote support for it I don't notice any downsides to it.
VS Code remoting is great and magic when it works well (which in my experience has indeed been most of the time), but it's still so much overhead versus native builds and native support. I use WSL for the rare times I need to use Ruby, but Node has been mostly good enough in Windows for years at this point and it is nice to have native builds with no remoting overhead.
We should also talk about why this is the case: JavaScriptCore. The reason Bun is so fast is mostly that it uses Apple's JavaScriptCore, and what all of the benchmark comparisons are really doing is comparing Google's V8 engine to Apple's JavaScriptCore engine.
So basically, there will never be Windows support until JavaScriptCore can be used on Windows, and I'm not entirely sure of the state of that. My guess is that it has limited to no support for that scenario.
> Why not point the finger at Windows? Development on Windows is archaic.
How so? Zig is a programming language, so I'm guessing the main interactions it needs with the OS are file IO. It shouldn't need to do any GUI work as long as it provides proper bindings to the C functions. And file IO is essentially cross platform in C++ as long as you use the stdlib. Threads are also essentially multiplatform if you're using the stdlib. I also don't know if Zig is written in C, C++, or Zig so it may differ.
Generating binaries is different, but I wouldn't consider Windows binaries any more archaic than Mac or Linux binaries, and I'm not sure whether Zig already uses the LLVM backend or something similar anyway.
But given all that, writing basic Win32 code to do file IO or any sort of OS-level interaction isn't any more archaic than what I've had to do on Linux. It's an API, and it's got a lot of cruft built up over the years, but so does any Linux OS API.
Here's the Linux API for creating a file[0] and here's Windows'[1]. There are more parameters in the Win32 version, but the documentation is solid and gives a lot of tangential information. I actually prefer the Win32 docs to a lot of the Linux docs I've used, because they describe all the details very explicitly. So I wouldn't call Windows any more archaic than any other OS.
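Incidentally, this is exactly the pair of APIs that Node's fs layer papers over: as far as I understand libuv, the same call becomes open(2) on Linux and CreateFileW on Windows. A minimal sketch:

    // create.mjs: one portable call; libuv picks the syscall per OS.
    import fs from "node:fs/promises";

    const handle = await fs.open("data.txt", "w", 0o644);
    await handle.writeFile("hello\n");
    await handle.close();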
Basically what I'm saying is: it takes maybe 2-3 hours at most to add Windows support to a programming language, unless you've architected your code in a way that tangles OS operations with regular operations. It's really not that hard to keep OS code separate from the app's logic and make porting to different operating systems trivial.
This is all that it took me to port a fairly sizeable code base to Linux[0]. This commit allowed me to run the app with the only problem being some font issues that I needed to fix by modifying how I used a library.
The total:
> Showing 26 changed files with 255 additions and 90 deletions.
If you architect your code well, porting between different systems shouldn't take any more than a few hours ;)
Edit: I just looked through the diff and remembered that the bulk of these changes was fixing warnings that surfaced from using a different compiler.
The actual change necessary to get this running on Linux was in File.cpp and consisted of 124 lines of code. Going the other way (from Linux to Windows) would have been just as simple; I just would have added the code in the __WIN32__ macro block instead of the __linux__ one.
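The same separation carries over to the JS world this thread is about: keep the platform forks in one module keyed on process.platform, and nothing else in the app needs to care. A hypothetical sketch, not the commenter's actual C++:

    // platform.mjs: the only file that knows which OS it's on,
    // playing the role of the __WIN32__ / __linux__ blocks above.
    import os from "node:os";

    export const lineEnding = process.platform === "win32" ? "\r\n" : "\n";

    export function defaultConfigDir() {
      if (process.platform === "win32") {
        return process.env.APPDATA; // e.g. C:\Users\me\AppData\Roaming
      }
      return process.env.XDG_CONFIG_HOME ?? `${os.homedir()}/.config`;
    }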
It really isn't that much different though. After skimming through the source code of Zig, you can see that well over 90% of the code is OS independent.
Not only that, it looks like they do have Windows support and it's just failing at the moment. It also looks like they have cleanly separated all OS functionality from the logic. This is where the majority of the OS-dependent code appears to live[0], and the implementation is 1000 lines of code. So it looks like it shouldn't take more than a few hours to get even a programming language up and running when porting it.
Further, it looks like they're using the C++ stdlib to assist with some OS-dependent functions[1]. They're clearly using at least:
* std::filesystem
* std::future
* std::iostream
* std::mutex
* std::thread
* std::atomic
And more. So if you're being smart about things, which it looks like the developers most certainly are, then you don't need to reinvent 90% of the OS dependent code and can instead use the stdlib that already exists to automagically get that functionality.
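For what it's worth, the same escape hatch exists on the JS side: most of that list has a built-in counterpart in Node or the language itself, so application code can stay OS-agnostic the same way. A rough mapping of my own, not anything from the Zig source:

    // Approximate Node counterparts to the C++ facilities above:
    import fs from "node:fs/promises";            // ~ std::filesystem
    import { Worker } from "node:worker_threads"; // ~ std::thread / std::future

    // ~ std::atomic / std::mutex: Atomics over shared memory.
    const counter = new Int32Array(new SharedArrayBuffer(4));
    Atomics.add(counter, 0, 1);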
I suggest never accessing native files; if you're still keeping your code in C:\ somewhere, I think you're approaching WSL the wrong way. Go all in; I've never looked back.
> Don't believe me? Just wait and see how this turns out in three years.
There are so many projects which do well without a native Windows port. It may fade away in three years, but lack of Windows support will not be the primary reason for it.
It is the other way round: Windows support usually comes because something is popular (and therefore has the funding, resources or help to do so from keen Windows users).
I am genuinely curious who is running production node.js web apps on a Windows server OS. I'm sure there are niche cases, like a massive legacy Windows shop that's slowly moving off it, or trivial-load internal applications where it doesn't matter. But it would really surprise me to see someone take a major bet on running large, public, internet-facing load on node + Windows Server.
ASP.NET + Windows, sure, that makes total sense. Even C++ + Windows is solid and very performant. But node.js has always been a second-class citizen on Windows (it wasn't until Microsoft themselves put full-time devs to work on node that it supported Windows at all), and it's a huge bet to take with very little benefit and tons of potential failures.
Someone on Twitter was upset that people they follow were following someone they disagree with. I pointed out that it's healthy to make yourself aware of dissenting opinions, and that following someone doesn't mean you align with every one of their beliefs and opinions.
They seriously couldn't fathom the idea of intentionally following someone they didn't agree with.
So not only are people aware of their filter bubbles, they guard them like it's the responsible thing to do.
Question it you should. For decades I've been trying very hard to avoid such statements, because what do I know about "errybody" or "most people"? I find it very off-putting when someone uses these levers in a conversation.
those areas where Windows is still popular account for a tiny percentage of the devs who responded to that survey. The majority of responders define themselves as backend, frontend or fullstack developers - which are web development roles https://survey.stackoverflow.co/2022/#developer-roles-dev-ty...
TL;DR nowadays, most developers are web developers.
PS: the reason the total of all responses is more than 100% is that this was a multiple-selection question.
That "over 100%" is more significant than you think.
If you use windows primarily, there's a very big chance it's the ONLY operating system you use.
If you use MacOS or Linux, there are most likely windows machines sitting around too.
In my case, I have a windows laptop from work in addition to a macbook. I'd check both boxes, but I've only opened that windows machine a handful of times for esoteric windows issues.
If this pattern held for even half of the people who checked the Windows box, real use of Windows would land somewhere around 25% rather than 50%+, and that's assuming a 100% overlap between the Linux and MacOS people (a bad assumption).
Another telling point is the tiny sliver of WSL responses. MS Server is so incredibly unpopular that even the majority of Azure instances have run on Linux for years now. Writing on Windows without WSL then deploying to Linux is a recipe for disaster.
A question where the reader has to infer a lot is a bad question. They should have asked about how much a particular OS is used.
to be clear, I'm not disputing that in some areas of web dev Windows is less supported than Linux. I'm just saying that a significant percentage of web devs use Windows. Whether an open source project decides to take this into account is obviously not up to me, and I can understand why some projects, especially smaller ones, might decide to focus on Unix/Linux early on.
You were just shown statistics that disprove your idea that almost nobody codes using Windows... over half of the developers who answered StackOverflow's survey are using Windows. Pointing at more anecdotes around you does not change this fact.
Most surveys are skewed toward the particular set of users who choose to fill them in! SO is one of the few examples of a major developer site that runs on Windows, for example.
I work at Microsoft, and the majority of people I know use Windows for development. Macs are around, but they seem to be more popular with the managers than with the devs.
(It can be different in some teams - e.g. lots of Macs in VSCode - but company-wide that's an exception rather than a rule.)
If you use windows, you are VERY unlikely to use Linux and OSX. Conversely, almost EVERYONE who uses Linux or OSX will still have to use Windows. If even just half of the Linux users checked Windows, but rarely used it, then Windows would drop to 3rd place.
The overwhelming majority of servers are Linux (even Azure was announced to be majority Linux a few years ago). Despite this, only a fraction of "windows users" also used WSL. Trying to build UNIX software on Windows is fraught with difficulty. The fact that this number is so low either indicates a sampling bias for Stackoverflow or a lot of people checking the Windows box despite not using it a lot.
Those look like global statistics. Of course most developers in the developing world aren't using MacOS. It'd be interesting to see the OS breakdown when selecting for just the developed world; I'd imagine the Windows percentage is much lower.
On a lark recently, I tried to get node setup on my Win10 gaming box so I could try out a basic workflow and see how I liked it. My normal environment is using MacOS and Linux.
I don't remember any specific "wtf" details, but generally it was kind of a non-starter. I got node installed, but then there were issues with using that runtime for my simple server demo. The whole attempt was frustrating. Tried installing the pseudo Linux shell thing for the command prompt and I gave up trying to use that pretty quickly.
Obviously I'm not a native Windows developer. This was just me experimenting. I'm not sure how Win devs actually work these days on non-Win applications in any productive capacity.
I can easily get stuff done on MacOS and Linux but Windows is just this multi-decade old black box of cruft.
Are you sure? Of, let's say, the 100 web devs I've encountered over the past few years, I remember 3 using Windows, and they all used a Linux environment for development.
Web development is predominantly macOS and Linux; node servers generally run in Linux containers. Your claim that this won't take off until Windows is at parity just doesn't ring true.
several folks in the threads here have posted references to the contrary. perhaps you have data which disagrees, or you may be operating on observations from your own circles.
Those numbers were all developers overall. In the Django developers survey, which is web only, the most used platform is Linux (42%), Mac 32%, Windows with WSL 17%, and straight up Windows is a paltry 12%. In other web frameworks the numbers are likely worse, since Django has a pretty good Windows experience.
I have seen that while Bun is fast at startup, it performs about the same as Node and Deno for longer-running tasks, simply because it wraps JavaScriptCore, which performs similarly to Node's and Deno's V8.
I think it's really unfair to Deno, as that has existed for many years and gets ignored, while a hobby project is being touted as “the node replacement”.
On my machine (M1 pro), Bun calling into a C library and running a no-op is 15x faster than Python calling a no-op function. V8 has never had stable bindings but Bun is changing the game with a TinyCC JIT-compiled FFI that yields a simple API.
$ bun bench.js
3.5331715910204884 nanoseconds per iteration
$ python3 -m timeit -s 'def f(): pass' 'f()'
5000000 loops, best of 5: 53.8 nsec per loop
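The bench.js itself isn't shown; with Bun's bun:ffi module, an equivalent micro-benchmark might look something like the sketch below (libnoop and its no-op symbol are hypothetical; you'd compile the C side yourself):

    // bench.js: time FFI calls into a C no-op via bun:ffi.
    import { dlopen, FFIType, suffix } from "bun:ffi";

    const lib = dlopen(`./libnoop.${suffix}`, {
      noop: { args: [], returns: FFIType.void },
    });

    const iterations = 10_000_000;
    const start = Bun.nanoseconds();
    for (let i = 0; i < iterations; i++) lib.symbols.noop();
    console.log(`${(Bun.nanoseconds() - start) / iterations} nanoseconds per iteration`);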
Long answer: maybe, in the long term. Node is really entrenched in orgs, and 99.999% compatibility is probably a must-have before they'd consider switching. Also, don't underestimate the strength and persistence of Google's V8 team. Year-over-year incremental performance improvements may prove to be enough to ensure Node's dominance for another 10 years.
Oh there are way more than 10 implementations of JavaScript before we even start counting all their distributions (Bun being a distribution of JSC, etc).
Of course, but as far as general-purpose runtimes go, your options currently are pretty much Node, Deno, and QuickJS. Most of the others are for niche things like IoT or embedded systems.
"Bun uses the JavaScriptCore engine, which tends to start and perform a little faster than more traditional choices like V8. Bun is written in Zig, a low-level programming language with manual memory management."
I've followed Bun since the start, and its author has done tremendous work!
However, I am a bit afraid of the rapid growth of the project.
I would appreciate slower growth, with time to think twice about the project's direction. It would be a shame to add yet another tool's quirks to a world already crowded with them.
From the start of the project, at least from when I first saw it, it seemed to me that the author already has a long-term vision for the project, and the direction it should go.
There was a list of super ambitious goals, like replacing Node.js, NPM, Webpack, etc. It's been impressive to watch how the project is making such progress in achieving them.
So I think the rapid growth is a good sign that the project has communal support and momentum. I'm looking forward to seeing what Bun becomes, maybe more excited about it than Deno.