"Concurrently" is a nodejs library for running more than one task at once, it's necessary if you want to build Typescript and Tailwind with one call to `npm run build`, for example.
It's kind of a PITA to use, and just calling a Bun script is better.
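For example, a build script along these lines can replace it (a minimal sketch using Bun Shell; the file names and tool flags are illustrative, and it assumes `tsc` and `tailwindcss` are installed locally):

```ts
// build.ts -- run both build steps in parallel; either one failing fails the script.
import { $ } from "bun";

await Promise.all([
  $`tsc -p tsconfig.json`,
  $`tailwindcss -i ./src/app.css -o ./dist/app.css`,
]);
```

Then `"build": "bun run build.ts"` in package.json gives you the single `npm run build` entry point.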
It's probably not different. It works the way I want it to / expect it to in Bun, whereas with Node's process.* functions, I always run into a footgun before I get it working properly.
> - I wired together live reloading plumbing easily using the file watcher.
Can you expand on that? I was looking for something that lets me reload JS code just yesterday, and apparently the recommended way to do it is to restart the process :D.
OK. I just took a look at the project, and I was wrong. I'm not using the file watcher anymore because I changed my architecture (from an SPA to a traditional server-rendered application).
I run bun in `--watch` mode, which means the server restarts when I make changes.
The live-reload logic is around 100 LOC.
To make the client refresh when the server restarts, I wrote a little middleware function that runs in dev mode, but not production. It inspects the response, and if it detects a `text/html` response, it replaces `</body>` with `<script>...</script></body>` where the script performs a long-poll fetch to an endpoint that the middleware also injects (listening on `/{someuuid}`). If the long-poll connection drops, the client begins a reconnect loop, and once it reconnects, it reloads.
That's the gist of it, anyway. It's not as complex as it sounds, and it works quite well for my needs.
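For what it's worth, a stripped-down sketch of that approach as a wrapper around a fetch-style handler (the names, polling interval, and structure are illustrative, not the actual code; you'd only wire it up in dev):

```ts
const reloadPath = `/${crypto.randomUUID()}`; // the injected long-poll endpoint

const reloadScript = `<script>
(async () => {
  try {
    // Long-poll: the server holds this request open, so it only settles
    // (with a network error) when the process dies on a --watch restart.
    await fetch(${JSON.stringify(reloadPath)});
  } catch {}
  // Reconnect loop: probe until the new process is listening, then reload.
  const retry = setInterval(async () => {
    try {
      await fetch(${JSON.stringify(reloadPath)}, { method: "HEAD" });
      clearInterval(retry);
      location.reload();
    } catch {}
  }, 250);
})();
</script></body>`;

type Handler = (req: Request) => Response | Promise<Response>;

export function withLiveReload(handler: Handler): Handler {
  return async (req) => {
    const url = new URL(req.url);
    if (url.pathname === reloadPath) {
      if (req.method === "HEAD") return new Response(null); // reconnect probe
      return new Promise<Response>(() => {}); // hold the long-poll open forever
    }

    const res = await handler(req);
    if (!res.headers.get("content-type")?.includes("text/html")) return res;

    // Inject the long-poll script just before </body>.
    const body = (await res.text()).replace("</body>", reloadScript);
    const headers = new Headers(res.headers);
    headers.delete("content-length"); // the body just grew
    return new Response(body, { status: res.status, headers });
  };
}
```

With `bun --watch` killing and restarting the process, the held-open GET drops on every restart, which is what triggers the reload.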
In general I’ve found watchexec super useful since you can wire together arbitrary build steps, which, once they get too unwieldy or ossified, go into a just file.
It would be useful to understand how Bun avoids the same problem with its JS engine... my guess is that it provides this API directly, via a Zig implementation, and leaves the JS engine completely out of it.
If so, then node can take exactly the same strategy.
The JS engine is impossible to leave out, as this is a JavaScript API and strings are very engine specific.
We spent many hours reading JavaScriptCore (the engine)’s code, and when we wrote the code for Buffer and related APIs we spent a lot of time benchmarking and iterating on different approaches. A lot of performance work looks like this.
> If so, then node can take exactly the same strategy.
That's Bun in a nutshell. Node could be faster, but it isn't.
Bun is essentially using the same architecture as Node, just implemented in a more efficient way.
Therefore, the hopes that Bun would herald a breakthrough in JS performance are ill-informed. However, best-case scenario, Node gets a good nudge to become more efficient.
Curious, does that affect the complexity of string concatenation? As far as I remember, V8 "uses" ropes, so string concatenation is constant time, not O(n) like in Java, which saves a lot of headaches.
Not quite: Zig strings are not zero-terminated like C strings but array slices (language-builtin ptr/length pairs). C++ now has std::string_view, which is similar, but as a stdlib feature, which makes it more awkward to use.
Zig also has a concept of "sentinel terminated arrays/slices" which allows easy interop with C APIs for string data, but the details and implications go a bit too far for a comment :)
C++'s std::string might as well be syntactic sugar on top of a std::vector<char> (due to significant similarities in implementation and API), with null-termination by default for C interop. In general, in C++ you shouldn't be passing around null-terminated strings unless you're working with some legacy C API, at least without also passing the size of the string (unless you really like calling strlen, like that GTA5 code [0]), at which point you might as well pass const std::string& or std::string_view (which can be constructed transparently from a std::string).
Why do you say that std::string_view is more awkward to use as a result of being part of the stdlib?
> Why do you say that std::string_view is more awkward to use as a result of being part of the stdlib?
As far as I'm aware, the type of string literals in C++ is still a "raw" char pointer and not a string view. Also, few libraries actually make use of std::string_view, while in Zig everything is built around strings as slices, from the language to the stdlib to 3rd party libs (easy to do of course in a new language ecosystem).
The first part is true for ‘normal’ string literals, but there have been suffixed string literals since (I think) C++17: "foo"sv is a std::string_view if you have operator""sv from std::string_view_literals visible.
> As far as I'm aware, the type of string literals in C++ is still a "raw" char pointer and not a string view.
You can define a constexpr std::string_view with a literal if you so choose. And if you pass a string literal to a function that accepts a string_view, then the compiler has enough information to (and typically will) construct the string_view in constant time. (A sibling commenter also points out that you can use the sv suffix: https://en.cppreference.com/w/cpp/string/basic_string_view/o...)
(Meanwhile, if you pass a string literal to a function that accepts const std::string&, then that will be a linear-time operation, as it copies that data since std::string owns its data. But with any amount of indirection, you'll end up with an implicit strlen call. So this is absolutely a pitfall.)
> few libraries actually make use of std::string_view
This is a fair point, as it's a relatively new addition to the language and by no means mandatory.
It's the most sensible approach, especially for a low-level language. By the point you start caring about the meaning of the string contents you almost always need to deal with grapheme clusters and a whole lot more Unicode bs. Meanwhile many use cases only care about passing the string along or concatenating or replacing substrings, all of which can be done at the byte level with a sensible encoding like UTF-8.
Codepoint-level string abstractions in particular are complete nonsense that only serve to give you the illusion of making things easier before you learn the hard way that Unicode is more complicated than that. This also goes for UTF-16, which is only a cope extension of UCS-2 for those that already made this mistake before additionally realizing that 2 bytes are not enough to encode all human languages.
Now you might think that declaring all your strings to be UTF-8 wouldn't have any of these problems and is the way to go... until you find out that there are strings you can't represent as (valid) UTF-8, including things that are almost UTF-8, like filenames and other OS-provided data under most POSIX operating systems. This also applies to UTF-16 under Windows, btw.
Sure, it's a low-level language, but my concern is that it doesn't even enforce the encoding? Yes, UTF-8 source code prevents string literals from being invalid, but any input from users/sockets/files could be invalid UTF-8. Basically, there's no distinction between a "string" with a known length and a byte array with a known length — the same mistake C and C++ have. Conceptually, strings and byte arrays are different things, even if they can be represented the same way, and a type system could enforce that.
And yes, UTF-16, used by Java and C#/.NET is a pain-point as it forces conversion from sockets/files from UTF-8 into UTF-16 so they can be used, then another one when writing back. But that's beside the point when talking about Zig.
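For what it's worth, here's roughly what that distinction could look like if a type system enforced it (a sketch in TypeScript rather than Zig, purely to illustrate the idea; the names are made up):

```ts
// Validated text gets its own branded type; raw bytes stay Uint8Array; the only
// way across the boundary is a decoder that rejects bad input.
type Utf8Text = string & { readonly __brand: "Utf8Text" };

function decodeUtf8(bytes: Uint8Array): Utf8Text {
  // `fatal: true` makes TextDecoder throw on invalid UTF-8 instead of silently
  // substituting U+FFFD replacement characters.
  return new TextDecoder("utf-8", { fatal: true }).decode(bytes) as Utf8Text;
}

function render(text: Utf8Text) {
  console.log(text);
}

const fromSocket = new Uint8Array([0xf0, 0x9f, 0x91, 0x8d]); // valid UTF-8 for 👍
render(decodeUtf8(fromSocket));          // ok: went through validation
// render(fromSocket);                   // type error: raw bytes are not text
// decodeUtf8(new Uint8Array([0xff]));   // throws: not valid UTF-8
```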
Treating UTF-8 as a different and optional view on a type-agnostic bag of bytes is a feature, not a bug ;) (just look at the mess that Python3 made of that topic) Most of the time, data is just passed around without looking at or caring about its content, and for that it doesn't matter if the data is binary bytes, ascii, code-page encoded, shift-jis, utf-8 or any other format, it only matters at the endpoint when "opening the box". UTF-8 encoding/decoding/validation is handled by the stdlib in Zig, not by the language.
Of course it doesn't enforce encoding, it needs to be able to handle invalid data, bad encoding etc. if it wants to be a system language.
You can trivially enforce that it's valid UTF-8, if that's what your application requires, by using the `@import("std").unicode` package.
That's a good thing, actually. If your language restricts strings to valid Unicode, you lose the ability to do things like open files whose paths contain invalid Unicode characters.
> replacing substrings, all of which can be done at the byte level
You actually cannot implement substring replacement at the byte level with Unicode, just think about what happens if there is a modifier right after the substring in the original text. You cannot just avoid the fact that Unicode (and human writing in general) is a mess.
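A quick way to see it (shown here at the JS string/code-unit level; the byte-level UTF-8 version has the same problem):

```ts
// A naive substring replacement hands a trailing combining mark to the replacement text.
const original = "cafe\u0301";                // "café" spelled as "cafe" + U+0301 COMBINING ACUTE ACCENT
const replaced = original.replace("e", "o");  // dumb substring replacement

console.log(original);  // café
console.log(replaced);  // cafó -- the accent now modifies the "o" instead of the replaced "e"
```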
I have been incredibly impressed with Bun after using it in some smaller projects. Compared to Deno, it feels like a true alternative to Node.js. It’s so nice to have a TS-native runtime.
Is it native in any significant way or is it just also transpiling but hiding that step enough to make it feel native? I guess this really hinges on how we define native.
I really want a runtime that actually leverages the type hints for JIT optimization.
Native as in "if you import a .ts file it Just Works" I'm afraid. Roughly it strips the TS type syntax and hands the result off to JavaScriptCore - so it's not 'native' in the sense you might have been hoping for, and perhaps you'd find 'built-in' or 'seamless' or maybe 'out of the box' a better description of what it actually has.
(I find the experience of -using- what it actually has to be excellent but 'native' is definitely an overloaded usage here)
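Roughly what that stripping amounts to (illustrative; this is not Bun's literal transform output):

```ts
// Given a file like this:
function greet(name: string): string {
  return `Hello, ${name}!`;
}
console.log(greet("world"));

// ...the engine ends up executing something equivalent to:
//
//   function greet(name) {
//     return `Hello, ${name}!`;
//   }
//   console.log(greet("world"));
//
// The annotations are simply erased; nothing checks them at run time.
```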
There are certainly cases where type hints could be leveraged.
But ultimately, the way a JIT like TurboFan works is that it will do better optimizations if your JS code is already monomorphic (function signatures, object fields, and so on).
Now, TS sort of nudges you that way if you write simple TS code. But due to structural typing, expressive type-golf features, and the fact that TS is generally unsound, I doubt it would be easy to leverage type hints much beyond that.
Here are two talks that are very much related and give some insights into how difficult these kinds of problems are:
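To make the monomorphism point concrete (a toy example; the exact engine heuristics are hedged):

```ts
interface Point { x: number; y: number }

function norm(p: Point): number {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

// Monomorphic so far: every call site sees the same object shape.
norm({ x: 3, y: 4 });
norm({ x: 6, y: 8 });

// Still type-checks thanks to structural typing, but this object was built with a
// different property order and an extra field, so the engine may treat it as a
// second shape and the call site's inline cache can go polymorphic, regardless of
// what the annotations say.
const tagged = { y: 8, x: 6, label: "extra" };
norm(tagged);
```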
Although my feeling is that TypeScript’s idea of a type system living on a separate plane was one of the greatest ideas of all time. Never felt so good about something that is basically JS. With strict semantics like Rust/C++/Zig/Go, it would be just a weaker clone of those with all the usual strings attached.
Oh I fully agree. The ability to turn the type system off when you’re testing or hacking and want to be quick, or to gradually type a JavaScript project… it’s really the best.
But I’ve got projects that are now as “correct” as possible, with full validation at deserialization and such and I just wish a compiler could take things an extra mile.
To grossly oversimplify, it's a choice between "make things better by starting fresh" (Deno) vs "make things better but don't change anything" (Bun). Bun's approach is easier to adopt, and seems to be "good enough" for many people.
Deno made a very deliberate compatibility break from the Node ecosystem. They wanted a fresh start, to make smarter choices and ditch historical baggage. They thought people would be motivated to push through the adoption friction. I think that plan has been less successful than hoped, and Deno keeps walking it back. They're more compatible than before, but aren't a drop-in replacement. IMHO they prefer it that way.
Bun, on the other hand, explicitly feels that any compatibility gap with Node is a bug. Bun wants to beat Node at its own game, wants adoption to be as easy as running `bun index.js` instead of `node index.js`. Then you opt into their special APIs as-needed. Bun's headline feature is "free speedup", but they also target many of the same DX conveniences Deno does, like trivial TS integration.
When Deno came out, the question was "how is Deno better than Node?". Deno had strong answers, give or take the compatibility differences. But today you could instead ask, "why port to Deno instead of just dropping in Bun?", and that's more complicated to decide.
How common is this scenario really? I rarely find myself using obscure packages. Most of the ones I use are hugely popular and vetted by the developer community.
I use it to run TypeScript projects. It seems to just work, whereas tsc and tsx and ts-node all seem to complain a lot about configuration and module setup and stuff.
The parent is asking about how Bun compares to Deno, not Node. tsc, tsx, and ts-node are Node.js-related. I also would like to know. Deno seems to be mature and ready; it runs .ts files from the command line. Of course, there is no point in picking one JS runtime over another. I would follow the WinterCG group and use something that is runtime-agnostic, like hono. It already has a React-like front end built in (jsx-lite type).
I don't think Bun "definitely can" run most Node code. Both Deno[1] and Bun's[2] Node compatibility are incomplete. Bun being perceived as more compatible is mostly clever marketing.
From the beginning, Bun was designed to be a drop-in replacement for Node.js. That’s why Bun implements Node’s globals. That’s also why Bun automatically detects when CommonJS is used in the entry point and ensures CommonJS is loaded. require and many other Node.js features “just work” in Bun.
I was put off by Deno because of the way it handles dependencies. I didn't understand why I had to use URLs instead of `npm i`. Also not a big fan of the permissions flags when running a command. Explicit declaration for every single permission was very annoying.
Bun has better speed and great documentation. And they're shipping new features very fast.
> I didn't understand why I had to use URLs instead of `npm i`
(a) With npm, Microsoft is the gatekeeper of everything NodeJS. URLs are the most decentralized way to do dependencies.
(b) Self-hosting can help with some security issues with npm. Makes it easier for private projects to not have to trust npm hosted third party libraries, which is important in many corporate environments.
Yeah, I think import maps resolve a lot of those issues... I both loved and hated it. I liked that it was just a matter of referencing any live source module, but hated trying to keep various references synced (like std).
I stopped using Bun. It provides a nice API and just works and all, but it seems to be just a thin wrapper over Node, hiding the messy stuff underneath.
I ran into some problems, and it started leaking Node diagnostic messages all over the place. I was not impressed with how easily the facade fell over when I tried to do things just a little bit differently from the happy path.
Bun does not wrap Node. Bun is a JavaScript runtime and suite of tools separate from Node. We do implement Node APIs and spend an enormous amount of time on compatibility. We have to use the same `code` property on many errors or libraries that rely on it break.
It’s also possible you ran a package.json script that had the #!/usr/bin/env node shebang at the top, which Bun by default respects. You can force it to use bun by prefixing the command with “--bun”, like “bun --bun my-executable”
I'm guessing this was from a time when invoking bun on a script was just an alias to run it with node instead, and you had to do a more specific incantation like `bun run index.ts` to actually use the bun runtime.
This is probably what happened; it was like almost 2 years ago. Thanks for the explanation
I do think it's a gotcha to run a script with another runtime than the one I launched from the terminal, though. If I launch a script directly with a runtime, I expect it to disregard any shebangs.
Good to hear. I did not have time to go inspect the code, with all the shiny tools that scream for me to try them, only to fall over. I think it's a gotcha to run a script with another runtime than the one I launched from the terminal, though.
But I really do want to explore alternatives to Node, so I will give Bun a new look soon.
Have only used it for small projects, but it doesn't just "crash" out of nowhere. There are only 2 issues I encountered:
- Packages using Node functions Bun hasn't implemented yet. The Google generative AI SDK's streaming mode doesn't work; the rest of the packages work fine for me, though.
- Bun won't shut down at the end of the script if there are async functions running in the background. I have to close the DB at the end of every script instead of just using a pre-exit hook.
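A minimal illustration of that second issue (the keep-alive timer stands in for an open DB connection; the teardown call is hypothetical):

```ts
// Anything still holding the event loop open keeps the process alive after the
// script's top level returns.
const connection = setInterval(() => {}, 60_000); // pretend this is an open DB socket

async function main() {
  // ... actual work ...
}

await main();

// Without this explicit teardown the script never exits; with a real client it
// would be something like `await db.close()` at the end of every script.
clearInterval(connection);
```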
Not quite... I'm a fan of Deno myself and have been using it as a shell scripting language for a lot of things. That said, Bun has had some different points of focus, some better imo, some worse.
One thing Bun did that I wish Deno had is a built-in API for SQLite databases. Deno has had the library in the runtime for a while, but has resisted exposing it directly. While I get the reasons, it would help with security integration over the other options, IMO.
I think integrating SQLite (libsql) and even Turso/AstroDB (libsql-server) support in the box could really be a boost to app development productivity, though the latter would be more difficult in self-hosted scenarios.
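For reference, the built-in API in question is Bun's `bun:sqlite` module; a minimal sketch (the table and data are made up):

```ts
import { Database } from "bun:sqlite";

const db = new Database(":memory:");
db.run("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)");
db.query("INSERT INTO notes (body) VALUES (?)").run("hello");

const rows = db.query("SELECT id, body FROM notes").all();
console.log(rows); // [ { id: 1, body: "hello" } ]

db.close();
```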
- TSX is baked in, so I’m using that as my HTML templating layer (see the sketch after this list).
- SQLite is baked in, so I’m using that as my database.
- Bun shell replaced the need for something like “concurrently”.
- I wired together live reloading plumbing easily using the file watcher.
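To give a feel for the TSX bullet, a rough sketch (not the project's actual code; it assumes react and react-dom are installed so `react-dom/server` can render to a string):

```tsx
import { renderToStaticMarkup } from "react-dom/server";

// A plain component used purely as an HTML template.
function Page({ name }: { name: string }) {
  return (
    <html>
      <body>
        <h1>Hello, {name}</h1>
      </body>
    </html>
  );
}

Bun.serve({
  port: 3000,
  fetch() {
    const html = "<!doctype html>" + renderToStaticMarkup(<Page name="world" />);
    return new Response(html, { headers: { "content-type": "text/html" } });
  },
});
```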
Its documentation is excellent. It starts up fast. When you do need to install dependencies, that is impressively fast, too.
If you haven’t checked it out yet, I’d highly recommend it. 9/10.
(The reason for the 9 is that there are still some lingering Node APIs and behaviors missing, which might trip you up.)