I have a lot of respect for Jarred and Bun. The lean approach of just one person working for a year is rather underrated imo. No funding and therefore no pressure for growth. No overhead from scaling up a team and having to implement process, standards, etc. And at the same time, incredible content marketing by just tweeting about every little detail. Many others and I got hooked just because Jarred was posting interesting stuff and kept posting it. It wasn't a stuffy blog or padded-out conference talks, it was just good, technical content.
My only hope is that Jarred gets some rest for a bit. According to his Twitter he's been pulling 80 hour weeks and while that does lead to impressive work, it's not great from a human perspective. Hope he can rest up and keep this going for the long haul!
I've done this solo sprint for a year to get a startup running a few times in my career.
The time it worked, I worked 10am-4pm (heads down) in the ideation phase, ate well, slept well, got two hours of exercise daily, and had an interesting social life. Once the startup took on a life of its own, the personal life started to suffer and I got up to 80+ hrs/week for a month or two before settling into ~60.
The two times it didn't work, I embraced the 60hr/week grind and was a bit more isolated, perhaps drank a bit more to unwind, and my focus deeply suffered.
I really love the solo thing, but in the early days, be creative and happy. The grind will find you. Market fit means more work. No doubt Jarred will find building a company and Bun at the same time even more challenging than just building Bun.
> The lean approach of just one person working for a year is rather underrated imo
It's honestly got me thinking about doing the same thing for my Big Project that I've been squeezing into nights and weekends for the past year and a half
He should have rested up and taken a week off to get some perspective before attempting to write recruiting spam on Twitter.
Sounding twitchy and strung out and high strung and histrionic? A filter for sure, but not for the things he thinks he wants.
This is how you get broken, desperate, unhealthy people, and that is reflected permanently in culture, and almost always in product, with poor prognosis for success.
It's an incredible achievement, though I worry about the use of an unproven language (Zig) that lacks memory safety by design. JavaScript runtimes, especially those with JITs, have been plagued by vulnerabilities from memory safety, type confusion, and data races.
Node.js, despite being based on V8, still introduces its own vulnerabilities, independently of V8. It's not sufficient for the underlying engine to be secure; the new facilities Bun provides must also be vetted.
Bun/Oven are new, and similar in position to Node. Here are the hard questions I'd ask if I were on a security team and asked to review adopting Bun:
> 2. What measures is Oven taking to proactively detect and mitigate vulnerabilities? (e.g.: fuzzing, audits, bug bounties)
Fuzzing will begin soon. Regular security audits will happen around the 1.0 release. Bug bounty seems like a good idea, but it's too early today to know when this will start.
> 3. Will Oven support Zig development to avoid an existential risk in upstream vulnerabilities?
Yes. Oven will donate to Zig Software Foundation.
More broadly - I think about all of this a lot, but until now Bun has been mostly the work of just me. Bun is still very early - there's a lot that's just not implemented yet.
Thanks for answering, Jarred, and I appreciate your answers given the early stage you're at. Runtime diversity in the Node ecosystem is quite exciting, and I'm sure you've more interesting challenges ahead than just security.
I look forward to seeing what you can make of it with Oven.
Worth clarifying that Bun isn't a from-scratch JS runtime; it's a wrapper around JavaScriptCore (WebKit's JS engine), like Node is around V8.
As you say, there's still plenty of room for vulnerabilities in the parts it does implement, and Zig isn't strictly memory-safe. However, Zig has lots of modern features like optional-types that help quite a bit with avoiding memory errors: https://www.scattered-thoughts.net/writing/how-safe-is-zig/
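To make the optional-types point concrete: Zig's `?T` behaves much like Rust's `Option<T>` (Rust is used here only as an analogy; the function and data below are illustrative). The type system refuses to let you treat a maybe-missing value as a plain value, which rules out a whole class of null-dereference bugs at compile time.

```rust
// Rough analogy to Zig's optional types (`?T`): an Option<T> cannot be
// used as a plain value without first handling the "no value" case.
fn find_even(xs: &[i32]) -> Option<i32> {
    xs.iter().copied().find(|x| x % 2 == 0)
}

fn main() {
    // The compiler won't let us treat the result as a plain i32; we must
    // match or provide a default, so a missing value can't be dereferenced.
    assert_eq!(find_even(&[1, 3, 4]).unwrap_or(0), 4);
    assert_eq!(find_even(&[1, 3, 5]).unwrap_or(0), 0);
    println!("ok");
}
```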
Which is to say your question is definitely valid, but there are also reasons to think this isn't as huge a concern as it might seem
> 2. What measures is Oven taking to proactively detect and mitigate vulnerabilities? (e.g.: fuzzing, audits, bug bounties)
We're huge fans of bun at Fuzzbuzz (waiting for it to get a bit more production-ready). If Jarred's interested, we'd be happy to donate some compute to support fuzzing Bun.
I am fairly confident that Zig does not lack memory safety by design. From what I understand, the language as a whole isn't even finished, and its memory-safety design doesn't look like a GC'd language's or like Rust's, so people assume there is no memory safety when in reality it is at least partially implemented.
I'm afraid "partially implemented" memory safety is also widely known in the offensive security industry as "not memory safe". Many people have fooled themselves into thinking there is a gradient with safer C++ abstractions, libraries in C, custom allocators, static analyzers, valgrind, etc.
> Rust isn't even memory safe if you want to be THIS pedantic about it. the unsafe keyword exists in Rust, and it is often used.
In practice, Rust seems to be much safer. I've seen Jarred talking about segfaults in Bun. Those are practically unheard of in Rust programs, and indicative of the possibility of quite serious security vulnerabilities.
Unsafe is perhaps poorly named, and several Rust core team members have commented as such. It doesn't mean memory unsafe, it means "not checked by the compiler".
Safe APIs that contain unsafe blocks must still be proven correct, via Miri, a model checker, formal proof, etc. Any safe functions that violate memory safety are considered bugs. The limited number of unsafe functions exist as helpers to build safe APIs when the compiler's borrow checker is insufficient.
What this means is that to verify memory safety, one can restrict their search to unsafe blocks. And hypothetically if the Rust compiler were to get much smarter, it should be possible to prove to the compiler that those blocks are safe (via theorem prover, perhaps?) and remove the "unsafe" declaration.
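A minimal sketch of that pattern (the function here is illustrative, not from any real crate): a safe API does its invariant check up front, so the `unsafe` block inside is sound and callers never need to write `unsafe` themselves. Auditing memory safety then reduces to auditing the small block and its justification.

```rust
/// Safe wrapper around an unsafe operation: the emptiness check upholds
/// the invariant that `get_unchecked` requires, so callers stay in safe code.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // SAFETY: we just checked that index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first_byte(b"hi"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
    println!("ok");
}
```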
In most languages, there is no such distinction between the "memory safe" common set users ought to use and the subset that has to be verified independently. Neither Zig, C, C++, nor even Go have a clear delineation between safe and unsafe code.
> Unsafe is perhaps poorly named, and several Rust core team members have commented as such. It doesn't mean memory unsafe, it means "not checked by the compiler".
That's lack of memory safety when your memory safety work is done during compilation, as is the case with Rust.
Is there not a gradient though? Are C++ smart pointers equally as bad as regular old malloc? Can safer abstractions not render most buffer overflows impossible?
I misspoke. Yes, security is on a gradient and the things I've listed have caught and prevented many, many bugs, but memory safety is a binary proposition. Those things have not made C++ "memory safe".
Despite the "How Safe is Zig"[1] blog post, it's false that there is a spectrum. A lack of temporal safety implies a lack of spatial safety, and vice versa.
If one can use a use-after-free, an invalid write, or a time-of-check/time-of-use error to write a byte to an invalid location, the program's data structures are now in an inconsistent state, violating invariants required for "spatial safety", such as objects being the correct type and buffers being paired with the correct lengths.
Likewise, if one can accomplish a buffer overflow, a spatial safety violation, or an out of bounds write, then by definition they've made temporal violations as well. Writing objects out of bounds or arbitrary heap writes imply data races.
Offensive security folks use gadgets that exploit one to accomplish the other, as needed.
Does it follow that the fact that temporal violations could be used to violate runtime spatial checks, therefore means that spatial safety in itself is entirely without value?
What are your thoughts also on buffer underflows? I ask since I take it you also work in offensive security.
Alas, I don't work in offensive security but it's been a hobby of mine as an engineer to keep up to date. Some day, perhaps.
To be precise, I don't think the mitigations Zig has, which the author labels as "spatial safety", are entirely without value. Optionals, sum types, and range checks are helpful.
Buffer underflows as in writing to negative indices? I wish I could go in a time machine and default early languages to saturating arithmetic instead of wrapping. Even Rust does wrapping arithmetic in release mode; in debug mode overflows will panic.
Yes, agreed with you as to buffer underflows. Here, I really like that Zig has checked arithmetic enabled by default in safe builds. It's a small decision (to many) but so important. It surprises me that Rust does not do this for safe builds. A panic is stronger (and safer) than only wrapping or (implicit) saturating arithmetic.
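For reference, Rust's standard library also lets a caller pick the overflow behaviour explicitly, independent of build mode, via the `wrapping_*`, `saturating_*`, and `checked_*` families (a minimal sketch):

```rust
fn main() {
    let max = u8::MAX; // 255

    // Explicit wrapping: 255 + 1 wraps around to 0.
    assert_eq!(max.wrapping_add(1), 0);

    // Saturating: clamps at the numeric bound instead of wrapping.
    assert_eq!(max.saturating_add(1), 255);

    // Checked: returns None on overflow, forcing the caller to handle it.
    assert_eq!(max.checked_add(1), None);
    assert_eq!(200u8.checked_add(50), Some(250));

    println!("ok");
}
```

The checked variant is the closest analogue to a panic-on-overflow default: the failure is surfaced rather than silently wrapped or clamped.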
Congrats to Jarred! Bun has blown me away in terms of performance. Often (in other projects) performance claims out of nowhere are simply cherry-picked but I’ve found bun repeatedly impressing me with both speed and elegance.
What doesn’t get mentioned enough is just how friggin ergonomic bun is. Install it and you’ll see what I mean immediately. Play with any API in the “bun:” namespace and it’s just a breath of fresh air.
I'm curious about where all that performance comes from
I know it uses the WebKit JS runtime instead of V8, which is super interesting. Does that cause a performance lift? Or is there some other secret sauce that runs throughout Bun? Or is it just a matter of Jarred giving lots of attention to spot-optimizing the most important bottlenecks outside of the core runtime?
Just lots of time spent profiling and trying things
On the runtime side, JavaScriptCore/WebKit's team are extremely receptive to feedback, especially performance-related. Today, @Constellation made `Promise.all` about 30% faster https://github.com/WebKit/WebKit/pull/3569
It's worth noting that most of Node's API are written in JavaScript, and often not particularly optimised JavaScript (sometimes constrained by API compatibility). I think bun is taking the approach of implementing a lot of core APIs in zig.
I suspect it’s the latter. I would guess V8 with a thin wrapper and a ton of effort from someone who cares about perf to port all the native goodies of node would get similar wins.
Absolutely. And it has been done already: JustJS[1] has made it into the top 10 fastest in the TechEmpower benchmarks. It's Linux only though, and not nearly as complete or easy to use, which makes it unappealing for real-world projects.
I'm curious if it will be as fast after implementing some must have features like sourcemaps, minification, and tree shaking? Bun is definitely not production ready yet, but I'm keeping my eye on it for sure
Sourcemaps are technically already implemented, but not for bundled dependencies. Initially, it was a 30% drop, but I made it use lots of SIMD and that compensated a lot. When you use Bun's runtime, it transpiles every file (including non-TypeScript files) and generates an in-memory sourcemap so that error messages have useful line/column numbers. Bun's fast startup time is very much in spite of things like this.
Minification will likely make it faster because there will be less code to print (see https://twitter.com/evanwallace/status/1396304029840396290). Many of the syntax compression and small optimizations like ("foo" + " " + "bar") becoming "foo bar" are implemented already, but not the whitespace removal
It might have an impact on bundling performance overall because that will involve an extra pass to more correctly handle `export * as from`, which isn't as important when used with the runtime (since that can happen at runtime)
I'm not looking to use Bun as a JS runner because all of my JS runs in the browser. Implementing these processes adds complexity and will increase build time. We will have to wait to see how it compares to existing tools for bundling and creating production code.
> I'm curious if it will be as fast after implementing some must have features like sourcemaps, minification, and tree shaking?
Can you explain why you think a transpilation step will prove to be a technical challenge here? The "bundler" part of Bun seems like a very small piece, and not something that would impact its performance as a runtime too greatly.
A lot of the conversation around Bun's performance has included the performance of non-runtime things like how fast it can install packages, perform builds, cold-start, etc
I only use JS/TS on the client side, which will end up running in the browser, thus being able to use Bun to create production code will require these features. I didn't say they were technical challenges, but these processes require more complex analysis and algos.
I’m not familiar with deployment in the modern web world, but I’m curious how fast those specific features need to be in a production setting? Are they done dynamically for every request?
It’s consistently in the top 5 fastest web frameworks (beating out Rust frameworks, etc.).
Just-JS is already faster than Bun.
Additionally, JSCore appears to be a significant reason why Bun is faster than NodeJS (V8). Just-JS is investigating switching to JSCore as well - which will only extend its lead.
Besides being Linux-only, the GH page lists intentional restrictions like “commonjs modules, no support for ES modules” -- that seems like a biggie!
The benchmarks seem to focus exclusively on startup time, which is certainly important but not the only important thing. (And unless I misread those tweets it’s only a tiny bit ahead of bun.)
More competition is good and helps keep everyone honest, of course, but it looks like extremely early days for this project.
Bun does ESM, JSX, Typescript and SQLite support out of the box. It also does bundling. It’s quite nice.
I’m thinking of building a zero-dependency toolkit on top of it. I rewrote Tailwind this weekend with that in mind, and my 4-second Tailwind build (on a 90k line project) now takes 200ms. That’s mostly due to how I architected my Tailwind library; not Bun, but Bun is fast where it’s been optimized.
I love the idea of a zero (or very low) dependency, modern toolkit. I’m rooting for Bun.
You might be right, though it's Linux only, and still has a "coming soon" as documentation. Worth keeping an eye on, but seems more in the experimental phase compared to bun.
> Linux only? I think that'll cap adoption regardless of any perf advantages
For web applications, Linux is usually the eventual deployment target, and running Linux development environments on Windows and macOS is a solved problem. So no, not really going to be much of a problem.
While obviously extremely impressed with Bun and Jarred's achievement (especially because I really like Zig, which is similar to C), I'm also shocked that he was able to get a 3-letter `.sh` domain (bun) _and_ a 4-letter `.sh` domain (oven), both real words!
Looked to me that Oven was registered shortly after Bun exploded in popularity.
Don't be shocked. There are plenty of short domains left for grab, they're just expensivish for most people (usually $100 - $1000 but could be more). Glad that's the case, otherwise squatters would've "invested" into them already.
They are roughly in competition, though Deno made the decision early on to largely split itself from the ecosystem and existing conventions of Node (and an enormous body of existing JavaScript packages on npm), whereas Bun did not. Bun, AFAIK, aims to be compatible with Node and the existing npm ecosystem.
I met Ryan Dahl once at like a CouchDB meetup in Oakland or something a decade ago. He was obviously a brilliant guy, and humble, and very likable as a result.
I very much admire that he seems to be trying to get things right in light of lessons learned, but history would imply that backwards compatibility tends to trump better technology on average.
It seems both are attempting to make money by providing their own "serverless clouds" and hoping people will use those, instead of AWS/GCP/Cloudflare Workers/Fly/etc.
https://bun.sh/. "Bun is a fast all-in-one JavaScript runtime. Bundle, transpile, install and run JavaScript & TypeScript projects — all in Bun. Bun is a new JavaScript runtime with a native bundler, transpiler, task runner and npm client built-in."
It might, if the code can be optimized. There are all sorts of reasons why it might not. For example, at one point in time, a switch statement with more than 128 cases could not be optimized.
finally, a tech company with a cute design aesthetic :D i'm so happy to see a cute little bun in an oven instead of standard tech company imagery. i've never used bun and don't have great reason to, but it makes me hopeful to see.
> Before Oven can offer hosting, Bun needs a stable release. The goal: a stable release of Bun in under six months from today (August 23rd).
Wow! That is an ambitious timeline. Also ambitious is this:
> The plan is to run our own servers on the edge in datacenters around the world. Oven will leverage end-to-end integration of the entire JavaScript stack (down to the hardware) to make new things possible.
That’s a lot of work in a crowded space. Maybe they’re aiming for an acquisition or something?
Anyway, I wish them luck. Bun probably tops the list of projects I’m excited about at the moment.
from the bun website
> Bun natively implements hundreds of Node.js and Web APIs, including ~90% of Node-API functions
Is there any status tracker to see what is supported and what is not, or in other words how do I know if bun supports a particular framework or a library?
Interesting effort. How do you plan to make money? I would love to do something like this but it must be really hard to figure how to turn revenue sustainably apart from donations, consulting and maybe some sort of hosting services.
I love these funny names. First time I introduced HomeBrew to my gf, she was all laughing at the funny terms:
keg - Program binaries created from source
bottle - Program binaries downloaded
cellar - Directory where kegs / binaries are stored
tap - git repository
cask - macOS native binary (not used in Linux)
I 100% agree. Cutesy names should end once the package name has been chosen. Package authors and contributors that overdo the thematic names should stick to writing fiction, not code.
well, to see the exact opposite of "cutesy", please fork homebrew and give all of those things GUIDs for names, including all the packages a user could install; you'll soon see why the names chosen are quite acceptable, indeed.
This is the worst straw-man I've seen this year. Realistically, non-cute homebrew names would look like bin, libexec, repo, that is to say descriptive names that are already in common use, not GUIDs.
hm quite a sour take. I find that the names actually are reflective of an actual storage process so they fulfill both the playfulness and the natural naming.
I think homebrew is OK. I believe the name originated from the idea that the install script is user-created instead of maintained by the original package author?
I mean, in a way it's right. One then needs to go to professional companies, sign professional contracts, pay professional service fees, and get real professional-sounding software.
I am surprised every day at how much raw human effort has gone (and continues to go) into making JavaScript into a good language, when at most 1/20th of that effort could have been used to simply replace it with something that is good to begin with.
Today’s JavaScript works because it has gradually evolved from the primordial JS which (just about) worked; at each stage in the chain it’s seen real-world usage as an in-browser language.
The alternative would be to build something better from scratch -- but that never works, says Gall’s law -- or to take some other battle-hardened language (C? Java? Python?) -- but trying to adapt those to work well in the browser would hit just as many technical and political barriers as JS did in becoming a decent general-purpose language.
Javascript has had zero barriers in becoming what it is, specifically because people are afraid to attempt to unseat it. there has never been an adult in the room willing to ask “is spending several hundred thousand person-years on this hack of a language better than starting over with something that is more thought out?” Of course it would be better to replace it, especially so in the early days when people saw what was coming.
everyone just continues doing what they are doing on the assumption that either JS is the best that humans can come up with, or that it’s too difficult to unseat it. both are ways of saying “i don't want to think about that problem right now.”
so then someone made the simultaneously laziest and most expensive decision ever: put JS on the server, too. now the code serving the client and the client itself get one thread each. “gosh, performance isn’t great here, but i don’t want to think about that, i have a sprint goal.”
it is only recently that WASM became a thing, and adoption seems slow. it has only succeeded because it did not try to usurp JavaScript’s position, and even requires JavaScript code to interact with the page content or the user in any way. if you can’t beat ‘em, or don’t want to try, join ‘em.
JS has hit zero barriers and has never been strenuously challenged at all. if it had, it would be gone. everyone just keeps throwing money at it because they don't want to think about how awful it is.
Only Google attempted it with Dart, and then they got tired and gave up, because the team was (presumably) minuscule compared to the team working on JavaScript in the same effing browser.
I don’t think your reading of history is correct at all. There were plenty of attempts to unseat JS in the browser - ActiveX, Flash, Java, Dart.
> Only Google attempted it with Dart, and then they got tired and gave up, because the team was (presumably) minuscule compared to the team working on JavaScript in the same effing browser.
Funnily enough it was the same team! Lars Bak led a small team to build V8, and then when that was a big success, he had enough clout within Google to design a new language that could improve on JS, so he took most of the same team and they created Dart.
ActiveX, Flash, and Java all supplemented Javascript. none attempted to replace it. they all did things that could not be done with Javascript at the time.
only Dart attempted to replace it.
> Lars Bak led a small team to build V8, and then when that was a big success, he had enough clout within Google to design a new language that could improve on JS, so he took most of the same team and they created Dart.
How many people (original team or not) continued to work on V8? I don't know, but it felt like Dart had about 5% of the workforce that V8 did. If so, there was little chance they could have ever caught up to V8 on any measure in the short time Dart existed in Chrome as a DOM-manipulating scripting language.
I don't know why this was downvoted. What's the monetization plan for making an improved runtime just for the sake of it, with no complementary business model? Do you make it closed source and sell licenses or subscriptions? Do you try to survive off paid support? (A runtime should be incredibly stable and not need support.) Hope you get enough donations?
I wonder if AWS is a sleeping crocodile ready to eat all these dev-focused paas companies up (by competing or takeovers) in a while. Maybe they are waiting to see who wins first.
I’m not sure where this movement went but efforts like SSPL [1] and similar aim to make it hard for cloud providers to resell an open source product as a service.
I feel like there’s two reasons why I don’t understand your comment at all. Either you made a pun somewhere in there that just went way over my head, or it’s way too early for me to be up. But I have no idea what „of in the cold food of out hot eat the food“ could mean