Hacker News — philippta's comments

Wookash Podcast

It always comes as a surprise to me how the same group of people who go out of their way to shave off the last milliseconds or microseconds in their tooling care so little about the performance of the code they ship to browsers.

Not to discredit OP's work of course.


People shaving off the last milliseconds or microseconds in their tooling aren't the same people shipping slow code to browsers. Say thanks to POs, PMs, stakeholders, etc.

Sometimes they are the same person.

It just takes someone with poor empathy toward their users to ship slow software that they don't use themselves.


I've never met a single person obsessed with performance who goes half the way. You either have a performance junkie or a slob who will be fine with 20-minute compile times.

I have. They cared a lot about performance for them because they hated waiting, but gave literally no shit about anyone else.

TBH I don't know how to do that work. If I'm in the backend it's very easy for me. I can think about allocations, I can think about threading, concurrency, etc, so easily. In browser land I'm probably picking up some confusing framework, I don't have any of the straightforward ways to reason about performance at the language level, etc.

Maybe one day we can use WASM or whatever and I can write fast code for the frontend, but not today, and it's a bit unsurprising that others face similar issues.

Also, if I'm building a CLI, maybe I think that 1ms matters. But someone browsing my webpage one time ever? That might matter a lot less to me, you're not "browsing in a hot loop".


It's not too difficult in the browser either. Consider how often you're making copies of your data and try to reduce it. For example:

- for loops over map/filter

- maps over objects

- .sort() over .toSorted()

- mutable over immutable data

- inline over callbacks

- function over const = () => {}

Pretty much as if you wrote in ES3 (instead of ES5/ES6).
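To make a couple of those concrete, here's a rough sketch (all names made up):

```javascript
// One pass, no intermediate arrays: a plain for loop replaces
// data.filter(...).map(...), which would allocate two throwaway arrays.
function doubledEvens(data) {
  const out = [];
  for (let i = 0; i < data.length; i++) {
    if (data[i] % 2 === 0) out.push(data[i] * 2);
  }
  return out;
}

// In-place .sort() instead of .toSorted(), which copies
// the whole array before sorting.
const xs = [3, 1, 2];
xs.sort((a, b) => a - b); // mutates xs, no copy

console.log(doubledEvens([1, 2, 3, 4])); // [4, 8]
console.log(xs);                         // [1, 2, 3]
```

The for loop does one pass with one output allocation; the chained `.filter().map()` version materializes two intermediate arrays along the way.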


Yes but it's not really fair to expect me to know how to do that. Just because I know how to do it for backend code, where it's often a lot easier to see those copies, doesn't mean I'm just a negligent asshole for not doing it on the frontend. I don't know how, it's a different skillset.

Nobody expects you to know that, but I'm curious to hear how you know it for backend code but not frontend code. Have any examples?

The parent commenter earlier seems to be implying that it's only a matter of not caring.

> care so little about the performance of the code they ship to browsers.

> but I'm curious to hear how do you know it for backend code but not frontend code.

Because I find backend languages extremely easy to reason about for performance. It seems to me that when I write in a language like rust I can largely "grep for allocations". I find that hard to see in javascript etc. This is doubly the case because frontend code seems to be extremely framework heavy and abstract, so it makes it very hard to reason about performance just by reading the code.


That's completely relatable, and it's also a major point in my original argument. Using heavily abstracted frameworks automatically caps your performance. The only way out is to use no framework, or one that's known to be lightweight. In backend code or tooling, like the JS compiler from OP, one tends not to use heavy frameworks in the first place.

The work is largely the same.

You think about allocations: JS is a garbage collected language, and allocations are "cheap", so they're extremely common. The GC is powerful and in most JS engines quite fast, but it isn't omniscient and sometimes needs a hand (just like reasoning with any GC language). The easiest intervention is to remove allocations entirely; just because it's cheap to over-allocate, and the GC will mostly smooth out the flaws of such approaches, doesn't mean you can ignore the memory complexity of your chosen algorithms. Most browser dev tools today have allocation profilers equal to or better than their backend cousins.
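For instance, one cheap way to "remove allocations entirely" in a hot path is a reusable scratch buffer (a sketch with made-up names, trading a bit of purity for fewer allocations):

```javascript
// Module-level scratch array, reused across calls instead of
// allocating a fresh array every time. (Not re-entrant: fine for
// single-threaded hot paths, wrong for code that may recurse into it.)
const scratch = [];

function topTwo(nums) {
  scratch.length = 0;              // reset without allocating
  for (const n of nums) scratch.push(n);
  scratch.sort((a, b) => b - a);   // in-place, descending
  return [scratch[0], scratch[1]]; // only the small result allocates
}

console.log(topTwo([3, 9, 1, 7])); // [9, 7]
```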

You think about threading, concurrency, etc: JS is even a little easier than many backend languages because it is (almost excessively) single-threaded. A lot of concurrency issues cannot exist in current JS designs unless you add in explicit IPC channels to explicitly "named" other threads (Service Workers and Web Workers). On the flipside, JS is a little harder to reason about threading than many backend languages because it is extensively cooperatively threaded. Code has to yield to other code frequently and regularly. Shaving milliseconds off a routine yields more time to other things that need to happen (browser events, user input, etc). That starts to add up. JS encourages you to do things in short, tight "bursts" rather than long-running algorithms. Here again, most browser dev tools today have strong stack trace/flame chart profilers that equal or exceed backend cousins. Often in JS "tall" flames are fine but "wide" flames are things to avoid/try to improve. (That's a bit reversed from some backend languages where shallow is overall less overhead and long-running tasks are sometimes better amortized than lots of short ones.)
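A sketch of the "short, tight bursts" idea — the helper name is made up, and `setTimeout(0)` stands in for fancier scheduling APIs:

```javascript
// Process a large array in chunks, yielding to the event loop between
// chunks so browser events and user input stay responsive.
async function processInChunks(items, worker, chunkSize = 1000) {
  for (let i = 0; i < items.length; i += chunkSize) {
    const end = Math.min(i + chunkSize, items.length);
    for (let j = i; j < end; j++) worker(items[j]);
    // Yield: let other queued tasks (input handlers, rendering) run.
    await new Promise(resolve => setTimeout(resolve, 0));
  }
}

// Usage: sum a big array without holding the loop for long.
let sum = 0;
processInChunks(Array.from({ length: 5000 }, (_, i) => i), v => { sum += v; })
  .then(() => console.log(sum)); // 12497500
```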

> But someone browsing my webpage one time ever? That might matter a lot less to me, you're not "browsing in a hot loop".

The heavily event-driven architecture of the browser often means that just sitting on a webpage is "browsing in a hot loop". Browsers have gotten better and better at sleeping inactive tabs and isolating tabs in separate threads and processes so they don't interfere with each other, but there's still a bit of a "tragedy of the commons" in that the average performance of a website directly and indirectly drags everyone else down. It might not matter to you that your webpage is slow because you only expect a user to visit it once, but you also aren't taking into account that it's probably not the only website that user is browsing at that moment. Smart users do notice, directly and indirectly, when the bad performance of one webpage impacts their experience of other web pages or crashes their browser. Depending on your business model and the purpose of that webpage, that can be a bad impression that leads to things like lost sales and customers.


I don't think it's the same tbh. In Rust I can often just `rg '\.clone'` and immediately see wins. Allocations are far easier to track statically. I don't have a good sense for "seeing" allocations when I look at JS, it feels like it's unfair to expect me to have that tbh. As for profilers, yes I could see things like "this code is allocating a lot" but JS hardly feels like a language where it's smooth to then fix that, and again, frameworks are so common that I doubt I'd be in a position to do so. This is really in contrast to systems languages again where I also have profilers but fixing the problem is often trivial.

> You think about threading, concurrency, etc: JS is even a little easier than many backend languages because it is (almost excessively) single-threaded. A lot of concurrency issues cannot exist in current JS designs unless you add in explicit IPC channels to explicitly "named" other threads (Service Workers and Web Workers).

My issue isn't with being able to write concurrent code that has no bugs, my issue is having access to primitives where I have tight control over concurrency and parallelism. The primitives in JS do not provide that control and are often very heavy in and of themselves.

I think it's perhaps worth noting that I am not saying "it's impossible to write fast code for the browser", I'm saying it is not surprising that people who have developed skillsets for optimizing backend code in languages designed to be fast are not in a great position to do the same for a website.


> I don't have a good sense for "seeing" allocations when I look at JS, it feels like it's unfair to expect me to have that tbh.

I still think that's a training/familiarity problem more than a language issue? You can just as easily start with `rg '\bnew\b'` as you can with `rg '\.clone'`. The `new` operator is a useful starting point in both C++ and C#, too (even though JS's `new` is technically a different operator from both). After that, the JSON-style literal syntax is a decent start: something like `rg '\{\s*["\']'` and `rg '\['` are places to begin. Curly brackets and square brackets in "data position" are useful in Python and now some of C#, too.

After that the next biggest culprits are common library things like `.filter()` and `.map()` which JS defaults to reified/eager versions for historic reasons. (There are now lazier versions, but migrating to them will take time.) That sort of library allocations knowledge is mostly just enough familiarity with standard library, a need that remains universal in any language.
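A sketch of what the lazier versions buy you, using plain generators as stand-ins for the newer iterator-helper methods (names are made up):

```javascript
// Lazy pipeline via generators: values flow through one at a time,
// so no intermediate arrays are materialized, unlike the reified
// Array.prototype.filter/map, which each allocate a full array.
function* filterLazy(iter, pred) { for (const v of iter) if (pred(v)) yield v; }
function* mapLazy(iter, fn)      { for (const v of iter) yield fn(v); }

const squaresOfPositives = nums =>
  [...mapLazy(filterLazy(nums, n => n > 0), n => n * n)];

console.log(squaresOfPositives([-1, 2, 3])); // [4, 9]
```

Only the final spread allocates; the filter and map stages themselves stay allocation-free per element.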

> JS hardly feels like a language where it's smooth to then fix that

Again, perhaps this is just a familiarity issue, but having done plenty of both, at the end of the day I still see this process as the same: move allocations out of tight loops, use object pools if necessary, examine the O-Notation/Omega-Notation of an algorithm for its space requirements and evaluate alternatives with better mean or worst cases, etc. It mostly doesn't matter what language I'm working in the basics and fundamentals are the same. Everything is as "smooth" as you feel comfortable refactoring code or switching to alternate algorithm implementations.

> frameworks are so common that I doubt I'd be in a position to do so

Do you treat all your backend library dependencies as black boxes as well?

Even if that is the case and you want to avoid profiling your framework dependencies themselves and simply hope someone else is doing that, there's still so much in your control.

I find JS is one of the few languages where you can somewhat transparently profile even all of your dependencies. Most JS dependencies are distributed as JS source and you generally don't have missing symbol files or pre-compiled binary blobs that are inscrutable to inspection. (WASM is changing that, for the worse, but so far there are very few WASM-only frameworks and most of them have other debugging and profiling tools.)

I can choose which frameworks to use based on how their profiler results look. (I can tell you that I don't particularly like Angular and one of the reasons why is I've caught it with truly abysmal profiles more than once, where I could prove the allocations or the CPU clock time were entirely framework code and not my app's business logic.)

I've used profilers to guide building my own "frameworks" and to help prove out "Vanilla" approaches to other developers over the frameworks in use.

> The primitives in JS do not provide that control and are often very heavy in and of themselves.

Maybe I'm missing what primitives you are looking for. async/await is about the same primitive in JS and Rust and there are very similar higher-level tools on top of them. There's no concurrency/parallelism primitives today in JS because there is no allowed concurrency or parallelism. There are task scheduling primitives somewhat unique to JS for doing things like "fan out" akin to parallelism but relying on cooperative (single) threading. Examples include `requestAnimationFrame` and `requestIdleCallback` (for "this can wait until you next need to draw a frame, including if you need to drop frames" and "this can wait until things are idle" respectively).
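A hedged sketch of the `requestIdleCallback` "fan out" pattern — the helper name is made up, and a timer fallback is included so it also runs outside browsers:

```javascript
// Fall back to a timer where requestIdleCallback doesn't exist
// (e.g. Node, Safari); the fake deadline just reports plenty of time.
const ric = globalThis.requestIdleCallback
  ?? (cb => setTimeout(() => cb({ timeRemaining: () => 50 }), 0));

function runWhenIdle(tasks, done) {
  ric(deadline => {
    // Do work only while the browser says there's idle time left.
    while (tasks.length > 0 && deadline.timeRemaining() > 0) {
      tasks.shift()();
    }
    if (tasks.length > 0) runWhenIdle(tasks, done); // resume next idle period
    else done();
  });
}

// Usage: low-priority tasks run without blocking urgent work.
const log = [];
runWhenIdle([() => log.push(1), () => log.push(2), () => log.push(3)],
            () => console.log(log)); // [1, 2, 3]
```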

> I'm saying it is not surprising that people who have developed skillsets for optimizing backend code in languages designed to be fast are not in a great position to do the same for a website.

I think I'm saying that it is surprising to me that people who have developed skillsets for optimizing backend code in languages designed to be fast seem to struggle applying the same skills to a language with simpler/"slower" mechanics, but also on average much higher transparency into dependencies (fuller top-to-bottom stack traces and metrics in profiles).

To be fair, I get the impulse to want to leave it as someone else's problem. But as a full stack developer who has done performance work in at least a half dozen languages, I feel like if you can profile and performance tune Rust you should be able to profile and performance tune JS. But maybe I've seen "too much of the Matrix" and my "it's all the same" comes from a deep generalist background that is hard for a specialist to appreciate.


> I still think that's a training/familiarity problem more than a language issue?

But that's fine. Even if we say it's a familiarity problem, that's fine. I'm only saying that it's not reasonable to expect my skills in optimizing backend code to somehow transfer. Obviously many things are the same - reducing allocation, improving algorithmic performance, etc. But that looks very different when you go from the backend to the frontend because the languages can look very different.

> You can just as easily start with `rg \bnew\b` as you can `rg \.clone`.

That's not true though. In Rust you have to have a clone somewhere if you're allocating on the heap, or use one of the owning pointer types via something like `Box::new`. If I pass a struct around, it's either cheaply movable (i.e. Copy) or I have to `clone` it. Granted, many APIs will clone "invisibly" within them, but I can always grep to find the clone.

In Javascript, things seem to allocate by default. A new object allocates. A closure allocates. Things are very implicit, you sort of are in an "allocates by default" mode with js, it seems. In Rust I can just do `[u8; n]` or whatever if I want to, I can just do `let x = "foo"` for a static string, or `let y = 5;` etc. I don't really have to question the memory layout much.

Regardless, you can just learn those rules, of course, but you have to learn them. It seems much easier to "trip onto" an allocation, so to speak, in js.

> Again, perhaps this is just a familiarity issue

I largely agree, though I think that js does a lot more allocation in its natural syntax.

> Do you treat all your backend library dependencies as black boxes as well?

No, but I don't really use frameworks in backend languages much. The heaviest dependency I use is almost always the HTTP library, which is reliably quite optimized. Frameworks impose patterns on how code is structured, which, to me, makes it much harder to reason about performance. I now have to learn the details of the framework. Perhaps the only thing close to this in Rust would be tokio.

> I've used profilers to guide building my own "frameworks" and help proven "Vanilla" approaches to other developers over frameworks in use.

I suspect that this is merely an issue of my own biased experience where I have inherited codebases with javascript that are already using frameworks.

> Maybe I'm missing what primitives you are looking for. async/await is about the same primitive in JS and Rust and there are very similar higher-level tools on top of them.

I mean, stack allocation feels like a pretty obvious one, reasoning about mutability, control over locking, the ability to `join` two futures or manage their polling myself, access to operating system threads, access to atomics, access to mutexes, access to pointers, etc. These just aren't available in javascript. async/await in js is only superficially similar to Rust.

I mean, a simple example is that I recently switched to CompactString and foldhash in Rust for a significant optimization. I used Arc to avoid expensive `.clone` calls. I preallocated vectors and reused them, I moved other work to threads, etc. I feel really comfy doing this in Rust where all of this is sort of just... first class? Like, it's not "weird" rust to do any of this. I don't have to really avoid much in the language, it's not like js where I'd have to be like "Okay, I can't write {a: 5} here because it would allocate" or something. I feel like that shouldn't be too contentious? Surely one must learn how to avoid much of javascript if they want to learn how to avoid allocations.

> To be fair, I get the impulse to want to leave it as someone else's problem.

I just reject that framing. People focus on what they focus on. Optimizing their website is not necessarily their interest.

> I feel like if you can profile and performance tune Rust you should be able to profile and performance tune JS.

I probably could but it's definitely not going to feel like second nature to me and I suspect I'd really feel like I'm fighting the language. I mean, seriously, I'd be curious, how do you deal with the fact that you can't stack allocate? I can spawn a thread in Rust and share a pointer back to the parent stack, that just seems very hard to do in javascript if not outright impossible?

> I think I'm saying that it is surprising to me that people who have developed skillsets for optimizing backend code in languages designed to be fast seem to struggle applying the same skills to a language with simpler/"slower" mechanics

Yeah I don't really see it tbh. I mean even if you say "I can do it", that's great, but how is it surprising?


> I probably could but it's definitely not going to feel like second nature to me and I suspect I'd really feel like I'm fighting the language. I mean, seriously, I'd be curious, how do you deal with the fact that you can't stack allocate? I can spawn a thread in Rust and share a pointer back to the parent stack, that just seems very hard to do in javascript if not outright impossible?

I had alluded to it before, but this is maybe where some additional experience with other garbage collected backend languages like C# or Java could help build some "muscle memory" here.

The typical lens in a GC-based language is value types versus reference types. Value types are generally stack allocated and pass-by-value (copy-by-value; copied from stack frame to stack frame when passed). Reference types are usually heap allocated and pass-by-reference. A reference is generally a "fat pointer", with the qualification that you generally can't dereference one like a pointer without complex GC locks because the GC reserves the right to move the objects pointed to by references (for instance, due to compaction, but can also due to things like promotion to another heap). References themselves follow the same pass-by-value rules generally (stack allocated and copied).

(The lines are often blurry hence "generally" and "usually": a GC language may choose to allocate particularly large value types on the heap and apply copy-on-write semantics in a way to meet the pass-by-value semantics. A GC language is also free to stack allocate small reference types that it believes won't escape a particular part of the stack. I bring up these edge cases not to suggest complexity but to remind that profile-guided optimization is often the best strategy in any language because any good compiler, even a JIT compiler, is trying to optimize what it can.)

In JS, the breakdown is generally that your value types are string, number, boolean, and your reference types are object, array, and function. `const a = 12` is a static, stack allocated number. `const x = 'foo'` is a static, stack allocated string. It will get copied if you pass it anywhere. Though there's one more optimization here that most GC languages use (and goes all the way back to early Lisp) called "string interning". Strings are always treated as immutable and essentially copy-on-write. Common strings and strings passed to a large number of stack frames get "interned" to shared memory (sometimes the heap; sometimes even just reusing the memory of their first compiled instance in the compiled binary). But because of the copy-on-write and how easy it is to trigger, and often those copies start stack allocated, strings are still considered value types, even though with "interning" they sometimes exhibit reference-like behavior and are sort of the "border type".

One thing to look out for: `+` or `+=` where one of the sides is a string can be a huge memory allocator just from copying string bytes, though it's at least easy to predict where it will happen.

On the reference type side `let x = {a: 5}; let y = x`, the `{a: 5}` part is an object and does allocate to the heap (probably, modulo again things like escape detection by the JIT compiler), but `x` and `y` themselves are stack allocated references. That `let y = x` is only a reference copy.
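The distinction in runnable form:

```javascript
// Value types copy on assignment; reference types copy the reference.
let a = 12;        // number: value type
let b = a;         // b gets its own copy
b += 1;
console.log(a, b); // 12 13 — a is untouched

let x = { a: 5 };  // object: heap-allocated reference type
let y = x;         // y copies only the reference, not the object
y.a = 6;
console.log(x.a);  // 6 — both names point at the same object
```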

> it's not like js where I'd have to be like "Okay, I can't write {a: 5} here because it would allocate" or something. I feel like that shouldn't be too contentious? Surely one must learn how to avoid much of javascript if they want to learn how to avoid allocations.

Generally, it's not about "avoiding" the easy language constructions because they allocate, it is balancing the trade-offs of when you want to allocate and how much.

Just like you might preallocate a vector before a tight loop, you might preallocate an array or an object, or even an object pool. (Build an array of objects, with a "free" counter, borrow them, mutate them, return them to the "free" section when done.)
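A minimal version of that borrow/mutate/return pool — a sketch of the shape described above, not a production pool:

```javascript
// Preallocate `size` objects; borrow/release them instead of
// allocating in a hot loop. items[0..free) are currently available.
function makePool(size, factory) {
  const items = Array.from({ length: size }, factory);
  let free = size;
  return {
    borrow() {
      if (free === 0) throw new Error('pool exhausted');
      return items[--free];
    },
    release(obj) {
      items[free++] = obj; // back into the "free" section
    },
  };
}

// Usage inside a tight loop: no per-iteration allocation.
const pool = makePool(2, () => ({ x: 0, y: 0 }));
const p = pool.borrow();
p.x = 10; p.y = 20;
// ...use p...
pool.release(p);
```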

But some of that is trade-offs: preallocation is sometimes harder to read and reason about. On the other side, the "over-allocation" you are worried about might be caught entirely by the JIT's escape analysis and compiled out. For almost all languages it is best to let a profile or real data guide what you try to optimize (premature optimization is rarely a good idea), but for a GC language it can be crucial. Not because the GC language is more complicated or "magic" or "mysterious", but simply because a GC language is tuned for a lot of auto-optimizations that a manually managed memory language doesn't necessarily get "for free". The trade-off for references being much more opaque boxes than pointers is that a JIT compiler has more optimization options, because it can assume pointer math is off the table. Where an allocation lives is between the JIT and the GC more often than not, and there are some simple optimization answers such as "the JIT stack allocated that because it doesn't escape this method". It shouldn't feel like a surprise when such things happen, when you get such benefits "for free". The JIT and GC still maintain the value-type or reference-type "semantics" at all times; those are just (intentionally) big, easy "traits" with a lot of useful middle ground and a lot of cross-implementation overlap.

> stack allocation feels like a pretty obvious one, reasoning about mutability, access to pointers

A lot of the above should be a decent starting place for learning those tools, with `let` versus `const` as maybe the one remaining JS piece not explicitly dived into.

References are generally "pointer enough" for most work. The JS GC doesn't have a way to manually lock a reference to dereference it for pointer math today, but that doesn't mean it never will. Parts of WASM GC are applicable here, but mostly restricted to shared array buffers (blocks of bytes).

In other GC languages, C# has been exploring a space of GC-safe, stack allocated pointers to blocks of memory that support (range checked) pointer-like math, called Span&lt;T&gt; and Memory&lt;T&gt;. They're roughly equivalent to Rust's borrowed-slice mechanics, but subtly different, as you would expect for something living in a larger GC environment. As that approach has become very successful in C#, I'm starting to expect variations of it in more GC languages in the next few years.

> control over locking, access to atomics, access to mutexes

For the most part JS is single threaded, stack data is copied (value types), and since only one thread can touch reference types at a time, there are no data races to guard against. So locks aren't important for most JS work, and there's not much to control.

If you start to share memory buffers from JS to a Service/Web Worker or to a WASM process you may need to do more manual locks. The big family of tools for that is the Atomics global object: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

But a lot of that is new and rare in JS today.

> the ability to `join` two futures

`Promise.all` and `Promise.any` are the two most common "standard library" combinators. `Promise.all` is the most like Rust `join`.

There are also libraries with even higher-level combinators.
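A quick sketch of both combinators (`delayed` is a made-up helper):

```javascript
// A promise that resolves with `v` after `ms` milliseconds.
const delayed = (v, ms) => new Promise(res => setTimeout(() => res(v), ms));

// Promise.all ≈ Rust's join: wait for all, collect results in input order.
Promise.all([delayed('a', 20), delayed('b', 10)])
  .then(results => console.log(results)); // ['a', 'b'] — input order, not completion order

// Promise.any resolves with the first fulfilled value.
Promise.any([delayed('slow', 20), delayed('fast', 5)])
  .then(first => console.log(first)); // 'fast'
```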

> manage their polling myself

Promises don't poll. JS lives in a browser-owned event loop. Superficially you are in a browser-provided "tokio"-like runtime at all times.

There are some "low-level" tricks you can pull, though in that the Promise abstraction is especially thin compared to Rust Futures. The entire "trait" that async/await syntax abstracts is just the "thenable pattern" in JS. All you need to make a new non-Promise Promise-like is create an object that supports `.then(callBack)` (optionally a second parameter for a catchCallback and/or a `.catch(callBack)`). Though the Promise constructor is also powerful enough you generally don't need to make your own thenable, just implement your logic in the closure you provide to the Promise constructor.

Similarly on the flipside if you need a more complex combinator than Promise.all, and the reason that some higher-level libraries also exist, you just have to build the right callbacks to `.then()` and coordinate what you need to.

It's generally recommended to stick with things like Promise.all, but low level tricks exist.
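For completeness, a minimal thenable that `await` will happily adopt:

```javascript
// Any object with a .then(onFulfilled, onRejected) method satisfies
// the "thenable pattern" — the entire trait behind async/await in JS.
const thenable = {
  then(onFulfilled /*, onRejected */) {
    setTimeout(() => onFulfilled(42), 0);
  },
};

(async () => {
  const v = await thenable; // await adopts the thenable's resolution
  console.log(v);           // 42
})();
```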

> I mean even if you say "I can do it", that's great, but how is it surprising?

I think what continues to surprise me is that it sometimes reads like a lack of curiosity for other languages and for the commonalities between languages. Any GC language is built on the same exact kind of building blocks as "lower level" languages. There is a learning curve involved in reasoning about a GC language, but I don't think it should seem like a steep one. The vocabulary has strong overlaps: value types and stack allocated; reference types and heap allocated; references and pointers. The intuitions of one often benefit the other ("this is a reference type, can I simplify what I need from it inside this loop to a value type or two to keep it stack allocated or would it make more sense to preallocate a pool of them?"). Just because you don't have access to the exact same kinds of low level tools doesn't mean that they don't exist or that you can't learn how to take what you would do with the low level tools and apply them in the higher level space. (Plus tools like C#'s Span<T> and Memory<T> work where the low level tools themselves are also starting to blur more together than ever before.)

It just takes a little bit of curiosity, I think, to ask that next question of "how does a GC language stack allocate?" and allowing that to lead you to more of the vocabulary. Hopefully, I've done an okay job in this post illustrating that.


Yeah I basically already know all of this tbh, I'm already very familiar with how GCs work, the JVM, C#, etc.

I've personally met a lot of folks who care about both quite a bit.

But to be fair, besides the usual patterns like tree-shaking and DCE, "runtime performance" is really tricky to measure or optimize for


While using Electron in the process.

> They use Claude to skip the typing, not the thinking. They're 10x faster than two years ago.

I'm not sure a 10x increase in typing speed makes you a 10x developer.


I've raised this exact point to many team leads throughout my career.

Yet they unanimously said they are interested in, or need to know, the progress.

I can't say if that's what they have to report to their managers, but I assume it's something you won't be able to fix bottom-up.


When I connect to my server over SSH, I don't have to rotate anything, yet my connection is always secure.

I manually approve the authenticity of the server on the first connection.

From then on, the only time I'd be prompted again would be if the server's key changed or there's a risk of a MITM.

Why can't we have this for the web?


> Why can't we have this for the web?

How do you propose to scale trust on first use? SSH basically says the trusting of a key is "out of scope" for them and makes it your problem. As in: You can put on a piece of paper, tell it over the phone, whatever, but SSH isn't going to solve it for you. How is some user landing on a HTTPS site going to determine the key used is actually trustworthy?

There have actually been attempts at solving this with something like DANE [1]. For a brief period Chrome had DANE support, but it was removed for being too complicated and sitting in (security) critical components. Besides, since DNSSEC has some cracks in it (your local resolver probably doesn't check it), you can have a discussion about how secure DANE really is.

[1] https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...


So DNS-adjacent protocols are supposed to be handling this TOFU directory,

but industry behemoths are too busy pushing other self-serving standards to execute together on this?

Am I…close?


What "TOFU directory" ? The whole point of TOFU is that you're just going to accept that anybody's first claim of who they are is correct. This is going to often work pretty well, after all it's how a lot of our social relationships work. I was introduced to a woman as Nodis, so, I called her Nodis, everyone else I know calls her Nodis, her boyfriend calls her Nodis. But it turns out her employer and the government do not call her that because their paperwork has a legal name which she does not like - like many humans probably her legal name was chosen by her parents not by her.

Now, what if she'd insisted her name is Princess Charlotte. I mean, sure, OK, she's Princess Charlotte? But wait, my country has a Princess Charlotte, who is a little girl with some chance of becoming Queen one day (if her elder brother died or refused to be King). So if I just trusted that Nodis is Princess Charlotte because she said so, is there a problem?


Would the issue not be that you would need to trust that first connection?



Cookie banners aren’t annoying enough for you?


For the handful of regularly visited websites, I wouldn't mind.


SSH has its own certificate authority system to validate users and servers. This is because trust-on-first-use is not scalable unless you just ignore the risk (at which point you may as well not do encryption at all), so host keys are signed.

There is quite literally nothing that prevents you from putting a self-signed server certificate. Your browser will even ask you to trust and store the certificate like your client does on the screen that shows the fingerprint.

Good luck getting everyone else to trust your fingerprint, though.


Perhaps our industry should adopt a different approach, that fills in the gap between those.

- You host open-source software on your own hardware.

- You pay a company for setup and maintenance by the hour.


> I saw from the SSD was around 800 MB/s (which doesn’t really make sense as that should give execution speeds at 40+ seconds, but computers are magical so who knows what is going on).

If anyone knows what’s actually going on, please do tell.


Presumably after the first run much or all of the program is paged into OS memory


Yes, or it was still in memory from writing.

The numbers match quite nicely: 40 GB program size minus 32 GB of RAM is 8 GB, which divided by 800 MB/s makes 10 seconds.


I'm not entirely sure but could it be predictive branching?


No, it needs to read the entire executable in order to be correct, it can't skip anything. Therefore the time for the IO must be a lower bound, predictive branching can't help that.


> LLM-generated code should not be reviewed by others if the responsible engineer has not themselves reviewed it.

To extend that: If the LLM is the author and the responsible engineer is the genuine first reviewer, do you need a second engineer at all?

Typically in my experience one review is enough.


Yeesss this is what I’ve been (semi-sarcastically) thinking about. Historically it’s one author and one reviewer before code gets shipped.

Why introduce a second reviewer and reduce the rumoured velocity gained by LLMs? After all, "it doesn't matter what wrote the code," right?

I say let her rip. Or as the kids say, code goes brrr.


I disagree. Code review has a social purpose as well as a technical one. It reinforces a shared understanding of the code and requires one person to assure another that the code is ready for review. It develops consensus about design decisions and agreement about what the code is for. With only one person, this is impossible. “Code goes brrr” is a neutral property. It can just as easily take you to the wrong destination as the right one.


yes, obviously?

anyone who is doing serious enough engineering that they have the rule of "one human writes, one human reviews" wants two humans to actually put careful thought into a thing, and only one of them is deeply incentivised to just commit the code.

your suggestion means less review and worse incentives.


anyone who is doing serious enough engineering is not using LLMs.


More eyes are better, but more importantly code review is also about knowledge dissemination. If only the original author and the LLM saw the code you have a bus factor of 1. If another person reviews the bus factor is closer to 2.


The fundamental problem here is shared memory / shared ownership.

If you assign exclusive ownership of all accounting data to a single thread and use CSP to communicate transfers, all of these made up problems go away.
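A minimal sketch of that pattern in Python, using a queue as the channel (the names `account_owner`, `alice`, and `bob` are illustrative, not from the original discussion): one thread has exclusive ownership of the accounts dict, and transfers arrive only as messages.

```python
import queue
import threading

def account_owner(requests: queue.Queue):
    """Single thread with exclusive ownership of all accounting data."""
    accounts = {"alice": 100, "bob": 0}
    while True:
        msg = requests.get()
        if msg is None:  # shutdown sentinel
            break
        frm, to, amount, reply = msg
        ok = accounts[frm] >= amount
        if ok:  # check and update happen in one place, on one thread
            accounts[frm] -= amount
            accounts[to] += amount
        reply.put((ok, dict(accounts)))

requests = queue.Queue()
threading.Thread(target=account_owner, args=(requests,), daemon=True).start()

# Any other thread requests a transfer by message, never by touching state:
reply = queue.Queue()
requests.put(("alice", "bob", 60, reply))
ok, snapshot = reply.get()
print(ok, snapshot)  # True {'alice': 40, 'bob': 60}
requests.put(None)
```

No locks are needed because only the owner thread ever reads or writes the accounts.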


This is equivalent to using a single global lock (and STM is semantically equivalent, just theoretically more scalable). It obviously works, but greatly limits scalability by serializing all operations.

Also in practice the CSP node that is providing access control is effectively implementing shared memory (in an extremely inefficient way).

The fundamental problem is not shared memory, it is concurrent access control.

There's no silver bullet.


Yes, multithreaded problems go away on a single thread.

Is there any way for an external thread to ask (via CSP) for the state, think about the state, then write back the new state (via CSP)?

If so, you're back to race conditions - with the additional constraints of a master thread and CSP.


That would be shared ownership again.
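One way the ownership answer avoids the get/think/set race is to ship the whole read-modify-write to the owner as a single message, so it is applied atomically. A hedged sketch of that convention (sending a function as the message is one common approach, not the only one):

```python
import queue
import threading

def owner(inbox: queue.Queue):
    """Exclusive owner of the state; applies operations one at a time."""
    state = {"balance": 100}
    while True:
        op = inbox.get()
        if op is None:  # shutdown sentinel
            break
        op(state)  # the entire read-modify-write runs here, atomically

inbox = queue.Queue()
threading.Thread(target=owner, args=(inbox,), daemon=True).start()

reply = queue.Queue()

# Instead of asking for the state, thinking, and writing it back (racy),
# send the decision itself to the owner:
def withdraw(state):
    if state["balance"] >= 30:
        state["balance"] -= 30
    reply.put(state["balance"])

inbox.put(withdraw)
balance = reply.get()
print(balance)  # 70
inbox.put(None)
```

No external thread ever holds the state, so there is no window for a lost update.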


So then I would sell STM to you from the "other end".

Everyone else has multiple threads, and should replace their locks with STM for ease and safety.

You've got safe single-thread and CSP, you should try STM to gain multithreading and get/set.


CSP suffers from backpressure issues (which is not to say its bad, but it's not a panacea either)
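The backpressure trade-off is easy to see with a bounded channel: either the buffer is unbounded (memory grows without limit) or sends block when the consumer falls behind. A small sketch of the blocking case, approximated with Python's `queue.Queue` (the timings are illustrative):

```python
import queue
import threading
import time

ch = queue.Queue(maxsize=2)  # bounded channel: put() blocks when full

def slow_consumer():
    while True:
        item = ch.get()
        if item is None:  # shutdown sentinel
            break
        time.sleep(0.01)  # consumer can't keep up with the producer

threading.Thread(target=slow_consumer, daemon=True).start()

start = time.monotonic()
for i in range(10):
    ch.put(i)  # producer stalls here once the buffer fills
elapsed = time.monotonic() - start
ch.put(None)

print(elapsed > 0.05)  # True: the producer was throttled by the consumer
```

Whether that throttling is a feature (flow control) or a bug (stalled pipeline) depends entirely on the system around it.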


Your reasoning seems counter intuitive as back in 2012 Facebook rewrote their HTML5 based app to native iOS code, optimized for performance, and knowingly took the feature parity hit.

https://engineering.fb.com/2014/10/31/ios/making-news-feed-n...


Reminds me of this 2013 story where they moved to native Java for Android and hit limits (e.g. too many methods), and instead of refactoring or just not bloating their app they hacked some internals of the Dalvik VM while it's running during app install: https://engineering.fb.com/2013/03/04/android/under-the-hood...


Mobile is where the users are. Desktop users are vanishing before our eyes as a market segment.


For some applications, certainly. Instant messaging of course has many strong points in terms of what it has to deal with: short messages, photos, quick video calls.

But to edit a large document, or visualize any large corpus with side-by-side comparison, there is no real sane equivalent on mobile unless we plug our phone into a large screen with a keyboard and some kind of pointer.


Yeah, but the majority of people who would've been daily desktop or at least laptop users some 10 to 15 years ago now make do with a phone. Most people do not need to visualize any large corpus or edit large documents. Similarly, there's a great deal of phone users whose first interaction with computers was via a smartphone.


A 2012 iPhone and a 2025 Windows PC shouldn't be assumed to have the same tradeoff set just because "web vs native" is found in each description.


It's a tradeoff; different companies are allowed to choose differently, or even to change their mind after some time.

