Hacker News | diek's comments

From the description I thought the expression was a function of only 't', and there was no (for instance) accumulation of the previously computed byte. Then in the image I saw the same value of 't' evaluating to different values:

t=1000: 168 t=1000: 80

Reading the source: https://github.com/KMJ-007/zigbeat/blob/main/src/evaluator.z...

It does look like the expression is a pure function of 't', so I can only assume that's a typo.


You're correct. If you have:

  public int someField;
 
  public void inc() {
    someField += 1;
  }
that still compiles down to:

  GETFIELD [someField]
  ICONST_1
  IADD
  PUTFIELD [someField]
whether 'someField' is volatile or not. The volatile just affects the load/store semantics of the GETFIELD/PUTFIELD ops. For atomic increment you have to go through something like AtomicInteger that will internally use an Unsafe instance to ensure it emits a platform-specific atomic increment instruction.
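
To make the contrast concrete, here's a minimal sketch (class and field names invented for illustration) of the non-atomic volatile increment next to the AtomicInteger equivalent:

  import java.util.concurrent.atomic.AtomicInteger;

  public class Counter {
    // volatile only guarantees visibility of individual loads/stores;
    // += is still a separate read, add, and write
    public volatile int someField;

    // AtomicInteger performs the whole read-modify-write as one atomic
    // operation (a single hardware instruction on most platforms)
    private final AtomicInteger atomicField = new AtomicInteger();

    public void incRacy() {
      someField += 1;                 // GETFIELD / ICONST_1 / IADD / PUTFIELD
    }

    public void incAtomic() {
      atomicField.incrementAndGet();  // safe under concurrent callers
    }
  }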


They made the Bolt EV from 2016-2023, and they're revamping it to use the new Ultium platform.

You could buy a 2023 Bolt starting at $26,500, and they're great cars.


Bolt is not a sedan. It is a subcompact SUV.


Aren't you referring to the Bolt EUV, which is distinct from the subcompact hatchback Bolt?

https://en.wikipedia.org/wiki/Chevy_Bolt

vs.

https://en.wikipedia.org/wiki/Chevrolet_Bolt_EUV


We're kinda splitting hairs here. The significance, to me, is that the Bolt does not have a very compelling drag coefficient compared to e.g. the Tesla Model 3 or the Ioniq 6.


This about sums it up, doesn't it?

>"Give us a budget EV that does this or that and we will buy it!

>"How about the Chevy Bolt?"

>"Sorry, the drag coefficient isn't as low as I'd like."

It's always something. This is why auto companies aren't building cars for these people: they have lots of opinions, but don't actually buy cars.


Wtf are you talking about? I said budget sedan. Bolt is not a sedan. It doesn't compete with vehicles like the Tesla Model 3 and Hyundai Ioniq 6, which have much lower drag coefficients and consequently significantly longer range despite comparable battery sizes.

You're generalizing, and it is disingenuous.


>Wtf are you talking about? I said budget sedan. Bolt is not a sedan. It doesn't compete with vehicles like the Tesla Model 3 and Hyundai Ioniq 6

I'm talking about things exactly like this: "these cars don't compete because in one you put the luggage in the trunk!"


It's 2024. How are sedans still a thing?


Plenty of people don't want the higher fuel consumption, worse visibility, and higher cost associated with larger vehicles.


The OP used Tesla as an example.

The Bolt is smaller and lighter than the Model 3.


And for that smaller size and lower weight you'd expect much better range... Right? Right??

Also, we'll see what the new LFP Bolts weigh...


>And for that smaller size and lower weight you'd expect much better range... Right? Right??

You called it a "larger vehicle". In no way is it "larger" than a Model 3. Right? Who said anything about range?


Literally the first thing I put in my comment was "higher fuel consumption".


>It's 2024. How are sedans still a thing?

It's an irrelevant distinction, agreed.


I love my Bolt, but it's not a sedan. That said, it's not terribly surprising GM is not interested in making an electric sedan given that the segment has been declining in popularity over time. Ford isn't even making any sedans at all apart from the Mustang.


It's too bad the 2023 Bolts are for the most part unavailable new, and the used 2023 Bolts, despite having super low miles, don't qualify for the IRA rebate.


Postgres is great as a queue, but this post doesn't really get into the features that differentiate it from just polling, say, SQL Server for tasks.

For me, the best features are:

  * use LISTEN to be notified of rows that have changed that the backend needs to take action on (so you're not actively polling for new work)
  * use NOTIFY from a trigger so all you need to do is INSERT/UPDATE a table to send an event to listeners
  * you can select using SKIP LOCKED (as the article points out)
  * you can use partial indexes to efficiently select rows in a particular state
So when a backend worker wakes up, it can:

  * LISTEN for changes to the active working set it cares about
  * "select all things in status 'X'" (using a partial index predicate, so it's not churning through low cardinality 'active' statuses)
  * atomically update the status to 'processing' (using SKIP LOCKED to avoid contention/lock escalation)
  * do the work
  * update to a new status (which another worker may trigger on)
So you end up with a pretty decent state machine where each worker is responsible for transitioning units of work from status X to status Y, and it's getting that from the source of truth. You also usually want to have some sort of a per-task 'lease_expire' column so if a worker fails/goes away, other workers will pick up their task when they periodically scan for work.
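
As a rough sketch of what the "claim work" step can look like with plain JDBC (table and column names here are invented for illustration, not from the article):

  import java.sql.*;
  import java.util.*;

  public class TaskClaimer {
    // Atomically move up to 10 'new' tasks to 'processing'. SKIP LOCKED lets
    // concurrent workers pass over rows another worker has already locked,
    // and a partial index on (id) WHERE status = 'new' keeps the scan cheap.
    private static final String CLAIM_SQL =
      "UPDATE task SET status = 'processing', " +
      "       lease_expire = now() + interval '5 minutes' " +
      " WHERE id IN (SELECT id FROM task " +
      "               WHERE status = 'new' " +
      "               ORDER BY id " +
      "               LIMIT 10 " +
      "               FOR UPDATE SKIP LOCKED) " +
      " RETURNING id";

    public static List<Long> claim(Connection conn) throws SQLException {
      List<Long> claimed = new ArrayList<>();
      try (PreparedStatement ps = conn.prepareStatement(CLAIM_SQL);
           ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
          claimed.add(rs.getLong("id"));   // hand these off to the worker
        }
      }
      return claimed;
    }
  }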

This works for millions of units of work an hour with a moderately spec'd database server, and if the alternative is setting up SQS/SNS/ActiveMQ/etc and then _still_ having to track status in the database/manage a dead-letter-queue, etc -- it's not a hard choice at all.


SQL Server has included a message queue capability for a while now. It’s called SQL Server Service Broker:

https://learn.microsoft.com/en-us/sql/database-engine/config...

I haven’t had the opportunity to use it in production yet - but it’s worth keeping in mind.

I’ve helped fix poor attempts at “table as queue” before - once you get the locking hints right, polling performs well enough for small volumes - from your list above, the only thing I can’t recall there being in SQL Server is a LISTEN - but I’m not really an expert on it.


Also Azure is adding SQL Server trigger support

https://learn.microsoft.com/en-us/azure/azure-functions/func...


This stuff has a latency measured in minutes though, limiting the use cases a lot.


Came here to mention Service Broker. I've used it in production in multi-server configurations for a number of years. It works really well but it's terribly obscure. Nobody seems to know it's even there.

The learning curve is steep and there are some easy anti-patterns you can fall into. Once you grok it, though, it really is very good.

The LISTEN functionality is absolutely there. Your activation procedure is invoked by the server upon receipt of records into the queue. It's very slick. No polling at all.


> * use LISTEN to be notified of rows that have changed that the backend needs to take action on (so you're not actively polling for new work)

> * use NOTIFY from a trigger so all you need to do is INSERT/UPDATE a table to send an event to listeners

Could you explain how that is better than just setting up Event Notifications inside a trigger in SQL Server? Or for that matter just using the Event Notifications system as a queue.

https://learn.microsoft.com/en-us/sql/relational-databases/s...

> * you can select using SKIP LOCKED (as the article points out)

SQL Server can do that as well, using the READPAST table hint.

https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-tr...

> * you can use partial indexes to efficiently select rows in a particular state

SQL Server has filtered indexes, are those not the same?

https://learn.microsoft.com/en-us/sql/relational-databases/i...


It's better because it's not SQL Server


> Could you explain how that is better than just setting up Event Notifications inside a trigger in SQL Server.

Why? The article wasn't about that. Hear me out here, but there's value in having multiple implementations for the same idea.


The argument of the OOP I was responding to was about how Postgres was better than other SQL solutions due to 4 reasons, with SQL Server being explicitly named. I was merely wondering whether his reasoning actually considered the abilities of SQL Server.


Admittedly I used SQL Server pretty heavily in the mid-to-late-2000s but haven't kept up with it in recent years so my dig may have been a little unfair.

Agree on READPAST being similar to SKIP LOCKED, and filtered indexes are equivalent to partial indexes (I remember filtered indexes being in SQL Server 2008 when I used it).

Reading through the docs on Event Notifications, they seem to be a little heavier and have different delivery semantics. Correct me if I'm wrong, but Event Notifications seem to be more similar to a consumable queue (where a consumer calling RECEIVE removes events from the queue), whereas LISTEN/NOTIFY is more pubsub, where every client LISTENing to a channel gets every NOTIFY message.


I agree that SQL Server has similar functionality but Service Broker is pretty clunky compared to LISTEN.


Thanks for the links, I was wondering if SQL Server supports similar features.


Using the INSERT/UPDATEs is kind of limiting for your events. Usually you will want a richer event (higher-level information) than the raw structure of a single table. Use this feature very sparingly. Keep in mind that LISTEN should also ONLY be used to reduce the active polling; it is not a failsafe delivery system, and you will not get notified of things that happened while you were gone.


For my use cases the aim is really to not deal with events, but deal with the rows in the tables themselves.

Say you have a `thing` table, and backend workers that know how to process a `thing` in status 'new', put it in status 'pending' while it's being worked on, and when it's done put it in status 'active'.

The only thing the backend needs to know is "thing id:7 is now in status:'new'", and it knows what to do from there.

The way I generally build the backends, the first thing they do is LISTEN to the relevant channels they care about, then they can query/build whatever understanding they need for the current state. If the connection drops for whatever reason, you have to start from scratch with the new connection (LISTEN, rebuild state, etc).
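
A minimal sketch of that startup sequence with the Postgres JDBC driver (channel and table names are made up; reconnection and error handling omitted):

  import java.sql.*;
  import org.postgresql.PGConnection;
  import org.postgresql.PGNotification;

  public class ThingWorker {
    public static void run(Connection conn) throws Exception {
      // 1. Subscribe first so nothing is missed between steps 2 and 3.
      try (Statement st = conn.createStatement()) {
        st.execute("LISTEN thing_status_changed");
      }
      // 2. Rebuild current state from the source of truth.
      try (Statement st = conn.createStatement();
           ResultSet rs = st.executeQuery("SELECT id FROM thing WHERE status = 'new'")) {
        while (rs.next()) {
          process(rs.getLong("id"));
        }
      }
      // 3. React to NOTIFY payloads as they arrive.
      PGConnection pg = conn.unwrap(PGConnection.class);
      while (true) {
        // The driver only sees notifications when it reads from the socket,
        // so poke it with a trivial query (newer driver versions also offer
        // a blocking getNotifications(timeout)).
        try (Statement st = conn.createStatement()) {
          st.executeQuery("SELECT 1").close();
        }
        PGNotification[] notes = pg.getNotifications();
        if (notes != null) {
          for (PGNotification n : notes) {
            process(Long.parseLong(n.getParameter())); // payload carries the thing id
          }
        }
        Thread.sleep(250);
        // If the connection drops: reconnect, LISTEN again, rebuild state.
      }
    }

    static void process(long thingId) { /* transition the thing to its next status */ }
  }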


> Usually you will want richer event (higher level information) than the raw structure of a single table.

JSONB fields in Postgres are pretty awesome for this. You can query the JSON fields, index them, and all that.

Is that what you mean?


I use a generic subsystem modeled loosely after SQS and Golang River.

I have a visible_at field which indicates when the "message" will show up in checkout commands. When checked out or during a heartbeat from the worker this gets bumped up by a certain amount of time.

When a message is checked out, or re-checked out, a key (GUID) is generated and assigned. To delete the message, this key must match.

A message can be checked out if it exists and the visible_at field is older or equal to NOW.

That's about it for semantics. Any further complexity, such as workflows and states, are modeled in higher level services.
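
Roughly, the checkout looks something like this in JDBC (table and column names are mine for illustration, not the exact schema):

  import java.sql.*;
  import java.util.UUID;

  public class MessageCheckout {
    // Claim one visible message: push visible_at into the future so other
    // workers won't see it, and stamp it with a fresh key. SKIP LOCKED is one
    // way to keep two workers from grabbing the same row at the same instant.
    private static final String CHECKOUT_SQL =
      "UPDATE message SET visible_at = now() + interval '30 seconds', " +
      "       checkout_key = ? " +
      " WHERE id = (SELECT id FROM message " +
      "              WHERE visible_at <= now() " +
      "              ORDER BY visible_at " +
      "              LIMIT 1 " +
      "              FOR UPDATE SKIP LOCKED) " +
      " RETURNING id, body";

    // Deleting requires the matching key, so a stale worker whose checkout
    // expired (and whose message was handed to someone else) can't delete it.
    private static final String DELETE_SQL =
      "DELETE FROM message WHERE id = ? AND checkout_key = ?";

    public static void checkoutOne(Connection conn) throws SQLException {
      String key = UUID.randomUUID().toString();
      try (PreparedStatement ps = conn.prepareStatement(CHECKOUT_SQL)) {
        ps.setString(1, key);
        try (ResultSet rs = ps.executeQuery()) {
          if (!rs.next()) return;          // nothing visible right now
          long id = rs.getLong("id");
          // ... do the work, heartbeating by bumping visible_at ...
          try (PreparedStatement del = conn.prepareStatement(DELETE_SQL)) {
            del.setLong(1, id);
            del.setString(2, key);
            del.executeUpdate();
          }
        }
      }
    }
  }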

If I felt it mattered for perf and was worth the effort I might model this in a more append-only fashion, taking advantage of HOT updates etc. Maybe partition the table by day and drop partitions older than the longest supported process. Use the sparse index to indicate deleted... Hard to say, though, with SSDs, HOT, and the new btree anti-split features.


Thanks for the comprehensive reply. Does the following argument stand up at all? (Going on the assumption that LISTEN is one more concept, and one fewer concept is a good thing.)

If I have, say, 50 workers polling the DB, either it's quiet and there are no tasks to do - in which case I don't particularly care about the polling load. Or it's busy and when they query for work there's always a task ready to process - in this case the LISTEN is constantly pinging, which is equivalent to constantly polling and finding work.

Regardless, is there a resource (blog or otherwise) you'd recommend for integrating LISTEN with the backend?


In a large application you may have dozens of tables that different backends may be operating on. Each worker pool polling the tables it may be interested in every couple of seconds can add up, and it's really not necessary.

Another factor is polling frequency and processing latency. All things equal, the delay from when a new task lands in a table to the time a backend is working on it should be as small as possible. Single digit milliseconds, ideally.

A NOTIFY event is sent from the server-side as the transaction commits, and you can have a thread blocking waiting on that message to process it as soon as it arrives on the worker side.

So with NOTIFY you reduce polling load and also reduce latency. The only time you need to actually query for tasks is to take over any expired leases, and since there is a 'lease_expire' column you know when that's going to happen so you don't have to continually check in.
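
The server side of that is just a trigger calling pg_notify on insert/update; a hedged sketch (function, trigger, channel, and table names invented), installed from Java since that's where I was working:

  import java.sql.*;

  public class NotifyTriggerSetup {
    public static void install(Connection conn) throws SQLException {
      try (Statement st = conn.createStatement()) {
        // pg_notify(channel, payload) is the function form of NOTIFY;
        // the payload here is just the row's id as text.
        st.execute(
          "CREATE OR REPLACE FUNCTION notify_thing_change() RETURNS trigger AS $$ " +
          "BEGIN " +
          "  PERFORM pg_notify('thing_status_changed', NEW.id::text); " +
          "  RETURN NEW; " +
          "END; " +
          "$$ LANGUAGE plpgsql");
        // The trigger runs inside the transaction, but the notification is only
        // delivered to listeners when that transaction commits.
        // (EXECUTE FUNCTION needs PG 11+; older versions use EXECUTE PROCEDURE.)
        st.execute(
          "CREATE TRIGGER thing_change_notify " +
          "AFTER INSERT OR UPDATE ON thing " +
          "FOR EACH ROW EXECUTE FUNCTION notify_thing_change()");
      }
    }
  }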

As far as documentation, I got a simple java LISTEN/NOTIFY implementation working initially (2013?-ish) just from the excellent postgres docs: https://www.postgresql.org/docs/current/sql-notify.html


What happens when the backend worker dies while processing?


The usual way is to update the table with a timestamp when the task is taken. Have one periodic job which queries the table looking for tasks that have outlived the maximum allowed processing time and resets their status so the task is available to be requeued.
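
A hedged sketch of that periodic job (schema names invented):

  import java.sql.*;

  public class StaleTaskReaper {
    // Put timed-out tasks back in the queue so another worker's next scan
    // will pick them up. '10 minutes' stands in for whatever the maximum
    // allowed processing time actually is.
    private static final String REQUEUE_SQL =
      "UPDATE task SET status = 'new', taken_at = NULL " +
      " WHERE status = 'processing' " +
      "   AND taken_at < now() - interval '10 minutes'";

    public static int requeueExpired(Connection conn) throws SQLException {
      try (Statement st = conn.createStatement()) {
        return st.executeUpdate(REQUEUE_SQL);  // number of tasks handed back
      }
    }
  }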


Excellent comment. Thank you for taking the time to write it.


There's actually an extension called "tcn" or trigger change notification that provides such a trigger out of the box.


What you're describing is just doing a 'squash' merge


The golden rule is "do not rewrite history of a public branch". Rebase/squash your PR branches to your heart's content, but once it's merged that's it.

You get clean history by not merging branches with 50 intermediary "fiddling with X" commits in them.


Related, it is a bad idea to use long-running per team branches. Merge early, merge often, get commits to trunk ASAP. With best practices on unit testing, code review, and so on, this scales to many thousands of developers. And will save a lot of pain over time.


> destroying Commit information just to keep the graph tidy is a bad idea in my opinion

The commit information I see when telling teams to squash their branches on merge is not valuable.

* "fixing whitespace" * "incorporate review comments" * "fix broken test" * "fix other broken test"

(note, the broken tests were broken by the changes in the PR)

As soon as that PR is merged those commits are worthless. And there are branches with dozens of those "fixing X" commits that would otherwise pollute the commit graph.


> * "fixing whitespace" * "incorporate review comments" * "fix broken test" * "fix other broken test"

Things like this should not be standalone commits though, they should be incorporated into the previous branch by amending the original work. It takes some effort to have a useful git history, it does not just happen on its own.


Sounds like six vs half-dozen. Why does it matter if somebody amends vs squashes?


It does not matter if you have one commit. If your change is split into a few commits for increased readability, then it does matter.

Do you really believe that if, for example, this change to the btrfs filesystem https://lore.kernel.org/linux-btrfs/cover.1699470345.git.jos... were squashed, nothing of value would be lost?


You can very easily rewrite your commit message on GitHub when squash merging. Since the organizations I work for exclusively use squash merge, I often just update the commit to be more valuable, listing the important changes it contains. (And of course the PR in GitHub will contain the commit history of the branch that was squashed, as well as any discussion.)

IMO, this is a lot simpler and easier to do than rebasing your branch to have a flawless history.


I rather strongly disagree here.

Having whitespace changes mixed into a commit mucks it up, causing you to lose focus on what's actually important.

I have `git blame` aliased to `git blame -w` which ignores whitespace-only changes.

You can also reblame when you come across these formatting commits.


Yep, intermediate commits on a branch tend to be completely worthless. I'd much rather have "git blame" point to the commit that contains the entire change together.


Agree strongly. It's nice in theory to view the intermediate commits, but in practice I've never needed to look at them.


Those commits would be the bathwater one casts out alongside the useful commits when using squash merges.


If the useful commits are the "baby" in your bathwater analogy, all the useful information in those commits is in the squashed commit.

This assumes a branch being merged in represents one logical change (a feature/bugfix/etc) that is "right sized" to be represented by one commit.


Yes, but now it's mixed with the bathwater and, to morph into another metaphor, it becomes the needle in the haystack.

It's okay to have 'low information' commits one can easily ignore in your history, as long as the 'high information' ones stay readable and coherent.


You can usually see that in whatever tool you're using anyway. Blame -> find the PR -> see commit history.


> I die a little bit each time I try to understand what changes were related to a line when tracking down a bug

A change/feature/bug is a branch, which is squashed into a commit on your main branch, right? So your main branch should be a linear history of changes, one change per commit.

How does that impact the ability to git blame?


Because unless it's the most trivial of features, you'll break it up into smaller commits which each explain what they are doing and make reviewing the change easier.

As a simple example, I recently needed to update a json document that was a list of objects. I needed to add a new key/value to each object. The document had been hand edited over the years and had never been auto-formatted. My PR ended up being three commits:

1. Reformat the document with jq. Commit title explains it's a simple reformat of the document and that the next commit will add `.git-blame-ignore-revs` so that the history of the document isn't lost in `git blame` view.

2. Add `.git-blame-ignore-revs` with the commit ID of (1).

3. Finally, add the new key/value to each object.

The PR then explains that a new key/value has been added, mentions that the document was reformatted through `jq` as part of the work, and recommends that the reviewer step through the commits to ignore the mechanical change made by (1).

A followup PR added a pre-commit CI step to keep the document properly linted in the future.


In general I agree with you, there are absolutely times where you want to retain commit history on a particular branch (although I try to keep the source tree from knowing about things like commit IDs).

I would argue that those are by far the minority of PRs that I see. As I mentioned in another comment, _most_ PRs that I see have a ton of intermediary commits that are only useful for that branch/PR/review process (fixing tests, whitespace, etc). Generally the advice I give teams is, "squash by default" and then figure out where the exceptions to that rule are. That's mainly because, in my opinion, the downsides of a noisy commit graph filled with "addressing review comments" (or whatever) commits are a much bigger/frequent issue than the benefits you talk about. It really depends on the team.


> As I mentioned in another comment, _most_ PRs that I see have a ton of intermediary commits that are only useful for that branch/PR/review process (fixing tests, whitespace, etc).

Right, but that's only because developers don't amend and force push their commits to the PR branch as they receive feedback. Which is largely encouraged by GitHub being a terrible code review tool.

To me, git is part of the development process; it's not an extra layer of friction on top. So I compose my commits as I go. I find it helpful for recording what I'm thinking as I write the code. If I wait till the very end, I'll have forgotten some important bit of context I wanted to include. So during the day I may use the commits like save points. But before I push anything I'll often check out a new branch and create an incremental set of commits that have the change broken down into digestible pieces. And if I receive feedback, I'll usually amend those changes into the PR and force push it.

I'd like to add that I spend a lot of time cleaning up tech debt. And I deal with a ton of commits and PRs that don't explain themselves. So I'm really biased toward a clean development workflow because I hope to make the lives of those who come after me easier.

I was also trained on this workflow by being an early git contributor and it had extremely high standards for documenting its work. There's a commit from Jeff King that's a one line change with about six paragraphs of explanation.

There's no right answer here. I value the "meta" part of writing code. Not everyone does and that's okay.


When the word "force" is involved, it's time to take a step back and re-evaluate things.


It's due to GitHub lacking change set support. With Gerrit, force pushing isn't required.


> only useful for that branch/PR/review process (fixing tests, whitespace, etc).

I have had bugfix cases where, digging through the repo history, both of those examples accidentally introduced the bug (the first because the person who made the original change didn't completely understand a business rule so it changed both the code and the test, the second because of a typo in python that only affected a small subset of the data). Keeping the commit separate let me see very quickly what happened and what the intent actually was.


Because now, instead of having a line changed within a granular set of changes, it's lost among the other changes from the same feature branch, which is a more macro level. So if a config change is needed for the feature, the point where this config change actually needs to be handled, or where it would impact the data flow, is harder to evaluate now that you mix it with template changes, style changes, new interactions needed for the users, etc...

EDIT: On top of that, there's usually a bit of 'related' work you need for a task, for example when you find an edge case related to your feature, and now you also need to fix a bug, or you did a bit of refactoring on a related service, or needed to change the data in a badly formatted JSON file.

Unbeknownst to you, you added a bug when refactoring the related service, a bug that is spotted a few months later, only on a very specific edge case. If the cause is not obvious, you might want to reach for git bisect, but that won't be very useful now that everything I've talked about is squashed into a single commit.


> EDIT: On top of that, there's usually a bit of 'related' work you need for a task, for example when you find an edge case related to your feature, and now you also need to fix a bug, or you did a bit of refactoring on a related service, or needed to change the data in a badly formatted JSON file.

I agree that's related work, but I'd argue that work doesn't belong in that branch. If you find a bug in the process of implementing a feature, create a bugfix branch that is merged separately. If you need to refactor a service, that's also a separate branch/PR.

That's actually the most common pushback I get from people when I talk about squashing. They say "but then a bunch of unrelated changes will be lumped together in the same commit", to which I respond, "why are a bunch of unrelated changes in the same branch/PR?"


I agree with you in principle, but it's usually because of process and friction. In the place I'm working right now, that would result in days lost: I'd need to create a new Jira ticket, which obviously requires a team meeting for grooming (because Agile!), then chase colleagues so the PR is accepted, which in the best case still needs the CI/CD pipeline to finally deploy, then merge it into the dev branch, and finally rebase the current feature branch... and all this multiple times.


Because sometimes a PR touches more code than a single commit, and you lose the more granular context surrounding the more granular changes. You can always ask git to make the log more coarse, but once you “destroy” the granular history it is for all intents and purposes gone.


> Rather than waiting for the client to download a JavaScript bundle and render the page based on its contents, we render the page’s HTML on the server side and attach dynamic hooks on the client side once it’s been downloaded.

The fact that they don't make a reference like, "hey, ya know, how _everything_ worked just a few years ago" tells me they think this is somehow a novel idea they're just discovering.

They then go on to describe a convoluted rendering system with worker processes and IPC... I just don't know what to say. They could have built this in Java, .Net, Go, really any shared memory concurrency runtime and threading, and would not run into any of these issues.


It's actually not how things worked just a few years ago.

How things worked a few years ago: you wrote SSR pages with one set of tools (like Django Template Language), then hooked into it with another set of tools. If your pages are complex enough, you end up with weird brittleness because the "initial page load" is not handled the same way as modifications of that page.

Now it's much closer to using the same set for the initial load and subsequent edits. This is a net win for people working on the frontend, in theory.

The more nuanced thing is that frontend tooling is so lacking in terms of performance, despite being something that theoretically should work very fast. In particular, having a bunch of language tooling written in Javascript is the JS ecosystem's billion dollar mistake IMO.


Why do you need all that client-side interactivity in the first place? Most interactions can easily be handled with a full refresh, as proven by the very site you're reading this on (hackernews).

Server-side web frameworks even have modern component-based UI templating now, and features like maps can be layered on top as progressive enhancement without this bloated frontend mess.


Something like a collaborative document editor or forms with non-trivial logic.

Also, the interface of HN works because of its demographic.


Yep. I think these examples are probably fine for the SPA style approach. What gets me is the other 98% of web apps out there, that are not collaborative document editors, calendars, or the handful of other use-cases where this approach can make some sense. It feels like a lot of complexity that has been cargo-culted into the mainstream.

Much of the "progress" in web development in the past decade feels like it's just fixing problems that only exist because we're building web applications this way. React, Redux, Typescript, Server Side Rendering.. these are solutions to problems we created for ourselves by using an architecture with a dubious value proposition in most of our use-cases.


Yeah, anything like what you described is going to be a web app and needs all that logic. That being said, for the majority of the web (news sites, simple message boards, shopping sites, etc.), couldn't it all be rendered on the server?

I have to wonder how much of the client-side rendering framework use is mainly from people forming a cargo-cult around "it's what Facebook does" when at the end of the day what Facebook does is way different from your use case.

Strictly by the numbers, most engineers don't work at one of those tech giants, but at small firms making CRUD apps that don't really need all that responsiveness.


"Majority of the web" is _not_ news sites and message boards and shopping sites. It's bank websites, ERP software, various CRMs (granted, like news sites...), time tracking, logistics sofware....

And, importantly, a lot of those websites have complex admin-side backends to deal with a lot of things as well! Of course in those cases you might say that the UX requirements are lower (true), but when you are basically serving as a DB frontend, you want to be able to do things quickly, even on a really bad connection.

There's a lot of cool work into partials for SSR, but React and friends are also very nice because you can build out complex UIs that operate through an API, without having to figure out a bunch of workarounds for stateless HTTP/HTML (form wizards anyone?)


All those sites have content, which requires elaborate, interactive, collaborative editing and reasonably complex, flexible data models. Meaning you’re going to want to re-use much of the work you did for the editing/publishing part, and you want those things to be integrated to such a degree that changing and extending things is reasonably ergonomic.


That is why for 20 years SSR frameworks have supported components.


Collaborative editors are the 1% of sites that are actually applications and need a SPA.

But Yelp is nothing like that. What part of the interface is so advanced? It's just a few links and buttons to navigate to pages and submit reviews. A few JS event handlers can handle AJAX/partial updates without an entire React frontend.

There's even stuff like https://alpinejs.dev/ and https://htmx.org/ to make this incredibly easy now.


Personally I think htmx is an interesting strategy because it unifies the rendering tech on the server (you still gotta be careful to properly scope your incremental changes).

"Advanced forms" come up all the time, and are usually when you have even a bit of non-trivial business logic. For example "if people pick this set of options, show this other option" (but you want to avoid having a form wizard, cuz then you're having to track state across multiple page loads). There's also stuff like in business applications, previewing calculations and the like.

"A few JS event handlers" can handle a lot of tiny things, but many B2B SaaS have a handful of pages with a loooot of these things, and at the end of the day you could have your hacks, or you could try to be principled about loading it.

Though it's also about just finding the right mix: there are people who can architect their stuff "correctly", completely SSR + some JS flavoring. But it requires having people who are really good at backend and frontend figuring out the right pieces and putting it together. That diligence is hard!


More than a few years ago, but ASP.NET WebForms with the AJAX Control Toolkit did this. It was terrible for many reasons, but it did allow you to use one set of tools for everything.


I started my career with .NET and used/abused the AJAX controls too. Some of those sites are still running today, and still remain fast and responsive and simple to maintain.

The new cutting edge with Blazor is even more impressive and a serious contender for non-JS frontends. Similar advancements with Elixir/Liveview and Ruby/Hotwire


Just like JSF frameworks, with Prime being the best one.


> They could have built this in Java, .Net, Go, really any shared memory concurrency runtime and threading, and would not run into any of these issues.

It is funny how you can get away with using the wrong tool for the job in Software in a professional environment. Like imagine if you were hired to build a house, and you decided to build the foundation out of modeling clay because that's what you're used to working with. And then you started to come up with novel methods of hardening modeling clay when it proved not to be fit for purpose.

I guess you can get away with it in software because these kinds of decisions normally only manifest as increased server costs, or moments of users' lives lost to performance issues, which are much less evident to the outside observer.


Funny you should take this example. Unless you're building a skyscraper, for 1-2 floors clay is plenty good: in particular when mixed with wheat or other fibrous bodies (straw type), it's how many traditional houses have been built, and has many interesting properties:

- sourced from local material (use a different type of clay or straw if you need), no need for chemicals or for sand imports from depleted beaches on the other side of the world
- recyclable to almost infinity (need a bigger/better house? just tear it down and reuse the materials)
- cool in summer, warm in winter if you design it well
- lasts for decades if your structure is designed well: i'm not aware of really old examples but it wouldn't be a surprise if the structure outlives us all (does someone have resources on this topic?)

So it turns out your example was more interesting and less absurd than you originally thought. Just like server-side rendering uses an order of magnitude less resources (for n clients it's O(1) with caching, whereas client-side rendering is O(n)), it turns out clay is the perfect material to use an order of magnitude (or even more orders?) less resources to build your house than if you used concrete sourced from various polluting industries and endangered sand deposits.


Fair point re clay foundations, but I was talking about the tooling choice for server-side rendering, not server-side rendering itself. For instance if they had implemented this in Erlang or Go they likely wouldn't have run into this GC bottleneck they had to engineer around to be able to exceed 50 requests per second.

I'm a big fan of server-side rendering and am sure it could be used way more often to great effect.


They did acknowledge that, albeit subtly,

> After a string of production incidents in early 2021, we realized our existing SSR system was failing to scale as we migrated more pages from Python-based templates to React.

Apparently their old server-rendered Python app (not called SSR though for obvious reasons) was scaling just fine before the migration.


Yeah... and then this:

> We evaluated several languages when it came time to implement the SSR Service (SSRS), including Python and Rust. It would have been ideal from an internal ecosystem perspective to use Python, however, we found that the state of V8 bindings for Python were not production ready

Wow, that is some convoluted architecture

It seems like the problem here is React


Yeah, you have to do a lot to offset the problem that, out of the box, rendering React on the server is slow.


I see a lot of React hate which I can understand, but don't forget that React allows you to release faster as a big company.

React is a de facto standard that product people know and understand. They can ship complicated features faster across platforms (for example React Native), while the more technical engineers get to solve these more generic problems (dependency hells, integrations, CI/CD, performance).

Alternative solutions would potentially require more technical engineers to build features and thus slow down product development for a big company.


> "I see a lot of React hate"

Pointing out badly constructed solutions using the wrong tech is valid criticism. It's not "hate" to say that React and this SSR setup are being used unnecessarily for a site that doesn't need them.

> "React allows you to release faster as a big company."

This is absolutely not a rule in any way. Again, using the proper tech for the situation is what matters. Large teams with complex frontends can sometimes move faster with React, but there are million other factors that go into this.


I don't want to name names, but does any tech company actually apologise after all their evangelism to the world and the industry, and walk back 70% of their decisions five years later?

And for some strange reason this mostly happens in web development.


1. Create and evangelize over-engineered solution (React, Kubernetes...)

2. Watch potential competitors chase their tails & burn budgets adopting hot new thing

3. Quietly drop or sideline solution a few years later


Why would they? They weren’t responsible. The people who championed these approaches have left for new jobs.


I've seen that a few times now, really intelligent people over-engineering and selling a new system, moving up in the company then out. They sell technology that doesn't age well, get a fat paycheck, and never have to live with the consequences. I'm kinda done with that.

Mind you, sometimes it's inadvertent. I'm currently the only web developer at my company building a big system in Go and React. I don't believe they are very difficult or esoteric technologies, but I'm still not sure if they will be able to find a replacement if I decided to move on.

But I don't know what the alternative would have been. Probably keep trudging on with the old PHP + Dojo bombsite, but it would have the same problem because who would want to work with that tech stack? Who would be able to be productive in a 200K LOC pile of shit? I mean even if it wasn't shit, it's still 200K LOC representing a decade of work, dozens of domains and hundreds / thousands of individual features.

Which is where technology choices come in again; use simple and few tools, the problem to be solved is difficult enough already.


They will keep spitting out frontend frameworks that push computation out to the consumer. It’s probably cheaper to develop new frontend frameworks than it is to pay for server side rendering.


Devs nowadays talk about SSR like a newfound panacea. Yeah, if we look closely enough it is different from Java Spring or Python Django apps, but only by a slight amount, IMO.

I think it is because a lot of tutorials nowadays show how to do X with tool Y, without giving the historical context as to why Y is in use in the first place. Usually the tool was built to solve a specific problem. When people discover that the tool doesn't solve their own problem, workarounds based on the very same tool are devised.

Other common examples are Kubernetes and microservices. I have seen startups jump onto the bandwagon before having real customers, when the tool in question is meant for scalability purposes.


Soon they will be where we are, shaking their canes as the youngsters of tomorrow discover cgi-bin. Live and let live, appreciate them for their journey, the lessons learned and skills gained.


Not only that, the statement "Server Side Rendering is a technique used to improve the performance of JavaScript templating systems (such as React)" can only be understood as a joke.

Really? A technique invented for JavaScript?!?


Yes, they could have rewritten their stack and used different technology on server and client.

Instead, this blog post shows they made small changes that resulted in a much better performing site with less server resources needed.

The idea of dropping server side rendering if the site is temporarily overloaded is a good one that you can't do if it's built in Java, .Net, or Go.

A fork exec web server is not convoluted.


There has been way too much of this nonsense in front-end development. Putting lots of effort into building client-rendered web apps is already weird enough. And now they want to server-render apps that are supposed to be client rendered? Which raises the question: why not simply choose traditional server-rendered apps?


The amount of damage you could do with simple string interpolation of HTML/JS/CSS source provided by a plain-ass HTTP server is pretty remarkable if you can use your imagination for 5 seconds.

Getting the desired plaintext documents across the network has never been such a clusterfuck in my experience.


Client side rendering is needed for apps that work offline.

SSR improves the user experience for first time use of those apps and enables SEO.

What you say is subjective, and I somewhat agree with your opinion, but only for web sites, not web apps.

Overall I agree Yelp should have stuck with the existing system as their product functions as a site.


> Client side rendering is needed for apps that work offline.

Any .html page works offline. You don't need any JS or framework for that. If your JS-powered page doesn't work offline, it means either it requires online connectivity to solve problems, or it's badly designed and does not respect "progressive enhancement" principles.

> SSR improves the user experience for first time use of those app

SSR improves UX for everyone. Seriously, most "web" pages these days take dozens of seconds to load and use a non-negligible percentage of our CPU/RAM. If you want to know what real-world conditions look like for literally over a billion people, run tests from a core2duo (or a similar VM) with 2GB RAM with simulated 10% packet loss and 1Mbit/s bandwidth.


I'm talking specifically about SPAs, I feel I was clear about that.

Yes, people usually reach for an SPA when they just need a site, but SPAs have their uses.


> Client side rendering is needed for apps that work offline.

Couldn't you build a native application and NOT have this problem? It seems like yet another self-inflicted problem.


Sometimes you don't have the resources to build a native app, or you need a web app specifically, or you need both.

What's with others telling people what technology is best for them when they aren't in the scenario they're in?


> They could have built this in Java, .Net, Go, really any shared memory concurrency runtime and threading, and would not run into any of these issues.

Right, they'd have brand new issues because they were dealing with shared memory.


Like what? Server-side frameworks have existed for decades. It's not difficult to run, and modern languages make concurrency very easy.

JS doesn't have an advantage here, it's just limited to being single-threaded.


Are you genuinely telling me there are no issues with sharing memory between different threads?

Node.js is 13 years old now. I get that a lot of people think it's some new thing that only dumb frontend programmers who only know JS use, back in my day, grumble grumble. But at this point it's established, mature technology, and ES6 is an expressive language (much, much more so than Go, C# or Java). The multi-process concurrency model is sometimes a pain in the ass and sometimes just what you want.

I've used ASP.NET and node.js in anger, have you? Or do you just attack it because you hate the JS of 2010?


> Node.js is 13 years old now.

> But at this point it's established, mature technology

I have to disagree. Day to day I maintain a variety of node.js, scala and kotlin services. By far, the most gotchas have been from the node.js services.

Simple things like validating incoming payloads are made ridiculously complex. You can get a library to do that, but which do you use? And then most of them don't export the typescript type signature so you don't get any type safety unless you re-define the type in typescript.

Let's say you want a tiered caching library? I've spent weeks of my life debugging concurrency issues because the memory store shared objects whereas the redis store didn't (because they were serialised and sent to redis). That's on the most voted caching library for node.

The quality of node.js libraries in general is far below that of scala/kotlin, and the maintenance cost is way higher. You can't easily tell if signatures for the apis of the libraries you import have changed as well. The ecosystem is still moving very fast, and while I agree that es6+ makes it a much better language, it causes its own issues when interacting with older code/libraries.

Expressive as the language may be, in production systems it leads to poorly understood code because of all the ways to do simple things, and the gotchas associated (something i believe it shares with scala).

> The multi process concurrency model is sometimes a pain in the ass and sometimes just want you want.

Give elixir a serious go and see if you ever say this again. Concurrency should be managed, much like garbage collection has been for 30 years.


> Simple things like validating incoming payloads are made ridiculously complex. You can get a library to do that, but which do you use? And then most of them don't export the typescript type signature so you don't get any type safety unless you re-define the type in typescript.

I strongly recommend Zod[0]. You write your schemas in code, it validates incoming JSON at runtime, and you end up with either an error or well typed code. The schemas themselves are powerful (you can check properties of strings, etc), and the derived typescript type comes out for free and looks like a human wrote it. A very powerful tool, with a very intuitive interface.

> Give elixir a serious go and see if you ever say this again.

Ah you hit me where it hurts - really need to seriously try the Erlang ecosystem.

[0] https://github.com/colinhacks/zod


Appreciate the tip on Zod :) It looks like what I've been looking for!


> "Are you genuinely telling me there are no issues with sharing memory between different threads?"

Where did I say that?

I use C#, Go, Java, and lots of JS/Typescript. I like them all. I find C# far more expressive than ES6, but that's just subjective preference. The point is that the vast majority of web frameworks simplify everything to a URL route that runs some backend logic and returns some response (HTML/JSON/etc). Requests are already well isolated and you don't need to worry about threads and memory.

I can't even think of what shared memory issues a typical website like Yelp would have. Can you provide an example?

However if you do need to worry about complex multithreading, memory access and concurrency, then Node is a poor choice. The other language stacks are not only faster but have the proper data structures and ergonomics to handle it while Node/JS is single-threaded, requiring more work and creating more bugs.


I remember from using Tapestry (a Java framework) in the day, it had really aggressive in-memory caching, and you had to be really careful to disable it for components which rendered personalised or user data. We had some huge privacy leaks (which luckily we caught quickly) because we didn't realise it had this behaviour, as it didn't generally show up in local development.


Yes this this this this.

SSR ... you mean like PHP did it ... and most all of the tech did it before.

How did this even make it to Hnews...


This is a fun blast from the past.

My dad was one of the "original six" engineers that worked on Iridium within Motorola, so my 90s childhood was filled with Iridium posters and satellite footprint tracking software.

I've mentioned it previously here when talking about SpaceX and Starlink. Iridium launched with satellite-to-satellite links. None of this "bent pipe" stuff where the satellite can only route between itself and a ground station in its same coverage area. As long as you could talk to an Iridium satellite, your data could be routed _in orbit_ to its destination along those inter-satellite links.


Iridium was nothing short of an engineering marvel. Your dad must be a pretty badass engineer.


You arguably can't build something like Iridium or Starlink without it. The thing that makes me question Starlink was the choice of optical sat-to-sat links.

There are other issues with minimum spot beam sizes that make me wonder if Starlink can make money, but that choice alone made me ask a whole bunch of questions right out of the gate.


Clearly there is a difference between the things you need to do to deliver text messages and telephony and the things you need to deliver useful internet connectivity.


You cannot deliver meaningful over-the-ocean coverage without it. Starlink was designed to have sat-to-sat routing using optical links, which is... somewhat nonsensical.

