Hotwire: HTML over the Wire (hotwire.dev)
1073 points by samename 26 days ago | 545 comments



Okay this is a bit meta, but the whole cluster of "everything old is new again", "the pendulum of fashion has swung", "nothing new under the sun" takes is ignoring what tends to drive this sort of change: relative costs.

The allure of xmlhttprequest was that over connections much slower than today's, and with much less powerful desktop computers, a user didn't have to wait for the whole page to redownload and re-render after every single user interaction (one can argue that focusing on better HTTP caching on the server and client might have been smarter). This was also much of the draw of using frames (which were also attractive for some front-end design use-cases later re-solved with CSS).

As apps got more complex, clients got more compute, bandwidth grew, and web audiences grew, offloading much of the page rendering to the client helped both contain server-side costs and maintain or improve responsiveness to user interactions.

Now, desktop client performance improvement is slowing (not just because chips improve more slowly, but also because computers are replaced less frequently), while average bandwidth and app complexity and sophistication continue to grow. With server compute cost falling faster than audience size grows, shifting HTML rendering back to the server and sending more verbose pre-rendered HTML fragments over the wire can make sense as a way of giving users a better experience.


> The allure of xmlhttprequest was that over connections much slower than today

As someone who implemented a SPA framework prior to "SPA" being a word much less React or Angular, I have to say for my company, it was all about state management.

Distinguishing between web apps (true applications in the browser), and web pages (NYT, SEO, generally static content), state management was very hellish at the time (~2009).

Before that, pages were entirely server rendered, and JavaScript was so terrible thanks to IE and a single error message (null is null or not an object) that it was deemed insanity to use it for anything more than form validation.

However, with the advent of V8, it became apparent as an ASP.NET developer that a bad language executing at JIT speeds on the browser was "good enough" to not send state back and forth through a very complex mesh of cookies, querystring parameters, server-side sessions, and form submissions.

If state could be kept in one place, that more than justified shifting all the logic to the client for complex apps.


I don't know about you, but "cookies, querystring parameters, server-side sessions, and form submissions" to me are an order of magnitude simpler, though dated and not very flexible, than any modern JS client-side state and persistency layer.


Form submissions are brutally bad the moment a back button comes in. I remember so many "The client used the back button in a multi-page form and part of the form disappeared for them" bugs.


They're not if coded correctly and using the correct redirect/HTTP response code.


Yeah. I remember dozens of edge cases that involved errors and back buttons and flash scope and on at least one occasion, jquery plugins.

Or back buttons and CSRF tokens and flash scope...

Or, let's talk about a common use case. Someone starts filling in a form, and then they need to look at another page to get more information. (This other page may take too long to load and isn't worth putting in the workflow, or duplicating the information in both places was cut from scope.) So, they go out to another page, then back, and are flustered because they were partway through the work.

So, if you want this to work, you're going to need state management in the client anyway. (Usually using SessionStorage these days, I'd presume?) So, then, we've already done part of the work for state management. You are then playing the "which is right, the server or the client" game.
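A rough sketch of that client-side stash (the form id, storage key and field handling here are illustrative assumptions, not anyone's actual code):

    // Restore a draft of the form when the page loads, and save it as the user types.
    const form = document.querySelector("#signup-form"); // hypothetical form id
    const KEY = "signup-form-draft";

    const draft = JSON.parse(sessionStorage.getItem(KEY) || "{}");
    for (const [name, value] of Object.entries(draft)) {
      if (form.elements[name]) form.elements[name].value = value;
    }

    form.addEventListener("input", () => {
      sessionStorage.setItem(
        KEY,
        JSON.stringify(Object.fromEntries(new FormData(form)))
      );
    });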

You accumulate enough edge cases and UX tweaks, and you're halfway down the SPA requirements anyway.

Now, hopefully Hotwire will solve a large number of these problems. I'm going to play with it, but the SPA approaches have solved so many of the edge cases via code and patterns.


> but the SPA approaches have solved so many of the edge cases via code and patterns.

Part of the problem has also been ameliorated by larger screens and browser tabs.


Yes. You redirect after POST.
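A minimal sketch of that Post/Redirect/Get pattern, using Express purely for illustration (saveOrder/findOrder are hypothetical helpers):

    const express = require("express");
    const app = express();
    app.use(express.urlencoded({ extended: true }));

    app.post("/orders", (req, res) => {
      const order = saveOrder(req.body); // hypothetical persistence helper
      // 303 See Other makes the browser follow up with a GET,
      // so refresh/back won't re-submit the POST.
      res.redirect(303, "/orders/" + order.id);
    });

    app.get("/orders/:id", (req, res) => {
      // assumes a view engine is configured; findOrder is a hypothetical lookup
      res.render("order", { order: findOrder(req.params.id) });
    });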


See: mobx


> If state could be kept in one place, that more than justified shifting all the logic to the client for complex apps.

Reminds me of saving Rich Text content on the server side. It was a nightmare.

Also reminds me of the Microsoft RTF format. It's basically a memory dump of the GUI editor.

State bound to a tree was never a good idea to start with.


I don't think app complexity and sophistication grew that much. Most of the problems common apps are solving can be dealt with standard CRUD-like interfaces and old boring tech and it works just fine.

I think what drives this crazy train of overengineered solutions (SPAs, K8s for hosting a single static page) is the deep separation of engineers from the actual business problems and the people they are trying to help. When all you have are tickets in Jira or Trello, and you don't know why you should do them or whether they actually benefit someone, it's natural to invent non-existent tech problems which are suddenly interesting to solve. That is natural for curious engineers and builders. Then mix in the 1% of big apps and companies which actually do have these tech problems and have to solve them, and everybody just wants to be like them and starts cargo culting.


Curious what you make of this:

I recently wrote a SPA (in React) that, in my opinion, would have been better suited as a server-side rendered site with a little vanilla js sprinkled on top. In terms of both performance and development effort.

The reason? The other part of the product is an app, which is written in React Native, so this kept a similar tech stack. The server component is node, for the same reason. And the app is React Native in order to be cross-platform. We have ended up sharing very little code between the two, but using the same tech everywhere has been nice, in a small org where everyone does everything.


Agree. The transformation of software into ever-smaller and more specialised 'ticketing' is as much a result of managerialism encroaching in. Basecamp have this bit in their Shape Up handbook about responsibility and how it affects their approach to software:

----------

Making teams responsible

Third, we give full responsibility to a small integrated team of designers and programmers. They define their own tasks, make adjustments to the scope, and work together to build vertical slices of the product one at a time. This is completely different from other methodologies, where managers chop up the work and programmers act like ticket-takers.

Together, these concepts form a virtuous circle. When teams are more autonomous, senior people can spend less time managing them. With less time spent on management, senior people can shape up better projects. When projects are better shaped, teams have clearer boundaries and so can work more autonomously.

----------


You've just explained the adoption of microservices in the enterprise.

I wonder if other industries suffer from the same problem, bored engineers.


I've never seen one of these "logic in HTML attributes" systems take error checking seriously. In Stimulus they start to mention it in "Designing For Resilience" (though only for feature-checking), but in "Working With External Resources", where it makes network/IO-bound calls, they never mention how to handle errors or whether the framework just leaves it up to you. Stimulus is also where you write your own JS code, so I guess you could handle it yourself, but when I skimmed the Turbo handbook I found no mention of what errors to handle or how (or even what happens when Turbo gets one), and when loading stuff over the network that is pretty much crucial.

From the turbo handbook: "An application visit always issues a network request. When the response arrives, Turbo Drive renders its HTML and completes the visit." Using the phrase "When the response arrives" raises the question of what happens if it doesn't arrive, or if it takes a minute to arrive, or if it arrives with a faulty status code.


Counterpoint: is there any error handling in the majority of SPAs today? From my experience, SPAs can crap out in all kinds of interesting ways when the underlying network connection is flaky and I often end up stuck on some kind of spinner that will never complete (nor give me a way to abort & retry the operation when I already know it won't complete and don't want to wait for the ~30-second timeout, if there is a timeout even).

Not saying this is better from an error handling perspective, but at least the whole idea of Hotwire and its peers (Turbolinks, etc) is that there is no state and it should thus be safer and quicker to reload the page should things go wrong.


I agree that most SPA apps do it badly too, but hiding the opportunity to do it well certainly does not help.

> there is no state and it should thus be safer and quicker to reload the page should things go wrong.

That's not exactly true, since there are non-idempotent HTTP methods, and while the browser will prompt you before resending a non-idempotent request when refreshing a normal form POST, I don't think that turbo/turbolinks/similar will prompt or resend.

On refresh should turbo retry a POST? The "right way" is to keep the state of the last POST and prompt the user for confirmation, but it seems like it is undocumented as to what it does. I'm guessing it either does not retry or it retries and hopes effect will be idempotent.

No one (SPAs, traditional webpages and "spiced" webpages like this included) is doing everything right, but my objection to this framework is that it seems to try to say things are simple or easy when they clearly aren't.


You're correct in that the only standards-based way to retain a POST in the session history is to not disturb an existing entry. However:

> it seems to try to say things are simple or easy

That's an unfair mis-characterisation. The developers are not pitching a universal panacea that solves all your problems and handles every edge case. They are offering an architecture that simplifies many common scenarios, and one that is thoroughly developer-friendly when it comes to supplying observability and integration hooks for edge cases.

For this latter purpose it merely remains to bother with reading the (clean & elegant) source code to enlighten oneself.

> it seems like it is undocumented

On the contrary, the behavior w.r.t full-page replacement on non-idempotent verbs is extensively discussed in the Turbolinks repo.

The "Turbo Drive" component appears to me as essentially unchanged behaviour in Turbo 7.0.0beta1 from Turbolinks versions 5.x. Turbolinks was introduced in 2013, has many years of pedigree and online discussion, and is well understood by a large developer community. Turbolinks was always maintained, even being ported to TypeScript (from the now venerable CoffeeScript) ca. two years ago with no change in behaviour. Turbo Drive is, practically, just a slightly refactored rebrand of the TypeScript port.

The stuff everyone is so excited about are Turbo Frames and Turbo Streams. These are new, and may be used without adopting Turbo Drive: as with practically everything from Basecamp, the toolkit is omakase with substitutions. They are, nevertheless, complementary, so you get all three steak knives in one box.


I believe the only place you'd use a POST with Turbolinks is in response to an explicit user action like pressing a button. In this case, if it fails, you'd refresh the root page (which embeds the button) at which point the state of that page would reflect whatever the server has, so it would display the new data or may not even have the button anymore if the initial POST actually did make it to the server.


Facebook does this constantly for me. It's a crapshoot whether I'll be able to open notifications or messages without a couple of refreshes, or if I'll just get the fake empty circle loading UI indefinitely until I hit F5.


This is my experience in Reddit generally these days on mobile Safari


This combined with them intentionally breaking the mobile web experience has almost entirely stopped me from using Reddit.


I was trying to quit Reddit for years and their intentional breaking of mobile web (as well as making email address required even for already existing accounts) is what finally enabled me to.

Of course now I just go on Hacker News and Twitter instead.


I refresh SPA apps more than other apps because of these problems.


Me too. However, this also doesn't work properly on a lot of SPAs xD


Also the app looking fine immediately after refresh (when it's been server-side rendered), then crashes a second later when the JS framework hydrates the HTML and hits a client-side bug.


Which I find frustrating because literally the only reason I find compelling for making an SPA in the first place is to deal with flaky networking situations.

If I know the network is always there, why bother.


A very good point! Presumably the appeal of a system like this is the potential for graceful degradation where if sockets aren’t working or some requests are failing then the default html behavior should still work: links will just take you to the original destination, but there’s no indication that this is actually what happens.


This is an isomorphic fetch. The original href already is the visited URL, so I'm not sure that trying that again is wise, or appropriate, unless the user chooses to reload.

The entire design philosophy here is to mimic apparent browser behaviour, or to delegate to it. Hence, to GP's question; you should expect the appearance of browser-like behaviour in any circumstance, modulo anything Turbo is specifically trying to do different. Deviation from baseline browser semantics was certainly a basis for filing bugs in its predecessor (Turbolinks).

As for what Turbo actually does, I checked the source. Good news, even for a first beta, they're not the cowboy nitwits alleged; it gracefully handles & distinguishes between broken visits and error-coded but otherwise normal content responses, and the state machine has a full set of hooks, incl. back to other JS/workers, busy-state element selectors, and the handy CSS progress bar carries over from Turbolinks.


I’ve used intercooler with browser-side routing, and the strategy for error recovery that makes sense in that context is “if something goes wrong, reload the page”: the server is designed to be able to render the whole page or arbitrary subsets, so reloading should usually be safe.


in htmx we trigger an htmx:responseerror:

https://htmx.org/events/#htmx:responseError

in general, the right approach in HTML-oriented, declarative libraries appears to be triggering error events and allowing the client to handle them, since it is too hard to generalize what they would want
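something like this, presumably (a sketch; it assumes the event detail exposes the triggering element and the xhr, per the htmx event docs linked above):

    document.body.addEventListener("htmx:responseError", (evt) => {
      // evt.detail.elt is the element that triggered the request,
      // evt.detail.xhr is the failed response.
      evt.detail.elt.insertAdjacentHTML(
        "afterend",
        `<div class="error">Request failed (${evt.detail.xhr.status}). Please retry.</div>`
      );
    });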


Yes!

1. What if something goes wrong?

2. How do I test for handling success/error?

They never address this stuff.


They do already address error handling. GP is shooting the breeze here, evidently has no specific knowledge of Turbolinks family API or behaviour.


...also, what about when response/request "items" are not handled in order due to load? I once wrote a real-time application with that pattern (HTML over AJAX). It worked, but it was not enjoyable at all. Also, literally every larger feature change would break the code because you had all these weird corner cases.


As others have noted, seems reasonably similar to LiveView, Livewire and Blazor. I’m somewhat bullish on these approaches - server side rendered monoliths (Rails, Django, etc.) are SO productive, at least for the first few years of development, but lack of interactivity is a big issue, and this solves it well.

However, another big issue is the dominance of mobile. More and more, you’ve got 2-3 frontends (web and cross-platform mobile, or explicitly web, iOS and Android), and you want to power them all with the same backend. RESTful APIs serving up JSON works for all 3, as does GraphQL (not a fan, but many are). This however is totally web-specific - you’ll end up building REST APIs and mobile apps anyways, so the productivity gains end up way smaller, possibly even net negative. Mobile is a big part of why SPAs have dominated - you use the same backend and overall approach/architecture for web and mobile.

I’d strongly consider this for a web-only product, but that’s becoming more and more rare.


> I’d strongly consider this for a web-only product, but that’s becoming more and more rare.

They have accompanying https://github.com/hotwired/turbo-ios and https://github.com/hotwired/turbo-android projects to bridge the gap.


Everyone who is talking about how the route the industry took with SPAs was just a silly mistake, and that we should go back to the good old days of PHP are forgetting that at the end of the day the most important thing is to choose the best tool for the job at hand.

This, while very interesting and might have a preferable set of constraints for some projects, is simply not a good fit for many others, as you mentioned in your comment. This looks amazing, and I would definitely try it for a project in which it would fit, but I don't really see a reason to disparage the work others have been doing over the past decade. We need those other tools too!

(sorry for the rant)


In the case of Rails, if you're happy with a RESTful API it handles serving different kinds of content such as JSON pretty seamlessly via the respond_to method, i.e. if you want JSON ask for JSON; if you want rendered HTML ask for that.

However I think that for iOS they're still offering server side rendering via Turbo-iOS and Turbo-Android so you can build quickly and then replace that later if you need to.


^ extremely underrated comment.

This is one of the primary promises of MVC in the first place: views can be rendered independently of controllers and models. For a given controller method call, a view can be specified as a parameter.

In this case, swap "view" for JSON sent back over the wire...


This! I am not a fan of SPA everything. This looks like a great framework for web only. But as you said, what about mobile development? Sure, they are still going to publish Strada. But will it also work, for example, with Flutter without additional friction? How about services that need to consume output from each other? I believe parsing HTML is more expensive than JSON.


> More and more, you’ve got 2-3 frontends (web and cross-platform mobile, or explicitly web, iOS and Android), and you want to power them all with the same backend.

> RESTful APIs serving up JSON works for all 3, as does GraphQL [...]. This however is totally web-specific - you’ll end up building REST APIs and mobile apps anyways, so the productivity gains end up way smaller, possibly even net negative.

I bet someone will produce a native client library that receives rendered SPA HTML fragments and pretends it's a JSON response. They might even name it something ironic like "Horror" or "Cringe".

That said, an ideal API for desktop web apps looks rather different than one for mobile web or native clients. Basically, for mobile you want to minimize the number of requests because of latency (so larger infodumps rather than many small updates) and minimize the size of responses due to bandwidth limitations and cost (so concise formats like Protocol Buffers rather than JSON).

It is definitely possible to accommodate both sets of requirements at the same API endpoint, but pretending that having a common endpoint implies anything else about the tech stack is rather disingenuous. If you want server-side rendering and an API that delivers HTML fragments instead of PB or JSON, that can be done too.


Isn't GraphQL supposed to solve this problem? You have one GraphQL API and each client requests only the information it needs. Maybe the responses are still JSON but I would think you would come very close to an API that serves all the clients.


Even without GraphQL, you can accommodate both sets of needs. I said as much. I'm also saying that the argument about the user-facing tech stack is bogus.


I can't think of a worse idea than telling people their mobile app should be scraping their desktop page for data.


Not the page, the API (which returns HTML fragments for incorporating into the page).

And if you can't think of anything worse, you're not trying very hard.


For mobile, they used Turbolinks to release and maintain iOS and Android Basecamp apps based on minimal OS-specific chrome code and back-end web-based "pages".

Really, an incredible bang for the buck.


> RESTful APIs serving up JSON works for all 3, as does GraphQL (not a fan, but many are). This however is totally web-specific

HTML is a machine-readable format, like XML and JSON. Have your back end represent a given resource as microformatted-semantic markup, send it gzipped over the wire, and you've got the data exchange you need, even if your mobile app isn't already a dressed-up webview.
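A sketch of what consuming such markup as data might look like on a JS client (the class names follow microformats2 conventions; the fragment itself is made up for illustration):

    const html = `
      <article class="h-product" data-id="42">
        <h1 class="p-name">Ergonomic Keyboard</h1>
        <data class="p-price" value="129.00">$129</data>
      </article>`;

    const doc = new DOMParser().parseFromString(html, "text/html");
    const el = doc.querySelector(".h-product");
    const product = {
      id: el.dataset.id,
      name: el.querySelector(".p-name").textContent.trim(),
      price: parseFloat(el.querySelector(".p-price").getAttribute("value")),
    };
    // product => { id: "42", name: "Ergonomic Keyboard", price: 129 }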


Are you still referring to dedicated API routes, or are you talking about annotating your UI to the point where it can serve as the API as well? I remember the latter being the vision behind things like RDFa, but those approaches never took off, for a variety of reasons.


> annotating your UI to the point where it can serve as the API as well

At that point you might as well serve XML and use an XSLT transform (+ CSS) to render the view on the client (yes, this is still possible without JavaScript).


Either. Or both. :)

Generally the projects I've felt best about have two features:

1) The API knows how to represent resources across multiple media types, usually including at least markup and JSON.

2) UI is well-annotated enough that developers and machines find it easy to orient themselves and find data.

But you're quite right that this isn't common. I have my own guesses on the reasons why. My observation's been that the workflow and stakeholder decision making process on the UI side places semantic annotation pretty low on the priority side; most places you're lucky if you can get a style guide and visual UI system adopted. And there has to be cooperation and buy-in at that level in order for there to be much incentive to engineer and use a model/API-level way of systematic representing entities as HTML, which often won't happen.

And TBH it is extra effort.


We gotta wait and see what Strada has in store. Looks like Basecamp and Hey mobile apps are fairly good.


What's old is new again. I recall ASP.NET had some interesting tech around this in the 2000s where it could dynamically update parts of the page.

If I recall correctly, this made use of that new technology of the time called "XMLHttpRequest" (/s) which pretty much jump-started web 2.0.


My thought exactly, though I fully support that. I often rant about how the modern web is billions of layers of duck tape over duck tape and has become an unmanageable mess of libraries, frameworks and resources, all while javascript remains the most outrageous and absurd language ever created. I'm by no means a fan of rails or ruby for that matter, but I think things like these are a considerably better alternative to all the ridiculous libraries and frameworks everyone uses, which result in megabytes of javascript and require corporate-grade bandwidth, at least an 8th-gen i7 and at least 8gb of memory to open. And all that to open a website which has 3 images and a contact form. I mean, someone should create a package that analyzes websites and creates a minimum requirements manifest. It's good to see that there are people who are trying to bring some sanity.


Preach! Websites don’t seem all that much better to me than they did 10 years ago [^fn], so what are we gaining with all these much more complex and fragile tools?

[fn]: Arguably, the web is worse with chat bots, sticky headers, and modals constantly vying for your attention.


> Arguably, the web is worse with chat bots, sticky headers, and modals constantly vying for your attention.

We can blame this on the MBA types. I've literally never heard a software engineer say "hey, let's make this pop-up after they've already been looking at the page for a minute!" or anything like it.


Engineers typically aren’t tasked with increasing revenue/engagement.


Unfortunately I have to disagree - if there weren't any engineers around to implement the dark patterns they wouldn't be as prevalent. Maybe this calls for an equivalent of the Hippocratic Oath but in the tech world?


Brick layers. Developers develop, designers design was more my point, though of course this line is blurred in many organisations.


There is plenty of duck tape, yes,

But there are surprisingly few layers on layers. Part of what has been amazing about the web is that the target remains the same. There is the DOM. Everyone is trying different ways to build & update the DOM.

Agreed that there are better alternatives than a lot of what is out there. We seem to be in a mass consolidation, focusing around a couple very popular systems. I am glad to see folks like Github presenting some of the better alternatives, such as their Catalyst tools[1] which speed up (developer-wise & page-wise (via "Actions") both) & give some patterns for building WebComponents.

The web has been an unimaginably stable platform for building things, and has retained its spirit while allowing hundreds of different architectures for how things get built. Yes, we can make a mess of our architectures. Yes, humanity can over-consume resources. But we can also, often, do it right, and we can learn & evolve, as we have done, over the past 30 years we've had with the web.

[1] https://github.github.io/catalyst/


If by surprisingly little, you mean 4 pages and 500mb of requirements for a "hello world" project with the "modern" web, then yes. The DOM has always been a mess, much like javascript. And the fact that no one has tried to do something about it contributes to the mountains of duck tape. It was bad enough when angular showed up, but when all the other mumbo jumbo showed up, like react, vue, webpack and whatnot, is when it all went south. I refuse to offend compilers and call this "compiling", but the fact that npm takes the same amount of time to "compile" its gibberish as rustc takes to compile a large project (with the painfully slow compilation that comes with rust by design) is a clear indication that something is utterly wrong.


Again, you are attributing to the web what a pop-culture is doing with it.

While willfully ignoring all the people doing better.

Maybe we are- as you fear- stuck, forever, in thick JS-to-JS transpilers & massive bundles & heavy frameworks. Maybe. I don't think so.


> If by surprisingly little, you mean 4 pages and 500mb of requirements for a "hello world" project with the "modern" web, then yes.

React is well under 20k.

FWIW when optimizing my SPA, my largest "oops" in regards to size were an unoptimized header image, and improperly specified web fonts.

There are some bloated Javascript libraries out there, yes. But if you dig into them you will often find that they are bloated because someone pulled in a bunch of binary (or SVG...) assets.


Ah react, the biggest crap of them all. A 12 year old with basic programming skills is definitely capable of designing a better "framework". Yes, everything frontend is quoted because it's nothing more than a joke at this point. Back to react, and ignoring all the underlying problems coming from the pile of crap that is js, let's kick things off with jsx. The fact that someone developed their own syntax (I'm not sure if I should call it syntax or markup or something else) makes it idiotic: it's another step added to the gibberish generation. It's full of esoteric patterns and life cycles which don't exist anywhere in the real cs world. React alone provides little to nothing, so again you need to add another bucket of 1000 packages to make it work. Compare it to a solid backend framework that isn't js: all the ones I've ever used come with batteries included. The concept of components and the idiotic life cycles turn your codebase into an unmanageable pile of callbacks, and sooner rather than later you have no clue what's coming from where. Going into the size, the simple counter example on the react page is 400kb. Do I need to explain how much stuff can be packed in 400kb? For comparison, I had tetris on my i486 in the early 90's which was less than 100kb, chess was a little over 150kb. Christ, there was a post here on HN about a guy who packed a fully executable snake game into a qr code.


> Ah react, the biggest crap of them all. A 12 year old with basic programming skills is definitely capable of designing a better "framework".

You're literally a stereotypical Hacker News commenter. I also find the modern frontend a bit too complicated but this is just an unreasonable statement.


You'd be right if I had not given arguments for my statement. I did as a matter of fact.


> let's kick things off with jsx.

Of all the problems I have with React, and I do have a few, JSX is not one of them.

If you are going to be using a language to generate HTML, you are either going with a component approach that wraps HTML in some object library that then spits out HTML, or you are stuck with a templating language of some sort. (Or string concatenation, but I refuse to consider that a valid choice for non-trivial use cases.)

JSX is a minimal templating language on top of HTML. Do I think effects are weird and am I very annoyed at how they are declaration order dependent? Yup. But the lifecycle stuff is not that weird, or at least the latest revision of it isn't (earlier editions... eh...). The idea of triggering an action when a page is done loading has been around for a very long time, and that maps rather well to React's lifecycle events.

> React alone provides little to nothing

Throw in a routing library, and you are pretty much done.

Now another issue I do have is that people think React optimizes things that it in fact does not, so components end up being re-rendered again and again. Throw Redux in there and it is easy to have 100ms latency per key press. Super easy to do, and avoiding that pitfall involves understanding quite a few topics, which is unfortunate. The default path shouldn't lead to bad performance.
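To make that re-render pitfall concrete, here is a small sketch (component and prop names are illustrative): every keystroke updates parent state, and without memoization the list below it redoes its render work on each one.

    import React, { useState, memo } from "react";

    // Re-renders only when `items` actually changes (shallow prop compare).
    const ExpensiveList = memo(function ExpensiveList({ items }) {
      return <ul>{items.map(i => <li key={i.id}>{i.label}</li>)}</ul>;
    });

    function Search({ items }) {
      const [query, setQuery] = useState("");
      return (
        <>
          <input value={query} onChange={e => setQuery(e.target.value)} />
          <ExpensiveList items={items} />
        </>
      );
    }

Without the memo() wrapper (or with an items array rebuilt on every render), the whole list re-renders per key press, which is how the latencies above creep in.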

> The concept of components and the idiotic life cycles

Page loads, network request is made. Before React people had listeners on DOM and Window events instead, no different.

Components are nice if kept short and sweet. "This bit of HTML shows an image and its description" is useful.

> Do I need to explain how much stuff can be packed in 400kb?

No, I've worked on embedded systems, I realize how much of a gigantic waste everything web is. But making tight and small React apps is perfectly possible.

And yes, if you pull in a giant UI component library things will balloon in size. It is a common beginner mistake, I made it myself when I first started out. Then I realized it is easier for me to just write whatever small set of components I need myself, and I dropped 60% of my bundle app size.

In comparison, doing shit on the backend involves:

1. Writing logic in one language that will generate HTML and Javascript.

2. Debugging the HTML and Javascript generated in #1.

And then someone goes "hey you know what's a great idea? Let's put state on the back end again! And we'll wrap it up behind a bunch of abstractions so engineers can pretend it actually isn't on the back end!"

History repeats itself and all that.

SPAs exist for a reason. They are easier to develop and easier to think about. And like it or not, even trivial client side functionality, such as a date picker, requires Javascript (see: https://caniuse.com/input-datetime).

SPAs, once loaded, can be very fast and scaling the backend for an SPA is a much easier engineering task (not trivial, but easier than per user state).

Is all of web dev a dumpster fire? Of course it is. A 16 year old with VB6 back in 1999 was 10x more productive than the world's most amazing web front end developer now days. Give said 16yr old a copy of Access and they could replace 90% of modern day internally developed CRUD apps at a fraction of the cost. (Except mobile support and all that...)

But React isn't the source of the problem, or even a particularly bad bit of code.


jsx is a retarded idea because it adds an abstraction over something brutally simple (html). Abstractions are good when you are trying to make something complex user-friendly and simple.

> Throw in a routing library, and you are pretty much done.

Ok routing library, now make an http request please without involving more dependencies....

> Throw Redux in

See, exactly what I said: we are getting to the endless pages of dependencies.

> 100ms latency per key press

100ms latency??!?!?!? In my world 100ms are centuries.

> 1. Writing logic in one language that will generate HTML and Javascript 2. Debugging the HTML and Javascript generated in #1.

I don't have a problem with that. At the end of the day you know exactly what you want to achieve and what the output should be, whereas with react it's a guessing game each time. We are at a point where web "developers" wouldn't be able to tell you what html is. With server-side rendering, from a maintenance perspective you have the luxury of using grep, and of not relying on after-market add-ons, plugins and IDEs in order to find and change the class of a span.

The term SPA first came to my attention when I was in university over 10 years ago. My immediate thought was "this is retarded". Over a decade later, my opinion hasn't changed.


your posting is tribalistic & cruel & demeaning, it attacks and attacks and attacks. this is so hard to grapple with, so so aggressive & merciless & disrespectful. I beg you to reassess yourself. don't make people wade through such mudslinging. please. there's so few better ideas anywhere, & so much heaped up, so much muck you rake us through. please don't keep doing this horrible negative thing. it's so unjust & so brutally harsh.

> jsx is a retarded idea because it adds an abstraction over something brutally simple(html).

what programming languages do it better?

one of react's greatest boons, its greatest innovations, in my mind, is that it gave up the cargo-cult special-purpose templating languages that we had for almost two decades assumed we needed. it brought the key sensibility of php to javascript: that there was no need, no gain, in treating html as something special. it should be dealt with in the language, in the code.

if you have other places that have done a good job of being ripe for building html directly, without intermediation, as you seem to be a proponent of, let me/us know. jsx seems intimately closer to me to what you purport to ask for than almost any other language that has come before! your words are a vexing contradiction.

> Ok routing library, now make an http request please without involving more dependencies....

please stop being TERRIFIED of code. many routing libraries with dependencies are tiny. stop panicking that there is code. react router v6 for example is 2.9kb. why so afraid bro?

this is actually why the web is good. because there are many many many problems, but they are decoupled and a 2kB library builds a wonderful magical consistent & complete happy environment that proposes a good way of tackling the issues. you have to bring some architecture in but anyone can invent that architecture, the web platform is unopinionated ("principle of least power" x10,000,000), and the solutions tend towards tiny.

redux is 2kB with dependencies as well.


The only thing I'm openly disrespecting, beyond the unholy mess that is the web of the 21st century, is react and potentially its designers (I was raised not to be offended but if they are, good riddance). The web is in the worst shape it's ever been. I'm not terrified of code, I've been writing code for over 20 years. I hate antipatterns and spaghetti code, which is what the modern web is, top to bottom, frameworks and libraries included. The main idea behind javascript was to provide small and basic interactivity which is entirely self-contained. Is it now? The fact that the modern web relies on tons of known and unknown projects, complex ci/cd pipelines and gigabytes of my hard drive is a clear indication that the js community messed it up. Very similar situation to php, which was intended to be a templating engine (and for that purpose it is brilliant) but, just like js, was turned into Frankenstein's monster (still to a lesser degree). And don't get me started on the endless security issues npm poses. I'm blown away by the fact that those are exploited so little - 15 year old me would have been in heaven if given those opportunities, along with half of my classmates. I seriously wonder what teenagers do these days.


stop deleting flagged posts!! we will never ever improve if you keep deleting antithesis!!!!!! bad arguments are necessary!


> 100ms latency??!?!?!? In my world 100ms are centuries

Yup, that's crappy. The ease of it happening (the Work At A Startup page used to have this issue; may still, haven't looked lately) shows that it isn't hard to cause accidentally.

As I said, it is a weakness of the system.

> jsx is a retarded idea because it adds an abstraction over something brutally simple (html)

Have you seen how minimal of an abstraction jsx is? It is a simple rewrite to a JS function that spits out HTML, but JSX is super nice to write and more grep-able than the majority of other templating systems.
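For anyone unfamiliar, that "simple rewrite" looks roughly like this (classic React transform; `title` is just an assumed in-scope variable):

    import React from "react";
    const title = "Hello";

    // What you write with JSX:
    const withJsx = <div className="card"><h2>{title}</h2></div>;

    // What it compiles to, roughly:
    const withoutJsx = React.createElement(
      "div",
      { className: "card" },
      React.createElement("h2", null, title)
    );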

I have a predisposition to not liking templating systems, but JSX is the best part of React.

Notably it doesn't invent its own control flow language, unlike most competitors in this space.

> My immediate thought was "this is retarded".

Well the most famous SPA is gmail and it's rather popular, you may have heard of it. It is bloated now, but when it first debuted it was really good. Webmail sucked, then suddenly it didn't.

Google maps. Outlook web client. Pandora. Online chat rooms, In browser video chat, (now with cool positional sound!)

SPA just means you are just fetching the minimum needed data from the server to fulfill the user's request, instead of refetching the entire DOM.

They are inherently an optimization.

Non-SPAs can be slow bloated messes as well, e.g. the Expedia site.


appreciated your previous tweet but I am falling asleep here & none of your rebuttals feel like they address the topics raised in a genuine/direct fashion. so many stronger points to make against this


> jsx is a retarded idea

> My immediate thought was "this is retarded".

It's generally frowned upon to use retarded in this manner. Not only is it insulting to people, it brings down the overall tone of your argument.


npm does not compile, it is just a package manager. That said, I understand your frustration.


Just wanted to point out that it is called duct tape, just to avoid misunderstandings, since I had a similar spelling error as a non-native speaker :)


Interestingly, waterproof fabric-based tape was originally called "duck tape" (for its waterproof quality). The same kind of tape was later also called duct tape, but it's actually pretty terrible for ducts. You want to use the all-aluminum tape for ducts. https://www.mentalfloss.com/article/52151/it-duck-tape-or-du...


Oh wow, so there actually is something called Duck Tape. TIL


This error is understandable as there is a popular brand of duct tape called Duck Tape.


...which is named after cotton-duck material tape once used for repairing ducts.

https://en.wikipedia.org/wiki/Cotton_duck


"Duck tape" originally referred to tape made from duck cloth. They started using it for duct work, and began to call it "duct tape", to the point where "duck" fell out of common use and was able to be trademarked. They've also stopped using it for ducts.

So call it either one and people will know what you're talking about.


Check out Gaff Tape as a replacement for the duct-tape at home use-case.

And for actual ducts you'll want to use foil-tape because temperature changes wreck the adhesion of duct-tape, then the moisture leaks into the walls/ceiling which is $$$$ bad.


> And for actual ducts you'll want to use foil-tape because temperature changes wreck the adhesion of duct-tape, then the moisture leaks into the walls/ceiling which is $$$$ bad.

This strongly depends on the type of duct. Flex ducts that are a plastic skin over a wire coil don't work so well with aluminum tape.


Oh, yes. I'm referring to rigid steel/tin works only.


It's funny how culture shapes our designations of the same thing. In (especially US) English it's duct tape, as the product is primarily known for installation work, and it's often referred to as "duck tape" after its well-known brand, whereas in (German-speaking) Europe we also employ an English word for it, but call it "gaffer tape", for its usage by light operators in the event business (so-called gaffers).


Gaffers tape is a very different kind of tape similar only in appearance. It’s got a soft cotton content and is used to keep wires taped to things like rugs seamlessly so that people walking around your studio/tv-station/theatre etc don’t accidentally trip on it, potentially damaging the extremely expensive attached equipment in the process. If you’ve ever tried using “duct tape” as gaffers tape, you’d have a bad time, as it wouldn’t be that great at keeping wires down AND it’s likely to leave a residual adhesive on the floor when you take it off.

Duck tape is the stuff developed for the US army to waterproof things closed. Post-war, they made it silver instead of green and marketed it for use with ducts (since being waterproof made it also SEEM like a good candidate for the job in heating systems), but it's pretty terrible for this purpose since temperature changes degrade the adhesive rapidly.

The tape you actually want to use for ducts is foil-backed tape.

In short, it was and still is a great marketing gimmick, but Duck Tape was only ever “ok” at keeping things water proof and only looks like gaffers tape or the tape you want to use on ducts.


I don’t think gaffer’s tape and duct tape are the same thing. Gaffer’s tape needs to be easily removable, which duct tape generally is not.


I'm a native speaker.

I use a lot of Duck Tape.

From the little language study I've done, English is one of the most flexible. You can discard entire parts of speech and it still works.

Saying "'ey, you woke up yet?" is OK in many contexts.


Javascript as a language is actually pretty decent these days, your criticism probably applies more to certain parts of the ecosystem.


No, I'm talking about js as a whole. The standard library is crap and inconsistent; even the most basic naming conventions are not followed anywhere. The fact that the standard library jumps between camel case, pascal case, snake case and unicase at random is a perfect example. The list of absurdities is beyond ridiculous[1].

[1] https://github.com/denysdovhan/wtfjs
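A few concrete examples of the kind of thing [1] catalogs (all real JavaScript behavior):

    typeof NaN;          // "number"
    NaN === NaN;         // false
    [] + {};             // "[object Object]"
    0.1 + 0.2 === 0.3;   // false
    // Naming grab bag: encodeURIComponent, XMLHttpRequest, Number.parseFloat, Date.now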


Who on this thread was saying what's old is new? 100% of what you said has also been lobbed at PHP recently, and I've heard similar complaints about Perl and MS-SQL (that I remember well) and I'm sure others (one of those Delphi products too).


What's been strange to me though is I've heard JS advocates lobbing those criticisms at PHP, making the case for, say, why 'Node is awesome, PHP sucks'. Conflating a framework vs a language, then pointing out PHP 'issues' that also exist in JS... there's generally little point in trying to engage/correct at that point (context: primarily conference hallway conversations and meetup groups back when those actually happened).


JS is the new PHP. Part of the problem with massive popularity is that it also attracts lower-ability devs, and the ecosystem slowly degrades because of this. This cascades.


What standard library?


Surely your point could be made better without the hyperbole?

"Most outrageous and absurd language ever," "Megabytes of javascript", "corporate-grade bandwidth", "8th-gen i7 and 8GB of memory" to open "3 images and a contact form."

I'm sure you can find one or two poorly-optimized sites that have 2MB of javascript to download, but it's by no means the necessary outcome of using "ridiculous libraries and frameworks," and not even a particularly common one.


> it's by no means the necessary outcome of using "ridiculous libraries and frameworks," and not even a particularly common one

The real world disagrees with you; go check out any major website and observe as your laptop's fans spin up.

However I think the main problem here isn't the symptom (websites are bloated) but the root cause of the problem. I'm not sure if it's resume-driven-development by front-end developers or that they genuinely lost the skill of pure CSS & HTML but everyone seems to be pushing for React or some kind of SPA framework even when the entire website only needs a handful of pages with no dynamic content.


> one or two poorly-optimized sites

Try every old media site and most e-commerce.


Oh yeah like the 'new' reddit. It makes even the best machines cry.


AjaxContentPanels or something to that effect. Those things were a nightmare. At the time, asp.net pretended to be stateful by bundling up the entire state of the page into "ViewState" and passing it back and forth client to server. Getting that to work with those panels was more work than just ajax-ing the content and injecting it with jquery.

In the Microsoft-verse, this might also draw some comparisons to the more modern server-side blazor.


<UpdatePanel /> see here: https://docs.microsoft.com/en-us/dotnet/api/system.web.ui.up...

I used it 13 years ago. It was fancy.


Oh yeah. I remember that ViewState could reach 100s of KBs on a page if you weren't careful. It was a huge juggling act between keeping state in your input fields vs ViewState.


Glad I'm not the only one seeing that parallel. I'd be hesitant to use this for that reason, but maybe that's bias on my part? Just seems like you'd get stuck in a similar mess of "special" updatepanels aka hotwire frames that are trying to "save you from having to write javascript". Except it still uses javascript under the covers, so you still have whatever issues that may entail, only now it's more removed from the developer to be able to solve.


Interesting bit of history (and yes, I see the /s): XMLHttpRequest was actually invented by Microsoft in Internet Explorer because the Outlook team needed better responsiveness for the web email client.


Is this that web framework from Microsoft that hid the transaction-orientedness of HTTP from you by letting you set server-side click listeners on buttons and generated all the code needed to glue it all together? At the time, I didn't feel good about it because it abstracted away too much, and required Windows on the server. Little did I know about all the ways people would start abusing JS in 10 years.


jquery also had a function for doing swaps. Then there's Rails turbolinks, and now Phoenix LiveView in Elixir.
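(Presumably referring to jQuery's .load(), which fetches a URL and swaps the returned HTML, or a selected fragment of it, into the element; selectors here are illustrative:)

    // Replace the sidebar's contents with a fragment of the fetched page.
    $("#sidebar").load("/dashboard #sidebar-content");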

I don't know if it's so much 'everything old is new again' as it is a problem of market penetration.


good ol' ASP.NET 2.0


Yep, was using this approach in Java server-side frameworks 12 years ago.


webforms is dead, long live webforms.


We need something like XMLHttpRequest right now. Google should integrate Dart into Chrome.


This is so exciting to see, especially for older folk like me.

Almost 20 years ago, one of my professors told us before graduation that hot tech is mostly about the idea pendulum swinging back and forth. I immediately chalked it up to the snobbery of white wise men aged 65 and above.

However, this is exactly that. We started with static pages, then came Ajax and ASP.NET and the open source variants, then we went full SPA, and now we are moving back to server side because things are too complicated.

Obviously tech is different, better, more efficient but the overall idea seems to be the same.


I’m glad this technique is making a comeback. The last 10 years of JavaScript on the client have been an utter shit show that left me wondering wtf people were thinking.


You have the benefit of hindsight at this time. You can draw parallel to history of flight and all the crazy contraptions that people attempted. Great technology can emerge from the combination of numerous shit shows. The whole is greater than the sum of the parts.


> You have the benefit of hindsight at this time.

People have been pointing out it's a shit show with no end in sight for the entire duration of the phase. Pointing out the performance impact and cost to end users, how diabolical it is for those on higher latency or poorer network connectivity (i.e. most of the world), and so on.

Same thing as always happens with these pendulum swings, newer engineers come in convinced everyone before them is an idiot, are capable of building their new thing and hyping it up such that other newer engineers are sold on it while the "old guard" effectively says "please listen to me, there's good reasons why we don't do it this way" and get ignored. Worse, they'll get told they're wrong, only to be proven right all along.

I'm not denying there are obstructionist greybeard types that just refuse to acknowledge merits in new approaches, but any and all critique is written off as being cut from the same cloth.

It's perfectly possible to iterate on new ideas and approaches while not throwing away what we've spent decades learning ('Those who do not learn history are doomed to repeat it'), but tech just seems especially determined not to grow up.


I guess I've become a grey beard. I've done the whole journey from CGI everything to a bit of js to SPA. As much as I'd really like to be nostalgic about the good old days, there are reasons everything got pushed into the client. One of those reasons is maintaining state.

"HTML over the wire" isn't really a return to the good ol' days. It's still the client maintaining state and using tons of js to move data back and forth without page reloads. It just changes the nature of 1/2 the data and moves the burden of templating back to the server.

It is amusing that they make a claim that reads a lot like "eliminate 80% of your Javascript and replace it with Stimulus". What is Stimulus? A Javascript framework.


They mean JavaScript that you write.


I'm not a front end engineer, but it always seemed crazy to me. I remember testing out the Google Web Toolkit when it came out more than a decade ago, and the craziest thing about it to me wasn't the Java --> JavaScript compilation, it was that the server just dumped an empty page and filled everything in with JavaScript on the client.

Then, remember the awful awful #! URLs? Atrocious, and seemed like obviously a terrible idea from the start, yet they spread, and have mostly died, thankfully. But even with the lessons from these bad tech designs, new frameworks come out that repeat mistakes, yet get incredible hype.


Hashbang URLs are gone because of the PushState API, not because people have given up on the idea.


Hashbangs preclude even the option of the server seeing the state that the client wants from the first request, necessitating several hops. PushState, though it allows URL transitions without a full reload, is an entirely better and different idea.


Around the time that GWT came out, offshoring was a big thing. And most of the contractors only knew Java. Also Java was the trusted language and javascript was not.


The only big GWT project I've ever been on was a governmental project that I won't go into (because it's Danish and I would have to describe a bunch of stuff that everyone in Denmark knows and nobody outside would probably care about), but the company providing it was porting their Java version to JavaScript and had a significantly large codebase to leverage.


AWS used to rely on JWT for their consoles (haven't for a few years now, most folks migrated away some 5 years ago)

It's why they used to be horrendously bloated with large javascript bundles that took so long to process on the client side.

Roughly speaking the idea was "We don't have any Javascript developers, but we do have Java developers. JWT allows us to bridge that divide". Neat in theory, and an understandable decision, but diabolical in practice!


I believe you mean “GWT” and not “JWT”.


Yup you're right, that was quite a brainfart :)


Very true. I think that mostly, web dev mainstream has taken a rational path. It’s with the benefit of hindsight as you say, or the yoke of unusual requirements, that people now say ”we did it all wrong”.


except we could fly before ;D


> The last 10 years of JavaScript on the client have been an utter shit show

A fair number of people would disagree. I'd say it's advanced a lot, considering the limited role of JavaScript on the client in the past.


Same. And I'm still amazed that people loved JS so much they put it on the SERVER too! And now node/npm is everywhere.


I think it's less about loving JS so much, and more about not having any options on the client. If there were any better client options they might have won!


Server side javascript was one of its original intended uses.

Here's the Netscape Enterprise Server manual from 1998[1]. Sorry I couldn't find an earlier version.

1. https://docs.oracle.com/cd/E19957-01/816-6410-10/816-6410-10...


Node has a better runtime than ruby or python, so why not? And you also have TS.


The node runtime is actually pretty quick (It's roughly as fast as an optimizing compiler from 10-15 years ago, give or take, which is fairly impressive), but even TS is still makeup on a wart - you can't escape JS's more bizarre semantics that easily.


I think a huge amount of that success is because JS is cheap to hire, i.e. Node is fast enough so why bother using a good language (cynically).

Anecdotally, most people I see who really really love JavaScript are kids who haven't really done much else.


Yikes, what a condescending comment. I’d argue why this comment is beyond misguided but I’d be wasting my time.


Couldn’t have come soon enough. I’m exhausted


I’m glad I’m not the only one who thought this.


Yeah people shouldn't make applications in programming languages. If your application can't be made with html/css then it's bloatware. All this java, .net and C are totally unnecessary.


I don’t think the problem is with programming languages, but with JavaScript specifically, since it was never designed to be stretched this far. TypeScript is an improvement, but if you could write C or whatever on the client side and run it as easily as JS I think more people would go that route.


C/C++ and Rust actually run very fine client side nowadays. However, it's just more cumbersome to make UI in those languages, so it's mostly used to port existing libs or for performance.


Literally nobody is saying this


They are, look at the context of what they're saying. JS is simply a programming language in a VM like plenty of others; there's nothing inherently bad about it. But in every thread here there's uneducated hate for it, completely misunderstanding that html/css static pages don't solve the problems a programming language does.

I'd like to see these people make applications in pure XML. No programming.


No, the argument has never been "replace all JS with static HTML/CSS". The argument is "JavaScript frontends are becoming unnecessarily bloated, slow, and complicated, and we can do better". Solutions like the one Basecamp is proposing with Hotwire include pushing as much rendering logic as possible to the server, where you're using a language like Ruby for logic. Nobody thinks you can just remove all logic from a web application unless it's literally just static content.

And even with Hotwire, you're not getting rid of JavaScript entirely. You can write it with Stimulus. The idea is just that frontend web development has become a mess, and it's possible to simplify things.

> there's nothing inherently bad about it

Disagree.


“Bloated.” This is actually the opposite. Go into a language like Python and install NumPy (30 MB) and then come back to complain to me about a 2 MB JS bundle.

This argument is so bogus if you look at alternative language dependency sizes.


While there are issues on the Python side, I think it is quite unfair to use NumPy as an example.

My reasoning is that the NumPy project is meant for scientific and prototyping purposes, but many times people use it as a shortcut and include the whole thing in their project.

That being said, the quality in these packages does vary depending on who developed them. But I think this is a problem that exists with all languages where publishing packages is relatively straightforward.


numpy doesn’t get sent to the client if you use it on the server.


> JavaScript frontends are becoming unnecessarily bloated

People often conflate use with abuse.

The bad experiences stick out to people, whereas all the well behaved JS-heavy apps out there likely don't even register as such to most people.

Even with SPAs, it's very possible (and really not that hard) to make them behave well. Logically, even a large SPA should use less overall data than a comparable server-rendered app over time. A JS bundle is a bigger initial hit but will be cached, leaving only the data and any chunks you don't have cached yet to come over the wire as you navigate around. A server-rendered app needs to transmit the entire HTML document on every single page load.

Of course, when you see things like the newer React-based reddit, which chugs on any hardware I have to throw at it, I can sort of see where people's complaints come from.


I'd take it back even further:

We started out on mainframes.

Then things moved to the desktop, with some centralized functionality on servers (shared drives, batch jobs).

The processing moved to centralized web servers via the web, SAAS, and the cloud.

Then more moved into the client through React & similar.

And now things are moving back to the server.

Tick. Tock.

These changes are not just arbitrary whims of fashion, though. They're driven by generational improvements in technology and tooling.


I got a chuckle out of the Apple M1 chip touting shared video memory as a big step forward. (Which it is, but it's still amusing to me how it might have sounded like a groundbreaking innovation to a layperson.)


In the PC industry, that's called UMA and has been around for a few decades, synonymous with ultra-low-cost (and performance) integrated graphics. To hype it so much as a good thing, Apple marketing is really genius.


Apple takes Cue From Original Xbox with Latest Chipset.


Or; Apple takes cue from own Macintosh IIsi from three decades ago?


Yes. I have thought about this a lot. There are cycles...

Like thin client (VT100), to thick (client/server desktop app), to thin (browser), etc.

Similarly, console apps (respond to a single request in a loop), to event-driven GUI apps, to HTTP apps that just respond to a simple request, back to event-driven JS apps.

It depends on how you define the boundaries, but history rhymes.


Virtual machines, containers, very similar to partitions and spaces on mainframes as well.


Virtual machines are not a recent invention. They were already being used on IBM mainframes starting in the early 1970s:

https://en.wikipedia.org/wiki/VM_(operating_system)

Notably, the VM operating system could run an instance of itself in one of its own virtual machines.


Is it really a pendulum, or is it more that this was always an idea with merit that's now finally seeing wider adoption because it's become more widely available? (In part, I understand, due to some IBM patents that expired 10 or so years ago.)


And serverless is an anemic CICS executing non-transactions.


Serverless is kind of like Apache running PHP scripts in virtual hosts.


The greatest trick the devil ever pulled is convincing people that shared hosting is preferable to dedicated, and then charging them way more money for it.


The cloud is just someone else's computer in a data center in New Jersey about to be hit by a hurricane.


They're driven by generational improvements in technology and tooling.

I'd say they're driven by corporate greed. Cloud computing is basically renting time, and so the more you use them, the more $$$ they make.


Every time the pendulum returns, it returns profoundly changed. And it returns because the changes make the coming back possible.


So when and how does the p2p / distributed pendulum swing back? When do we stop using AWS mainframes for everything?

I sense that you're right about swings requiring change to older techniques. But I think there's also a component of being fed up with the direction things are currently facing.


Unfortunately, p2p computing is badly hindered by the copyright industry. The research is still active and we have a lot of ideas for distributed computing and p2p beyond just file exchange. A lot of it is used today to distribute mainframe infrastructure instead of creating a truly distributed network.


I completely agree with this. For reference, I'm a relatively new developer - 3.5+ years of experience in my first developer position.

At the beginning of college everyone was SUPER into NoSQL. All my friends were using it, SQL was slow, etc.

Nearing the end of college and the beginning of my job I began seeing articles saying why NoSQL wasn't the best, why SQL is good for some things over NoSQL, etc.

Technology is cyclical. 10 years from now I expect to read about something "new" only to realize that it was something old.


The NoSQL trend was so terrible. Anyone starting out right in that time frame where mongo and other NoSQL DBs were getting popular was really done a disservice.

I sit in design meetings all the time where people with <5 years experience go out of their way to avoid using a relational database for relational data because "SQL is slow". They will fight tooth and nail, shoe-horning features in to the application which are trivial to do with a single SQL command.

I helped out on one project led by a few younger devs who chose FireStore over CloudSQL for "performance reasons" (for an in-house tool). They had to do a pretty major rewrite after only a few weeks once they got around to deleting, because one of their design requirements was to be able to delete thousands of records; a trivial operation in SQL, but with FireStore, deleting records requires:

> To delete an entire collection or subcollection in Cloud Firestore, retrieve all the documents within the collection or subcollection and delete them. If you have larger collections, you may want to delete the documents in smaller batches to avoid out-of-memory errors. Repeat the process until you've deleted the entire collection or subcollection.

> Deleting a collection requires coordinating an unbounded number of individual delete requests.

Turns out, once they started needing to regularly delete thousands-millions of records, the process could run all night. Luckily, moving over to CloudSQL didn't take very long...
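
For anyone who hasn't run into this, the contrast looks roughly like the sketch below (table/collection names and clients are illustrative, not the actual project's code):

    // Relational (node-postgres): one statement, the database does the work.
    const { Pool } = require("pg");
    const pool = new Pool();

    async function purgeOldEvents(cutoff) {
      await pool.query("DELETE FROM events WHERE created_at < $1", [cutoff]);
    }

    // Firestore (firebase-admin): the client has to page through the collection
    // and delete documents in batches, repeating until nothing is left.
    const admin = require("firebase-admin");
    admin.initializeApp();
    const db = admin.firestore();

    async function deleteCollection(name, batchSize = 500) {
      for (;;) {
        const snap = await db.collection(name).limit(batchSize).get();
        if (snap.empty) return;
        const batch = db.batch();
        snap.docs.forEach((doc) => batch.delete(doc.ref));
        await batch.commit();
      }
    }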


> I sit in design meetings all the time where people with <5 years experience go out of their way to avoid using a relational database for relational data because "SQL is slow"

I mean, this is just dumb. I have less than 5 years experience and I understand that SQL isn't "slow", there are just different tradeoffs between SQL and NoSQL databases and you have to pick the right tool for the job.


Or just... use indexes.


NoSQL is very much like the databases that were around in the 1960s ("navigational" databases, nested sets of key-value pairs). E. F. Codd proposed a database of tables (which he, a mathematician, called "relations") to solve a number of problems that these primitive databases were having, one of which was speed.


The funniest part is seeing companies jump onto the distributed NoSQL bandwagon with their fundamentally relational and transactional data structures and then reinvent the transactional relational database.


SQL has always been faster for querying. Whether it's faster for development, though, is another thing, depending on the project and experience.


You should be very glad you saw the utter pile of crap in technology fashion show at such a young age.


The Rails people never went for SPAs though. Releasing another server-rendering AJAX thing for rails (previous was TurboLinks) no more represents "the pendulum swinging back" than a new version of COBOL that runs on mainframes represents the pendulum swinging back to mainframes. If this approach gains market share against React etc., then that will be meaningful - but don't hold your breath, there are legitimate reasons for the move to SPAs and also an enormous amount of institutional inertia behind it.


I don't think that's entirely accurate. Lots of Rails users went the SPA route the second stuff like Backbone came out. Wycats was big in the Rails community at this time and he spearheaded Ember.js. The Shopify guys were (and still are) big in the Rails community and they created their own Batman.js. It's just that the Rails core devs made a decision not to go that route. They were even working on their own front-end framework at one point, and after some time they decided to kill it in favor of just using pjax/turbolinks. You can get your 80% case accomplished with these technologies with substantially less effort. There are definitely reasons to go SPA, but the dev community at large has jumped on the hype train here without really identifying whether using these technologies is a good idea for their use case. I mean, there's a lot of people doing CRUD with React. That's crazy.


Lots of rails back end applications power SPAs on the front end. Sometimes for good reasons, often enough just because it was more "modern" - but much less efficient in terms of programming.


The interesting thing is that if you don't think of the browser as just another runtime, nothing more than The VM That Lived (where applets and flash died), but actually think of your applications as Web Applications, then you get the ideas behind this faster.

JSON is just a media type that a resource can be rendered as. HTML is another media type for the same resource. Which is better? Neither, necessarily, it depends on the client application. But if you are primarily using JSON to drive updates to custom client code to push to HTML, well, that should give you something to think about.


You went to a school with professors and dismissed their final advice to you as "65+ above white wise men snobbery"?

Which part of that is supposed to be a reasonable thing to say?


> Which part of that is supposed to be a reasonable thing to say?

None of it - that's the point. They are self-deprecatingly pointing out how naïve and judgemental their younger-self was.


Lol so did I. Ageism is a thing, and it's everywhere. At least when you're young, you don't have the excuse of already having been in the other age class. That being said, several of my older professors were entirely full of snobby shit. The older I get, the more I see how they were not trying to impart knowledge, but to gain some kind of status as "hard-ass" old men with the younger generation.


Nobody is doing what they claim. It's all ego and posturing. I'm getting tired of humanity.


I read it as a self-deprecating dig at his / her younger self.


Exactly, it is kind of like Clarke's first law:

https://en.wikipedia.org/wiki/Clarke%27s_three_laws

Youth are always writing off the oldies - I did it, and now that I am old, I see it happening to me - and that is ok - we need that passion to shake things up, even if they end-up eerily similar to the way things were done before...


I would point out that those who are older also tend to write off the younger. I think it's just a perspective mismatch; if I can emulate another person's perspective in my head, I can anticipate their decisions (and reasoning), so I can decide if they are being reasonable.

However if I can't understand their perspective, I have a very hard time in understanding and judging their reasonableness (because I'm basing my judgement solely off of my own experiences and memories that are similar to their circumstances).

This lack of understanding translates to seeing a lack of credibility in them. "Maybe if they were more like me, they'd make more sense, be more reasonable". This type of thinking is common in most types of prejudice.

It's why young people write off older people: "They're too old to remember what it's like being my age, or to understand how things are now".

Why the opposite occurs: "They're still too young to understand how life works yet".

Why people of very different cultures tend to be prejudiced: "Their kind are ignorant of how the world works", and the opposite: "They've never been through what I've been through, they don't understand me or mine".

All of these statements evaluate down to: "If they were more like me, they would be reasonable". Which is of course true; if "they" were more like "you", their systems of reasoning and value would be more similar to yours, and vice versa.


In computer science, it's particularly tempting for the yutes to write off the oldies, because technologies change so rapidly — I sometimes frighten the kids by mentioning that I got my first degree before the WWW was invented, and I'm far from retirement age.


> I immediately chalked it up to 65+ above white wise men snobbery.

Perhaps this can be the opportunity for you to look through your past and consider and reevaluate other ideas you discarded because of your own bigotry.


This doesn't apply to all tech. The web just doesn't have a good solution because it's so complex. You'll always be making compromises. Some compromises are trendier than others at any given time. IMO, simple tech you don't have to think about doesn't have these "pendulum" effects in its usage. You forget it's there.


> then we went full SPA

Rails never did, and even actual SPA frameworks (e.g., React) have had SSR support/versions for quite a while. Basecamp introducing yet another iteration of front-end-JS dependent mostly-SSR for Rails isn't a pendulum swinging anywhere.


I think I'd be more impressed by this idea if their server wasn't currently down.


It's back up. If you check out the traffic to it, we hugged it to death.


It's not just that things are too complicated... the JS being sent to browsers is large and takes a lot of work to parse and run. That requires more bandwidth, processing, and power usage on client devices. This eats phone, tablet, and laptop batteries.


But... that's one of the pros of not having to do the rendering cycle on the server. Also caching of framework libraries off CDNs and such.

I don't see much merit in moving back to server-side rendering aside from obfuscation & helping SEO ratings (web crawlers have a hard time with SPAs).


> But... that's one of the pros of not having to do the rendering cycle on the server. Also caching of framework libraries off CDNs and such.

This doesn't save battery life on a device. If someone downloads a few megs of JS, their browser has to parse and execute that JS locally. That processing uses power. If that same person had half as much JS to parse and execute, it would use less power.

A CDN does not save from this happening.

When power use happens on a server it's more on the server but less on devices with batteries. Batteries aren't used up as quickly (both between recharges and in their overall life).

A server side setup can cache and even use a CDN to only need to render parts that change.

My point is that it's not all cut and dry, especially once you factor in batteries.

Oh, and older systems (like 5 year old ones)... surfing the web on an older system can be a pain now because of JS proliferation.


> Oh, and older systems (like 5 year old ones)... surfing the web on an older system can be a pain now because of JS proliferation.

This matters because the poor, the elderly (on a fixed income), and those who aren't in first-world countries don't have easy access to money to keep getting newer computers.

Then there is the environmental impact of tossing all those old computers.

So, there is both a people and environment impact.


I think there's some kind of weird mentality among web devs that client-size computations are free, but server-side ones cost resources because you do more of them the more users you have.


That’s not so weird. It’s like IKEA shipping you disassembled furniture: they don’t have to pay for assembly (nor for shipping as much air). The client bears the cost of assembly, so if you don’t pay the client’s costs, it’s free.


They are free, just not to the client.


You're right it's not all cut and dry.

The two things that use the most battery in a phone are the radio and the screen.

If you can do most of the work client side, the phone can turn off the radio and save battery. The amount of battery savings of course depends greatly on what the application is actually doing.


An interesting development is that the argument "common libraries will be cached in the browser" is no longer true. Chrome and other browsers are starting to scope their caches by domain, to mitigate tracking techniques that used 304 request timing to identify if the client had visited arbitrary URLs.

Yes, I'm aware that "it will be cached" lost most of its glory when bundling became mainstream, but I still hear it as an argument when pulling things from common CDNs.


In a few tests I ran, I found rendering to be fast and lightweight. If you already have prepared the associative array of values, then the final stage of combining it with a template and producing HTML doesn't strain the server, and so it doesn't help your server much to move that part to the client.

The server's hardest work is usually in the database: scanning through thousands of rows to find the few that you need, joining them with rows from other tables, perhaps some calculations to aggregate some values (sum, average, count, etc.). The database is often the bottleneck. That isn't to say I advocate NoSQL or some exotic architecture. For many apps, the solution is spending more time on your database (indexes, trying different ways to join things, making sure you're filtering things thoroughly with where-clauses, mundane stuff like that). A lot of seasoned programmers are still noobs with SQL.

Anyway, if rendering is lightweight, then why does it bog down web browsers when you move it there? I don't think it does. If all you did was ship the JSON and render it with something like Handlebars, I think the browser would be fine, and it would be hard to tell the difference between it and server-side rendering.
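
A minimal sketch of what I mean, assuming Handlebars is loaded on the page (the endpoint, template, and element id are made up):

    // Fetch JSON and render it client-side through a Handlebars template.
    const template = Handlebars.compile(
      "<ul>{{#each items}}<li>{{name}}: {{price}}</li>{{/each}}</ul>"
    );

    async function renderItems() {
      const res = await fetch("/api/items");
      const data = await res.json(); // e.g. { items: [{ name, price }, ...] }
      document.querySelector("#items").innerHTML = template(data);
    }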

I think what causes apps to get slow is when you not only render on the client but implement a single-page application. (It's possible to have client-side rendering in a multipage application, where each new page requires a server roundtrip. I just don't hear about it very much.) Even client-side routing need not bog down the browser. I've tested it with native JavaScript, using the History API, and it is still snappy.
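
The client-side routing part really is just a few lines of native JS; something like the following, where render() is a stand-in for whatever function swaps the view:

    // Intercept internal links, push a history entry, and re-render.
    document.addEventListener("click", (e) => {
      const link = e.target.closest("a[data-local]");
      if (!link) return;
      e.preventDefault();
      history.pushState({}, "", link.href);
      render(location.pathname);
    });

    // Handle the back/forward buttons the same way.
    window.addEventListener("popstate", () => render(location.pathname));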

I guess what it is, is that the developers keep wanting to bring in more bells and whistles (which is understandable) especially when they find some spiffy library that makes it easier (which is also understandable). But after you have included a few libraries, things start to get heavy. Things also start to interact in complex ways, causing flakiness. If done well, client-side code can be snappy. But a highly interactive application gets complicated quickly, faster than I think most programmers anticipate. Through careful thought and lots of revision, the chaos can be tamed. But often programmers don't spend the time needed, either because they find it tedious or because their bosses don't allot the time --- instead always prodding them on to the next feature.


And state is not reflective of reality in the database, which is a terrible idea for most apps.


Indeed, this is exactly how web chats worked in the late 1990s, except for the use of WebSockets (they used an infinitely loading page instead). They even seem to revive frames, another staple of 1990s design!


>65+ above white wise men snobbery

Nice! Casual ageism and racism mixed into one post.


Conventional wisdom is that discrimination against privileged groups such as white men is less offensive because they’ve endured so much less of it.

On one hand, it’s true. It’s part of white privilege which is tangible.

On the other hand, however rarely people in a privileged class are realistically impacted by discrimination, it’s still > 0.0%. Since it usually costs nothing more to include everyone, it seems useful.

But I think the biggest reason it’s important to care about discrimination wherever it shows up and not let people off the hook is that it’s unifying.

There’s a story out of Buddhism that suggests it’s important to think equally kindly about rich people, kind of similar in that they’re a privileged class.

I know it’s a hard sell. I don’t do it justice here. However a powerful argument can be made that not disparaging privileged classes, actually helps us all in the long run/big picture.

If I get down voted I understand, that’s ok. If it makes a difference I don’t mean to minimize the 10,000 year history of pain suffered by any humans due to discrimination.


> However a powerful argument can be made that not disparaging privileged classes, actually helps us all in the long run/big picture.

The powerful argument is that you should treat everyone well, period, and not do some kind of calculation to decide how cruel you're allowed to be to them.


Racist people like you make peaceful protests and working for change against racism so much harder. You're just out for revenge and your rhetoric shows it.

Edit: I've had just about enough of people using "white privilege" to justify violence and blatant discrimination because "they haven't been exposed to enough". It's just another way to justify racism. Plenty of white people live in poverty. It's not ok in either direction.


Racism is singling out white people as the source of all evil, and then backing it up with statistics which don’t tell the whole story.


Did you read their entire comment? It sounds like you agree with them


The pre and post edit makes it seem like they could be on either side of the argument.


So at first you're sympathising with discriminating against those that you see as suffering less and then your big idea is treating people equally and that's a 'hard sell'. That is like being a basic good fucking human, it's not a novel idea.


Sorry if it was unclear that’s not what I meant to imply.

First, acknowledging the thinking behind a common opinion is not the same as sympathizing with it. It’s only stating a concept I disagree with.

Secondly, it’d be nice to take credit for this big fucking idea, but unfortunately it’d be thousands of years too late. I explicitly mentioned the source.

Finally, I don’t see how it’s not a novel idea. If you started asking people to think kindly about rich Wall Street bankers or cable company executives, would everyone be instantly on board?

I know those are extreme examples but that was the point of the story. What’s indeed not novel is to say, think well of all people.

The hard part is when you try to actually apply it equally, including to less popular but highly privileged classes of people.

I don’t claim that I can do it all the time, I’m sure I don’t in fact. However for any ideal shouldn't it be ok to try and work towards it over time?


Seriously. That's a WTF from me, dawg...


Rough guess OP is making fun of himself with this now.


Yep.


[flagged]


.


[flagged]


Probably because they are relating an anecdote from their past, and self-deprecatingly pointing out how naïve and overly-judgemental they were _back then_.


I think the same.


I don't think this addresses all of what SPAs are used for. It seems to assume full-stack control.


It's specifically built for Rails, so yeah, it definitely assumes full-stack control.

And there are definitely applications I would prefer to write as an SPA over the Hotwire approach. But given that the vast majority of websites are just a series of simple forms, I prefer this approach over the costs you incur from building an entire complex SPA.


While it works with Rails... some of the parts are just JavaScript and will work with any underlying platform.


Spoiler, it's just Ajax but it pushes the data through your templates before sending it to the client. We were doing this literally over a decade ago in the early days of XHR.


The goal is to have a productive set of patterns for the programmer to follow for dynamic updates.

Less boilerplate. More reuse. Consolidation of app state to the server.

Feel free to post your code from a decade ago so that we can do a bake-off and compare implementations side-by-side.


Look at any of Wicket 1.4's Ajax examples (bonus if you use a better JVM language to reduce the boilerplate - I was using Scala). It's a great technique, it works great, I'm just slightly salty that the industry felt the need to switch to JS SPAs for, as far as I could see, very little tangible advantage.


Isn't the point that it's a framework to accomplish this, rather than "look what we can do!"?

I have a small website and I have one page with an order form, and I show a modal with the result of the order. This is how I do it. The Ajax call gets back HTML that it shoves into the modal. It was just easier than writing JS with templates and parsing JSON. It always felt icky to me because that's not the way you're "supposed" to do it, but it works quite well.
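
For reference, the whole mechanic is basically this (the URL, element id, and showModal() helper are placeholders):

    // Submit the order via fetch, get HTML back, shove it into the modal.
    async function submitOrder(form) {
      const res = await fetch("/orders", {
        method: "POST",
        body: new FormData(form),
      });
      document.querySelector("#order-modal").innerHTML = await res.text();
      showModal(); // whatever opens the modal
    }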


A client of mine has an entire internal Line of Business application built on that mechanic: backend returns javascript to shove HTML somewhere. The trick is that the js is built automatically based on components. Devs rarely have to touch js.

It's lightweight on the frontend and a pleasure to develop for, since forms are entirely coded on the backend with components. On rare occasions, more complex pages have some sprinkled lodash.js.

They have around 400 CRUD pages with complex business validations. Up to 1.5k concurrent users. At that point MySQL starts to sweat a bit and p95 increases above 200ms. All running on 2 machines: a nginx+phpfpm and a MariaDB.

The instant no-compilation feedback loop of edit-save-refresh is orgasmic. That system opened my eyes to a lot of preconceptions and buzzwords I once held sacred.


Oh no, a decade ago!! Please show me some hot new Next/Redwood/Svelte code so I can calm my nerves!


It sounds a lot like Laravel Livewire[1]

Also a lot like React Server Components that we saw on HN yesterday[2]

It seems like this is the next wave of web apps. Hopefully once the hype settles, we'll be able to decide which approach is best for which project.

[1] https://laravel-livewire.com/

[2] https://news.ycombinator.com/item?id=25497065


Livewire, of course, having been heavily inspired by Phoenix LiveView https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html


I've just been getting into it, and am completely loving it, especially the Elixir part. It feels like the whole OTP/Erlang part of it (basically single-codebase microservices and the patterns that come with it) has proper engineering and principles behind it, and it's something I've been missing for a long time in our profession.


Though, Livewire is AJAX, while LiveView and StimulusReflex are WebSockets. LW is a pleasure to work with for small pieces of interactivity.


I believe (but could be wrong) that LiveView degrades to HTTP long polling.


It does indeed!


still not sure if the degrading code is there, but you can definitely configure it to USE long-poll.


How does Hotwire compare to Phoenix LiveView?


> How does Hotwire compare to Phoenix LiveView? It seems the same to me.

It's much different based on a preliminary reading of Hotwire's docs.

LiveView uses websockets for everything. If you want to update a tiny text label in some HTML, it uses websockets to push the diff of the content that changed. However, you could use LV in a way that replaces Hotwire Turbo Drive, which is aimed at page transitions, such as going from a blog index page to a contact form. This way you get the benefits of not having to re-parse the <head> along with all of your CSS / JS. However, LV will send those massive diffs over websockets.

Hotwire Turbo Drive replaces Turbolinks 5, and it uses HTTP to transfer the content. It also has new functionality (Hotwire Turbo Frames) to do partial page updates instead of swapping the whole body like Turbolinks 5 used to do. Websockets are only used when you want to broadcast the changes to everyone connected, and that's where Hotwire Turbo Streams comes in.
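To make the Turbo Frames part concrete, the mechanic is roughly the following (a hand-rolled sketch of the idea, not Turbo's actual implementation): fetch the new page over plain HTTP, pull out the <turbo-frame> with the matching id, and swap it in place.

    // Simplified illustration of a frame update over HTTP.
    async function updateFrame(frameId, url) {
      const res = await fetch(url, { headers: { Accept: "text/html" } });
      const doc = new DOMParser().parseFromString(await res.text(), "text/html");
      const fresh = doc.querySelector(`turbo-frame#${frameId}`);
      if (fresh) document.querySelector(`turbo-frame#${frameId}`).replaceWith(fresh);
    }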

IMO that is a much better approach than LiveView, because now websockets only get used for broadcast-like actions instead of being used to render your entire page of content if you're using LV to handle page transitions. IMO the trade-off of throwing away everything we know and can leverage from HTTP in order to "websocket all the things" isn't one worth making. Websockets should be used where they're needed, which is exactly what Hotwire does.

I could be wrong of course but after reading the docs I'm about 95% sure that is an accurate assessment. If I'm wrong please correct me!


Fwiw, you can use long polling for LiveView if you wanted. That could completely remove websockets as everything happens over http.

Hotwire will benefit from caching better than LiveView, because frames are distinct URLs. But I haven't personally needed that.


> Fwiw, you can use long polling for LiveView if you wanted.

How does that work for page transitions? The docs don't mention anything about this or how to configure it.

With Turbolinks or Hotwire Turbo Drive, the user clicks the link to initiate a page transition and then the body of the page is swapped with the new content being served over HTTP. With Turbo Frames the same thing happens except it's only a designated area of the page. In both cases there's no need to poll because the user controls the manual trigger of that event.

How would LV do the same thing over HTTP? Everything about LV in the docs mentions it's pretty much all-in with websockets.

Then there's progressive enhancement too as another difference. Turbo is set up out of the box to use controllers which means you really only need to add a tiny amount of code (2-3 lines) to handle the enhanced experience alongside your non-enhanced experience. For example you could have a destroy action remove an item from the dom when enhanced using Turbo Stream or just redirect back to the index page (or whatever) for the non-enhanced version.

There's an example in the Turbo docs for that at https://turbo.hotwire.dev/handbook/streams if you search for "def destroy".

But with LV wouldn't you need to create both a LV and a regular controller? That's a huge amount of code duplication.

Although to be fair I would imagine most apps would require JavaScript to function so that one is kind of a non-issue for most apps, but it's still more true to the web to support progressive enhancement and the easier you can do this the better.


> But with LV wouldn't you need to create both a LV and a regular controller? That's a huge amount of code duplication.

You just do LiveView instead of a regular controller. No duplication.

When you request a page, it is rendered on the server and all of the HTML is returned over HTTP as usual.

After the client has received the HTML, live updates can go over a websocket. For instance, you start typing in a search field, and this is sent to the server over websockets. The server might have a template for that page that adds search suggestions in a list under that search field. The server basically figures out automatically how the page should be rendered with the suggestions showing, by "re-rendering" the server-side template with the changed data. Then it sends back a diff to the client over websockets. The diff adds/changes the search suggestions on the page. The diff is very small and it's all very fast.


> You just do LiveView instead of a regular controller. No duplication.

Yes, but this is only in the happy case when the client is fully enhanced, no?

What happens if you hook up a phx-click event to increment a like counter?

After the page is loaded, if you click the + to increment it while having JavaScript disabled it's not going to do anything right?

But with Hotwire Turbo, if you have a degraded client with no JS, clicking the + would result in a full page reload and the increment would still happen. That's progressive enhancement. It works because it's a regular link and if no JS intercepts it, it continues through as a normal HTTP request.


Yes, the `phx-click` doesn't automatically get translated to a link or form submission. You can still design the page to work without javascript. For instance by having a "+" button be a normal form or link and then have phx-click intercept it when javascript is enabled. This can be done with one LiveView module without having to also have a separate regular controller.

One way to do it would be to handle the normal non-JavaScript params in the `mount` function and handle `phx-click` in a `handle_event` function.

I don't know if there is already a way to have `phx-click` with fallback to HTTP in a less "manual" way. It should be possible to make.


I've found that the vast majority of clicky stuff I do leads to a URL change anyways, and these are just proper links to the new URL that LV then intercepts.

In your Counter example, it's true that for the 'degraded' version to work, the link would have to be a proper link and not a phx-click. But in the (IMO very unlikely) case where this fallback is necessary, solving it with a proper link/route does not require duplication, just a different approach.

What you would do is create a LiveView that handles both the initial page and the 'increment' route. If LV is 'on', it intercepts the click and patches the page. if LV is 'off', your browser would request the 'increment' route, and the same LV would handle all this server-side and still display the new, incremented counter.

The LV is both the server-side controller /and/ the client-side logic. That's part of what makes it so appealing, but, admittedly, also something that can take a while to wrap your head around.

I've more than once reflexively gone for phx-click solutions where the LV would receive the event and 'do' something, only to later realize that it would be much better to use a proper url/routing solution (where LV is still the 'controller'). In hindsight it's often a case of treating LiveView too much like just 'React on the server', basically.


It uses long polling over http. To be clear it's not restful http, but it's not websockets. I believe that Chris doesn't believe it's important for most people so there are no directions right now. Could be wrong there, I'm not Chris.

Page changes are still initiated by the client in LiveView (although can be server initiated)

LiveView is just channels under the hood. Once you consider that, long polling may seem more obvious


Since LiveView is built on phoenix channels, it's the same story. Simply pass the `transport: LongPoll` option to the LiveSocket constructor and you're now using long polling with LV :)
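
For anyone wanting to try it, the wiring in app.js looks something like this (CSRF params and other options omitted for brevity):

    import { Socket, LongPoll } from "phoenix"
    import { LiveSocket } from "phoenix_live_view"

    // Use the LongPoll transport instead of the default WebSocket one.
    const liveSocket = new LiveSocket("/live", Socket, { transport: LongPoll })
    liveSocket.connect()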


> LiveView is just channels under the hood. Once you consider that, long polling may seem more obvious

It's not obvious to me for user-invoked page transitions, because when I think of long polling, I think of an automated, time-based mechanism that's responsible for making the request, not the user. But a page transition is invoked by the user after an undetermined amount of time (it might happen 2 seconds after they load the page, or 10 minutes).


Your idea of "long polling" sounds more like periodic polling (repeated requests within a frequency), though that's not what long polling is or how it works.


> Your idea of "long polling" sounds more like periodic polling (repeated requests within a frequency), though that's not what long polling is or how it works.

Right isn't long polling keeping the connection open for every client on the server and then the server is doing interval based loops in the background until it gets a request from the client or times out?

It wouldn't be doing a separate HTTP request every N seconds with setInterval like the "other" type of polling, but it's still doing quite a bit of work on the server.

In either case, LV's long polling is much different from Turbo Drive and Turbo Frames keeping no connection open, no state on the server, and only intercepting link clicks as they happen.

I don't think that's necessarily a big deal with Elixir (I wouldn't not pick it due to this alone, etc.), but this thread is more about the differences between Hotwire and LV, and no state being saved on the server for each connection is a pretty big difference for the Drive and Frames aspects of Turbo.


Yes there is a lot of stuff different here. I don't want to take that away. Just pointing out that websocket only is not accurate.


For one, LiveView doesn't send HTML over the WebSocket channel. It sends a highly optimized diff structure that is applied to HTML.


I wonder how recruiting companies will continue selling frontend React rockstars and backend Node.js warriors who can write endpoints. And yes, that's how they differentiate between frontend and backend, because knowing how to format your JSON URLs (which is not real REST) is backend work and React is frontend.


> not real REST

Rarely admitted, but absolutely true.


Caleb Porzio (creator of Livewire) made videos about "Server-Side Applications": https://laracasts.com/series/javascript-techniques-for-serve...

His work inspired me to build my app (TravelMap) with SSR views: https://clem.travelmap.net


Off topic from the main thread, but Travel map looks really good. Reminds me of all the travel blogs I used to read when I was in high school / college dreaming about getting out and exploring the world. Great work!


Very good work!

