Unpoly is what Hotwire should have been (randomwik.org)
37 points by shouldcode 14 days ago | 20 comments



This was true, but the new page morphing in Turbo 8 changes the game entirely.

I've been using it. The experience is amazing!

Some highlights:

* In 7 lines of code, with no JavaScript, I had my index and view pages live-updating whenever a model changed on the server (see the sketch below)! It is spooky good. Nothing against JavaScript, but now I maintain only one ERB view and no SPA. This is a large dev-speed multiplier.

* If you have a page with a parent object where you add/delete child items, you make your actions delete/add the child on the server, and once the object is updated, the page just re-renders with no visible redirect/reload.
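
For reference, the handful of lines involved look roughly like this (a simplified sketch using the turbo-rails helpers; the Post model and view paths are placeholders):

    # app/models/post.rb -- broadcast a page refresh whenever a post changes
    class Post < ApplicationRecord
      broadcasts_refreshes
    end

    <%# app/views/layouts/application.html.erb -- morph pages instead of replacing them %>
    <%= turbo_refreshes_with method: :morph, scroll: :preserve %>

    <%# app/views/posts/show.html.erb -- subscribe this page to the refresh broadcasts %>
    <%= turbo_stream_from @post %>

With that in place, any change to a post broadcasts a refresh over Action Cable and Turbo morphs the open page in place.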

https://dev.37signals.com/turbo-8-released/

A diff with comments: https://github.com/basecamp/turbo-8-morphing-demo/pull/4/fil...


I worked with Unpoly for a bit and it had a few rough edges (I can't remember what, it's been a while). My major concern at the time was that the entire project was JavaScript written inside ERB files, which were then compiled into JS; ERB was used as some sort of compile step. It was really hard to contribute/extend.

I have no idea if this has changed.


I'm a happy Unpoly user! I prefer it over other similar solutions (e.g. htmx) because of the layers/modals functionality.
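
For anyone who hasn't seen it, opening a layer is a single attribute, something like this (minimal sketch; the /contacts/new path is made up):

    <!-- With Unpoly loaded this opens /contacts/new in an overlay (modal by default). -->
    <!-- Without JavaScript it's just a regular link to a regular page. -->
    <a href="/contacts/new" up-layer="new">New contact</a>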


From the main unpoly page:

> Get powerful new HTML attributes to build dynamic UI on the server. Works with any language. Gracefully degrades without JavaScript.

I wouldn't call being completely dependent on the server "graceful degradation". App server and web server should not be conflated. Degradation should imply the site is still up but some interactions don't work, or work only intermittently.

You can trivially achieve that with a static web server in front of an app server that only has to concern itself with providing a data-only API. Frontend is frontend and backend is backend. Separation of concerns is important in real-world production environments. The client-server paradigm is the last thing I would ever question about web apps, or any apps really.
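
Concretely, the split being described is something like this (a minimal sketch; the /api/items endpoint and its "name" field are made up for illustration):

    <!-- index.html: served as a static file by the web server.
         The app server behind it only answers /api/* with JSON. -->
    <ul id="items"></ul>
    <script>
      fetch('/api/items')                 // data-only endpoint on the app server
        .then(resp => resp.json())
        .then(items => {
          const list = document.getElementById('items');
          for (const item of items) {
            const li = document.createElement('li');
            li.textContent = item.name;   // assumes each item exposes a "name" field
            list.appendChild(li);
          }
        });
    </script>

If the app server goes down, the static page still loads; the list just stays empty, which is the kind of degradation described above.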

Also, please don't invent arbitrary standards and create lock-in. This isn't 2010 anymore. Htmx and Hotwire are just as terrible as this. No point in comparing when they're all fundamentally flawed.


What's your go-to stack for new web projects today?

I've never seen a mature project using a full-stack JS framework that avoids being dependent on the server. Not only are they dependent on the build step and bundler; the latest trend has the build splitting routes, data loaders, APIs, etc. into RPC calls that are often entirely dependent on the server with a specific build hash.

I totally agree there's a right balance to where logic lives and how clients and servers interact, but modern JS frameworks have swung so hard toward blurring the lines between client and server that, in my experience, they end up as a rat's nest of bundled and minified code that's nearly impossible to debug and runs into painful upgrades every 6-18 months.


If you want to call it a stack: a simple nginx or Apache web server hosting the pages and acting as the gateway to the backend.

The backend is usually non-negotiable legacy stuff in my experience, but you can add your own layer of business logic in node or python gluing all those other endpoints together.

I don't really care about the frontend frameworks used as long as they're truly frontend frameworks. That build should live its life as a folder of static html/css/js on a web server.

What I'm saying is that this was a solved problem a very long time ago, going back almost to the beginning of the internet with CGI (as long as the output from those scripts was XML or JSON). Any attempt to deviate from this architecture will be, as you say, a rat's nest. All forms of server-side rendering are terrible.

The meaningful depths of full stack are past that gateway and can go all the way down to kernel modules if they have to, but they usually don't. Frontend is the surface and should stay on the surface. JavaScript should not be used deeper than Node.js. Node.js should not be used to render HTML. Nothing should render HTML on the backend. Who is this not obvious to? What year is it?


Why is it better to render JSON on the server, read that JSON in a separate client app that you also have to write, and then do a bunch of manual DOM calls in JavaScript, rather than rendering HTML on the server and letting the browser's blazing-fast compiled HTML parser turn it into DOM for you?

Because you want to offload all the work of rendering onto the client. And in fact you'd often want the rendering to be sensitive to the browser context: different types of devices may require a different UI or arrangement of content, and all of that logic would be client logic. Why would you treat a browser differently from any other client?

The presentation aspect is simply not the concern of the server in the first place. Data is not presentation, and servers should be able to support more than one client. Data should not be encapsulated in several layers of presentation. Having to scrape data from a web page is ugly and dumb.


Web technology is heavily invested in the goal of rendering content on the server and shipping markup and styling. The client browser still has to take the markup and render it properly, but the whole point of web architecture to begin with was to render data into markup on the server so the browser can be a pretty thin, standards-based rendering engine.

There are absolutely situations where web apps need to render most or all of the data into UI in the client, but that should be the outlier rather than the norm. Why do you assume that there are no situations when rendering HTML from the server is a valid, or even ideal, approach?

Even with native applications, the UI is a combination of content that is rendered on and off the client. An iOS app will end up with screens that are designed and rendered by the developer; the navigation menu and other chrome elements are likely rendered in the build, while some screens in the app do fetch data and render it to the screen. Again, it's very much the outlier to have a native app that entirely fetches data and renders all the content in the client.


here the "graceful degradation" is referring to progressive enhancement:

https://developer.mozilla.org/en-US/docs/Glossary/Progressiv...

this is something that unpoly excels at: it is designed so that the operations it performs via javascript fall back to the standard mechanisms (form submission & link navigation) when javascript is disabled, and things like layers revert to plain old pages

htmx does not focus on this nearly as much: i tried to generalize hypermedia controls and leave the choice to progressively enhance a website up to htmx users. it's possible, but it's not automatic-for-the-most-part like unpoly
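
as a rough illustration of the difference (a sketch from memory of the two attribute styles; the /todos endpoint, field names and .todo-list selector are made up):

    <!-- unpoly: submits via ajax and extracts .todo-list from the full-page response,
         so the same server-rendered page also works with javascript disabled -->
    <form action="/todos" method="post" up-submit up-target=".todo-list">
      <input type="text" name="title">
      <button type="submit">Add</button>
    </form>

    <!-- htmx: hx-post/hx-target give you the ajax behavior, but serving both the
         fragment (for htmx) and the full page (for the no-js POST) is on you -->
    <form action="/todos" method="post" hx-post="/todos" hx-target=".todo-list">
      <input type="text" name="title">
      <button type="submit">Add</button>
    </form>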

it's very true that, without a server, any hypermedia approach is going to be a poor choice


Using a SPA framework like React does not eliminate the need for server uptime. Modern build systems do code splitting, scripts are served by servers, and forms are handled server-side for security. Client-side draft persistence also seems like the perfect use-case for progressive enhancement, if that is a concern.


Not everything needs to be an SPA. Separation of concerns is important but not always required.


I suppose I'll need to check my full stack privilege, but I just know starting a project like this eventually becomes someone else's tech debt nightmare.

As if it wasn't already difficult enough to bootstrap a project these days. To any entrepreneurs reading: if this is the junk your devs come up with, run away as fast as you can.


This is an extraordinary amount of magic. I think I'll just stick to Vite/React, thanks.


The sheer amount of magic boilerplate Vite creates to make a React app work is quite incomprehensible. I'm not a senior frontend dev by any means, but it's always a damned job to migrate an existing project to the next magic bundler.


> The sheer amount of magic boilerplate Vite creates to make a React app work is quite incomprehensible.

What?

pnpm create vite

pnpm i

pnpm run dev

...you're done? Delete all files in `src` and make a single `main.tsx` with `ReactDOM.createRoot` if you really wanna start from scratch.


I do like Vite, and other frameworks built on it like Astro, but to be fair Vite's magic really isn't at the initial project stage.

Virtual modules are really tricky to understand IMO, especially if you need to debug one. There are subtle differences between dev and production that make sense for a bundler but can be really confusing to a dev. Imports also don't work quite the same, mainly when you start using features like `?raw` imports to get the contents of an SVG file or something like that.
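
For example, the `?raw` suffix behaves roughly like this inside a Vite project (sketch; the icon.svg file next to the module is assumed):

    // Vite-specific import suffixes: ?raw inlines the file contents as a string,
    // while a plain './icon.svg' import resolves to a URL string instead.
    import iconMarkup from './icon.svg?raw'
    import iconUrl from './icon.svg'

    console.log(iconMarkup.startsWith('<svg'))  // true for a normal SVG file
    console.log(iconUrl)                        // e.g. a hashed /assets/... path after a build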


And now try installing only Vite and creating all that boilerplate by yourself.

Yeah, Rails is going a bit backwards. DHH is too opinionated and stubborn for his own good. The Laravel ecosystem, on the other hand, has moved forward by being forward-thinking and pragmatic. DHH is now saying no to TypeScript and moving to JavaScript only.


He’s just cutting out the JS build step. TypeScript-native runtimes are coming to browsers.



