Hotwire: HTML over the Wire (hotwire.dev)
1073 points by samename on Dec 22, 2020 | 545 comments

I've never seen one of these "logic in HTML attributes" systems take error checking seriously. In Stimulus they start to mention it in "Designing For Resilience" (though only for feature-checking), but in "Working With External Resources", where it makes network/IO-bound calls, they never mention how to handle errors or whether the framework just leaves it up to you. Stimulus is also where you write your own JS code, so I guess you could handle it yourself, but when I skimmed the Turbo handbook I found no mention of what errors to handle or how (or even what happens when Turbo gets one), and when loading things over the network that is pretty much crucial.

From the Turbo handbook: "An application visit always issues a network request. When the response arrives, Turbo Drive renders its HTML and completes the visit." The phrase "When the response arrives" raises the question of what happens if it doesn't arrive, or if it takes a minute to arrive, or if it arrives with a faulty status code.

Counterpoint: is there any error handling in the majority of SPAs today? In my experience, SPAs can crap out in all kinds of interesting ways when the underlying network connection is flaky, and I often end up stuck on some kind of spinner that will never complete, with no way to abort and retry the operation when I already know it won't complete and don't want to wait for the ~30-second timeout (if there even is one).
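The missing piece is usually just a timeout plus an abort path. A framework-agnostic sketch (the timeout value and any URLs passed in are illustrative; fetch and AbortController are standard in browsers and Node 18+):

```javascript
// Wrap fetch so the user is never stuck on an endless spinner: after
// timeoutMs the request is aborted and the caller can show a retry UI.
async function fetchWithTimeout(url, { timeoutMs = 10000, ...opts } = {}) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // Rejects with an AbortError once controller.abort() fires.
    return await fetch(url, { ...opts, signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}
```

The caller catches the rejection and decides whether to retry, which is exactly the decision most spinner UIs never surface.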

Not saying this is better from an error handling perspective, but at least the whole idea of Hotwire and its peers (Turbolinks, etc) is that there is no state and it should thus be safer and quicker to reload the page should things go wrong.

I agree that most SPA apps do it badly too, but hiding the opportunity to do it well certainly does not help.

> there is no state and it should thus be safer and quicker to reload the page should things go wrong.

That's not exactly true, since there are non-idempotent HTTP methods. While the browser will prompt you before resending a non-idempotent request when refreshing a normal form POST, I don't think that turbo/turbolinks/similar will let you prompt or resend.

On refresh, should Turbo retry a POST? The "right way" is to keep the state of the last POST and prompt the user for confirmation, but it seems like it is undocumented as to what it actually does. I'm guessing it either doesn't retry, or it retries and hopes the effect will be idempotent.

No one (SPAs, traditional webpages and "spiced" webpages like this included) is doing everything right, but my objection to this framework is that it seems to try to say things are simple or easy when they clearly aren't.

You're correct in that the only standards-based way to retain a POST in the session history is to not disturb an existing entry. However:

> it seems to try to say things are simple or easy

That's an unfair mischaracterisation. The developers are not pitching a universal panacea that solves all your problems and handles every edge case. They are offering an architecture that simplifies many common scenarios, and one that is thoroughly developer-friendly when it comes to supplying observability and integration hooks for edge cases.

For the latter, it only remains to read the (clean & elegant) source code and enlighten oneself.

> it seems like it is undocumented

On the contrary, the behavior w.r.t. full-page replacement on non-idempotent verbs is extensively discussed in the Turbolinks repo.

The "Turbo Drive" component appears to me as essentially unchanged behaviour in Turbo 7.0.0beta1 from Turbolinks versions 5.x. Turbolinks was introduced in 2013, has many years of pedigree and online discussion, and is well understood by a large developer community. Turbolinks was always maintained, even being ported to TypeScript (from the now venerable CoffeeScript) ca. two years ago with no change in behaviour. Turbo Drive is, practically, just a slightly refactored rebrand of the TypeScript port.

The stuff everyone is so excited about are Turbo Frames and Turbo Streams. These are new, and may be used without adopting Turbo Drive: as with practically everything from Basecamp, the toolkit is omakase with substitutions. They are, nevertheless, complementary, so you get all three steak knives in one box.

I believe the only place you'd use a POST with Turbolinks is in response to an explicit user action like pressing a button. In this case, if it fails, you'd refresh the root page (which embeds the button), at which point the state of that page would reflect whatever the server has: it would display the new data, or might not even have the button anymore if the initial POST actually did make it to the server.

Facebook does this constantly for me. It's a crapshoot whether I'll be able to open notifications or messages without a couple of refreshes, or if I'll just get the fake empty circle loading UI indefinitely until I hit F5.

This is my experience on Reddit generally these days in mobile Safari.

This combined with them intentionally breaking the mobile web experience has almost entirely stopped me from using Reddit.

I was trying to quit Reddit for years and their intentional breaking of mobile web (as well as making email address required even for already existing accounts) is what finally enabled me to.

Of course now I just go on Hacker News and Twitter instead.

I refresh SPA apps more than other apps because of these problems.

Me too. However, this also doesn't work properly on a lot of SPAs xD

Also: the app looks fine immediately after a refresh (when it's been server-side rendered), then crashes a second later when the JS framework hydrates the HTML and hits a client-side bug.

Which I find frustrating because literally the only reason I find compelling for making an SPA in the first place is to deal with flaky networking situations.

If I know the network is always there, why bother.

A very good point! Presumably the appeal of a system like this is the potential for graceful degradation: if sockets aren't working or some requests are failing, the default HTML behavior should still work (links will just take you to the original destination). But there's no indication that this is actually what happens.

This is an isomorphic fetch. The original href already is the visited URL, so I'm not sure that trying that again is wise, or appropriate, unless the user chooses to reload.

The entire design philosophy here is to mimic apparent browser behaviour, or to delegate to it. Hence, to GP's question; you should expect the appearance of browser-like behaviour in any circumstance, modulo anything Turbo is specifically trying to do different. Deviation from baseline browser semantics was certainly a basis for filing bugs in its predecessor (Turbolinks).

As for what Turbo actually does, I checked the source. Good news, even for a first beta, they're not the cowboy nitwits alleged; it gracefully handles & distinguishes between broken visits and error-coded but otherwise normal content responses, and the state machine has a full set of hooks, incl. back to other JS/workers, busy-state element selectors, and the handy CSS progress bar carries over from Turbolinks.
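For reference, wiring those hooks up looks roughly like this (event names as documented for Turbo 7; the beta may differ in details, and the guard keeps the sketch runnable outside a browser):

```javascript
// Distinguish HTTP error responses from outright network failures.
// Mirrors Turbo's FetchResponse#succeeded check (2xx only).
function isSuccessStatus(status) {
  return status >= 200 && status <= 299;
}

if (typeof document !== 'undefined') {
  // Fired once a response has arrived, error-coded or not.
  document.addEventListener('turbo:before-fetch-response', (event) => {
    const { fetchResponse } = event.detail;
    if (!isSuccessStatus(fetchResponse.statusCode)) {
      console.warn('Server answered with', fetchResponse.statusCode);
    }
  });

  // Fired when the request never completed (offline, DNS failure, abort).
  document.addEventListener('turbo:fetch-request-error', () => {
    console.warn('Network-level failure; consider prompting for a reload');
  });
}
```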

I’ve used intercooler with browser-side routing and, the strategy for error recovery that makes sense in that context is “if something goes wrong, reload the page”: the server is designed to be able render the whole page or arbitrary subsets and, so, reloading should usually be safe.

in htmx we trigger an htmx:responseError event:
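A hedged sketch of handling it (the message helper is hypothetical; htmx also fires htmx:sendError for network-level failures):

```javascript
// Tiny hypothetical helper so the message logic is testable in isolation.
function formatHttpError(status, statusText) {
  return `Request failed: ${status} ${statusText}`.trim();
}

if (typeof document !== 'undefined') {
  // htmx:responseError fires for error-status responses.
  document.body.addEventListener('htmx:responseError', (event) => {
    const xhr = event.detail.xhr; // the underlying XMLHttpRequest
    alert(formatHttpError(xhr.status, xhr.statusText));
  });

  // htmx:sendError fires when the request never reached the server.
  document.body.addEventListener('htmx:sendError', () => {
    alert('Network error; the request never reached the server.');
  });
}
```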


in general, the right approach in HTML-oriented, declarative libraries appears to be triggering error events and allowing the client to handle them, since it is too hard to generalize what they would want


1. What if something goes wrong?

2. How do I test for handling success/error?

They never address this stuff.

They do already address error handling. GP is shooting the breeze here and evidently has no specific knowledge of the Turbolinks family API or behaviour.

...also, what about when response/request "items" aren't handled in order due to load? I once wrote a real-time application with that pattern (HTML over AJAX). It worked, but it was not enjoyable at all. Practically every larger feature change would break the code because of all these weird corner cases.

As others have noted, seems reasonably similar to LiveView, Livewire and Blazor. I’m somewhat bullish on these approaches - server side rendered monoliths (Rails, Django, etc.) are SO productive, at least for the first few years of development, but lack of interactivity is a big issue, and this solves it well.

However, another big issue is the dominance of mobile. More and more, you’ve got 2-3 frontends (web and cross-platform mobile, or explicitly web, iOS and Android), and you want to power them all with the same backend. RESTful APIs serving up JSON works for all 3, as does GraphQL (not a fan, but many are). This however is totally web-specific - you’ll end up building REST APIs and mobile apps anyways, so the productivity gains end up way smaller, possibly even net negative. Mobile is a big part of why SPAs have dominated - you use the same backend and overall approach/architecture for web and mobile.

I’d strongly consider this for a web-only product, but that’s becoming more and more rare.

> I’d strongly consider this for a web-only product, but that’s becoming more and more rare.

They have accompanying https://github.com/hotwired/turbo-ios and https://github.com/hotwired/turbo-android projects to bridge the gap.

Everyone who is talking about how the route the industry took with SPAs was just a silly mistake, and that we should go back to the good old days of PHP are forgetting that at the end of the day the most important thing is to choose the best tool for the job at hand.

This, while very interesting and might have a preferable set of constraints for some projects, is simply not a good fit for many others, as you mentioned in your comment. This looks amazing, and I would definitely try it for a project in which it would fit, but I don't really see a reason to disparage the work others have been doing over the past decade. We need those other tools too!

(sorry for the rant)

In the case of Rails, if you're happy with a RESTful API, it handles serving different kinds of content such as JSON pretty seamlessly via the respond_to method: if you want JSON, ask for JSON; if you want rendered HTML, ask for that.
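From the client side, that negotiation is just an Accept header. A sketch against a hypothetical /articles resource (respond_to on the server picks the format):

```javascript
// Map the caller's desired format to a standard Accept header value.
function acceptHeaderFor(format) {
  return format === 'json' ? 'application/json' : 'text/html';
}

// Ask the same Rails-style endpoint for JSON data or a rendered fragment.
async function getArticles(format = 'json') {
  const res = await fetch('/articles', {
    headers: { Accept: acceptHeaderFor(format) },
  });
  return format === 'json' ? res.json() : res.text();
}
```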

However, I think that for mobile they're still offering server-side rendering via Turbo-iOS and Turbo-Android, so you can build quickly and then replace that later if you need to.

^ extremely underrated comment.

This is one of the primary promises of MVC in the first place: views can be rendered independently of controllers and models. For a given controller method call, a view can be specified as a parameter.

In this case, swap "view" for JSON sent back over the wire...

> More and more, you’ve got 2-3 frontends (web and cross-platform mobile, or explicitly web, iOS and Android), and you want to power them all with the same backend.

> RESTful APIs serving up JSON works for all 3, as does GraphQL [...]. This however is totally web-specific - you'll end up building REST APIs and mobile apps anyways, so the productivity gains end up way smaller, possibly even net negative.

I bet someone will produce a native client library that receives rendered SPA HTML fragments and pretends it's a JSON response. They might even name it something ironic like "Horror" or "Cringe".

That said, an ideal API for desktop web apps looks rather different than one for mobile web or native clients. Basically, for mobile you want to minimize the number of requests because of latency (so larger infodumps rather than many small updates) and minimize the size of responses due to bandwidth limitations and cost (so concise formats like Protocol Buffers rather than JSON).

It is definitely possible to accommodate both sets of requirements at the same API endpoint, but pretending that having a common endpoint implies anything else about the tech stack is rather disingenuous. If you want server-side rendering and an API that delivers HTML fragments instead of PB or JSON, that can be done too.

Isn't GraphQL supposed to solve this problem? You have one GraphQL API and each client requests only the information it needs. Maybe the responses are still JSON but I would think you would come very close to an API that serves all the clients.

Even without GraphQL, you can accommodate both sets of needs. I said as much. I'm also saying that the argument about the user-facing tech stack is bogus.

I can't think of a worse idea than telling people their mobile app should be scraping their desktop page for data.

Not the page, the API (which returns HTML fragments for incorporating into the page).

And if you can't think of anything worse, you're not trying very hard.

This! I am not a fan of SPA-everything. This looks like a great framework for web only. But as you said, what about mobile development? Sure, they are still going to publish Strada. But will it also work, for example, with Flutter without additional friction? How about services that need to consume output from each other? I believe parsing HTML is more expensive than parsing JSON.

For mobile, they used Turbolinks to release and maintain the iOS and Android Basecamp apps, based on minimal OS-specific chrome code and back-end web-based "pages".

Really, an incredible bang for the buck.

> RESTful APIs serving up JSON works for all 3, as does GraphQL (not a fan, but many are). This however is totally web-specific

HTML is a machine-readable format, like XML and JSON. Have your back end represent a given resource as microformatted semantic markup, send it gzipped over the wire, and you've got the data exchange you need, even if your mobile app isn't already a dressed-up webview.

Are you still referring to dedicated API routes, or are you talking about annotating your UI to the point where it can serve as the API as well? I remember the latter being the vision behind things like RDFa, but those approaches never took off, for a variety of reasons.

> annotating your UI to the point where it can serve as the API as well

At that point you might as well serve XML and use an XSLT transform (+ CSS) to render the view on the client (yes, this is still possible without JavaScript).

Either. Or both. :)

Generally the projects I've felt best about have two features:

1) The API knows how to represent resources across multiple media types, usually including at least markup and JSON.

2) UI is well-annotated enough that developers and machines find it easy to orient themselves and find data.

But you're quite right that this isn't common. I have my own guesses as to why. My observation has been that the workflow and stakeholder decision-making process on the UI side places semantic annotation pretty low on the priority list; most places you're lucky if you can get a style guide and visual UI system adopted. And there has to be cooperation and buy-in at that level for there to be much incentive to engineer and use a model/API-level way of systematically representing entities as HTML, which often won't happen.

And TBH it is extra effort.

We gotta wait and see what Strada has in store. Looks like Basecamp and Hey mobile apps are fairly good.

What's old is new again. I recall ASP.NET had some interesting tech around this in the 2000s where it could dynamically update parts of the page.

If I recall correctly, this made use of that new technology of the time called "XMLHttpRequest" (/s) which pretty much jump-started web 2.0.

My thought exactly, though I fully support that. I often rant about how the modern web is billions of layers of duck tape over duck tape, and it has become an unmanageable mess of libraries, frameworks, and resources, all while JavaScript remains the most outrageous and absurd language ever created. I'm by no means a fan of Rails, or Ruby for that matter, but I think things like these are a considerably better alternative than all the ridiculous libraries and frameworks everyone uses, which result in megabytes of JavaScript and require corporate-grade bandwidth, at least an 8th-gen i7, and at least 8 GB of memory to open. And all that to open a website which has 3 images and a contact form. I mean, someone should create a package that analyzes websites and creates a minimum-requirements manifest. It's good to see that there are people who are trying to bring some sanity.

Preach! Websites don’t seem all that much better to me than they did 10 years ago [1], so what are we gaining with all these much more complex and fragile tools?

[1] Arguably, the web is worse with chat bots, sticky headers, and modals constantly vying for your attention.

> Arguably, the web is worse with chat bots, sticky headers, and modals constantly vying for your attention.

We can blame this on the MBA types. I've literally never heard a software engineer say "hey, let's make this pop-up after they've already been looking at the page for a minute!" or anything like it.

Engineers typically aren’t tasked with increasing revenue/engagement.

Unfortunately I have to disagree - if there weren't any engineers around to implement the dark patterns they wouldn't be as prevalent. Maybe this calls for an equivalent of the Hippocratic Oath but in the tech world?

Brick layers. "Developers develop, designers design" was more my point, though of course this line is blurred in many organisations.

There is plenty of duck tape, yes.

But there are surprisingly few layers on layers. Part of what has been amazing about the web is that the target remains the same: there is the DOM, and everyone is trying different ways to build & update it.

Agreed that there are better alternatives than a lot of what is out there. We seem to be in a mass consolidation, focusing around a couple of very popular systems. I am glad to see folks like GitHub presenting some of the better alternatives, such as their Catalyst tools[1], which speed things up (both developer-wise and, via "Actions", page-wise) & give some patterns for building WebComponents.

The web has been an unimaginably stable platform for building things, and has retained its spirit while allowing hundreds of different architectures for how things get built. Yes, we can make a mess of our architectures. Yes, humanity can over-consume resources. But we can also, often, do it right, and we can learn & evolve, as we have done, over the past 30 years we've had with the web.

[1] https://github.github.io/catalyst/

If by surprisingly little, you mean 4 pages and 500mb of requirements for a "hello world" project with the "modern" web, then yes. The DOM has always been a mess, much like JavaScript, and the fact that no one has tried to do something about it contributes to the mountains of duck tape. It was bad enough when Angular showed up, but when all the other mumbo jumbo arrived, like React, Vue, webpack and whatnot, is when it all went south. I refuse to offend compilers and call this "compiling", but the fact that npm takes the same amount of time to "compile" its gibberish as rustc takes to compile a large project (with the painfully slow compilation that comes with Rust by design) is a clear indication that something is utterly wrong.

Again, you are attributing to the web what a pop culture is doing with it.

While willfully ignoring all the people doing better.

Maybe we are, as you fear, stuck forever in thick JS-to-JS transpilers & massive bundles & heavy frameworks. Maybe. I don't think so.

> If by surprisingly little, you mean 4 pages and 500mb of requirements for a "hello world" project with the "modern" web, then yes.

React is well under 20k.

FWIW when optimizing my SPA, my largest "oops" in regards to size were an unoptimized header image, and improperly specified web fonts.

There are some bloated Javascript libraries out there, yes. But if you dig into them you will often find that they are bloated because someone pulled in a bunch of binary (or SVG...) assets.

Ah React, the biggest crap of them all. A 12-year-old with basic programming skills is definitely capable of designing a better "framework". Yes, everything frontend is quoted because it's nothing more than a joke at this point. Back to React, and ignoring all the underlying problems coming from the pile of crap that is JS, let's kick things off with jsx. The fact that someone developed their own syntax (I'm not sure if I should call it syntax or markup or something else) makes it idiotic: it's another step added to the gibberish generation. It's full of esoteric patterns and life cycles which don't exist anywhere in the real CS world. React alone provides little to nothing, so again you need to add another bucket of 1000 packages to make it work. Compare it to a solid backend framework that isn't JS: all the ones I've ever used come with batteries included. The concept of components and the idiotic life cycles turn your codebase into an unmanageable pile of callbacks, and sooner rather than later you have no clue what's coming from where. As for size, the simple counter example on the React page is 400kb. Do I need to explain how much stuff can be packed in 400kb? For comparison, I had Tetris on my i486 in the early '90s which was less than 100kb; chess was a little over 150kb. Christ, there was a post here on HN about a guy who packed a fully executable snake game into a QR code.

> Ah react, the biggest crap of them all. A 12 year old with basic programming skills is definitely capable of designing a better "framework".

You're literally a stereotypical Hacker News commenter. I also find the modern frontend a bit too complicated but this is just an unreasonable statement.

You'd be right if I had not given arguments for my statement. I did as a matter of fact.

> let's kick things off with jsx.

Of all the problems I have with React, and I do have a few, JSX is not one of them.

If you are going to be using a language to generate HTML, you are either going with a component approach that wraps HTML in some object library that then spits out HTML, or you are stuck with a templating language of some sort. (Or string concatenation, but I refuse to consider that a valid choice for non-trivial use cases.)

JSX is a minimal HTML-like templating layer on top of JavaScript. Do I think effects are weird, and am I very annoyed at how they are declaration-order dependent? Yup. But the lifecycle stuff is not that weird, or at least the latest revision of it isn't (earlier editions... eh...). The idea of triggering an action when a page is done loading has been around for a very long time, and it maps rather well onto React's lifecycle events.

> React alone provides little to nothing

Throw in a routing library, and you are pretty much done.

Now another issue I do have is that people think React optimizes things that it in fact does not, so components end up being re-rendered again and again. Throw Redux in there and it is easy to have 100ms latency per key press. Super easy to do, and avoiding that pitfall involves understanding quite a few topics, which is unfortunate. The default path shouldn't lead to bad performance.

> The concept of components and the idiotic life cycles

Page loads, network request is made. Before React people had listeners on DOM and Window events instead, no different.

Components are nice if kept short and sweet. "This bit of HTML shows an image and its description" is useful.

> Do I need to explain how much stuff can be packed in 400kb?

No, I've worked on embedded systems, I realize how much of a gigantic waste everything web is. But making tight and small React apps is perfectly possible.

And yes, if you pull in a giant UI component library things will balloon in size. It is a common beginner mistake, I made it myself when I first started out. Then I realized it is easier for me to just write whatever small set of components I need myself, and I dropped 60% of my bundle app size.

In comparison, doing shit on the backend involves:

1. Writing logic in one language that will generate HTML and JavaScript.

2. Debugging the HTML and JavaScript generated in #1.

And then someone goes "hey you know what's a great idea? Let's put state on the back end again! And we'll wrap it up behind a bunch of abstractions so engineers can pretend it actually isn't on the back end!"

History repeats itself and all that.

SPAs exist for a reason. They are easier to develop and easier to think about. And like it or not, even trivial client side functionality, such as a date picker, requires Javascript (see: https://caniuse.com/input-datetime).

SPAs, once loaded, can be very fast and scaling the backend for an SPA is a much easier engineering task (not trivial, but easier than per user state).

Is all of web dev a dumpster fire? Of course it is. A 16-year-old with VB6 back in 1999 was 10x more productive than the world's most amazing web frontend developer nowadays. Give said 16-year-old a copy of Access and they could replace 90% of modern-day internally developed CRUD apps at a fraction of the cost. (Except mobile support and all that...)

But React isn't the source of the problem, or even a particularly bad bit of code.

jsx is a retarded idea because it adds an abstraction over something brutally simple(html). Abstractions are good when you are trying to make something complex user-friendly and simple.

> Throw in a routing library, and you are pretty much done.

Ok routing library, now make an http request please without involving more dependencies....

> Throw Redux in

See, exactly what I said: we are getting to the endless pages of dependencies.

> 100ms latency per key press

100ms latency??!?!?!? In my world 100ms are centuries.

> 1. Writing logic in one language that will generate HTML and Javascript 2. Debugging the HTML and Javascript generated in #1.

I don't have a problem with that. At the end of the day you know exactly what you want to achieve and what the output should be, whereas with React it's a guessing game each time. We are at a point where web "developers" wouldn't be able to tell you what HTML is. With server-side rendering, from a maintenance perspective you have the luxury of using grep, and don't have to rely on aftermarket add-ons, plugins and IDEs in order to find and change the class of a span.

The term SPA first came to my attention when I was in university over 10 years ago. My immediate thought was "this is retarded". Over a decade later, my opinion hasn't changed.

your posting is tribalistic & cruel & demeaning; it attacks and attacks and attacks. this is so hard to grapple with, so aggressive & merciless & disrespectful. I beg you to reassess yourself. don't make people wade through such mudslinging. please. there are so few better ideas anywhere, & so much heaped up, so much muck you make us wade through. please don't keep doing this horrible negative thing. it's so unjust & so brutally harsh.

> jsx is a retarded idea because it adds an abstraction over something brutally simple(html).

what programming languages do it better?

one of react's greatest boons, its greatest innovations, in my mind, is that it gave up the cargo-cult special-purpose templating languages that we had for almost two decades assumed we needed. it brought the key sensibility of php to javascript: that there is no need, no gain, in treating html as something special. it should be dealt with in the language, in the code.

if you have other places that have done a good job of being ripe for building html directly, without intermediation, as you seem to be a proponent of, let me/us know. jsx seems to me far closer to what you purport to ask for than almost any other language that has come before! your words are a vexing contradiction.

> Ok routing library, now make an http request please without involving more dependencies....

please stop being TERRIFIED of code. many routing libraries, dependencies included, are tiny. stop panicking that there is code. react router v6, for example, is 2.9kB. why so afraid, bro?

this is actually why the web is good. because there are many many many problems, but they are decoupled, and a 2kB library builds a wonderful magical consistent & complete happy environment that proposes a good way of tackling the issues. you have to bring some architecture in, but anyone can invent that architecture, the web platform is un-opinionated ("principle of least power" x10,000,000), and the solutions tend towards tiny.

redux is 2kB with dependencies as well.

The only thing I'm openly disrespecting, beyond the unholy mess that is the web of the 21st century, is React and potentially its designers (I was raised not to be offended, but if they are, good riddance). The web is in the worst shape it's ever been. I'm not terrified of code; I've been writing code for over 20 years. I hate antipatterns and spaghetti code, which is what the modern web is, top to bottom, frameworks and libraries included. The main idea behind JavaScript was to provide small and basic interactivity which is entirely self-contained. Is it now? The fact that the modern web relies on tons of known and unknown projects, complex CI/CD pipelines and gigabytes of my hard drive is a clear indication that the JS community messed it up. Very similar situation to PHP, which was intended to be a templating engine (and for that purpose it is brilliant) but, just like JS, was turned into Frankenstein's monster (still to a lesser degree). And don't get me started on the endless security issues npm poses. I'm blown away by the fact that those are exploited so little - 15-year-old me would have been in heaven if given those opportunities, along with half of my classmates. I seriously wonder what teenagers do these days.

stop deleting flagged posts!! we will never ever improve if you keep deleting antithesis!!!!!! bad arguments are necessary!

> 100ms latency??!?!?!? In my world 100ms are centuries

Yup, that's crappy. The fact that the Work At A Startup page used to have this issue (may still, haven't looked lately) shows that it isn't hard to make happen accidentally.

As I said, it is a weakness of the system.

> jsx is a retarded idea because it adds an abstraction over something brutally simple(html)

Have you seen how minimal an abstraction JSX is? It is a simple rewrite to a JS function call that builds the markup, but JSX is super nice to write and more grep-able than the majority of other templating systems.

I have a predisposition to not liking templating systems, but JSX is the best part of React.

Notably, it doesn't invent its own control flow language, unlike most competitors in this space.
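To make "minimal" concrete, here is roughly what a JSX tag desugars to: a plain function call. This is a toy createElement; React's real one returns an element object with more bookkeeping:

```javascript
// Toy version of the call JSX compiles down to.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// <li className="item">Hello</li> compiles to roughly:
const el = createElement('li', { className: 'item' }, 'Hello');
```

Control flow stays plain JavaScript (map, ternaries), which is why it greps so well.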

> My immediate thought was "this is retarded".

Well, the most famous SPA is Gmail, and it's rather popular; you may have heard of it. It is bloated now, but when it first debuted it was really good. Webmail sucked, then suddenly it didn't.

Google Maps. Outlook web client. Pandora. Online chat rooms, in-browser video chat (now with cool positional sound!).

SPA just means you are fetching the minimum needed data from the server to fulfill the user's request, instead of refetching the entire DOM.

They are inherently an optimization.

Non-SPAs can be slow bloated messes as well, e.g. the Expedia site.
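In code, that optimization is just "fetch the datum, patch the node". The endpoint and element id here are made up:

```javascript
// Cap the badge like most inbox UIs do; pure and testable.
function formatBadge(count) {
  return count > 99 ? '99+' : String(count);
}

// Fetch only the changed value and patch one DOM node, instead of
// re-requesting and re-rendering the entire document.
async function refreshUnreadBadge() {
  const res = await fetch('/api/unread-count', {
    headers: { Accept: 'application/json' },
  });
  const { count } = await res.json();
  document.querySelector('#unread-badge').textContent = formatBadge(count);
}
```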

appreciated your previous post, but I am falling asleep here & none of your rebuttals feel like they address the topics raised in a genuine/direct fashion. so many stronger points to make against this

> jsx is a retarded idea

> My immediate thought was "this is retarded".

It's generally frowned upon to use retarded in this manner. Not only is it insulting to people, it brings down the overall tone of your argument.

npm does not compile, it is just a package manager. That said, I understand your frustration.

Just wanted to point out that it is called duct tape, to avoid misunderstandings, since I made a similar spelling error as a non-native speaker :)

Interestingly, waterproof fabric-based tape was originally called "duck tape" (for its waterproof quality). The same kind of tape was later also called duct tape, but it's actually pretty terrible for ducts. You want to use the all-aluminum tape for ducts. https://www.mentalfloss.com/article/52151/it-duck-tape-or-du...

Oh wow, so there actually is something called Duck Tape. TIL

This error is understandable as there is a popular brand of duct tape called Duck Tape.

...which is named after cotton-duck material tape once used for repairing ducts.


"Duck tape" originally referred to tape made from duck cloth. They started using it for duct work, and began to call it "duct tape", to the point where "duck" fell out of common use and was able to be trademarked. They've also stopped using it for ducts.

So call it either one and people will know what you're talking about.

Check out Gaff Tape as a replacement for the duct-tape at home use-case.

And for actual ducts you'll want to use foil-tape because temperature changes wreck the adhesion of duct-tape, then the moisture leaks into the walls/ceiling which is $$$$ bad.

> And for actual ducts you'll want to use foil-tape because temperature changes wreck the adhesion of duct-tape, then the moisture leaks into the walls/ceiling which is $$$$ bad.

This strongly depends on the type of duct. Flex ducts that are a plastic skin over a wire coil don't work so well with aluminum tape.

Oh, yes. I'm referring to rigid steel/tin works only.

It's funny how culture shapes our designations of the same thing. In (especially US) English it's "duct tape," as the product is primarily known for installation work, and it's often called "duck tape" after its well-known brand, whereas in (German-speaking) Europe we also employ an English word for it, but call it "gaffer tape," after its use by light operators in the event business (so-called gaffers).

Gaffers tape is a very different kind of tape similar only in appearance. It’s got a soft cotton content and is used to keep wires taped to things like rugs seamlessly so that people walking around your studio/tv-station/theatre etc don’t accidentally trip on it, potentially damaging the extremely expensive attached equipment in the process. If you’ve ever tried using “duct tape” as gaffers tape, you’d have a bad time, as it wouldn’t be that great at keeping wires down AND it’s likely to leave a residual adhesive on the floor when you take it off.

Duck tape is the stuff developed for the US Army to seal things closed and waterproof. Post-war, they made it silver instead of green and marketed it for use with ducts (since being waterproof made it also SEEM like a good candidate for the job in heating systems), but it's pretty terrible for this purpose since temperature changes degrade the adhesive rapidly.

The tape you actually want to use for ducts is foil-backed tape.

In short, it was and still is a great marketing gimmick, but Duck Tape was only ever "ok" at keeping things waterproof, and it only looks like gaffers tape or the tape you want to use on ducts.

I don’t think gaffer’s tape and duct tape are the same thing. Gaffer’s tape needs to be easily removable, which duct tape generally is not.

I'm a native speaker.

I use a lot of Duck Tape.

From the little language study I've done, English is one of the most flexible. You can discard entire parts of speech and it still works.

Saying "'Ey, you woke up yet?" is OK in many contexts.

Javascript as a language is actually pretty decent these days, your criticism probably applies more to certain parts of the ecosystem.

No, I'm talking about JS as a whole. The standard library is crap and inconsistent; even the most basic naming conventions are not followed anywhere. The fact that the standard library jumps between camel case, Pascal case, snake case, and unicase at random is a perfect example. The list of absurdities is beyond ridiculous[1].

[1] https://github.com/denysdovhan/wtfjs

Who in this thread was saying what's old is new? 100% of what you said has also been lobbed at PHP recently, and I've heard similar complaints about Perl and MS-SQL (that I remember well), and I'm sure others (one of those Delphi products too).

What's been strange to me, though, is I've heard JS advocates lobbing those criticisms at PHP, making the case for, say, why "Node is awesome, PHP sucks." Conflating a framework with a language, then pointing out PHP "issues" that also exist in JS... there's generally little point in trying to engage/correct at that point (context: primarily conference-hallway conversations and meetup groups, back when those actually happened).

JS is the new PHP. Part of the problem with massive popularity is that it also attracts lower-ability devs, and the ecosystem slowly degrades because of this. This cascades.

What standard library?

Surely your point could be made better without the hyperbole?

"Most outrageous and absurd language ever," "Megabytes of javascript", "corporate-grade bandwidth", "8th-gen i7 and 8GB of memory" to open "3 images and a contact form."

I'm sure you can find one or two poorly-optimized sites that have 2MB of javascript to download, but it's by no means the necessary outcome of using "ridiculous libraries and frameworks," and not even a particularly common one.

> it's by no means the necessary outcome of using "ridiculous libraries and frameworks," and not even a particularly common one

The real world disagrees with you; go check out any major website and observe as your laptop's fans spin up.

However, I think the main problem here isn't the symptom (websites are bloated) but the root cause. I'm not sure if it's resume-driven development by front-end developers or whether they genuinely lost the skill of pure CSS & HTML, but everyone seems to be pushing for React or some kind of SPA framework even when the entire website only needs a handful of pages with no dynamic content.

> one or two poorly-optimized sites

Try every old media site and most e-commerce.

Oh yeah, like the "new" Reddit. It makes even the best machines cry.

AjaxContentPanels, or something to that effect. Those things were a nightmare. At the time, ASP.NET pretended to be stateful by bundling up the entire state of the page into "ViewState" and passing it back and forth between client and server. Getting that to work with those panels was more work than just AJAX-ing the content and injecting it with jQuery.

In the Microsoft-verse, this might also draw some comparisons to the more modern server-side blazor.

<UpdatePanel /> see here: https://docs.microsoft.com/en-us/dotnet/api/system.web.ui.up...

I used it 13 years ago. It was fancy.

Oh yeah. I remember that ViewState could reach 100s of KBs on a page if you weren't careful. It was a huge juggling act between keeping state in your input fields vs ViewState.

Glad I'm not the only one seeing that parallel. I'd be hesitant to use this for that reason, but maybe that's bias on my part? It just seems like you'd get stuck in a similar mess of "special" UpdatePanels, aka Hotwire frames, that are trying to "save you from having to write JavaScript." Except it still uses JavaScript under the covers, so you still have whatever issues that may entail; only now they're further removed from the developer's ability to solve.

Interesting bit of history (and yes, I see the /s): XMLHttpRequest was actually invented by Microsoft for Internet Explorer because the Outlook team needed better responsiveness for the web email client.

Is this that web framework from Microsoft that hid the transaction-orientedness of HTTP from you by letting you set server-side click listeners on buttons and generated all the code needed to glue it all together? At the time, I didn't feel good about it because it abstracted away too much, and required Windows on the server. Little did I know about all the ways people would start abusing JS in 10 years.

jQuery also had a function for doing swaps. Then there's Rails Turbolinks, and now Phoenix LiveView in Elixir.
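The jQuery version of the swap is `$("#panel").load("/fragment")`. A dependency-free sketch of the same idea is below; `fetchFn` is injected so the sketch runs outside a browser, and `container` is anything with an `innerHTML` property (a real DOM element in practice):

```javascript
// Sketch of the jQuery .load() pattern: fetch an HTML fragment and
// swap it into a container. In a browser you'd pass the real fetch
// and a DOM element; the injectable fetchFn is for illustration only.
async function loadInto(container, url, fetchFn) {
  const res = await fetchFn(url);
  if (!res.ok) throw new Error(`fragment fetch failed: ${res.status}`);
  container.innerHTML = await res.text();
  return container;
}
```

Note the explicit `res.ok` check: the whole pattern lives or dies on deciding what to do when the fragment never arrives.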

I don't know if it's so much 'everything old is new again' as it is a problem of market penetration.

good ol' ASP.NET 2.0

Yep, was using this approach in Java server-side frameworks 12 years ago.

webforms is dead, long live webforms.

We need something like XMLHttpRequest right now. Google should integrate Dart into Chrome.

Okay this is a bit meta, but the whole cluster of "everything old is new again", "the pendulum of fashion has swung", "nothing new under the sun" takes is ignoring what tends to drive this sort of change: relative costs.

The allure of XMLHttpRequest was that, over connections much slower than today's and with much less powerful desktop computers, a user didn't have to wait for the whole page to redownload and re-render after every single interaction (one can argue that focusing on better HTTP caching on the server and client might have been smarter). This was also much of the draw of frames (which were also attractive for some front-end design use cases later re-solved with CSS).

As apps got more complex, clients gained compute, bandwidth grew, and web audiences expanded, offloading much of the page rendering to the client helped both contain server-side costs and increase or maintain responsiveness to user interactions.

Now, desktop client performance improvement is slowing (this isn't just slower chips; computers are also replaced less frequently), average bandwidth continues to grow, and app complexity and sophistication keep rising. But as server compute cost falls faster than audience size grows, shifting HTML rendering back to the server and sending more verbose pre-rendered HTML fragments over the wire can make sense as a way of giving users a better experience.

> The allure of xmlhttprequest was that over connections much slower than today

As someone who implemented a SPA framework prior to "SPA" being a word much less React or Angular, I have to say for my company, it was all about state management.

Distinguishing between web apps (true applications in the browser), and web pages (NYT, SEO, generally static content), state management was very hellish at the time (~2009).

Before that, pages were entirely server rendered, and JavaScript was so terrible thanks to IE and a single error message (null is null or not an object) that it was deemed insanity to use it for anything more than form validation.

However, with the advent of V8, it became apparent as an ASP.NET developer that a bad language executing at JIT speeds on the browser was "good enough" to not send state back and forth through a very complex mesh of cookies, querystring parameters, server-side sessions, and form submissions.

If state could be kept in one place, that more than justified shifting all the logic to the client for complex apps.

I don't know about you, but "cookies, querystring parameters, server-side sessions, and form submissions" to me are an order of magnitude simpler, though dated and not very flexible, than any modern JS client-side state and persistency layer.

Form submissions are brutally bad the moment a back button comes in. I remember so many "The client used the back button in a multi-page form and part of the form disappeared for them" bugs.

They're not if coded correctly and using the correct redirect/HTTP response code.

Yeah. I remember dozens of edge cases that involved errors and back buttons and flash scope and on at least one occasion, jquery plugins.

Or back buttons and CSRF tokens and flash scope...

Or, let's talk about a common use case. Someone starts filling in a form, and then they need to look at another page to get more information. (This other page may take too long to load and isn't worth putting in the workflow, or it was cut from scope to place the information twice.) So, they go out to another page, then back, and are flustered because they were part way through the work.

So, if you want this to work, you're going to need state management in the client anyway. (Usually using SessionStorage these days, I'd presume?) So, then, we've already done part of the work for state management. You are then playing the "which is right, the server or the client" game.
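A sketch of the sessionStorage approach hinted at above. The function names are invented, and `storage` is anything with getItem/setItem (sessionStorage in a browser); it's injected here so the sketch also runs outside a DOM:

```javascript
// Preserve a half-filled form across navigation. `storage` is any
// getItem/setItem store — sessionStorage in the browser; injected
// here so the sketch is testable outside a DOM. Names are ours.
function saveDraft(storage, key, fields) {
  storage.setItem(key, JSON.stringify(fields));
}

function restoreDraft(storage, key) {
  const raw = storage.getItem(key);
  return raw == null ? null : JSON.parse(raw);
}
```

On every input change you'd call `saveDraft`, and on page load `restoreDraft`; the "which is right, the server or the client" question is exactly about when this local copy wins.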

You accumulate enough edge cases and UX tweaks, and you're half way down the SPA requirements anyway.

Now, hopefully Hotwire will solve a large number of these problems. I'm going to play with it, but the SPA approaches have solved so many of the edge cases via code and patterns.

> but the SPA approaches have solved so many of the edge cases via code and patterns.

Part of the problem has also been ameliorated by larger screens and browser tabs.

Yes. You redirect after POST.
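That's the Post/Redirect/Get pattern. A framework-agnostic sketch, where the handler shape, `save` callback, and route are all invented for illustration:

```javascript
// Post/Redirect/Get: answer a successful POST with 303 See Other so
// refresh and the back button replay a harmless GET, never the POST.
// `save` persists the form and returns the new record's id (ours).
function handleFormPost(form, save) {
  const id = save(form);
  return { status: 303, headers: { Location: `/records/${id}` } };
}
```

The 303 status is the key detail: unlike 301/302, it guarantees the follow-up request is a GET, which is what kills the "resubmit form?" back-button bugs.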

See: mobx

> If state could be kept in one place, that more than justified shifting all the logic to the client for complex apps.

Reminds me of saving Rich Text content on the server side. It was a nightmare.

Also reminds me of the Microsoft RTF format. It's basically a memory dump of the GUI editor.

Binding state onto a tree was never a good idea to start with.

I don't think app complexity and sophistication grew that much. Most of the problems common apps solve can be handled with standard CRUD-like interfaces and old, boring tech, and it works just fine.

I think what drives this crazy train of overengineered solutions, of SPAs and K8s for hosting a single static page, is the deep separation of engineers from the actual business problems and the people they are trying to help. When all you have are tickets in Jira or Trello, and you don't know why you should do them or whether they actually benefit someone, it's natural to invent non-existent tech problems which are suddenly interesting to solve. That is natural for curious engineers and builders. Then mix in the 1% of big apps and companies which actually do have these tech problems and have to solve them, and everybody wants to be like them and starts cargo-culting.

Curious what you make of this:

I recently wrote a SPA (in React) that, in my opinion, would have been better suited as a server-side rendered site with a little vanilla js sprinkled on top. In terms of both performance and development effort.

The reason? The other part of the product is an app, which is written in React Native, so this kept a similar tech stack. The server component is node, for the same reason. And the app is React Native in order to be cross-platform. We have ended up sharing very little code between the two, but using the same tech everywhere has been nice, in a small org where everyone does everything.

Agree. The transformation of software work into ever-smaller and more specialised "ticketing" is as much a result of managerialism encroaching. Basecamp have this bit in their Shape Up handbook about responsibility and how it affects their approach to software:


Making teams responsible

Third, we give full responsibility to a small integrated team of designers and programmers. They define their own tasks, make adjustments to the scope, and work together to build vertical slices of the product one at a time. This is completely different from other methodologies, where managers chop up the work and programmers act like ticket-takers.

Together, these concepts form a virtuous circle. When teams are more autonomous, senior people can spend less time managing them. With less time spent on management, senior people can shape up better projects. When projects are better shaped, teams have clearer boundaries and so can work more autonomously.


You've just explained the adoption of microservices in the enterprise.

I wonder if other industries suffer from the same problem, bored engineers.
