throwaway284534's comments | Hacker News

As a web developer, I’d like to think that we’re effectively alchemists who transmute vague ideas into products held together with absurd magic that’s constantly changing.

Can we get a bill going? I can’t decide between “Webmancer” and “www.izard.com”


"It is well known that stone can think, because the whole of electronics is based on that fact"

- Terry Pratchett


I just don’t buy that this is a productive way to build websites. Having the functionality of HTMX natively supported would be nice, but you’d still need much of what React does. HTMX’s docs seem to hand-wave away front-end state management as something that no longer applies. At the same time, they assume that every API you interact with will return HTML partials.

What could convince anyone to abandon the rich and bountiful lands of JSX and TypeScript? Who would prefer to move into a write-only and stringly typed HTML that competes with PHP for the slot of least performant debugging experience?

Maybe the answer is in the question…


> you’d still need much of what React does

> HTMX’s docs seem to hand wave away front-end state management as something that no longer applies.

I feel like you're begging the question that you need a front-end client and client-side state. I have ASP.NET apps still running from 10 years ago. They're fine. I'm adding HTMX to remove the page reloads. Why do I _need_ anything else?

(edited HTML => HTMX)


Define "need".

I've been consulting for a large org where someone decided that every web app needs to use a consistent pattern, and they mandated Angular. This could have been React, Vue, whatever, the point is that they picked a client-side JavaScript framework for all of their web apps.

Turns out that they weren't actually making web "apps". They were making web sites with mostly read-only content and a handful of web forms.

Traditional server-side templating, like Razor pages, is a well-established method for handling this. Something like HTMX adds the tiny bit of client-side interactivity that is actually required. Nothing else is needed.
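As a sketch of what that looks like (the endpoint and IDs here are hypothetical), the interactivity lives in plain HTML attributes:

```html
<!-- Hypothetical example: clicking the button fetches an HTML partial
     from the server and swaps it into #results, with no page reload. -->
<button hx-get="/contacts/search?q=smith"
        hx-target="#results"
        hx-swap="innerHTML">
  Search
</button>
<div id="results"><!-- server-rendered partial lands here --></div>
```

The server responds with a fragment of HTML rather than JSON, and htmx swaps it into the target element.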

The article talks about reducing code size by two-thirds, and you just handwave that away!?

That's the exact same thing I've been telling my customer! They're literally bloating out their codebase three times over (3x!) by using JavaScript client-side rendering instead of plain, ordinary, boring, and simple server-side rendering like they should have.

For every single thing that they do, they need a C# bit of code and a TypeScript bit of code, and a whole API thing to wire the two up. They are forced to use distributed monitoring spread across the browser (untrusted!) and the server! Deployments have to factor in client-side cache expiry! And on, and on, and on.

I did a demo for them recently where I rewrote several thousand lines of code with 50. Not fifty thousand. Fifty.

"Thousands of lines? This is fine" -- says the developer on the hourly contractor rate.


The productivity gain isn't in building websites but in maintaining them. Suddenly you don't need to scale an insanely complex bundling and caching setup with every feature you add, or deal with the JS upgrade churn.

In my own experience, the more developers you have, the worse React SPAs scale. At my current company it's even visible in the bundle-size graph over time.


people who want 66% less code, 50% faster load times & 50% less memory use?

https://htmx.org/essays/a-real-world-react-to-htmx-port/

(Of course, it depends: https://htmx.org/essays/when-to-use-hypermedia/ but, if we are going to speak in generalizations…)


Respectfully, those metrics are not proxies for productivity. They don’t seem to be grounded in a statistical model either:

>They reduced the code base size by 67% (21,500 LOC to 7200 LOC)

> They increased python code by 140% (500 LOC to 1200 LOC), a good thing if you prefer python to JS

Literally what? So they rewrote their app, which was most definitely in a state that warranted a refactor, and then concluded that the problems must’ve been the limits of React. Oh, and they rewrote the back-end too, all while singing the virtues of a library that claims a lower technical investment.

Believe me, I’ve got plenty of gripes with React. It’s very easy to build the wrong things with it. And the ecosystem is an overgrown mess. But I’d still prefer a problem of technical curation over debugging a library which marries HTML and server-side templates with an untyped DOM runtime.


¯\_(ツ)_/¯

there's always going to be an excuse if you want there to be

this is a real-world situation (warts and all) where someone took a whole app that had taken two-plus years to build and had stalled, and rewrote it with htmx in two months from a cold start, with no falloff in UX. they simplified the codebase tremendously, improved performance in both dev & prod, and it flipped their entire team to full stack, eliminating a flow bottleneck in their development process

i try to be balanced about things, outlining when hypermedia is a good choice (it was in this case) and when it isn't, but c'mon... if a more conventional reactive library showed this sort of improvement you'd be interested in learning more.

So, maybe it's worth a more serious look, despite your priors? The ideas are interesting, at least:

https://hypermedia.systems


Comparing percentages can be misleading. 8,400 total LOC vs. 22,000 total LOC is an incredible win.
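The essay's percentages and the absolute totals can be reconciled with a little arithmetic (all figures taken from the essay itself):

```javascript
// Figures from the htmx essay: JS went 21,500 -> 7,200 LOC,
// Python went 500 -> 1,200 LOC.
const reduction = (before, after) => (before - after) / before;

const jsDrop = reduction(21500, 7200);                 // the "67%" claim
const pyGrowth = (1200 - 500) / 500;                   // the "140%" claim
const totalDrop = reduction(21500 + 500, 7200 + 1200); // overall change

console.log(Math.round(jsDrop * 100));    // 67
console.log(Math.round(pyGrowth * 100));  // 140
console.log(Math.round(totalDrop * 100)); // 62
```

So even counting the extra Python, the whole codebase shrank by roughly 62%.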


> hand wave away front-end state management as something that no longer applies

Does client-side state often need to exist independently of server-side state? I’m having trouble imagining a shopping cart or email draft being optimal UX-wise without the ability to resume on a different device.

For things like dropdowns and modals, you can bring in _hyperscript, Bootstrap, Alpine, or even CSS hacks (my preferred approach).
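For the CSS-free route, the native `<details>` element already gives you a dependency-free disclosure widget (styling omitted):

```html
<!-- No JavaScript at all: the browser toggles the open state itself. -->
<details>
  <summary>Menu</summary>
  <ul>
    <li><a href="/profile">Profile</a></li>
    <li><a href="/settings">Settings</a></li>
  </ul>
</details>
```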

> the rich and bountiful lands of JSX and TypeScript

One person’s richness is another person’s needless complexity.

JSX is cool when you first try it, but the novelty wears off (at least for me it did). There are superior templating languages (Django, Jinja, EEx, erb) that don’t require bizarre syntax such as nested ternaries, and they make it feel like you’re just using a slightly-enhanced superset of HTML (not to mention being able to use them to render things other than HTML).

As for TypeScript, since its checks are stripped out before runtime, you’ll still need to validate and test the assumptions your typed code makes. Frankly, TS seems like busywork to me.
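To illustrate the erasure point: whatever the compile-time types say, data crossing a boundary (an API response, say) still needs a runtime check. A minimal hand-rolled guard might look like this (the `User` shape is hypothetical):

```javascript
// TypeScript types vanish at build time, so a JSON payload that claims
// to be a User must still be validated by hand (or with a runtime
// validation library) before it is trusted.
function isUser(value) {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof value.id === "number" &&
    typeof value.name === "string"
  );
}

const payload = JSON.parse('{"id": 1, "name": "Ada"}');
console.log(isUser(payload));          // true
console.log(isUser({ id: "oops" }));   // false
```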

Finally, Progressive Enhancement is a thing with htmx. You might be able to have it with React, but then you introduce even more complexity into the build system.


Look up a stenographer’s keyboard. There is a learning curve, but a chorded keyboard can exceed typical typing speeds. I imagine T9 isn’t too different in this regard.


I use one. I don't think it would be a good substitute for this use case. You can try to do steno on your phone with Dotterel, but it's not a good experience - you're better off using a swiping keyboard. I've never used a T9 system in my life, but I imagine it's a system that lets you input anything while typing with just your thumbs. To have a good time doing steno, you have to exercise all the fingers on both hands. That's not quite so nice on a phone.


This is very cool. I suspect the author encountered that 12-pixel offset due to the default value of the canvas’s `textBaseline` property. Setting it to `top` may resolve the issue, as may invoking the `measureText` method and calculating the offset from its output. A fixed value for a monospace font is pretty good too!
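A sketch of both fixes (the `ctx` object here would be a real `CanvasRenderingContext2D` in the browser):

```javascript
// Fix 1: anchor glyphs to the top of the em box instead of the default
// "alphabetic" baseline, which draws text above the y you pass in.
function drawTextAtTop(ctx, text, x, y) {
  ctx.textBaseline = "top";
  ctx.fillText(text, x, y);
}

// Fix 2: keep the default baseline but measure the ascent and add it
// to y yourself. measureText() reports it as actualBoundingBoxAscent.
function baselineOffset(ctx, text) {
  return ctx.measureText(text).actualBoundingBoxAscent;
}
```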

Shameless plug, I’ve actually built the opposite of what the author has described. Asciify[1] is my very own highly efficient and over-engineered tool to generate animated text art. It started as an excuse to learn more about browser performance and just expanded out from there. I would love it if a greater mind could squeeze another 5 or 10 FPS on the spiral demo[2]. Maybe it’s time to brush up on those WebGL docs again…

- [1] https://asciify.sister.software/

- [2] https://asciify.sister.software/demo/spiral/


6to5…err, I mean Babel, already accomplished its mission to bridge the feature gap between older browser implementations. And like all bureaucratic melanomas, the maintainers made a strange decision to expand their domain not only to ES7, but to ALL FUTURE VERSIONS OF JAVASCRIPT FOREVER.

Babel became Webpackified and splintered into poorly understood preset bundles of the latest revelations of the TC39. A fractal of API documentation could then be written and rewritten again for the next mission: Newer is better. Modularize everything. Maintenance is a virtue.

I’m guessing that the brain trust at Babel HQ saw how the left-pad situation panned out and something clicked — we could turn our discrete task into an indefinitely lucrative operation as a rent seeking dependency for everyone. Every week could be infrastructure week so long as JavaScript kept adding features.

But what their hubris didn’t factor in was a petard hoisting much higher on the food chain — the Chromification of the web. Now that everyone who’s anyone is building a browser on the same engine, there’s no need for a second cabal of feature creatures to get a cut of the action.

It’s the same reason Firefox’s Wikipedia page has to be disambiguated with the term “cuckold”; the same reason core-js can’t ask for a dime without macro fiscal policy being invoked by armchair techno-economists. Why are you running out of money? Simple — we already paid for it!

These projects have transmuted one kind of technical debt into another, and the sooner they’re gone, the better we’ll all be in their absence. I would pray for a cosmic force to come and topple Babel back to earth, but the irony would be lost on them.


I downvoted you because you made multiple bad faith accusations about people involved in these projects. Regardless of Babel's and Firefox's utility your negative snark isn't helping anyone.


I do appreciate your transparency, though I disagree with the sentiment that I’m arguing from a position of bad faith.

The Babel team has not shown a moment of interest in lowering their role in the JavaScript ecosystem to anything short of kingmakers. I think the facts are self-evident, but I can easily back up my claims by citing pretty much any document the team has ever produced. Have a gander at their GitHub README and what do we see?[1]

- “Babel is a compiler for writing next generation JavaScript.” I suppose they left out “indefinitely” to avoid the obvious. Don’t forget, you’re here forever.

- Over a dozen sponsor logos. An embarrassment of riches.

- A literal audio recording of a song in praise of the project. The call is coming from inside the house, people!

The Babel team has a well documented history of their priorities[2], emphasizing the need for a modular approach that has no exit strategy[3]. At best, we have a case of accidental entrenchment and long term dependence on Babel brewing as early as 2017![4] At worst, we have a group of aspiring Carmack-wannabes looking for their big break into the incestuous and lucrative class of technorati standards committees.

Don’t believe me? It doesn’t take an inner-join on the TC39 roster and the Babel maintainers to see our own version of regulatory capture forming right before our eyes.

Compare this infinite circus to the humble but popular Normalize.css, whose express purpose is to stop existing.[5]

If the Babel team wants to raise some money, they can start by putting a plan together that would codify an exit strategy. It’s certainly more noble than their current plan of barnacling onto every NPM package…

- [1] https://github.com/babel/babel

- [2] https://github.com/babel/notes

- [3] https://github.com/babel/notes/blob/master/2016/2016-07/july...

- [4] https://github.com/babel/notes/blob/master/2017/2017-04/apri...

- [5] https://nicolasgallagher.com/about-normalize-css/


How would Babel stop existing though if JavaScript keeps evolving?

Is the goal that we all are using evergreen browsers and versions of Node and thus have no need to support older runtimes?


That is a wonderful question, and exactly the sort of thing that should be on the Babel website. You’ll find no such explanation there, nor even a summary of the trade-offs that come with adding Babel to your app.

It’s assumed that if you want to support older browsers, the next logical step is to add Babel…forever. An incredible trick happens here, where the developer thinks they’ve added the magic package, which only bears a “tax” on the poor sap stuck on Internet Explorer, presumably running eye-watering amounts of polyfills within a 32-bit limit of RAM.

In my opinion, the Babel team should start looking for a strategy that aligns with a world of evergreen browsers, and untangle the web of feature polyfills from syntax transformations.

It’s also not too wild to think that Babel is a symptom of a larger problem: JavaScript lacks a versioning mechanism for newly added features. A more self-aware Babel could use its connections with TC39 to do what all successful JavaScript libraries do: become part of the standard, a la jQuery and CoffeeScript.

Alternatively, reconsider the velocity that Babel introduces to the JavaScript ecosystem. These tools might be perpetuating their own existence by making new features so readily accessible.


Well, that is the ultimate goal for some people.

With evergreen browsers it's not only about features, but also about security. If you can't update your browser to have arrow functions, you might have security issues. So it is in everyone's best interest that old browsers have yet another reason to be updated.

Also it can be argued that Babel gave IE11 a huge afterlife. IE11 support should have been dropped by the javascript community much sooner, and IE11 should have been used only for legacy apps, as Microsoft tried to. But tools like Babel made it possible for managers to say "c'mon just use Babel".

Also, while it is convenient to have Javascript features before they're available in Browsers, in practice the wait time is not as long as it was. And having a tool removes pressure (including internal pressure) for browsers companies to be fast.

And also: projects using Babel normally have to pull in hundreds of Babel-related packages. The biggest complaints you see here on Hacker News about the JavaScript ecosystem center on the massive number of packages. Well, guess what: Babel by default on create-react-app needs 133 packages.


The gist of the comment is that scope creep is expensive & mutates the original mission of the organization. Organizations tend to self-perpetuate via scope creep.


IMO, ESBuild is the best option these days. It’s not as magic or batteries-included as Webpack, but very little is kept secret from you during the compilation process. It’s fast too!

Another tricky alternative is to just use TypeScript’s compiler. Combined with the new import maps spec, you can target most modern browsers and skip bundling altogether.
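A sketch of the import-map approach (the paths here are hypothetical): `tsc` emits plain ES modules, and the browser resolves bare specifiers itself, so no bundler is involved.

```html
<!-- The import map tells the browser where bare specifiers like
     "lodash-es" actually live on your server. -->
<script type="importmap">
{
  "imports": {
    "lodash-es": "/vendor/lodash-es/lodash.js"
  }
}
</script>
<script type="module" src="/dist/app.js"></script>
```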


I'd actually recommend Vite over Esbuild directly. It uses Esbuild under the hood, at least for production builds, but during development it uses the native import syntax with some optimisations to bundle dependencies together somewhat. This gives you a really quick development build, and then a well-optimised but still pretty quick production build.

But I think the real benefit is that it's much easier to get right than Webpack ever was. You don't need to start by configuring a thousand different plugins, rather, you just write an index.html file, reference the root CSS and JS file from there, and then it can figure out the rest, including all the optimisation/minification details, applying Babel, autoprefixing, using browserslist, etc. If it doesn't recognise a file (e.g. because you're importing a .vue or .svelte file) then it'll recommend the right plugin to parse that syntax, and then it's just a case of adding the plugin to a config file and it'll work.
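To make that concrete, a minimal sketch of a Vite entry point (file names hypothetical): index.html sits at the project root, and Vite follows the asset references from there.

```html
<!-- index.html is the build entry; Vite discovers the CSS and JS from
     these references and handles bundling/minification itself. -->
<!DOCTYPE html>
<html>
  <head>
    <link rel="stylesheet" href="/src/style.css" />
  </head>
  <body>
    <div id="app"></div>
    <script type="module" src="/src/main.js"></script>
  </body>
</html>
```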

I'm a big fan of Parcel, which is a very similar tool for zero-configuration builds, but Vite feels significantly more polished.


I agree - I love esbuild, but Vite is great and will generally give you what you want and more with minimal hassle. The development server and hot reloading are excellent.

I did recently find one thing that didn’t work out of the box in Vite, though. I needed to write a web worker, but Vite didn’t package it into a single .js file, so I had to call esbuild directly to do that.


Safari supports import maps in its Technology Preview: https://caniuse.com/import-maps


This is really incredible work. If you’ve ever tried some of the more esoteric SVG features like filters and animations, you’re bound to have a horror story or three to share. In almost every case, Safari’s SVG performance lags behind Chromium’s offerings.

Last year I was designing an animated cloudy sky with a sunset. I created a reusable pattern for each cloud type, and added groups for different speeds of movement to give the scene a parallax effect as they scrolled from one end to the other. This proved to be borderline impossible to animate smoothly when more than a dozen cloud paths were in motion. The only fix was to instead move the path translation code to CSS. An instant jump in performance.

The next issue was simulating how sunlight would move across the surface of the cloud patterns. Every single attempt to use an SVG filter or light source would have devastating effects on the frame rate. In my experience, the most powerful primitives of SVGs are not suitable for any task that combines their powers, and as far as I can tell, SVG lighting currently has no use beyond a proof of concept.

I ended up using CSS to get something close to a smooth frame rate. Ironically, reaching for something like Three.js might’ve saved me a lot of headaches. It’s funny to think that a 3rd-party runtime library for WebGL would have better performance than a universally understood DSL for vector graphics.

I’m sure the web is full of low-hanging fruit like this. The hard part is figuring out how we can reach through the thorns of backwards compatibility.


This is honestly very cool. VS Code uses a similar approach for its local file system provider, albeit with a wrapper around IndexedDB instead of SQLite. There are some interesting trade-offs too, since IndexedDB can store the browser’s native file handles in a flat map — so there’s no need for a schema.

IMO, the Chrome team is being a bit deceptive with their phrasing on synchronous file handles. The problem is that the entire API is wrapped up in an asynchronous ceremony. `createSyncAccessHandle` is only available in a worker context, so you can only communicate with the worker through an asynchronous `postMessage` event dispatcher. And even when you’re in the worker, file handles can only be obtained through methods that return a promise.
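To make the ceremony concrete, here's a sketch of the worker side (browser-only; the synchronous part is factored into its own helper so it's visible):

```javascript
// Browser/worker-only: opening a sync handle requires several awaits
// before any synchronous I/O is possible.
async function openSyncHandle(fileName) {
  const root = await navigator.storage.getDirectory();              // await #1
  const file = await root.getFileHandle(fileName, { create: true }); // await #2
  return file.createSyncAccessHandle();                              // await #3
}

// Once you finally hold the handle, reads and writes are synchronous.
function overwrite(handle, bytes) {
  handle.truncate(0);
  const written = handle.write(bytes, { at: 0 });
  handle.flush();
  return written;
}
```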

I understand the need for such boundaries when working with a single threaded language, but limiting the synchronous APIs to just workers seems like one too many layers of indirection. I recently attempted to write a POSIX-style BusyBox library and this sort of thing was a total show stopper.


>limiting the synchronous APIs to just workers seems like one too many layers of indirection

An unresponsive script is slowing this window down - kill process or wait?


The file handle is already tucked inside an asynchronous Promise-based API. I think it’s reasonable to at least make it possible to obtain a handle synchronously. Whether to support synchronous read/write access is another matter.

I would be delighted if handles were more akin to byte arrays that could be written and read synchronously, albeit with an asynchronous function to persist changes to disk.


This was a great article, though I felt it was incomplete without the inclusion of the infamous “guy taped to ceiling” photo.

A picture’s worth a thousand ping…

https://thenerdstash.com/lan-party/


I read the title and thought the article was going to be about that exact photo :)


Ironically, that’s how most email correspondence worked when I lived in Japan. Nearly every message had a paragraph of fluff before getting to the unpleasant details:

Dear Customer,

It seems that the leaves are once again turning to their Fall colors, and the chill of an autumn breeze is once again upon us…

Also we haven’t received your television license fee yet and it would be most appreciated if you could please send us it immediately.

Warmest regards,

-X


That is a notably pleasant way to be taxed, though. I would appreciate the IRS more if their communications were like that.


Personally, I'd rather someone tell me directly what I did wrong and what they want than coat it in fluff, which I consider offensively passive-aggressive. I know that's cultural though, and that in some cultures, e.g. Japanese, it may be taken very offensively to just come out and say what you want.


Yeah, they're set phrases that go on letters. I have a book of them, you have to look up the right one for the situation / time of year and add it to your letter. It would be a faux pas not to.

As an American, yeah, just charge my credit card for the fee. Thanks.


That "some cultures" tends to be known as "high context cultures".

https://sites.psu.edu/global/2020/04/18/japan-high-context-c...

> Just like Saudi Arabia and Spain, Japan is also characterized by high-context communication (R. T. Moran; N. R. Abramson; S. V. Moran, 2014, p. 44). Some of Japan’s traditions, values and norms have supported its high context communication. According to Hofstede’s culture dimension, Japan scores 46 on individualism, indicating that they are more likely to show characteristics of a collectivistic society; such as putting harmony of the group above the expression of individual opinions and people have a strong sense of shame for losing face (Hofstede Insights, n.d.). With this, the Japanese have established an in-direct and non-verbal communication within their inner circle rather than the outside circle of the world. Thus, in Japan, communication goes non-verbally, through subtle gestures, facial expression and voice tones. However, this can be a big challenge for foreigners and westerners that do not understand the Japanese language and communication.

https://kosoadojapan.com/high-context-culture-japan

https://en.wikipedia.org/wiki/High-context_and_low-context_c...

You even get some differences in cultural context between men and women, urban and rural, and north and south within the United States.

For an example of a low context culture... Switzerland https://www.worldbusinessculture.com/country-profiles/switze...

> On the whole, the Swiss believe in plain speaking and place directness before diplomacy. It is expected and respected that people will speak their minds, without feeling the need to couch any uncomfortable messages in a softer way in order to spare the feelings of the audience. The type of coded language used by the Japanese or the British can be misconstrued in Switzerland as prevarication or even deviousness. Better to say what you mean and mean what you say.

> As has already been stated, however, this directness of approach should not be confused with confrontation or aggression – it is more the result of a desire to get to the truth or the empirically provable right answer.


There's a nice bit in Forster's A Passage to India in which one of the Indian characters reflects on how ill-mannered another character is for taking a polite excuse (which also happens to be a lie) as a problem to solve and not as the firm "no" that any properly-raised person would understand it to be.


100% this. The IRS doesn't provide enough information in their communications, which are already painfully verbose.


I remember getting some tax notification, and attached was some 2 page doc indicating "we've spent a lot of time working on making our documents more understandable, let us know how we're doing"... and... the notice they'd sent me was... more confusing than it needed to be. My accountant didn't quite understand it. I mean, he knew what it was, but hadn't seen the new language, and to top it off, it was months late - indicating I owed money that I'd paid months earlier.

We replied the following Monday, because the notice said we had to reply.

THEN.. 3 months later I got another notice indicating they'd received the first reply, and they needed a bit more time to process.

This was over about $200.

I would love to see them resourced appropriately, but "let's hire more IRS employees" has been viciously attacked as "87,000 more people with guns coming to take all your money!". I've been hearing that propaganda for weeks (months?) now.


> I would love to see them resourced appropriately, but "let's hire more IRS employees" has been viciously attacked as "87,000 more people with guns coming to take all your money!". I've been hearing that propaganda for weeks (months?) now.

Yeah, they've got way fewer people per taxpayer than in the 90s, and I don't think the new hires, the hiring of which will be spread unevenly over a decade, will even bring them back up to that level. Meanwhile the "armed" thing is just transparent bullshit—"here was ONE job posting for the police branch of the IRS (tons of federal agencies have such a branch of armed agents, including many you wouldn't expect), so all these new hires will surely be armed IRS cops coming to bust your door down and take your money!" LOL WUT. But A Certain Set of Terrible News Sources ran with that (knowing it was a lie) so to some chunk of the population, it's true now.


Tldr below.

You're both perfectly fine in wanting it your own way. To communicate with people effectively, it's best to align with their communication style. Some folks want the long explanation, some folks want the tl;dr. I always try to accommodate both in my work communications.

Tldr: Different people like to be spoken to differently, and it's fine and useful to accommodate that.

