I miss hand-writing webpages in Notepad and then FTP'ing them to the public_html directory on the server to make them live. The amount of infrastructure, libraries, build tools, pseudo-languages, etc layered on top of each other some developers/orgs use to make a simple form today is crazy to me. It also feels super fragile: deploy a website today, come back to it in 5 years to make some changes, and half the tools and libraries you need will be gone.
I spent my last six months with my previous employer immersed in the full Next, React, and Prisma ecosystem. I swear that 80% of developers' time was wasted fighting compatibility issues. Actual work happened in the small crevices of time left between an endless stream of build breakages and chasing random bugs caused by changes in transitive dependencies.
I feel like es modules have killed javascript. They have enough momentum that everything supports them (or needs to support them). But they’re not good enough, or compatible enough to actually integrate well.
Recent example: bundler X insists on import statements including file extensions. But the typescript compiler refuses to let you import foo.js and won’t rewrite an import of foo.ts to foo.js when you build. There’s giant GitHub issues about this but nobody wants to fix it on their end. In short, everything is awful.
Writing javascript in the browser used to be just the same as nodejs, with the exception that you needed a bundler like browserify - which mostly just concatenated everything together. Now I dread setting up new projects. Will typescript work with svelte? Will this webassembly module work with rollupjs? Will it pull in my type definitions correctly? Will the url path system magically work or break? Urgh.
I feel like I’m no longer competent enough to get arbitrary tools working properly together. We need bespoke build systems now because it seems like they’re the only thing that works any more. And that means I can’t throw together a simple server side rendering system like I used to be able to do.
Bryan Cantrill once joked that javascript was the failed state of programming languages. That doesn’t feel like much of a joke any more.
The thing you described about tsc not liking file endings ... I wasted a whole free day on this, and my whole motivation for a side project. How the f is tsc not able to output whatever module format I need in 202X??? And that means you need an extra tool to do the job. Good luck setting up shitty bundlers for another whole day, until you finally get a simple output directory with normal JS files, one per TS file, simply deliverable to the browser. It is such a PITA. It makes me question whether all the type safety of TS is actually worth the trouble. Once I start writing plain JS, I slowly but surely get the feeling that the project is lost, unless I use a proper programming language and output to JS. Maybe one of these days I will learn Clojure or something. All just because the JS ecosystem of tools sucks so much. And this is what people put up with all day long. How have we strayed so much from the path?
I once set out a simple goal: instead of shipping a production Node package with all the js files and node_modules, just have it all in one js file that can be executed by Node. After trying several compilers and messing with their settings, I ultimately failed, exactly because of the different ways to import/export modules. My own code was all ESM with .mjs files, but many npm libraries aren't, so basically you're stuck.
Tried that one too; it failed me as well, and if I recall correctly due to a different reason. However, I see there have been new releases; maybe it's time to try again.
Sure, but I can’t run the same code in nodejs for server side rendering.
And wasn’t server push for HTTP/2 removed by web browsers? If the browser needs to do a round-trip to the server to fetch every individual javascript file (and it doesn’t even know which files it needs up front) then the result will be way slower than a bundler. I would expect performance to get worse, not better using this approach.
SSR is typically just a way to pre-render the client app into (critical) static assets that can be delivered to the browser quickly, so it can display at least some content before the client app loads, runs and ”hydrates” the server-generated HTML back into the exact same interactive app (e.g. a React app) that was used to generate the SSR assets.
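As a framework-free sketch of that idea (all names here are invented for illustration, not any particular framework's API): the server turns data into an HTML string, and the client runs the same render function again to attach behaviour to the markup it received instead of rebuilding it.

```javascript
// Minimal SSR/hydration sketch with invented names (not React's API).
// The same pure render function runs on both server and client.
function renderCounter(count) {
  return `<button id="counter">Clicked ${count} times</button>`;
}

// Server side: produce static HTML for the initial payload.
function renderToString(initialCount) {
  return renderCounter(initialCount);
}

// Client side: "hydrate" by attaching listeners to the markup that
// already exists in the page, rather than re-creating it.
function hydrate(element, initialCount) {
  let count = initialCount;
  element.addEventListener('click', () => {
    count += 1;
    element.outerHTML = renderCounter(count); // re-render on interaction
  });
}

console.log(renderToString(0)); // <button id="counter">Clicked 0 times</button>
```

The point is that the server-rendered string and the client's first render must match exactly, which is why frameworks insist on one shared render function.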
I know what SSR is ... been using it for a decade or so, before it became hip again in form of JS frameworks.
That doesn't change the fact that, in practice, people will write their web components all in one file and render a template containing <MyFancyNamedTag>{js logic here that should not be written here}</MyFancyNamedTag>. The mere ability to put arbitrary JS logic in there, and the fact that rendering MyFancyNamedTag actually renders a component (which itself can contain arbitrary JS logic) instead of outputting a real tag named MyFancyNamedTag, leads people to couple everything tightly together. Traditional server-side template rendering encourages people to prepare their data beforehand and then render the template.
Maybe you don't do it that way. Maybe you are smarter than the average frontend/JS-is-all-I-know-because-I-heard-it's-all-I-will-ever-need developer. Not saying you specifically make these mistakes.
Edit: With regard to being able to put arbitrary JS code in braces in the call to render a component, this actually inches close to the mess many people make when using PHP. There it was: open PHP tag, PHP logic, output HTML treated as a string, inside the HTML open PHP again, output some JS, and the JS logic contains code for manipulating HTML ...
Well, the way I see it (and the canonical way React sees it), a render function f producing a view V from some data D is defined as
V = f(D)
… which is an oversimplification, since UI commonly has local state, which definitely doesn’t need to be elevated all the way to the master data. Any interactive UI thus certainly uses several sources of truth.
Let’s add some more data to the previous definition. Let our previous data D now be D_master, since it is our master business data (which we may not change in any way, shape, or form). Then add the state of app routing D_router, user auth state D_auth, and local widget state D_widget.
V = f(D_master, D_router, D_auth, D_widget)
We might have all kinds of access checks regarding the three first ones, but here we are only interested in what this means for rendering a view.
To use all our data, let’s imagine a form where a dropdown select widget is placed inside a tabbed container. The user needs to have a particular user role in their auth data to see this widget. There are many similar widgets in this form, all with fine-grained controls.
- The options for the list come from D_master.
- The state of the tabbed container lives in the URL so that it isn’t lost on refresh, so it is accessed through D_router.
- The user role comes from D_auth.
- And finally, whether the dropdown is open or not is stored in the component’s local state, D_widget.
This is not a far-fetched example, such things are common in more complicated applications.
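A sketch of the shape such a render function takes (all names invented for illustration): it stays a pure function of its four inputs, and any logic inside operates directly on those parameters.

```javascript
// V = f(D_master, D_router, D_auth, D_widget): a pure render function
// over four sources of truth. All names here are illustrative.
function renderRoleDropdown(master, router, auth, widget) {
  // Access check against auth state: hide the widget entirely
  // if the user lacks the required role.
  if (!auth.roles.includes('manager')) return '';

  // Only render when this widget's tab is active (state in the URL).
  if (router.activeTab !== 'settings') return '';

  // Options come from master data; open/closed is local widget state.
  const items = master.roleOptions.map(o => `<li>${o}</li>`).join('');
  return widget.open
    ? `<ul class="dropdown open">${items}</ul>`
    : `<ul class="dropdown"></ul>`;
}

const view = renderRoleDropdown(
  { roleOptions: ['admin', 'editor'] }, // D_master
  { activeTab: 'settings' },            // D_router
  { roles: ['manager'] },               // D_auth
  { open: true }                        // D_widget
);
console.log(view); // <ul class="dropdown open"><li>admin</li><li>editor</li></ul>
```

Note that nothing here transforms the inputs into some intermediate model first; each branch reads straight from the parameter it concerns.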
In any case, it does make intuitive sense to me to allow logic in the render function, as long as that logic operates directly on the render function parameters and not on some on-the-fly transformed intermediate.
Rules of refactoring also apply.
Without logic in the render function, the coupling between frontend and backend would get insane, as every component fetching data would require custom on-the-fly denormalisation for that data in the backend. In our example, we would be pre-generating data for the entire form for a particular user, probably even for unused features.
It would be like generating snippets of HTML in response to AJAX requests, but without the prerendering to HTML.
> Not clear to me that the added complexity is worth it personally.
It used to be pretty easy to set this up. Modern bundlers, jsx, es modules, typescript, webassembly and modern web frameworks which need their own compilers have all made this much more complex. I’m not sure the complexity is worth it any more.
You should almost always use a bundler. ES modules work, yes, but using them naively subjects you to a loading waterfall (each module file may trigger the loading of others, though you could circumvent this with a sufficiently advanced server that recursively traversed modules to collect the whole set and sent rel=preload links for each with the initial response), slow loading everywhere, exceedingly slow loading on high-latency connections, and the downloading and execution of much unnecessary code. For many completely realistic designs that would have been a hundred kilobytes in one or two files, you’ll end up with the code spread over several hundred files, reaching ten or twenty levels deep, so that it downloads and executes multiple megabytes of code, a lot of which is not even used. For someone on the other side of the world, what would with proper bundling have taken less than two seconds to load (including the initial TLS handshake) will instead take ten or twenty seconds if everything goes smoothly. And I’m not exaggerating in the slightest here.
The situation would certainly be worse in HTTP/1 because of the lack of pipelining and the six connections limit, but HTTP/2 does not make not-bundling in any way reasonable.
The import statement returns a promise, so you can lazily load modules as required (i.e. don't load everything up front, only load on-demand as users are using your app).
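For example (using a Node built-in as a stand-in for an application chunk, since the real module path would be app-specific):

```javascript
// import() is a call that returns a promise, so a module can be
// loaded on demand rather than up front. Here node:util stands in
// for a lazily-loaded application chunk.
async function lazyInspect(value) {
  const { inspect } = await import('node:util'); // loaded only when first called
  return inspect(value);
}

lazyInspect({ a: 1 }).then(s => console.log(s)); // { a: 1 }
```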
You don't need to complicate things - just use vanilla JavaScript features as they were intended.
In practice, you are likely to have to contort your code very significantly to use dynamic imports pervasively. And pervasive usage such as you seem to be describing is emphatically not how dynamic imports were intended to be used, and wouldn’t speed things up much in general anyway (because you still can’t do anything until your dependencies have loaded), and stands a good chance of being slower.
> Use dynamic import only when necessary. The static form is preferable for loading initial dependencies, and can benefit more readily from static analysis tools and tree shaking.
(Also, a quibble over your wording: dynamic imports aren’t a statement, but a call, though not a function call.)
You need to develop with this in mind from the start. I am not suggesting refactoring an existing application to use it for every single dependency as yes that would be a pain.
Instead import your initial generic dependencies up-front as usual, but then notionally break your application up by CUJ (critical user journey) and load the CUJ-specific code dynamically as needed.
E.g. for a hypothetical email client you would import the initial "inbox view" code upfront (along with generic common dependencies), but you might not import the code needed for a rich text editor until the user clicks on the "write email" button, saving that network load and code execution for when - or indeed if - the user actually needs it.
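A sketch of what that split might look like (the module path and export name are hypothetical):

```javascript
// CUJ-based code splitting sketch: generic dependencies are imported
// statically up front; the editor chunk is fetched only when the user
// actually starts composing. './rich-text-editor.js' is hypothetical.
let editorModulePromise = null;

function loadEditor(specifier = './rich-text-editor.js') {
  // Cache the promise so repeated clicks don't re-trigger a fetch.
  editorModulePromise ??= import(specifier);
  return editorModulePromise;
}

// In the compose button's click handler:
//   composeButton.addEventListener('click', async () => {
//     const { mountEditor } = await loadEditor();
//     mountEditor(document.querySelector('#compose'));
//   });
```

Caching the promise (rather than the module) also means concurrent clicks share one in-flight load.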
Given the context (waterfall stuff), I was assuming you were describing pervasive dynamic imports, since the selective use of dynamic imports for code splitting is fairly standard practice. (And indeed, I know from working on it a few years back, doing most of the porting from a custom module system to ECMAScript modules, that Fastmail’s webmail works this way; though my recollection is that compose specifically is preloaded once the rest of the mail stuff is done, so that when you hit that compose button it’s almost certainly ready to go. A common fault of code splitting is deferring loading too long, so that you end up slowing users down.)
When I spoke of unused code, I meant things like a file containing a bunch of functions of which you only use one, so that the other functions and perhaps deeper imports weren’t actually necessary; a bundler would remove all the unused stuff. The “solution” in such a case would be to split each function into its own file, but that’s worse than before: now your import chains are probably even longer, and even more files have to be loaded (and there is still some per-file overhead, even if HTTP/2 improves it drastically).
I’ll temper my expressed position slightly by saying that bundling is not so significantly advantageous if you have written all of the code (you’re not importing third-party libraries), and keep your file count fairly low and import depth very low. But at that point, perhaps you should have just dropped it all in one file.
Don’t use external dependencies (:D), don’t write code you don’t use, encapsulate imports in custom elements (this is a joke).
Without any runtime boilerplate like module bundling, corejs polyfills and the usual frontend/css-in-js frameworks, all code that gets downloaded is your code, so make it count. 640 kilobytes should be enough for everyone.
For runtime boilerplate: yeah, don’t use webpack, it does indeed add silly boilerplate (and structures things in a way that almost requires source map tooling to understand!); use Rollup instead, which pretty much writes just what you would have written if you had put everything in one file, with no overhead whatsoever. (If you happen not to be familiar with it, take a look at the examples in https://www.rollupjs.org/repl/.) Also ditch core-js as it’s almost certainly unnecessary if ES modules are your baseline.
But even if you’re not using external dependencies, I maintain: by not bundling, you’re slowing things down, and it’s generally the people on the other side of the world from your servers and the disadvantaged with less-powerful computers and more expensive data that will suffer for your whim.
From what I’ve seen, many built-and-bundled apps have run into security issues in dependencies and have been forced to switch to ESM versions which are highly incompatible with CJS-targeting builds and so require significant retooling just to fix trivial security issues.
It’s not npm’s fault for sure, but the ecosystem is fundamentally divergent on this issue and there doesn’t seem to be a universal solution.
I’m confused by your comment, because it doesn’t seem relevant and I don’t understand what you’re saying. My first impression is that you seem to be implying that the security issues in dependencies are in some way linked to the module format (ES/CommonJS), which is not the case. I can’t see why such issues would force a switch in either direction for anyone, given especially that ESM bundlers can generally slurp ECMAScript Modules and CommonJS alike (e.g. Rollup via @rollup/plugin-commonjs) with only rare compatibility issues after at most a little configuration, generally associated with very-badly-written code. Given how much of the npm ecosystem is still on CommonJS modules, there’d be much more outcry if you couldn’t intermingle the two in a bundler. So if what you had in mind was something like “version x was CommonJS and has a bug, version y is ESM and has fixed the bug”, well, you can probably update to y easily (at least as far as the CommonJS/ESM divide is concerned), and if you were forking or anything, if the bug is clearly identified then patching it onto x is easy too.
In practice, there are packages which have moved on to ESM and which no longer maintain their CJS version (at least as actively), making life difficult for maintainers of projects dependent on those CJS packages with published vulnerabilities.
A CJS-targeting build can’t slurp up ESM dependencies. Not without dubious hacks, at least.
> Recent example: bundler X insists on import statements including file extensions. But the typescript compiler refuses to let you import foo.js and won’t rewrite an import of foo.ts to foo.js when you build. There’s giant GitHub issues about this but nobody wants to fix it on their end. In short, everything is awful.
That's wrong. The latest TS version allows you to import .js files with extensions just fine! In fact, one of their recommendations is that you write your TypeScript code as you would write JavaScript intended for a browser (which doesn't automatically append any file extension, doesn't support importing directories by appending /index.js, only supports file-relative or absolute imports (outside of import maps), etc.).
I wanted to import a typescript file from another typescript file.
If I made my import line be “import .. from ‘./foo.ts’” then typescript was happy, but the compiled output still specifies .ts rather than .js. So the import was broken (that file didn’t exist in tsc’s output directory).
My other option was to “import .. from ‘./foo’”. Then typescript worked but the bundler I was using refused to import the file at all - since it was missing an extension in the import statement. And apparently extensions are technically required by the spec.
The real solution would be for the typescript compiler to rewrite import statements. Like, import foo.ts should be rewritten to import foo.js. There’s a GitHub issue talking about this[1] that was closed because the typescript authors think this is someone else’s problem.
I ended up just giving up and trying a different bundler - which had equally different, but equally frustrating problems.
TypeScript doesn't rewrite (that part of) your code (it downcompiles ES features if necessary, but won't rewrite import specifiers). Write import specifiers as you would in regular JS code intended for the browser (ie. relative or absolute paths, including file extension, no directory imports, …).
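For reference, a minimal tsconfig sketch for that style (a sketch, assuming a recent TypeScript version; check your version's docs for the exact option names):

```json
{
  "compilerOptions": {
    "module": "node16",
    "moduleResolution": "node16",
    "outDir": "dist"
  }
}
```

With node16/nodenext resolution, `import { foo } from './foo.js'` in src/bar.ts type-checks against src/foo.ts at compile time and resolves to dist/foo.js at run time, with the specifier emitted untouched.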
> […] but the bundler I was using […]
Trying to write code that's understood by both bundlers AND produces working output when processed directly by TypeScript is a bit too much to ask at this time. Maybe we'll get there, eventually, but unless it's plain ES2022 code (intended directly for browsers, not targeting bundlers or TS or test runners etc) you have to write the code with a specific intent.
When I tried it, the typescript compiler errored at this - because no such file exists. VS Code also got really confused by this approach. Has the situation changed? If this is the recommended way to write typescript code, why doesn’t typescript’s documentation reflect that?
Typescript needs to pick a lane. A compiler that can’t produce working code is broken. And last I tried, typescript broke when I imported foo.js, and every other option didn’t produce spec-compliant javascript (since the extension is officially required in the import statement).
My take is that if typescript can rename my file from .ts to .js, or change import to require(), it should be able to rewrite my import statements too. But if you want to die on that hill and recommend people import the resulting .js file, then recommend that everywhere so tool authors can fix their tools! Don’t play it both ways and then blame users when we can’t make our code work.
> Trying to write code that's understood by both bundlers AND produces working output when processed directly by TypeScript is a bit too much to ask at this time.
What rot. Being able to turn my source code into working software is exactly what I expect from a compiler. If typescript can’t deliver, what use is it?
Edit: I just tried it and "import .. from './foo.js'" seems to work correctly now in both the typescript compiler and vs code. Good! I'll raise tickets in other tools when I run into problems.
You see, it's not that ESM killed JavaScript, all the hacky build tools did. ESM was in the pipeline for an eternity and everyone took their sweet time to support it.
You can use ESM without issue in browsers, Node and using proper ESM bundlers like Rollup, as long as you don't expect `import 'a/b'` to arbitrarily load `./node_modules/a/b.js` or `./node_modules/a/b/index.js`
npm also keeps actively harming the community by not validating packages in any way whatsoever, so everyone keeps pushing broken crap into the ecosystem.
Maybe the right way to think about it is that ESM is python3 for the javascript world. If javascript had ESM from the start, it would be fine. And maybe once every package in npm is rewritten for ESM, it'll be fine. But in the meantime, we have essentially 2 mutually (sort of) incompatible programming languages which share a name and a package repository.
> You see, it's not that ESM killed JavaScript, all the hacky build tools did.
The problem is that ESM added a massive extra amount of complexity to all those hacky build tools, because suddenly we have 2 different kinds of javascript code and people expect their code to be mutually compatible. That is really complex.
- Typescript's canonical way to import things (import foo from './blah') is incompatible with ESM imports (because imports must have file extensions).
- Nodejs has a plethora of options to configure packages to support combinations of ESM and CJS[1]. Half of those options are deprecated. Lots of tools read package.json - but don't parse all those different options consistently with nodejs's implementation.
- Eg: wasm-pack only knows how to compile code "for the web" or "for nodejs". (You want a package that works everywhere? Adorable.) The web version creates an ESM module, but doesn't specify "type": "module" in package.json - so even though it could work with nodejs, it doesn't.
- Lots of packages in npm are broken in either CJS or ESM mode. And there's no way to tell without trying them.
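For what it's worth, the non-deprecated way to publish a dual CJS/ESM package is the conditional "exports" map, a sketch of which looks like this (field names are Node's documented conditions; the file paths are illustrative):

```json
{
  "name": "example-pkg",
  "type": "module",
  "exports": {
    ".": {
      "import": "./dist/index.js",
      "require": "./dist/index.cjs"
    }
  }
}
```

Node picks the "import" entry for ESM consumers and "require" for CJS ones, but as noted above, not every tool that reads package.json interprets this map the same way Node does.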
Essentially, we went from having a simple, extensible system (commonjs + bundlers) to a very complex system. The number of people who understand all that complexity has dropped by orders of magnitude. Now I feel less capable of creating working code than I was a few years ago.
It’s an unmitigated disaster for the ecosystem. Maybe in a few years, everything will be using ESM. But until then, I want off this ship.
> Lots of packages in npm are broken in either CJS or ESM mode. And there's no way to tell without trying them.
I blame npm for allowing that to happen. I get that it’s technically a third party, but they never even checked that the file mentioned in `main` was included, even without a “postinstall” step.
They, together with node, could have built a tool that validates main/type/exports on publish, but no. There’s literally no way to validate ESM until you run them or use a full linter. The whole point of ESM was that it’s statically analyzable.
Next is the worst name for a JavaScript framework. It might as well be called "yet another JS framework." It's things like this that make me ponder self deletion. So much pointless work we create for ourselves.
Here, I believe in taking HTML and then finding the absolute minimal extensions. Most of my work is to embed mustache-like primitives into HTML. For example, <div rx:iterate="posts"><div><lookup path="title"> - <lookup path="body"></div></div> will iterate over an array and render the title and body appropriately.
I then take it a step further by making the binding reactive such that updates flow in real-time.
I am having a great deal of fun as I continue to explore this approach.
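For readers wondering how such a pass behaves, here is a naive, dependency-free sketch of an iterate-and-lookup expansion (my own illustration, not the actual implementation described above, and without the reactive binding):

```javascript
// Naive sketch of an rx:iterate / lookup expansion pass over data.
// Illustrative only: real implementations would parse the DOM, not
// regex over strings, and would keep bindings live for updates.
function renderIterate(template, items) {
  return items
    .map(item =>
      template.replace(/<lookup path="([^"]+)" ?\/?>/g, (_, path) => item[path]))
    .join('');
}

const inner = '<div><lookup path="title"> - <lookup path="body"></div>';
const posts = [
  { title: 'Hello', body: 'World' },
  { title: 'Foo', body: 'Bar' },
];
console.log(renderIterate(inner, posts));
// <div>Hello - World</div><div>Foo - Bar</div>
```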
The name alone already makes me feel dread. htmx - as if HTML needed an x for extended ... it also has a whiff of docx and its ilk. Why do people feel the need to change HTML instead of changing the logic that outputs HTML? HTML is just the damn view layer! This smells of mixing logic into the view layer again.
htmx is very, very close conceptually to a standard HTML <form> tag. (By the way, the fact that there is a <form> tag immediately tells us that HTML is not just 'the damn view layer'.). htmx just goes slightly beyond it to give <form>-like capabilities to every element, and doesn't need a full page reload. It doesn't even necessarily change or extend HTML–you can just use a `data-` prefix on the attributes that it adds–which makes it completely standards-compliant HTML. It's a very natural fit.
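For example, using htmx's documented attributes with the data- prefix so the markup remains standards-compliant HTML (the /clicked endpoint is hypothetical):

```html
<!-- Clicking the button issues a POST to /clicked and swaps the
     server's HTML response in place of the button itself;
     no custom tags, just data- attributes. -->
<button data-hx-post="/clicked" data-hx-swap="outerHTML">
  Click me
</button>
```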
For Lua, just to be safe, I fork all my dependencies, and when I work on a project I add those forked dependencies as git submodules. Just to feel a bit more secure that my dependencies never disappear.
It’s good to have dependencies archived for continued access, but there are efficiency issues with doing that. Using git submodules (or vendoring or similar) means that you, and everyone who uses the project, are stuck on the exact commit you have chosen for each submodule until someone modifies the git tree to point at a different commit. This makes updating a submodule higher friction: you can’t just run `git pull` or `apt upgrade` to update it; you have to modify every project that uses the submodule, then update and redeploy each one, instead of just updating and redeploying the submodule itself.
Well, I only use git submodules in private projects. From what I read most devs hate it :)
In the past I did come into contact with git submodules through freelancing for another company, and I agree it was quite a hassle at times using them in a team setting.
Good luck with that approach in the JS world, where a package can have, without exaggeration, a thousand and more dependencies. Hopefully in the Lua world things are saner.
I've found most Lua libraries don't have any other dependencies.
Perhaps this is the reason I've found this approach work well with Lua. I've never tried it with other languages that I use, like Swift, Objective-C, C#.
I did work at a company in the past that used Nexus [0] for their dependencies across various platforms. Perhaps Nexus has some tooling built in to deal with transitive dependencies, not sure.
There are a few nice lightweight CSS/JS frameworks, like mdl.io and Bootstrap, that are still relevant. React apps get lost in a mess of callback caches and useEffect; unclear side effects seem to happen on rendering in most large React apps.
I developed some games like https://github.com/lee101/wordsmashing in a fairly simplistic jQuery style I came up with, where I purposefully avoid using features of JS like "this": it trips most developers up when you pass functions around and the reference to "this" gets lost.
Agree with lots of the comments here about the complexity of both JS (new ES module compatibility issues etc.) and JS frameworks, which are limiting what people can actually build these days through cognitive overload and bugs.
A tonne of technologies (like polymer.io / web components) died in a sea of complexity, and much of the technology we use is going to go the same way, unfortunately.
Another gripe, unpopular but I think true: we as coders prefer complex things like static site generators (Jekyll, Ghost, etc.), while normal people can use a CMS and get stuff done without having to resolve packaging conflicts and SSL issues. We have to resist the urge to pretend that coding and setting up complex software is easy, because it's not... also Kubernetes...
I built a website for my girlfriend's business using almost exactly this approach. Handwritten HTML, deploy to CloudFlare pages on git push, Stripe payment pages for ecommerce. It's great, more people should try it.