
Web frameworks are churn-y because they are incredibly leaky abstractions covering really awkward impedance mismatches. This means that they are never quite satisfactory - and that just to use one, you need to be capable of building a new one yourself.

Think of a typical web app. Your data exists:

1. As rows in a database, accessed via SQL

2. As model objects on the server, accessed via method calls and attributes

3. As JSON, accessed via many HTTP endpoints with a limited set of verbs (GET/PUT/POST/DELETE)

4. As Javascript objects, accessed via (a different set of) method calls and attributes

5. As HTML tags, accessed via the DOM API

6. As pixels, styled by CSS.


Each time you translate from one layer to the next, there's a nasty impedance mismatch. This, in turn, attracts "magic": ORMs (DB<->Object); Angular Resources (REST<->JS Object); templating engines (JS Object<->DOM); etc. Each of these translation layers shares two characteristics:

(A) It is "magic": It abuses the semantics of one layer (eg DB model objects) in an attempt to interface with another (eg SQL).

(B) It's a terribly leaky abstraction.

This means that (a) every translation layer is prone to unintuitive failures, and (b) every advanced user of it needs to know enough to build one themselves. So when the impedance mismatch bites you on the ass, some fraction of users are going to flip the table, swear they could do better, and write their own. Which, of course, can't solve the underlying mismatch, and therefore won't be satisfactory...and so the cycle continues.
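The six shapes of the data can be made concrete with a toy sketch. This is just an illustration of the layer-hopping described above, in Python with hypothetical names, touching the same record in four different representations:

```python
import json
import sqlite3

# One "user" record crossing the layers described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")

# Layer 1 -> 2: DB row to an in-memory object (here, a plain dict).
row = conn.execute("SELECT id, name FROM users WHERE id = ?", (1,)).fetchone()
user = {"id": row[0], "name": row[1]}

# Layer 2 -> 3: object to JSON for the wire.
payload = json.dumps(user)

# Layer 3 -> 5: JSON back to an object, then to HTML markup.
html = "<li id='user-{id}'>{name}</li>".format(**json.loads(payload))
print(html)  # <li id='user-1'>Ada</li>
```

Even in this trivial case, every hop is a hand-written translation; the "magic" layers exist to automate exactly these hops.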

Of these nasty transitions, four of the five are associated with the front end, so the front end gets the rap.

(I gave a lightning talk at PyCon two weeks ago, about exactly this - stacking this up against the "Zen of Python" and talking about some of the ways we avoid this in Anvil: https://anvil.works/blog/pycon18-making-the-web-more-pythoni...)

All of this is why, in the end, more and more... I do things "the hard way". If I'm using a dynamic language WTF do I need an ORM for, if I understand enough to write an SQL command, and use a library for that DB that does parameterized queries?
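To make that concrete: in Python, the stdlib's sqlite3 driver already does parameterized queries, no ORM required. A minimal sketch (table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO accounts (email) VALUES (?)", ("ada@example.com",))

# The driver handles quoting and escaping via the ? placeholder,
# so user input never gets interpolated into the SQL string.
email = "ada@example.com"
row = conn.execute(
    "SELECT id, email FROM accounts WHERE email = ?", (email,)
).fetchone()
print(row)  # (1, 'ada@example.com')
```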

On the front end, I tend to lean towards abstractions that work together... I really like React and the material-ui library's switch to JSS. It's relatively clean, and useful. Even then, it's only a mild syntax adjustment, not a full on abstraction. React is more of an abstraction, but that comes with functional paradigms that aid in testing, and predictive behaviors.

It really depends though. One can always do just JS/HTML/CSS, and there's something to be said for that. There are lighter tools, similar to jQuery, that smooth over a few of the rough edges. There's really an à la carte menu of available options.

The problem is that people assume that the PFM (pure fucking magic) will solve it all for them. You can use the cleanest or simplest abstractions, and then still write layers of incomprehensible spaghetti in between.

> I do things "the hard way". If I'm using a dynamic language WTF do I need an ORM for

I think it's mostly premature optimization. People think writing DTOs is challenging, so they want an ORM. But since you end up needing DTOs anyway, removing SQL capabilities from the app means writing SQL in something that isn't SQL, and things like joins suddenly become slow and problematic, resulting in really heavy systems that are harder to change. For the joy of a quicker startup, the entire project moves slower.

ORMs have their place, but in the majority of the systems I've seen they were unnecessary, and in broad terms don't provide any particular productivity advantage over using "dumb" SQL-based mapping solutions (a la Dapper [https://github.com/StackExchange/Dapper]), that preserve the power of SQL.
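A "dumb" mapper in the Dapper spirit can be a few lines in a dynamic language. This is only a sketch of the idea in Python (not Dapper's actual API): run raw SQL, map each row to a dict keyed by column name, keep the full power of SQL.

```python
import sqlite3
from typing import Any

def query(conn: sqlite3.Connection, sql: str, params: tuple = ()) -> list[dict[str, Any]]:
    """Run raw SQL and map each result row to a dict keyed by column name."""
    cur = conn.execute(sql, params)
    cols = [d[0] for d in cur.description]
    return [dict(zip(cols, row)) for row in cur.fetchall()]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("Ada",), ("Grace",)])

print(query(conn, "SELECT id, name FROM users WHERE id = ?", (2,)))
# [{'id': 2, 'name': 'Grace'}]
```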

> People think writing DTOs is challenging

I don't think this is true. Writing these objects isn't difficult, it's tedious and repetitive. That's why people keep trying to automate it!

The problem is that you can't quite automate it smoothly, because SQL doesn't work like objects. You avoid this interface issue by taking the hit for the tedious-and-repetitive stuff directly (and I agree that's often the right choice) - but that doesn't dissolve the problem.

I’m actually quite fond of SQL, but I disagree that ORMs are not a productivity boost. My experience is in Rails and I think ActiveRecord is a pretty clear win for simple queries. That being said, it is pretty common for less experienced developers to not understand what the ORM is actually doing.

On any project of sufficient size there will be fairly advanced reporting functionality that you generally will not be able to do using the ORM, so you will end up with a mix of ORM and direct SQL. Also, an ORM forces you to the lowest common denominator of its supported RDBMSs; I generally do not want to be limited to SQLite features if I am running PostgreSQL.

I did qualify my statement with “for simple queries”.

For more complicated queries, a pattern I have become quite fond of is making database views and then using them as the backing table for an ORM model. In Rails, at least, this gives you the best of both worlds.
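The pattern works outside Rails too. Here's a minimal sketch with SQLite (names hypothetical): the complicated join lives in a database view, and application code queries the view as if it were a plain table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'Ada');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 75.0);

    -- The complicated query lives in the database, versioned with migrations.
    CREATE VIEW order_summaries AS
        SELECT u.name, COUNT(o.id) AS order_count, SUM(o.total) AS total_spent
        FROM users u JOIN orders o ON o.user_id = u.id
        GROUP BY u.id;
""")

# Application code treats the view as a read-only "model".
print(conn.execute("SELECT * FROM order_summaries").fetchall())
# [('Ada', 2, 100.0)]
```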

Using PostgreSQL, I now tend to generate query results directly as JSON.

It implies an architecture model where you put the business logic and type safety in the RDBMS.

It reduces the number of layers for a lot of functionalities.
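In PostgreSQL that's done with functions like `row_to_json` and `json_agg`. The same idea can be sketched runnably with SQLite's JSON functions (assuming a build with JSON support, which modern Python bundles include):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("Ada",), ("Grace",)])

# The database builds the JSON; the app just passes it through to the client.
# (In PostgreSQL this would be json_agg(row_to_json(u)) instead.)
sql = """
    SELECT json_group_array(json_object('id', id, 'name', name))
    FROM users
"""
payload = conn.execute(sql).fetchone()[0]
print(json.loads(payload))
# [{'id': 1, 'name': 'Ada'}, {'id': 2, 'name': 'Grace'}]
```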

How does version control work for an architecture like this?

In my experience, migration scripts that include modifications to the JSON output (as ALTER TABLE statements) are always tracked and set to auto-execute on each version update.
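A minimal version of that tracking can be sketched in Python (table and migration contents hypothetical): record the applied schema version in the database and apply any newer migrations in order on startup.

```python
import sqlite3

# Ordered migrations; in practice these would live in versioned .sql files.
MIGRATIONS = [
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE users ADD COLUMN email TEXT",
]

def migrate(conn: sqlite3.Connection) -> None:
    """Apply any migrations newer than the recorded schema version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, sql in enumerate(MIGRATIONS[current:], start=current + 1):
        conn.execute(sql)
        conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # already up to date: applies nothing, safe to re-run
```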

Yep that works well for many use cases

About 17 years ago, I wrote a GUI-based code generator that allows me to generate the boilerplate JDBC cruft from SQL statements, with several options for common scenarios. The code it generates is extensible and provides helpers for extending.

To this day, I haven't found anything (including ORMs, Spring support, etc.) easier to use, more flexible, or more sensible.

I recently wrote one for C# / MSSQL. I've been using EF on side projects, but work was concerned with it, so I just wrote a CRUD sproc / Entity / Repository / Service / DTO generator. Connect to a database and select the tables you want to build for, and done. 5 layers of abstraction in under a second. It gets me to about 95% of what I need and I custom build the special circumstance stuff from the generated objects. It creates the "if exists / drop" stuff, exception handling and comments w/ dates in the sprocs that reference the objects in C#, and vice versa, what table was referenced, proper [Key] and [MaxLength] attributes, etc. Puts them in their proper folders for git too. It does exactly what I would have done had I done it manually.

Works great with no ORM abstraction and it was fun to write.

Yeah, very similar, with one primary enhancement: I initially started by generating objects per table, but found that a little limiting for a lot of the use cases I encountered. In particular, it didn't cover joins very well, particularly for queries that fed list views.

So, I expanded into supporting raw SQL SELECT queries that can include joins, which I parse and combine with DB metadata. I then generate the DTOs from there. So, a single DTO can have properties mapped to different tables, which I found much less redundant/limiting than entity-per-table designs.

In addition to the SELECT code, I can use simple checkboxes to also generate INSERT/DELETE/UPDATE/UPSERT code, which map the DTOs back to the underlying tables. It recognizes keys and includes multiple-table writes in a single transaction, etc. In addition to the DTOs and the DAO layer, it can also optionally generate a service interface.
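The core trick, generating DTOs from an arbitrary SELECT rather than a table, can be sketched in a few lines of Python. This is a toy illustration of the idea, not the commenter's actual tool: run the query once and derive the DTO's fields from the cursor's column metadata.

```python
import sqlite3

def generate_dto(conn: sqlite3.Connection, name: str, sql: str) -> str:
    """Emit dataclass source for the result shape of an arbitrary SELECT
    (which may join several tables), using the cursor's column metadata."""
    cur = conn.execute(sql)
    fields = "\n".join(f"    {d[0]}: object" for d in cur.description)
    return f"@dataclass\nclass {name}:\n{fields}"

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
""")

# One DTO whose properties come from two joined tables.
print(generate_dto(
    conn, "UserOrderRow",
    "SELECT u.name AS user_name, o.total FROM users u "
    "JOIN orders o ON o.user_id = u.id",
))
```

A real generator would also map column types and key metadata, as described above; the sketch only shows where the shape information comes from.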

Of the utilities I've written over the years, it is the one that most stands out as having paid me back incalculably.

Is it closed source? I am in search for something like this. I consider this as Naked Object approach.

Unfortunately, it is closed and wired into an overall framework I crafted, so has some dependencies there.

If I were to write it today, I'd be more conscious of limiting dependencies and generally designing with open-sourcing in mind.

If your application mostly reads from the DB then don't use a ORM. I think ORMs shine when you need to save/update a domain model to a relational DB. In the CQRS world, a fairly typical choice when using a relational DB is to use a ORM for the write side and raw SQL for the read side.

I do things "the hard way".

I agree, except that I would argue it’s often the easier way. If I had to give a one sentence answer to the original question, it would be, “Front-end [web] development is so unstable because people introduce so much accidental complexity.”

For example, while I don’t disagree with Meredydd that there can be awkward mismatches between the layers he described, I also think several of those layers only exist if you presuppose an object model in your programming languages. Arguments about object-relational mismatch have been made as a criticism of OO for far longer than we’ve been building substantial front-ends for web apps.

If instead you stay closer to the real data, your architecture reduces to the more traditional persistence and presentation layers. Since you’re on the web you have a distributed system so you also need a protocol for the remote communication between those layers. However, there aren’t any inherent mismatches in that combination, any more than there are if you build native applications or distributed systems using something other than web technologies.

I think a lot of the disconnect can fade as JSON data types (regardless of the actual serialization, such as BSON etc.) become better supported at the database layer. When I did more C#, and Entity Framework came out, I'd add an XML column to most of my tables, as well as a base-class wrapper for my own use that allowed me to write extension properties wrapping XML nodes under the covers. So I could extend with extra properties for things that didn't need to be queried on.

It worked pretty well, of course I actually started doing it because getting schema changes at my workplace was a painful endeavor.

I agree that a lot of the disconnect is induced by developers. It's also part of why I'm a pretty big proponent of a JS UI talking to a service written in JS. It allows for a lot less cognitive adjustment. I remember doing HTML/CSS/JS with Flash/Flex, with .Net, T-SQL, and VB6 in one workplace regularly. I swear every time I had to change from one to another, I was typing the wrong way for a good 15-20 minutes... answering questions at times took 2 minutes just to shake my brain out of whatever I was working in.

I started using less stored procedure code and the DB more as dumb storage, and embraced node pretty early on. Even if it is a "lesser" language, there's something to be said for one language to rule them all. (I do like modern JS though.)

Have you ever looked into Clojure/ClojureScript? The Clojure ecosystem seems to favor your approach. They embrace the dynamic nature of this kind of programming, are data-oriented, shun ORMs, and generally have solid principles (in my opinion). I found it was well worth working through the (somewhat steep) onboarding curve.

> The Clojure ecosystem seems to favor your approach. They embrace the dynamic nature of this kind of programming, are data-oriented, shun ORMs, and generally have solid principles (in my opinion).

I've recently started my Clojure journey (< 1 month in!), after stumbling across aphyr's very interesting work, and it is being driven entirely by this line of thought. It's taken me a long time, at least a decade, of moving deeper and deeper in to web development to start to really appreciate this perspective but it _feels_ like The Right Way at this point in my career. I'm hoping to, at the very least, be able to take those lessons from Clojure and apply them to the areas of my professional life.

I have, but haven't had the opportunity to really dig in. :-) . It's on my list... along with Go and Rust as things I want to learn.

> WTF do I need an ORM for, if I understand enough to write an SQL command, and use a library for that DB that does parameterized queries?

I am getting into frontend for a hobby project after spending a few years doing ML and applied stats, and I am currently asking myself this question. If your db interaction is simple, a query is almost as easy to write and maintain as an interaction with an ORM, and is significantly more flexible. If it is something more sophisticated, then your ORM quickly becomes more of a hindrance than a help. What am I missing here? Where is the virtue of an ORM beyond not having to use SQL?

In more static languages, you generally need to convert from the DB types to the native types of your language. This means a lot of code (more than ORM boilerplate), prone to a lot of mistakes.

Some tooling that generates it for you helps a lot in some cases. I can see the appeal, but not in a dynamic language environment where there is less disconnect.

This is a great comment and an amazing insight. What's particularly interesting is that people have attempted to collapse (almost?) every stage of that abstraction hierarchy individually, but none of them have been so successful as to take over the world.

If you were writing a desktop application, you would still have at least three of the layers (serialized data on disk, in-memory data, and the rendering of the objects), but without the dramatic impedance mismatch that the web platform introduces everywhere.

Thanks! And funny you should mention that. We're challenging every layer of that hierarchy simultaneously, by building a development environment for the web and making it as integrated as Delphi or VB were on the desktop:


How does one build/distribute modules for it? Do you have a package manager?

Even on the desktop those three layers involve impedance mismatch and much the same pathologies as meredydd describes. But I guess three layers of it are better than 6.

It's data. There's a fixed format for serializing it, in a lossless way. I don't know what meredydd is talking about; there's no mismatch with regard to data.

You can get the same bits in your JS objects as you have in the DB. If not, that means your system is shit.

The problem with frameworks is not the hardship of funneling data up and down the stack.

The problem is that they are optimizing for different things. React optimizes for simplicity of making components. Angular optimizes for providing a full toolkit. And new versions then focus on different things. Server Side Rendering was hot, but now that Google just executes some JS and penalizes large downloads, it's the quest for less bytes on the wire. And tree-shake-ability. And faster time to first paint.

And as browsers and the web change, so do frameworks. And frameworks try to target, at the same time, both the future and the very present problems; they try to provide instant gratification, yet try to optimize for the future.

So they usually look like half-assed, useless pieces of autogenerated-by-MS-Word code all the time. But they work nevertheless, and power a lot of sites.

> I don't know what meredydd talks about, there's no mismatch with regards to data.

He's talking about different services each having their own preferred way to structure the data. When the layout differs, it cannot simply be a memcpy, and so you get tools that try to ease the tedium of translating one structure's layout into another. They get the job done most of the time, but run into edge cases that return the developer to manual tedium. Since developers do not like tedious work, some set out to find a new solution that solves those edge cases, but they end up leaving many more on the table for the next intrepid developer.

> own preferred way to structure the data.

Absolutely. But user/business data? That doesn't matter. When you design the system/stack, you pick the right components/tools (right data structures) that can losslessly represent the input/output of the neighboring/adjacent layers. If you want to store 500-byte-long fields, then make your DB column 500 bytes wide, make sure the backend accepts 500-byte input but rejects longer, and make sure your HTML input has maxlength=500 (and account for Unicode code point surrogate / multibyte fuckery if applicable).

There's mismatch, of course, but as I've detailed in a sibling comment [0], it's because of differences in purpose and function. A DB is different from an HTML/CSS layout rendering engine because they have very different sets of purposes, hence different interfaces, and so on. And frameworks are glue between these functions (and the layers we allocate them to).

> Since developers do not like tedious work, some set out to find a new solution that solves for those edge cases [...]

Yes, perfectly agreed. And since we concentrate on different edge-cases each time, we move from trade-off to trade-off with each new framework, and browsing trend/fad (mobile, tablet, SSR, ultra-tree-shakable-gzip-able, "native" [mobile] compilable, etc).

[0] https://news.ycombinator.com/item?id=17209305

>You can get the same bits in your JS objects as you have in the DB. If not, that means your system is shit.

I'm not sure what exactly you mean by that. But one thing is absolutely clear: you cannot automatically derive a logical layer from the layer above or below it. If you could, there would be no reason to have separate (logical) layers in the first place.

That's why you get an "impedance mismatch" that has to be bridged by providing some additional information, which often has consequences for performance, debuggability, and clarity.

Maybe I misunderstand the gist of your comment though.

Maybe I misunderstood the original comment, but the claim was that frameworks are leaky (this I wholeheartedly agree with, and it of course leads to impedance mismatches; after all, pixels on the screen are very different from an SQL DB, but that's why we have libraries and frameworks: to help us do this translation from one layer to the other, to glue together very different functional components of systems). But then it follows up with talk about data, and how JSON and SQL are not a great match. Which is nonsense. You can losslessly represent the same data in both JSON and SQL; you can engineer a perfect system for handling data (use the same field and column types, lengths, constraints, validation, and so on, on both the front- and the back-end, and it works). The mismatch is not about data.

The problems are about development trade offs (TypeScript vs JS, small library - few features, complexity - code modularization + chunked lazy loading, optimization - script load time vs development time, and throw in cross browser compatibility; supported features vs complexity - HTTP/2 is nice, and fast, but it's more complex plus you need HTTP1.1 too for old clients, and maybe your API somewhere doesn't support prefetch, or you can't hint your backend to push that to the client, or you can't access the raw request after the framework extracted the request attributes, blablabla), visual communication (current/modern components vs old jQuery sprinkled DOM result in different sites; mobile first, mobile browsers, React Native and Ionic). And these trade offs are different over time. So we get different frameworks over time. And since the change in browsing is very fast, and the effort to start a new framework is small, we get a lot of new unstable frameworks. (And since those frameworks rarely mature really, we get a lot of new ones, because there's not really a "sunk cost" for developers when abandoning the old ones. And it's easy and hip to pick up new skills, and try them out on a new project, etc.)

Also, you can put the "business logic" into one place, and represent it and then push that representation to the client. (GWT, Scala.js, or crude autogenerated forms, point and click website/workflow builders, and so on). Of course, if you want to change the system, then it might be a big pain in the ass to represent something very different than what it was designed for, so these kinds of entombed vertical complexity barriers lead to a metastable state - when you hack something quick on the layer most accessible for you with respect to the task, instead of properly implement it in the whole vertical stack, and these hacks grow and the elegant single source representation of business logic goes out of the window. (Or if you implement everything in one place, you are destined to implement a very powerful - or verbose - DSL to describe the "front end logic" - which should be CSS, and the DB optimization logic - which should be SQL, and so on.)

That's good, in each cycle, sooner or later you get improvement: webpack over grunt, react over jQuery and npm over vendoring your jQuery plugins.

OP forgot how life looked when your web app project was a handcrafted HTML page with manually inserted script tags, when your form submission went through a multi-level backend API in PHP, and when jQuery plugins with 20+ options were published randomly on the internet.

I feel like this is a false dichotomy. The choice doesn't have to be roasting squirrels over an open flame/handcrafting PHP pages vs. shiny futurism/Node+React. I have been getting along just fine with Rails, HTML, and a sprinkling of JS for over 10 years.

There is an immense productivity gain to be found in mastering a set of tools. After a while of dogmatic tooling changes you begin to analyze more critically whether the new shiny thing is going to provide any real value. I would argue that for the huge majority of websites, tried and tested tooling is perfectly fine and definitely more robust and supported.

Every tool you introduce is hours of troubleshooting just waiting to happen.

That stuff works fine if your client-side needs are simple, but if you actually want a single-page application, or just an application with a lot of rich JS functionality, it quickly becomes unwieldy.

That stuff works fine if your client-side needs are simple

To be fair, just about anything works fine if your client-side needs are simple. However, I have reached the opposite conclusion to you: the more customised and complicated and large-scale and long-lived the software becomes, the less value I see in a lot of the popular but ever-changing web technologies and the more I am likely to favour building on the standard foundations with minimal dependencies in between and usually a relatively small but high-value set of libraries.

The benefits of quickly fetching many tiny packages with a package manager or of building on top of all-encompassing frameworks or automation tools are mostly found in two situations, in my experience: getting started quickly (including rapid prototyping exercises) and ongoing development if (and only if) you are staying almost entirely within the bounds of what your chosen technologies already do well.

However, if your requirements start to evolve and diversify in a longer-lasting project, it’s all too easy for those numerous tiny dependencies to become a liability or for that framework or tool you built everything around to become a straitjacket. The relatively short lifetimes of many of these technologies can also become an expensive problem if the community drifts away and the security and compatibility work slows down or stops entirely but your project still depends on them as much as ever.

I don't like microlibraries very much either, but that's a different question. A monolithic frontend framework can make your life easier.

Monolithic frameworks can also corner you into edge cases where you end up having to write shitty workarounds because their "opinionated" framework didn't have an opinion based in reality.

Yes, and a car can go off the road and kill you. Nevertheless I don't choose to walk everywhere I go.

That was my point, that nothing solves every problem. Monolithic frameworks aren't inherently better than micro libraries, they are just different paths to achieving the same goal. I'm still going to choose micro libraries, because flexibility is more valuable to me than batteries included, which is just as good an argument as batteries included is better than flexibility.

Fair enough. Still, "I don't like microlibraries" is not an argument against client-side frameworks.

Strongly agree. What was a larger team is now a part time project. That's a goal and outcome of the churn.

there's still a lot of churn coming from following a moving target.

like when ios made the top of the page untouchable lest it pull safari out of full screen, breaking the toolbar convention of the past two decades.

and when ios made the bottom of the page untouchable lest it pull safari out of fullscreen, breaking the bottom-bar icons webapps used under the very apple guidelines > https://developer.apple.com/ios/human-interface-guidelines/b...

and when ios made the app sides unusable due to a varying notch, leaving a whole set of nonstandard properties you have to handle to render correctly in safari on an iphone x

if we had companies following standards decently, and linear, planned growth instead of the organic mess we're in, it'd be far easier to produce building blocks that work in a stable manner over time.

we're better off today than in the netscape days, but only marginally. as complexity increases, the cost of this constant churn does too, and the savings from better frameworks are not quite enough to offset the constant fads that come and go.

What is curious is that the tech industry has been very conservative about rethinking these 6 things that you just listed. Why rows in a relational database? Why objects? Why Javascript? And why HTML? We'd surely be in a better place if we got rid of these things and rethought our approach from first principles.

I've written about these issues many times before.

Regarding the problem with objects, I wrote "Object Oriented Programming Is An Expensive Disaster Which Must End":


Regarding the problems of HTML, I wrote "The problem with HTML":


To answer the question "Why Is Front-End Development So Unstable?" the answer is surely, in part, the fact that we refuse to build technologies that are designed to be great front-end technologies.

There have been decades of attempts at databases with a different model than relational and we have seen what happened. The reality is that a relational model is very well suited for general purpose databases. For specific needs you can use timeseries databases or key/value store, but at this point I seriously doubt that the Nth attempt of killing the relational model will succeed. And honestly I prefer an enforced relational schema rather than an ungodly mess of schemaless documents.

HTML is declarative if you use it that way. You can even use it as an API: https://bjoernkw.com/2018/05/20/html-is-an-api/

HTML is a good compromise between procedural / event-based UI frameworks (such as Java Swing or Apache Wicket) and visual UI design tools (such as RAD Studio, Apple's Interface Builder, or Adobe Dreamweaver): it lets you implement the most common patterns fairly easily, while often making the design of more custom UIs much more difficult.

There have been attempts at rethinking it, though; NoSQL was the buzzword a couple of years ago, and even nowadays there are mature tools like Firebase that allow you to store and retrieve data much more directly than e.g. SQL. The challenge in NoSQL storage is of course data migrations and whatnot. But yeah, in theory you can just open up a MongoDB instance to your front-end and not have to bother with SQL or much of a back-end.

And your last comment doesn't make much sense tbf; the big frameworks, most notably Angular and React (and its ecosystem) were both designed to be great front-end technologies.

Angular and React both rely on Javascript and HTML, so you can't describe them as "designed to be great front-end technologies". HTML was designed for document exchange; it is a descendant of SGML. Javascript was initially meant to be a lightweight scripting language that allowed dynamic elements in HTML. It's gotten better over the years, but it is still far from what you would expect if you were trying to build a great programming environment for the front end. As to the limits of HTML, just consider forms. In the last 20 years, there have been very few new form elements added. Compare the form elements available in HTML in 2018 to what VisualBasic 6 had achieved by 1999, or what Netbeans/Swing offered by 2003.

> HTML was designed for document exchange; it is a descendant of SGML. Javascript was initially meant to be a lightweight scripting language that allowed dynamic elements in HTML

I always see people saying this, but why does it matter? Electricity was originally piped into homes for lighting, but we don't need an alternate way to power all the electric devices in our home. Unix was designed for computers that are quite different from the ones we use today. And so on.

I can't repeat all of my arguments in a comment on Hacker News. I would ask that you read what I wrote in "The Problem With HTML":


I've been saying the same for years. As long as we are working directly with HTML, we will have these impedance problems.

The component/event model à la Swing et al. is a far more elegant match for modern Web development, which is now essentially the same as building window-based native applications.

OTOH, HTML was designed for static content delivery. Even if a framework must generate some HTML to remain compatible with browsers, there's no reason we have to work or think in HTML as our central interface model.

Give me a canvas, let me lay out (and style) components, then let me respond to events.

I agree with you on HTML/CSS but I don't see how JavaScript is any worse than VB or any other programming language ever used for frontends.

HTML has lots of problems, but it's there, sitting on every computer and mobile phone that you can think of. There's not really another viable cross-platform alternative. Your only option would be to render to HTML5 canvas and create some alternative rendering model from the DOM (not unlike Flash). There are tools that exist to do that today (CreateJS, for example), but they're not mainstream. You pay a huge penalty by going against the standard platform. All the interop, tooling, and libraries out in the world work on HTML/JS. Re-inventing that from scratch and coming up with your own hacks/solutions for accessibility, responsive design, style sheets, components, etc, is expensive and unlikely to succeed. I agree that OOP is an expensive disaster...but thankfully JS is flexible enough to code using functional patterns.

I don't know what you're talking about. People keep trying to replace all of those things.

(B) Nails it.

I'd guess that a large portion of new frameworks start simply because everyone who spent enough time with the old one gets sick of the bad or non-existent documentation for edge cases, etc.

Rinse and repeat.

Vue is no better, after reading the rest of the comments. In many ways worse.

You miss the last layer: user and browser.

Even with all the improvement is web technologies in recent past, the browser is still not close to native widget.

And finally, the users themselves. No matter how neat you managed to be in the underlying layers, the UI is messy. It can change fast, meaning either the massive cost of rebuilding an entire application, or breaking those neat layers.

Users also want everything connected to everything. It does not matter whether those connections are made explicitly via ugly spaghetti code or implicitly through clever abstractions; they effectively exist, and that's what the requirements, user-experience feedback, bugs, and testing will be based on.

Some good points... but I think you're too pessimistic. Some tiny subset of those fraction of users you mention manage to do something different / better. React is one example. GraphQL is another.

I don't see what's so "leaky" about going from 2-4.

Are there plans to publish Anvil as open source?

And are there any open source projects similar to Anvil?
