Think of a typical web app. Your data exists:
1. As rows in a database, accessed via SQL
2. As model objects on the server, accessed via method calls and attributes
3. As JSON, accessed via many HTTP endpoints with a limited set of verbs (GET/PUT/POST/DELETE)
4. As HTML tags, accessed via the DOM API
5. As pixels, styled by CSS.
Each time you translate from one layer to the next, there's a nasty impedance mismatch. This, in turn, attracts "magic": ORMs (DB<->Object); Angular Resources (REST<->JS Object); templating engines (JS Object<->DOM); etc. Each of these translation layers shares two characteristics:
(A) It is "magic": It abuses the semantics of one layer (eg DB model objects) in an attempt to interface with another (eg SQL).
(B) It's a terribly leaky abstraction.
This means that (a) every translation layer is prone to unintuitive failures, and (b) every advanced user of it needs to know enough to build one themselves. So when the impedance mismatch bites you on the ass, some fraction of users are going to flip the table, swear they could do better, and write their own. Which, of course, can't solve the underlying mismatch, and therefore won't be satisfactory...and so the cycle continues.
Of these nasty transitions, most are associated with the front end, so the front end gets the rap.
(I gave a lightning talk at PyCon two weeks ago, about exactly this - stacking this up against the "Zen of Python" and talking about some of the ways we avoid this in Anvil: https://anvil.works/blog/pycon18-making-the-web-more-pythoni...)
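The layer-hopping described above can be made concrete. A minimal sketch of one record crossing those layers, using Python's stdlib (the `users` schema and all names here are illustrative, not from the comment):

```python
import json
import sqlite3
from dataclasses import dataclass

# Layer 1: a row in a database, accessed via SQL
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")
row = conn.execute("SELECT id, name FROM users WHERE id = 1").fetchone()

# Layer 2: a model object on the server, accessed via attributes
@dataclass
class User:
    id: int
    name: str

user = User(*row)

# Layer 3: JSON, as it would travel over an HTTP endpoint
payload = json.dumps({"id": user.id, "name": user.name})

# Layer 4: an HTML tag, as it would be inserted into the DOM
# (layer 5, pixels, is the browser's job via CSS)
html = f"<li class='user'>{user.name}</li>"

print(payload)  # {"id": 1, "name": "Ada"}
print(html)     # <li class='user'>Ada</li>
```

Even in this toy version, each hop needs a little glue (`User(*row)`, the hand-built dict, the f-string), and that glue is exactly where ORMs, serializers and templating engines grow.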
On the front end, I tend to lean towards abstractions that work together... I really like React and the material-ui library's switch to JSS. It's relatively clean and useful. Even then, it's only a mild syntax adjustment, not a full-on abstraction. React is more of an abstraction, but it comes with functional paradigms that aid testing and make behavior more predictable.
It really depends, though. One can always do just JS/HTML/CSS, and there's something to be said for that. There are lighter tools, similar to jQuery, that smooth over a few of the rough edges. There's really an à la carte menu of available options.
The problem is that people assume that the PFM (pure fucking magic) will solve it all for them. You can use the cleanest or simplest abstractions, and then still write layers of incomprehensible spaghetti in between.
I think it's mostly premature optimization. People think writing DTOs is challenging, so they want an ORM. But since you end up needing DTOs anyway, removing SQL capabilities from the app means writing SQL in something that isn't SQL, and things like joins suddenly become slow and problematic, resulting in really heavy systems that are harder to change. For the joy of a quicker startup, the entire project moves slower.
ORMs have their place, but in the majority of the systems I've seen they were unnecessary, and in broad terms they don't provide any particular productivity advantage over using "dumb" SQL-based mapping solutions (à la Dapper [https://github.com/StackExchange/Dapper]) that preserve the power of SQL.
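Dapper itself is a C# library; as a rough sketch of the same "dumb mapping" idea in Python (the `query` helper and the schema are invented for illustration), the point is that you keep writing real SQL and the helper only maps result columns onto plain objects:

```python
import sqlite3
from typing import Any

def query(conn: sqlite3.Connection, sql: str, params: tuple = ()) -> list[dict[str, Any]]:
    """Dapper-style 'dumb' mapping: you write the SQL, the helper
    only zips result columns onto plain dicts."""
    cur = conn.execute(sql, params)
    cols = [d[0] for d in cur.description]
    return [dict(zip(cols, row)) for row in cur.fetchall()]

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER, name TEXT);
    CREATE TABLE orders (id INTEGER, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'Ada');
    INSERT INTO orders VALUES (10, 1, 9.99);
""")

# The join stays plain SQL -- no ORM query builder in the way.
rows = query(conn, """
    SELECT o.id, u.name, o.total
    FROM orders o JOIN users u ON u.id = o.user_id
""")
print(rows)  # [{'id': 10, 'name': 'Ada', 'total': 9.99}]
```

The mapping stays trivially predictable because it never tries to understand the SQL, only the result set.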
I don't think this is true. Writing these objects isn't difficult, it's tedious and repetitive. That's why people keep trying to automate it!
The problem is that you can't quite automate it smoothly, because SQL doesn't work like objects. You avoid this interface issue by taking the hit for the tedious-and-repetitive stuff directly (and I agree that's often the right choice) - but that doesn't dissolve the problem.
For more complicated queries, a pattern I have become quite fond of is making database views and then using them as the backing table for an ORM model. In Rails, at least, this gives you the best of both worlds.
It implies an architecture model where you put the business logic and type safety in the RDBMS.
It reduces the number of layers for a lot of functionality.
To this day, I haven't found anything (including ORMs, Spring support, etc.) easier to use, more flexible, or more sensible.
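The comment above describes Rails, but the trick is ORM-agnostic. A minimal sketch of the view side in plain SQLite (schema invented for illustration); an ORM model can then point at the view exactly as if it were a table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER, name TEXT);
    CREATE TABLE orders (id INTEGER, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'Ada');
    INSERT INTO orders VALUES (10, 1, 10.0), (11, 1, 5.0);

    -- The complicated query lives in the database as a view...
    CREATE VIEW user_order_totals AS
        SELECT u.id AS user_id, u.name, SUM(o.total) AS lifetime_total
        FROM users u JOIN orders o ON o.user_id = u.id
        GROUP BY u.id, u.name;
""")

# ...and the application (or an ORM model backed by the view)
# reads it like any other table, with no join logic in app code.
row = conn.execute("SELECT * FROM user_order_totals").fetchone()
print(row)  # (1, 'Ada', 15.0)
```

The join and aggregation stay in SQL, where the planner can optimize them, while the app-side mapping stays as dumb as a single-table read.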
Works great with no ORM abstraction and it was fun to write.
So, I expanded into supporting raw SQL SELECT queries that can include joins, which I parse and combine with DB metadata. I then generate the DTOs from there. So, a single DTO can have properties mapped to different tables, which I found much less redundant/limiting than entity-per-table designs.
In addition to the SELECT code, I can use simple checkboxes to also generate INSERT/DELETE/UPDATE/UPSERT code, which map the DTOs back to the underlying tables. It recognizes keys and includes multiple-table writes in a single transaction, etc. In addition to the DTOs and the DAO layer, it can also optionally generate a service interface.
Of the utilities I've written over the years, it is the one that most stands out as having paid me back incalculably.
If I were to write it today, I'd be more conscious of limiting dependencies and generally designing with open-sourcing in mind.
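The generator described above is far more elaborate (UPSERTs, multi-table transactions, service interfaces), but the core move, deriving a DTO from a query's result-set metadata rather than from a single table, can be sketched in a few lines (all names invented for illustration):

```python
import sqlite3

def generate_dto(conn: sqlite3.Connection, name: str, select_sql: str) -> str:
    """Toy DTO generation: run the SELECT once, read the result-set
    metadata, and emit a dataclass whose fields can span joined tables."""
    cur = conn.execute(select_sql)
    fields = "\n".join(f"    {d[0]}: object" for d in cur.description)
    return f"@dataclass\nclass {name}:\n{fields}"

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER, name TEXT);
    CREATE TABLE orders (id INTEGER, user_id INTEGER, total REAL);
""")

# One DTO whose properties come from two different tables.
src = generate_dto(
    conn, "UserOrderDto",
    "SELECT u.name, o.total FROM users u JOIN orders o ON o.user_id = u.id",
)
print(src)
```

A real version would also map the cursor's type information to proper field types and cross-reference DB metadata to recognize keys, which is where the write-side (INSERT/UPDATE/UPSERT) generation comes from.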
I agree, except that I would argue it’s often the easier way. If I had to give a one sentence answer to the original question, it would be, “Front-end [web] development is so unstable because people introduce so much accidental complexity.”
For example, while I don’t disagree with Meredydd that there can be awkward mismatches between the layers he described, I also think several of those layers only exist if you presuppose an object model in your programming languages. Arguments about object-relational mismatch have been made as a criticism of OO for far longer than we’ve been building substantial front-ends for web apps.
If instead you stay closer to the real data, your architecture reduces to the more traditional persistence and presentation layers. Since you’re on the web you have a distributed system so you also need a protocol for the remote communication between those layers. However, there aren’t any inherent mismatches in that combination, any more than there are if you build native applications or distributed systems using something other than web technologies.
It worked pretty well, though of course I actually started doing it because getting schema changes at my workplace was a painful endeavor.
I agree that a lot of the disconnect is induced by developers. It's also part of why I'm a pretty big proponent of a JS UI talking to a service written in JS. It allows for a lot less cognitive adjustment. I remember doing HTML/CSS/JS with Flash/Flex, with .Net, T-SQL, and VB6 in one workplace regularly. I swear every time I had to change from one to another, I was typing the wrong way for a good 15-20 minutes... answering questions at times took 2 minutes just to shake my brain out of whatever I was working in.
I started using less stored procedure code and the DB more as dumb storage, and embraced node pretty early on. Even if it is a "lesser" language, there's something to be said for one language to rule them all. (I do like modern JS though.)
I've recently started my Clojure journey (< 1 month in!), after stumbling across aphyr's very interesting work, and it is being driven entirely by this line of thought. It's taken me a long time, at least a decade, of moving deeper and deeper in to web development to start to really appreciate this perspective but it _feels_ like The Right Way at this point in my career. I'm hoping to, at the very least, be able to take those lessons from Clojure and apply them to the areas of my professional life.
I am getting into frontend for a hobby project after spending a few years doing ML and applied stats, and I am currently asking myself this question. If your db interaction is simple, a query is almost as easy to write and maintain as an interaction with an ORM, and is significantly more flexible. If it is something more sophisticated, then your ORM quickly becomes more of a hindrance than a help. What am I missing here? Where is the virtue of an ORM beyond not having to use SQL?
Some tooling that generates it for you helps a lot in some cases. I can see the appeal, but not in a dynamic language environment where there is less disconnect.
If you were writing a desktop application, you would still have at least three of the layers (serialized data on disk, in-memory data, and the rendering of the objects), but without the dramatic impedance mismatch that the web platform introduces everywhere.
You can get the same bits in your JS objects as you have in the DB. If not, that means your system is shit.
The problem with frameworks is not the hardship of funneling data up and down the stack.
The problem is that they are optimizing for different things. React optimizes for simplicity of making components. Angular optimizes for providing a full toolkit. And new versions then focus on different things. Server Side Rendering was hot, but now that Google just executes some JS and penalizes large downloads, it's the quest for less bytes on the wire. And tree-shake-ability. And faster time to first paint.
And as browsers and the web change, so do frameworks. And frameworks try to target both the future and the very present problems at the same time: they try to provide instant gratification, yet also optimize for what's coming.
So they usually look like half-assed, useless pieces of autogenerated-by-MS-Word code all the time. But they work, nevertheless, and power a lot of sites.
He's talking about different services each having their own preferred way to structure the data. When the layout differs, it cannot simply be a memcpy, and so you get tools that try to ease the tedium of translating one structure's layout into another. They get the job done most of the time, but run into edge cases that return the developer to manual tedium. Since developers do not like tedious work, some set out to find a new solution that solves for those edge cases, but they end up leaving many more on the table for the next intrepid developer.
Absolutely. But user/business data? That doesn't matter. When you design the system/stack, you pick the right components/tools (right data structures) that can losslessly represent the input/output of the neighboring/adjacent layers. If you want to store 500-byte-long fields, then make your DB column 500 bytes wide, make sure the backend accepts 500-byte-long input but rejects longer ones, and make sure your HTML input has a maxlength="500" (and account for Unicode code point surrogate / multibyte fuckery if applicable).
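A sketch of the backend half of that contract (the constant name is invented; note that HTML's maxlength counts characters while the DB column limit here is assumed to be in bytes, which is exactly the multibyte trap the comment warns about):

```python
MAX_FIELD_BYTES = 500  # keep in sync with the DB column definition

def validate_field(value: str) -> str:
    """Backend check: reject input whose UTF-8 encoding exceeds the
    column width.  500 characters can be up to 2000 UTF-8 bytes, and
    HTML's maxlength counts characters, so this byte-level check is
    the authoritative one."""
    if len(value.encode("utf-8")) > MAX_FIELD_BYTES:
        raise ValueError("field too long")
    return value

assert validate_field("a" * 500) == "a" * 500   # 500 ASCII bytes: ok
try:
    validate_field("\u20ac" * 200)   # 200 chars, but 600 UTF-8 bytes
except ValueError:
    print("rejected")

# The matching front-end hint (a weaker, character-based check):
#   <input name="note" maxlength="500">
```

The front-end attribute is only a usability hint; the backend check is where the lossless-representation guarantee actually lives.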
There's mismatch, of course, but as I've detailed in a sibling comment, it's because of differences in purpose and function. A DB is different from an HTML/CSS layout rendering engine, because they have very different sets of purposes, hence different interfaces, and so on. And frameworks are glue between these functions (and the layers we allocate them to).
> Since developers do not like tedious work, some set out to find a new solution that solves for those edge cases [...]
Yes, perfectly agreed. And since we concentrate on different edge cases each time, we move from trade-off to trade-off with each new framework and browsing trend/fad (mobile, tablet, SSR, ultra-tree-shakable-gzip-able, "native" [mobile] compilable, etc.).
I'm not sure what exactly you mean by that. But one thing is absolutely clear. You cannot automatically derive a logical layer from the layer above or below it. If you could, there would be no reason to have separate (logical) layers in the first place.
That's why you get an "impedance mismatch" that has to be bridged by providing some additional information, which often has consequences for performance, debuggability and clarity.
Maybe I misunderstand the gist of your comment though.
The problems are about development trade-offs: TypeScript vs JS; small library vs few features; complexity vs code modularization plus chunked lazy loading; optimization, i.e. script load time vs development time; and throw in cross-browser compatibility. Also supported features vs complexity: HTTP/2 is nice, and fast, but it's more complex, plus you need HTTP/1.1 too for old clients; and maybe your API somewhere doesn't support prefetch, or you can't hint your backend to push that to the client, or you can't access the raw request after the framework extracted the request attributes, blablabla. Then there's visual communication: current/modern components vs old jQuery-sprinkled DOM result in different sites; mobile first, mobile browsers, React Native and Ionic. And these trade-offs are different over time. So we get different frameworks over time. And since the change in browsing is very fast, and the effort to start a new framework is small, we get a lot of new unstable frameworks. (And since those frameworks rarely really mature, we get a lot of new ones, because there's not really a "sunk cost" for developers when abandoning the old ones. And it's easy and hip to pick up new skills, try them out on a new project, etc.)
Also, you can put the "business logic" into one place, represent it, and then push that representation to the client (GWT, Scala.js, or crude autogenerated forms, point-and-click website/workflow builders, and so on). Of course, if you want to change the system, it might be a big pain in the ass to represent something very different from what it was designed for, so these kinds of entombed vertical complexity barriers lead to a metastable state: you hack something quick on the layer most accessible to you with respect to the task, instead of properly implementing it in the whole vertical stack, and these hacks grow, and the elegant single-source representation of business logic goes out the window. (Or, if you implement everything in one place, you are destined to implement a very powerful - or verbose - DSL to describe the "front end logic" - which should be CSS - and the DB optimization logic - which should be SQL, and so on.)
OP forgot how life looked when your web app project was a handcrafted HTML page with manually inserted script tags. When your form submission was a multi-level backend API in PHP. When jQuery plugins with 20+ options were published randomly on the internet.
Every tool you introduce is hours of troubleshooting just waiting to happen.
To be fair, just about anything works fine if your client-side needs are simple. However, I have reached the opposite conclusion to you: the more customised and complicated and large-scale and long-lived the software becomes, the less value I see in a lot of the popular but ever-changing web technologies and the more I am likely to favour building on the standard foundations with minimal dependencies in between and usually a relatively small but high-value set of libraries.
The benefits of quickly fetching many tiny packages with a package manager or of building on top of all-encompassing frameworks or automation tools are mostly found in two situations, in my experience: getting started quickly (including rapid prototyping exercises) and ongoing development if (and only if) you are staying almost entirely within the bounds of what your chosen technologies already do well.
However, if your requirements start to evolve and diversify in a longer-lasting project, it’s all too easy for those numerous tiny dependencies to become a liability or for that framework or tool you built everything around to become a straitjacket. The relatively short lifetimes of many of these technologies can also become an expensive problem if the community drifts away and the security and compatibility work slows down or stops entirely but your project still depends on them as much as ever.
like when ios made the top of the page untouchable lest it pulled safari out of full screen, breaking the toolbar convention of the past two decades.
and when ios made the bottom of the page untouchable lest it pulled safari out of fullscreen, breaking the bottom-bar icon webapps did under the very apple guidelines > https://developer.apple.com/ios/human-interface-guidelines/b...
and when ios made the app sides unusable due to a varying notch, leaving a whole set of nonstandard properties you have to handle to manage correctly being in safari on an iphone x.
if we had companies following standards decently and linear, planned growth instead of the organic mess we're in, it'd be far easier to produce building blocks that work in a stable manner over time.
we're better today than in netscape days, but marginally. as complexity increases, the cost of this constant churn does too, and the savings from better frameworks are not quite enough to offset the constant fads that come and go.
I've written about these issues many times before.
Regarding the problem with objects, I wrote "Object Oriented Programming Is An Expensive Disaster Which Must End":
Regarding the problems of HTML, I wrote "The problem with HTML":
To answer the question "Why Is Front-End Development So Unstable?" the answer is surely, in part, the fact that we refuse to build technologies that are designed to be great front-end technologies.
HTML is a good compromise between procedural / event-based UI frameworks (such as Java Swing or Apache Wicket) and visual UI design tools (such as RAD Studio, Apple's Interface Builder or Adobe Dreamweaver), letting you implement the most common patterns fairly easily while often making the design of more custom UIs much more difficult.
And your last comment doesn't make much sense tbf; the big frameworks, most notably Angular and React (and its ecosystem) were both designed to be great front-end technologies.
I always see people saying this, but why does it matter? Electricity was originally piped into homes for lighting, but we don't need an alternate way to power all the electric devices in our home. Unix was designed for computers that are quite different from the ones we use today. And so on.
The component/event model à la Swing et al. is a far more elegant match for modern Web development, which is now essentially the same as building window-based native applications.
OTOH, HTML was designed for static content delivery. Even if a framework must generate some HTML to remain compatible with browsers, there's no reason we have to work or think in HTML as our central interface model.
Give me a canvas, let me lay out (and style) components, then let me respond to events.
I'd guess that a large portion of new frameworks start simply because everyone who spent enough time with the old one gets sick of the bad or non-existent documentation for edge cases, etc.
Rinse and repeat.
Vue is no better, after reading the rest of the comments. In many ways worse.
Even with all the improvements in web technologies in the recent past, the browser is still not close to native widgets.
And finally, the users themselves. No matter how neat you managed to be in the underlying layers, the UI is messy. It can change fast, meaning either the massive cost of rebuilding an entire application, or breaking those neat layers.
Users also want everything connected to everything. It does not matter if those connections are done explicitly via ugly spaghetti code or implicitly through clever abstractions; they effectively exist, and that's what the requirements, user experience feedback, bugs and testing will be based on.