A framework usually has to predict, ahead of time, every kind of thing a user might need to do within its walls.
The one thing not mentioned in the article is that the above line of thinking is almost guaranteed to lead to an insane level of abstraction, which was parodied in this classic article from nearly 15 years ago:
Spring class names are the stuff of legend, but most of those classes you wouldn't deliberately use in your application. They're there so Spring can layer up its own functionality feature by feature. The only problematic part about this is when these class names leak into error messages, and the level of indirection becomes difficult to follow.
The way you use the core inversion-of-control framework that people often just call "Spring" is: you take your homegrown, doesn't-depend-on-Spring business logic, you write a config file, and, if you're not running in a servlet container, you make one entry-point class to start it all up [1].
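To make the shape of this concrete: in the classic setup, that config file is an XML bean-definition file like the sketch below, where `com.example.InvoiceRepository` and `com.example.InvoiceService` are hypothetical, Spring-free business classes.

```xml
<!-- applicationContext.xml: the only place that knows how the pieces fit.
     The classes it wires contain no Spring imports at all. -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="invoiceRepository" class="com.example.InvoiceRepository"/>

    <bean id="invoiceService" class="com.example.InvoiceService">
        <!-- plain constructor injection: the service never looks anything up -->
        <constructor-arg ref="invoiceRepository"/>
    </bean>
</beans>
```

The entry-point class then just loads this file (e.g. via `ClassPathXmlApplicationContext`) and asks for the top-level bean; everything below that line stays framework-free.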
There are other projects under the Spring umbrella, and some of them are meant to be called from your code, like Spring JDBC for database access, whose framework-like nature is apparent, yet whose usage pattern resembles a library [2].
I agree in theory, but in practice I've found it very hard to keep Spring (Boot) out of my code.
Part of the blame, I think, falls on Spring's DI container, which is "too good". It makes it easy to pull in any bean, even the wrong one, which means developers have to be disciplined about managing how beans depend on each other, or you end up with controllers returning entities. Of course, any codebase turns into a mess if you're not disciplined, but my impression is that developers tend to be less careful when working within the confines of a framework like Spring, because it gives them the false sense that they can do no wrong.
Recently I started a side project with Spring Boot (after all I like the framework), trying to organise it following Uncle Bob's Clean Architecture approach, which aims to isolate the core business logic of an app from its implementation details like database code and external interfaces. The core turned out pretty well isolated, but the rest is all Spring. Database access? Spring Data. External interface? Spring MVC. Communication with external services? Spring RestTemplate. I'm not saying it's bad, it's actually awfully convenient, but it's not so easy to swap out - say - Spring MVC for http4k, so you end up with Spring being everywhere. Which again, not a bad thing, just something to consider.
> The core turned out pretty well isolated, but the rest is all Spring. Database access? Spring Data. External interface? Spring MVC. Communication with external services? Spring RestTemplate. I'm not saying it's bad, it's actually awfully convenient, but it's not so easy to swap out
External interfaces are just isolated interfaces, which end up being tied to a framework as frameworks do most if not all of the heavy lifting.
Just because you picked Spring Database to implement your persistence layer it doesn't mean you are bound to Spring to add a webapi or a web app, though.
If for some reason your frameworks are leaking out of any of your external interfaces then that's an issue with how you designed your app, not the frameworks you used.
As an example, some Clean Architecture examples using ASP.NET Core implement their persistence layer by passing around Entity Framework classes as their interface. That, obviously, tightly couples the whole app to Entity Framework in particular and ASP.NET Core in general. This coupling could be eliminated entirely by passing around a generic repository interface, but the convenience of using Entity Framework directly often tempts us to skip that.
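The generic-repository alternative mentioned here can be sketched in plain Java; the `Customer` domain type and the repository names are hypothetical, standing in for the Entity Framework / Spring Data situation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Plain domain type: no ORM annotations, no framework imports.
class Customer {
    final long id;
    final String name;
    Customer(long id, String name) { this.id = id; this.name = name; }
}

// The boundary the core depends on. Swapping persistence frameworks means
// writing a new implementation of this interface, not touching business logic.
interface CustomerRepository {
    Optional<Customer> findById(long id);
    void save(Customer customer);
}

// One implementation; an EF- or Spring-Data-backed one would sit beside it.
class InMemoryCustomerRepository implements CustomerRepository {
    private final List<Customer> store = new ArrayList<>();
    public Optional<Customer> findById(long id) {
        return store.stream().filter(c -> c.id == id).findFirst();
    }
    public void save(Customer customer) { store.add(customer); }
}
```

The convenience cost is real: at this boundary you give up ORM niceties like change tracking and query composition, which is exactly why the shortcut is so tempting.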
> Just because you picked Spring Database to implement your persistence layer it doesn't mean you are bound to Spring to add a webapi or a web app, though.
I agree, you're definitely not bound to, but again, that's in theory. From what I've observed in practice, factors like convenience and the friction of going off the beaten path mean that when you pick Spring, your app ends up being 90% Spring stuff. Which is not necessarily bad: there are certainly many good things about using the well-supported, robust, and predictable set of Spring components. But I wouldn't say that Spring is the kind of framework that merely helps you tie together your app, otherwise stays out of the way, and can be swapped out at any time.
With Spring Core (the DI container) the only parts of your code that have to depend on Spring are the config and your entry-point.
You can choose to import facade-style Spring libraries, and write your code against them -- like Spring JDBC, Spring JMS, TaskScheduler -- if you want something that does some heavy lifting for you, but doesn't tie you to a vendor implementation directly.
Meanwhile, Spring Boot is a fully opinionated framework built around the combination of defaults and on-the-fly auto-configuration, giving you a single runnable uber-JAR at the end. And you're right: if you're using Spring Boot, it makes sense to depend on other Spring libraries, because (1) the docs guide you into them, and (2) the two interact to auto-configure. For example, you can just run your DB code and it can auto-configure an in-memory instance while you're developing [1]. Then, when you've got the real database, just specify the real config.
Spring Boot really shines in bootstrapping a greenfield project to get going quickly, especially if you're willing to compile its annotations into your code. You can then go back and incrementally override behavior and configs once you realize you want them a certain way.
DI/IoC is just another way of expressing global variables. The global variables exist because of the constraint of class-orientation getting in the way, among other things. Spring exists because of the relative weakness of the Java language. You won't see this kind of technology emerging in more powerful languages.
This is very often the case. So often you see these stateless "service" classes injected everywhere. Such a thing isn't an object at all; it's just free functions packaged in a namespace with a vtable in front of it, which now needs to be allocated. A language with support for free functions alleviates that problem.
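A minimal Java illustration of the point, with hypothetical names: the two snippets below do exactly the same thing, but only the first needs to be constructed and injected.

```java
// What often gets written: a stateless class that must be instantiated and
// injected everywhere, even though it holds no state at all.
class PriceService {
    long applyDiscount(long cents, int percent) {
        return cents - (cents * percent / 100);
    }
}

// The same logic as a free function packaged in a namespace: no instance,
// no injection, nothing to allocate.
final class Prices {
    private Prices() {}
    static long applyDiscount(long cents, int percent) {
        return cents - (cents * percent / 100);
    }
}
```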
I'm having to do a lot of this now, and I cannot imagine having to write any code, in any language, this way without depending on a magic IDE and other magic libraries.
To be fair, at least Spring and Lombok don't require me to maintain loads of yaml/XML files to describe the DI stuff, but annotations in java aren't always the most intuitive thing.
Requiring a class representation of literally everything is a huge limitation, and I wish there were other alternatives to this style of DDD that abstracts everything to an absurd degree.
If they were JUST functions, they could be static utils and wouldn't need to be beans. But these service classes have their own fields/dependencies.
The Service class holds an instance of the Caching class and the Database class, each of those hold an instance of clients with connections, for example.
Can you explain how language support would remove this pattern?
I'm currently working on Scala services, and we don't have any DI, which is actually kinda nice, but it just means we "new-up" all these classes manually in our Main class and pass them through as arguments one to the next.
So far I like how there is no "spring magic"; however, in the end the pattern of dependencies is the same.
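Rendered in Java (the thread's lingua franca), the hand-wiring pattern described above looks something like this; the class names echo the hypothetical Service/Cache/Database graph from upthread, with stub bodies.

```java
// Each class declares its dependencies in its constructor; nothing is
// looked up from a container.
class Database {
    String query(String sql) { return "row"; }   // stub for the sketch
}

class Cache {
    private final Database db;
    Cache(Database db) { this.db = db; }
    String get(String key) { return db.query("select " + key); }
}

class Service {
    private final Cache cache;
    Service(Cache cache) { this.cache = cache; }
    String handle(String key) { return cache.get(key); }
}

class AppMain {
    public static void main(String[] args) {
        // The whole graph is newed up by hand, in dependency order, in one
        // place: exactly the work a DI container automates.
        Database db = new Database();
        Cache cache = new Cache(db);
        Service service = new Service(cache);
        System.out.println(service.handle("users"));
    }
}
```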
Most of those crazy looking Spring classes aren't used by users and are just used internally by the framework developers. Of all the complaints people have about Spring, I've never heard "these classes are too abstract" or "these names are too obtuse."
There was maybe one instance where I had to deal with obtuseness like that. Definitely not the norm.
After X11 and Spring, the Maximum Abstraction Task Force shifted to focus on infrastructure, which is why your Java Virtual Machine processes are now running as a uniquely allocated user ID in a container scheduled by a control plane on a pool of OS instances separated by the hypervisor service of a dynamically sized cloud placement group.
A hypervisor definitely is; the "hardware" interface for a kernel running as a (performant) VM is different from the hardware interface for real hardware.
I think one could argue that containers aren't much of an abstraction for the running program itself, though, as you say. The place where containers provide an abstraction is at deployment time/process management time; the program itself has (essentially) the same API whether it's running in a container or not.
... but at the end you get a perfect apple pie that is exactly tailored to the ecosystem, doesn't suffer from overdesign or bloat, is baked to perfection, and tastes heavenly.
The only question is whether you can afford to hire a few gods for the task, or whether you'll settle for whatever falls off a truck and is in reality completely incapable of inventing a universe (or even of baking).
In the end, it's the customer's digestive problems that matter. And you can get sued for food poisoning.
I personally love apple pies and am just not satisfied with plastic taste of supermarket frozen, microwave oven heated "products".
“When you want to hurry something, that means you no longer care about it and want to get on to other things.”
― Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance
My comment in that discussion from ~15 years ago was based on experiences with a project that I "inherited" - 30,000 Java classes and interfaces, 20+ layers of abstraction, team of 30+ people working for a long time. And most importantly, it didn't actually work!
I actually "finished" it to the point of a working system for the customer with real customers in about 5 weeks by ignoring most of what they had done - required a handful of classes and JSPs. Probably distinctly under-architected but it worked and was built on for a number of years after.
The problem with over-engineered projects... our team inherited a project that was designed by some architects in an ivory tower and implemented by some other engineers.
It was quite complex (message queues, multi-threading, async, etc.) so that it would be 'scalable', yet it buckled under the slightest load; customers were experiencing delays during peak hours.
We removed about a quarter of the code (some intermediate queues and components), and nothing changed in terms of functionality. Now we're removing more code and switching to less complex data structures to fix the delays.
Maybe a complete rewrite would have been better after all, who knows.
I'm not a very experienced developer. I like to ship stuff and get projects done in reasonable time and a "YAGNI" attitude. Let's just rewrite stuff and introduce more abstractions when it actually hurts and a refactor is in scope.
On the other hand, I see a tendency that if you give a project (greenfield or maintenance) to a team of, say, 6 developers, they will just 'create work' for themselves, filling up the todo list with items, which may lead to an over-engineered project that could easily have been done by 3 people within the same timeframe.
If you have some good reads/stories on this topic, patterns to watch for, I'd appreciate if you shared them.
> Let's just rewrite stuff and introduce more abstractions when it actually hurts and a refactor is in scope.
This is a very good attitude, but I would like to tack on a second part to it: Instead of designing your code to be infinitely extensible in all directions, make it easily replaceable instead.
It's impossible to anticipate every future requirement, but certain choices will make it easier to deal with a new requirement as it pops up. One of these things is unit tests. If you have some gnarly business logic with tons of edge cases, you really really want a good test suite because it enables you to add new edge cases with confidence.
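As a tiny sketch of what that test suite buys you, here is an invented business rule pinned down with plain `assert`s (enable with `java -ea`); the rule and all names are hypothetical.

```java
// Hypothetical gnarly rule: shipping is free at or over $50, oversized
// items always pay a flat fee, and negative orders are rejected.
final class Shipping {
    static long costCents(long orderCents, boolean oversized) {
        if (orderCents < 0) throw new IllegalArgumentException("negative order");
        if (oversized) return 1500;               // flat oversized fee
        return orderCents >= 5000 ? 0 : 499;      // free at/over $50
    }
}

class ShippingTest {
    public static void main(String[] args) {
        // Each edge case is pinned down once; a new edge case gets a new
        // line here, added with confidence that the old ones still hold.
        assert Shipping.costCents(5000, false) == 0 : "boundary: exactly $50";
        assert Shipping.costCents(4999, false) == 499 : "just under the limit";
        assert Shipping.costCents(9999, true) == 1500 : "oversized beats free";
    }
}
```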
I'm a strong proponent of this attitude. The most maintainable code is code you can easily delete.
It does require a lot more thought and discipline, because it's not just a case of constructing an epic hierarchy of abstractions, or stubbing/mocking the hell out of your code in the tests to swap things around.
Couldn't agree more. A little redundancy early on in a product's lifecycle, when requirements are still in their infancy, can be a very valuable thing. It's much easier to add a layer of abstraction later as requirements become clear than to remove an unneeded layer of abstraction (which is often impossible).
I've been on too many projects which tried to build the perfect abstraction right out of the box, as if the problem domain were fully known.
I genuinely hate the software architect title.
It's like when we were in school and the teacher told us to write an outline before writing. No problem! The majority wrote the text first and then the outline.
That's because people in general write to think.* The level of deep thought achieved through writing is harder to achieve beforehand.
In the same sense, the level of deep thought achieved through writing code is hard to achieve beforehand. It's actually worse, because your engineers, tangled in this web of classes, won't be able to think about the big picture for themselves.
An architect thinks in terms of what sounds good, what is beautiful in OOP-land, not in terms of what is easy to write and what's performant to implement.
One poignant example is the interviewer who wants you to implement chess pieces as classes, which sounds good in the architect's world but is blatantly insane if you think about the actual code and the actual challenges you're facing.
* Writing to think and writing to be read are conflicting goals; that's why editors exist.
I was where you’re at 15 years ago; my language of choice was C++/Boost. I was advised to try C, not because it is a good language, but because it’s so hard to use, it helps you focus on the exact problem you need to solve. To this day I always “do one in C; do one in X”. I think Python (substitute any ‘batteries included’ language) can be good for this, too, as there’s no need to Yak shave with them.
Similar experience here. About 6 months ago I inherited a team of 30 that had spent 4 years building some monstrous microservice "thing" that never seemed to build reliably and barely worked even when it could be deployed. Lots and lots of late nights trying to "deploy to testing" and wasted weekends restarting things.
I stopped the work, broke the team up (and let a few of them go), re-assigned the goals of the project to 3 people who, working half-time, from scratch, and in Python, rebuilt the entire thing in 3 months, delivered it and it worked (and continues to work). "Builds" are now an scp to a VM, and a restart of a flask application.
My wife recalls a similar experience where a complex application she worked on (and ran reliably on very little infrastructure and with an entire team of 4), and serviced thousands of simultaneous users, was replaced (when a new CTO came in) by a giant, consultant driven Java project that required approximately 6x the infrastructure, 3x the staff to keep it going, and at launch could only serve 9 to 12 users at once. It was a debacle. But sunk cost fallacy forced her company to throw millions of dollars of consultants at it until it achieved some minimal level of barely available service. She and her boss left around that time.
She heard that a couple years later the CTO was let go, the consultants fired and it was all replaced by a team of 5 rewriting it all again from scratch (in Java) but like you also without all the architecture barf dropped in from orbit, and immediately went back to servicing the thousands of users again.
I'm convinced that this is the fault of some kind of consultant industry that seems to infest enterprise software circles that is really only good at inventing ways of designing systems that require more of their services, but never seem to actually deliver working systems.
As someone who fell into a similar situation just last year, I can say that it helps a lot when you have previous work that had been done before you were involved. Perhaps the existing project doesn't show you how things should have been done, but it certainly shows you some of the challenges that were faced and what not to do about them so you don't fall into the same traps as they did.
So it's not always "those architecture astronaut guys used the company's resources to study their webscale fantasies and I came to save the day", but rather, "they did as well as they could using the choices and resources they stuck with and I happened upon the project with 20/20 hindsight".
I agree with what you say here to a large degree. We were able to reuse things like external service connections, accounts, contracts, some of the requirements documentation and some of the already provisioned hardware, VMs and so on. Those things are not trivial to put together and easily saved us several months of work.
Where I think I part ways with you is with this "they did as well as they could using the choices and resources they stuck with and I happened upon the project with 20/20 hindsight". One of the big lessons we've learned as an industry involved in software development is to start small, iterate often, get feedback from users. What I see time and again with these types of overblown space elevator projects is a kind of fundamental...immaturity -- the people who run these don't seem to understand how to achieve results with the minimum required to do it.
They've never started small and iterated to large. The projects they've worked on have all been so enormous, and have taken so long, that they have very few data points to draw lessons from. Each iteration and growth cycle in a mature project creates lots of information that can be reapplied elsewhere. But for people who've only ever grown up in large enterprise software projects, that's the only approach they know, and there's only a handful of lessons they've learned.
It's not only architecture astronauts, but the entire ecosystem that supports "enterprise" software engineering: vendors, consultants, scaled agile experts, design tools -- even university programs that churn out enterprise ready engineers. The appearance of these things, and how widespread they are in certain circles, demonstrates how those circles have regressed and tossed into the rubbish bin the lessons that we've learned. The penalty that's being paid is that these same lessons have to be continuously relearned again and again, but instead of pushing against outside models of how to do things (e.g. trying to adopt physical engineering approaches to software), we have to push against our own industry.
When you go against billion dollar enterprise software industries, you end up sounding like a heretic. It's really only when you get a long portfolio of success stories can you succeed. But that's very hard to get in today's climate where people spend their entire careers trying to kill ants with nuclear weapons dropped from the orbit of Mars.
This was very well put, and I definitely agree with many of the sentiments you've put forth about bottom-up development (get up and running with as little RPC as possible) vs. sideways development (copy the entire Netflix architecture and try to make it work).
I just didn't put the blame on the previous team, maybe I should have for sticking with it for two entire iterations, still not entirely convinced. I think they couldn't help it and the decisions were already made for them by large faceless corporations. I guess what's the point of calling themselves engineers if they don't feel like they are in a cockpit pressing lots of buttons right from the start?
By profession, I call myself a computer programmer, not an engineer or architect in order to remind myself that I should weigh problems by starting from fundamentals such as a state machine, an integer constraint system or a breadth first search, before setting up service discovery or training neural networks.
We had a similar debacle back during the dot-com boom. We were a Microsoft shop and we got a new CTO who despised our monoculture and demanded a shootout against Java. A large team of consultants and employees labored for almost 10 months to produce a system that scaled to .. 2 simultaneous users while running on the latest Compaq servers with tons of memory. Meanwhile our legacy VB6 app was serving 1600+ users on mid-tier hardware.
getInstrumentableClassLoader()
Return a ClassLoader that supports instrumentation
through AspectJ-style load-time weaving based on
user-defined ClassFileTransformers.
Oh man... I admire the people who have invented this. But at the same time I feel sorry for those who have to use this in order to earn their living.
It does have its place. Aspect-oriented programming allows you to write, for example, logging code that is very configurable. It lets you remove the concern of logging from your classes completely.
In fact, any cross-cutting concern, like transactions, can be handled this way. So you can write code that doesn't care about transactions, while behind the scenes it all runs within a single transaction and transaction errors are handled centrally.
It adds runtime costs, of course. And the more you add this way, the more complicated it becomes. And it certainly can be abused.
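For a feel of the mechanism, the JDK alone can express a crude version of this (Spring's proxy-based AOP works on a similar principle by default): a `java.lang.reflect.Proxy` that brackets every call in begin/commit, so the business class stays transaction-free. All names are hypothetical, and the println calls stand in for real transaction management.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface AccountService {
    void transfer(String from, String to, long cents);
}

class AccountServiceImpl implements AccountService {
    // No transaction code here at all: the cross-cutting concern lives outside.
    public void transfer(String from, String to, long cents) {
        System.out.println("moving " + cents + " from " + from + " to " + to);
    }
}

final class TransactionalProxy {
    // Wraps any interface so every method call runs inside begin/commit.
    @SuppressWarnings("unchecked")
    static <T> T wrap(Class<T> iface, T target) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("BEGIN");          // stand-in for tx.begin()
            try {
                Object result = method.invoke(target, args);
                System.out.println("COMMIT");     // stand-in for tx.commit()
                return result;
            } catch (Exception e) {
                System.out.println("ROLLBACK");   // centralised error handling
                throw e;
            }
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }
}
```

Calling `TransactionalProxy.wrap(AccountService.class, new AccountServiceImpl()).transfer(...)` runs the transfer between BEGIN and COMMIT without the implementation ever mentioning transactions.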
Just use a language where it's possible to implement those things in plain old code. When you have higher-kinded types you don't need any bytecode manipulation, you can just have a type that wraps anything that needs to happen in a transaction, and compose together transactional operations in a visible but type-safe way - and you don't even need the runtime cost.
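A rough Java rendering of that idea (a real version in Scala or Haskell would lean on higher-kinded types; this simplified sketch is hypothetical): a `Tx<T>` value describes a transactional step, steps compose with `then`, and only `run()` at the edge brackets the whole chain in begin/commit.

```java
import java.util.function.Function;
import java.util.function.Supplier;

// A deferred transactional computation: nothing runs until run() is called,
// and run() brackets the whole composed chain in a single begin/commit.
final class Tx<T> {
    private final Supplier<T> step;
    private Tx(Supplier<T> step) { this.step = step; }

    static <T> Tx<T> of(Supplier<T> step) { return new Tx<>(step); }

    // Sequencing: feed this step's result into the next transactional step.
    <R> Tx<R> then(Function<T, Tx<R>> next) {
        return new Tx<>(() -> next.apply(step.get()).step.get());
    }

    T run() {
        System.out.println("BEGIN");          // stand-in for real tx.begin()
        try {
            T result = step.get();
            System.out.println("COMMIT");     // stand-in for real tx.commit()
            return result;
        } catch (RuntimeException e) {
            System.out.println("ROLLBACK");   // centralised error handling
            throw e;
        }
    }
}
```

For example, `Tx.of(() -> 40).then(n -> Tx.of(() -> n + 2)).run()` yields 42 with a single begin/commit around both steps, and the composition is visible and type-checked rather than woven in behind the scenes.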
How anyone can look at Java as it's actually used and say Scala is too complex is beyond me.
You can also do it visibly in Java without too much noise IMO. The point of AspectJ is to hide this stuff completely from application code (but it's still visible in its definitions). Whether that makes sense or not I'm not going to debate (I personally don't think AspectJ is particularly great) but arguing that other languages allow you to do that in a more or less visible way is besides the point.
AspectJ is a language, these aspects are visible and have their own language abstractions, just separate from the application code. Just like you have monad implementations separate from their uses.
So writing aspects is also "plain code". How the weaving happens (at compile time or load time, on source code or on bytecode, etc.) is an implementation detail.
Also, AspectJ isn't Java; the problems AspectJ tries to solve, and its approach, apply to a bunch of languages, and variants for functional languages exist as well.
> AspectJ is a language, these aspects are visible and have their own language abstractions, just separate from the application code. Just like you have monad implementations separate from their uses. So writing aspects is also "plain code".
Code containing aspects is certainly not plain Java code, because the aspect annotations (or, worse, invisible string-name based pointcuts) change the behaviour of the code they're on, breaking the normal rules of the language. You'll see "impossible" behaviour at runtime: call a method and the call stack jumps to a completely different method. A method throws an exception that's thrown nowhere in its body. If you want to say that AspectJ is a language in its own right, then it's a language that breaks all the rules of good language design: it essentially contains COMEFROM as a core language feature. Whereas monad implementations are (generally) plain old code that follows the normal rules of the language.
> Also, AspectJ isn't Java
It (or other libraries that do much the same thing) is the vast majority of Java as it actually exists in the real world. I've yet to see a substantial Java codebase that wasn't using some form of AOP or reflection (even if only indirectly via these big frameworks).
> Code containing aspects is certainly not plain Java code,
I didn't say it was. AspectJ is AspectJ, as you surmised correctly "a language in its own right", and so when you define an `aspect`, that's its own thing, not a `class`. You seem to demand that the language be Java, but then that's not really a criticism of AspectJ the language; it's a denial that it is a language.
It's like saying a C++ class with implicit behaviors that change the behaviour of user code (via a custom assignment-operator overload) is crap because it's not C. Maybe it's crap because you don't like it and prefer C, but it's still not quite fair to demand it be "plain code" when it is plain code.
Please look more deeply into https://en.m.wikipedia.org/wiki/AspectJ. It has its own syntax and is not just some annotations plus a framework you have to run.
FWIW, your arguments against AspectJ work just as well against any language with macros or metaprogramming in general (although you probably don't like those either, and that's fair!).
> If you want to say that AspectJ is a language in its own right, then it's a language that breaks all the rules of good language design
Frankly, the kinds of things that AspectJ offers or proposes are IMO no less sane than some of the things that languages like Ruby (e.g. monkey patching, middleware decorating/nesting calls via including a module, method definition being a dynamic call that can be intercepted and customized, etc.), Python (e.g. decorators), JS (e.g. messing with prototype behaviors), or even Prolog (e.g. metainterpreters that use metalogical predicates to implement different resolution strategies) typically encourage.
People have called monads the "programmable semicolon", and so you can do all kinds of things that don't at all represent what a user of the monad would anticipate unless they read the docs/implementation. It could enforce some form of order of evaluation, or store additional hidden state, etc. Sure, this is not the same thing as aspects or metaprogramming/macros, but what they have in common is that some external definition determines the exact semantics of an application/user piece of code and has a whole lot of freedom to stray from what looks "obvious".
The goal of AOP is to find a solution to challenges where modularization by typical means hasn't worked, and the typical examples are usually valid. Is it great? Is it needed? I'm not sure either, but your criticism sounds a lot like you're only looking at it through the lens of a pure, statically typed functional programmer. From that POV a lot of language designs probably look impure and crap, whether justified or not.
> It (or other libraries that do much the same thing) is the vast majority of Java as it actually exists in the real world.
Are you specifically talking about AOP or AspectJ? I believe the former. And I don't think it's AOP, you are talking about various frameworks implementing a wild set of changes to the core language via reflection, code generation, or other such things tailored to their specific use cases.
Say what you will, but that, while it might fall into the AOP class of doing things, is quite different from AspectJ's goals. The idea is/was that you don't need all these different frameworks with their own conflicting custom implementations. Rather, AspectJ should be the common language in which these metaprogrammatic, aspect-oriented concerns are defined (for instance, by a framework that currently uses its own custom implementation with no common set of rules), so that they can be analyzed and given static tool support (e.g. who is amending, wrapping, or early-aborting behavior for this particular method).
So, again, I don't think the criticism of AspectJ is fair (and that's what I addressed, not the modern state of Java and frameworks in general). If anything, the problem is that AspectJ itself isn't actually used; instead, its superficial ideas have been coopted and turned into a metaprogramming wild west. Maybe that's AspectJ's fault, but I think it's just a coincidence. That state of affairs is not unique to Java at all, either.
> I didn't say it was. AspectJ is AspectJ, as you surmised correctly "a language in its own right", and so when you define an `aspect`, that's its own thing, not a `class`. You seem to demand that the language be Java, but then that's not really a criticism of AspectJ the language; it's a denial that it is a language.
By building on Java AspectJ tries to have it both ways, which I think is a mistake. It's normal, and I'd even say semi-encouraged, to use non-AspectJ-aware tools when working on an AspectJ codebase - and of course an upstream library maintainer may not even know their library is being used in an AspectJ codebase. Which causes real problems because they will refactor according to a different set of rules.
> FWIW, your arguments against AspectJ work just as well against any language with macros or metaprogramming in general (although you probably don't like those either, and that's fair!).
> Frankly, the kinds of things that AspectJ offers or proposes are IMO no less sane than some of the things that languages like Ruby (e.g. monkey patching, middleware decorating/nesting calls via including a module, method definition being a dynamic call that can be intercepted and customized, etc.), Python (e.g. decorators), JS (e.g. messing with prototype behaviors), or even Prolog (e.g. metainterpreters that use metalogical predicates to implement different resolution strategies) typically encourage.
Yes and no. You list a bunch of things that are bad to a lesser or greater extent, but the method-name-pattern-based stuff I've seen done with AspectJ is the most cryptic form I've ever encountered. Not only is nothing visible at the declaration site or the call site (not only no decorator but no magic import either), you can't even grep for the method name to find the thing that's messing with that method.
> People have called monads the "programmable semicolon", and so you can do all kinds of things that don't at all represent what a user of the monad would anticipate unless they read the docs/implementation. It could enforce some form of order of evaluation, or store additional hidden state, etc. Sure, this is not the same thing as aspects or metaprogramming/macros, but what they have in common is that some external definition determines the exact semantics of an application/user piece of code and has a whole lot of freedom to stray from what looks "obvious".
That's not really true in my experience; monadic composition is programmable only in the sense that an abstract method in an interface is programmable (indeed monad usually literally is an interface with abstract methods, or the closest equivalent in the language you're working in). Syntactically you can see that monadic composition isn't regular composition and so you know that some effect is being invoked, and as with any abstract method you might be able to click through to a specific implementation or you might have to accept that it's just "some unknown implementor of this interface". But in an important sense there's no magic: all your values are just values, all your functions are just functions, all the normal rules of the language still apply. I do object to things like thoughtworks each where a seemingly normal assignment gets magically rewritten into something monadic.
> The goal of AOP is to find a solution to challenges where modularization by typical means hasn't worked, and the typical examples are usually valid. Is it great? Is it needed? I'm not sure either, but your criticism sounds a lot like you're only looking at it through the lens of a pure, statically typed functional programmer. From that POV a lot of language designs probably look impure and crap, whether justified or not.
Well I came to that precisely because of experience with that problem. It was seeing the bugs introduced by AOP that made me look for a better way to do things, and that was how I got into functional programming.
> The idea is/was that you don't need all these different frameworks with their own conflicting custom implementations. Rather, that using AspectJ should be the common language in which these metaprogrammatic aspect-oriented concerns are defined (for instance by such a framework that's currently using its own custom implementation with no common set of rules), which can be analyzed and provide static tool support (e.g. "who" is amending, wrapping, or early-aborting behavior for this particular method).
That's fair. But in that case you have to see AspectJ as a failure in terms of today's Java ecosystem. Even codebases that use AspectJ don't manage to avoid using all those other metaprogramming frameworks as well, and I don't think I've ever heard of a framework deciding to move away from a custom AOP implementation to doing something via standard AspectJ. (And I do think that AspectJ offers too much flexibility to ever make for a comprehensible codebase, even with better tooling support.)
> That state of affairs is not unique to Java at all, either.
It's not unique, but it's embraced to an unusually high extent in Java (Ruby would be another example). And I can't help thinking it's because of Java's deliberate, advertised simplicity, because a lot of the aspect-based stuff seems to be there to paper over missing language features (or, even more tragically, to paper over language features that are now there as of newer versions of Java, but that Java programmers have got used to having to step out of the language for).
I think you are being completely unfair here. AOP in Java (and .NET) is a metaprogramming facility that has few equivalents in other languages. The closest thing that comes to it is macros, but those are often compile time modifications.
More importantly, nothing about standard Java development requires the use of AOP.
> AOP in Java (and .NET) is a metaprogramming facility that has few equivalents in other languages. The closest thing that comes to it is macros, but those are often compile time modifications.
You don't need macros to achieve what grandparent was talking about - handling cross-cutting concerns like logging or transaction boundaries without being too intrusive. In a language that lets you do a half-decent monad implementation (which is pretty much any language that has higher-kinded types and first-class functions, although do notation can be a useful enhancement), you can comfortably write this kind of thing in plain old code. (I do those very things - transaction boundaries and logging - in Scala all the time).
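The commenter does this in Scala; a rough Python analogue of a transaction boundary written as ordinary, visible code might look like the following (the `transaction` context manager is hypothetical, standing in for real begin/commit/rollback calls against a database connection):

```python
from contextlib import contextmanager

log: list[str] = []  # stands in for a real logger / transaction manager

@contextmanager
def transaction(name: str):
    # Hypothetical transaction boundary: begin, then commit on success
    # or roll back on failure. The concern is cross-cutting, but the
    # boundary is declared explicitly at the call site.
    log.append(f"BEGIN {name}")
    try:
        yield
        log.append(f"COMMIT {name}")
    except Exception:
        log.append(f"ROLLBACK {name}")
        raise

def transfer(amount: int) -> int:
    # Plain business logic; no pointcut, no bytecode weaving.
    with transaction("transfer"):
        return amount * 2

result = transfer(21)
```

Everything here is a value or a function you can grep for, step through, and test, which is exactly the property AOP gives up.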
> More importantly, nothing about standard Java development requires the use of AOP.
And yet, in real-world Java development, people feel the need to use AOP. In my 10+ years of JVM work in companies large and small, I don't think I saw a single one that didn't use some form of AOP. That should tell you something.
If I give you a jar (or equivalent) of my compiled app, can you still apply cross cutting concerns (over compiled code you may not own)? Because that's what AOP and bytecode transformation can do.
You can't insert them at completely arbitrary points in third-party code. But libraries can be written generically (in terms of e.g. a general monad) so as to propagate cross cutting concerns from your callbacks to the right place, even when those concerns are implemented later on and the library doesn't know about them. Look at e.g. fs2 and http4s: everything's written generically in terms of some F[_] type which the user supplies when they use the library, and it will all be plumbed through correctly. So if you want logging, you use something like treelog as your F[_] type; if you want transactions, you include a transaction type in there. And so on.
In my experience that covers almost all the use cases, while it means that library code can still behave predictably and be understood and tested (e.g. you don't have to worry that a library upgrade is going to break your application, which you do if you've used AOP to define pointcuts deep in its internals).
I'm not familiar with scala but will take a look. Completely agree over your last point. I very much prefer libraries over frameworks, and as you said pointcuts are a bit fragile, but my main point is that they behave differently because of bytecode transforms.
AOP is doable in most languages that allow very basic levels of runtime introspection and manipulation of objects and types, and is easy in a large number of them.
In Ruby, for example, "alias_method", "define_method" and "send" are sufficient to implement cross-cutting in a handful of lines of code, and at least one gem exists that provides basic cross-cutting in a generic way.
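A rough Python analogue of that alias_method/define_method trick, wrapping a method from outside the class (the `add_advice` helper is made up for illustration):

```python
import functools

calls: list[str] = []  # stands in for a real log

def add_advice(cls, method_name, before):
    """Wrap an existing method with 'before' advice: the moral
    equivalent of Ruby's alias_method + define_method trick."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        before(method_name, args)
        return original(self, *args, **kwargs)

    setattr(cls, method_name, wrapper)

class Account:
    def deposit(self, amount):
        return amount

# Cross-cutting logging applied from outside the class, after the fact:
add_advice(Account, "deposit", lambda name, args: calls.append(f"{name}{args}"))

result = Account().deposit(42)
```

It really is a handful of lines, which is part of the argument that the heavyweight AOP machinery isn't buying much.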
But it's very rarely done because it's generally cleaner to pass down code for the library to explicitly call.
I used to be very excited about AOP, but in effect while AOP can be great as a debugging facility (e.g. ability to inject logging at any point for example), when you have control over the design it tends to be better to explicitly build code so that it is designed to accommodate user provided functionality. In Ruby that tends to involve e.g. the ability to pass in blocks a lot of places.
To be fair, there's something to be said about a seasoned/documented library versus the cooked up micro-framework/function that a random coworker invented in Scala to do the same thing, and which might be implemented differently in each project in your company!
Source: I'm about a year into Scala after many years of Java. I like a lot about Scala, but in 4 out of 5 places where we have "plain old code" where in Java we'd have used an open source library, the "plain old code" is painful to read/reason about.
I'm all for using standard and documented libraries provided they follow the rules of the language, and I don't think Scala goes against that - if anything "microlibraries" that do one thing and do it well seem to be more common in Scala than in Java. What I'm objecting to is frameworks that violate the rules of the language and require their own special understanding, even if those frameworks are well documented.
How would you then, say, do a profiled run of the application, to help find the choke points in your application? You'd have to make sure that you apply the exact same "trace the runtime of this function" wrapper around every single function call, then make sure you haven't missed anything anywhere, then have to maintain such usage, and then mentally ignore it everywhere when you're trying to read through the code.
Profiling isn't really a good example; I don't actually mind doing it via language-level magic, since it doesn't affect the actual behaviour of the application - every function still executes the same and still returns the same thing, we just observe it in more detail from the outside.
But to answer the question, I'd use a monad, something akin to treelog ( https://github.com/lancewalton/treelog ). Apply the wrappers at the points where you want them (and they'll be visible in the type), and the tracing is naturally threaded through in a way that's visible but not intrusive (basically just the difference between <- and =, so when you're reading the code you can see it but it doesn't get in the way of reading the business logic). Maintaining it is easy because it's just part of the type of everything, so automated refactoring in your IDE will do the right thing; testing is easy because everything's still just a function and a profiled computation is still just a value, but you have access to the profiling trace in a normal way, as a first-class value in the language.
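treelog is Scala, but the underlying Writer-style idea fits in a few lines of any language with first-class functions. A toy Python sketch (all names made up) of a `Traced` value that carries its own trace:

```python
from typing import Callable, Generic, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

class Traced(Generic[A]):
    """A toy Writer monad: a value paired with its trace log.
    The trace is a first-class value you can inspect and test."""
    def __init__(self, value: A, log: Tuple[str, ...] = ()):
        self.value = value
        self.log = log

    def flat_map(self, f: Callable[[A], "Traced[B]"]) -> "Traced[B]":
        nxt = f(self.value)
        # Threading the trace through is ordinary, visible plumbing.
        return Traced(nxt.value, self.log + nxt.log)

def traced(label: str):
    """Wrap a plain function so each call records itself in the trace."""
    def wrap(f):
        def inner(x):
            return Traced(f(x), (f"{label}({x!r})",))
        return inner
    return wrap

@traced("double")
def double(n: int) -> int:
    return n * 2

@traced("incr")
def incr(n: int) -> int:
    return n + 1

result = double(20).flat_map(incr)
```

The tracing is visible at each composition point, yet the business logic (`n * 2`, `n + 1`) stays plain code, and the trace comes back as a normal value rather than out-of-band magic.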
Well, the whole point of AOP is to handle these "not good examples". Now, certainly if you use the wrong tools for the wrong job, you'll shoot yourself in the foot. But there are many powerful tools that we shouldn't eschew just because we fear using them wrong. And then there are many cases where you shouldn't use these tools just because you can.
> Well, the whole point of AOP is to handle these "not good examples".
I don't have a problem with AOP that doesn't change the functioning of the code. I have a problem with AOP that changes the functioning of the code, which is the overwhelming majority of AOP that I've seen in real life.
> Now, certainly if you use the wrong tools for the wrong job, you'll shoot yourself in the foot. But there are many powerful tools that we shouldn't eschew just because we fear using them wrong.
But you don't need any of the dangerous functionality to do profiling. So that's not a valid argument.
I mean, you can do anything you want, it's just code, but I would argue that I can get more maintainability and better results with less code by using AOP to implement application-wide profiling.
For cases where you need to change the behaviour of the function, you get more maintainability and better results by doing it in a way that's visible in the code. For profiling, you get more maintainability and better results by using something less powerful than full AOP (e.g. stack sampling or tracing). There is certainly no case on the JVM where AOP leaves you better off than not having AOP, and I'd be surprised if there were such a case anywhere else.
For those of us hazy on the AOP concept... Is it an artifact stemming from restrictions of static languages, or would it be a valid concept with dynamic languages too?
In the dynamic languages world this is done with monkey patching.
If you invent a declarative way to monkey patch your code, you've got AOP as done in Spring.
I don’t think AOP is specific to static languages. From my understanding, the “aspect” part implies the ability to “hook into” and extend an application at specific “pointcuts”, which are typically pre/post function/method calls. This is done in Java by modifying bytecode to insert proxies. I imagine something similar could be developed for other languages. It might just be easier to do in Java because the type signatures are static.
Python decorators are a (limited) form of AOP: they let you transform functions in-place to implement cross-cutting concerns. Aspectlib allows you to do a more intrusive form of AOP.
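For example, a logging decorator is advice that stays visible at the definition site (a minimal sketch, nothing library-specific):

```python
import functools

log: list[str] = []  # stands in for a real logger

def logged(fn):
    """A decorator as a limited, visible form of AOP advice:
    the cross-cutting concern is declared right where the
    function is defined, so it's greppable and obvious."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log.append(f"enter {fn.__name__}")
        result = fn(*args, **kwargs)
        log.append(f"exit {fn.__name__}")
        return result
    return wrapper

@logged
def add(a, b):
    return a + b

result = add(40, 2)
```

Unlike pattern-matched pointcuts, the `@logged` line is right there at the declaration, so there is no mystery about which functions are being wrapped.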
> It allows you to remove the concern of logging from your classes completely.
You say that like it's a good thing. When there's a bug to look into, grepping strings from the log is one of the first steps (the first step if you don't have a stack trace), and I want that string to be where the problem is. Having the logging in with the rest of the code is inevitable anyway, unless you only want logging at function boundaries.
> In fact, any cross-cutting concerns - like transactions, can be done this way.
That's another thing I want to be explicit: if I have some code that is going to do a bunch of updates, I want it to demand a transaction as part of its interface. I don't want it hoping it gets executed in the scope of a magic layer that will handle the transaction.
The common theme with AOP is that it turns stupid-simple code into an opaque mess spread across a dozen classes that's far harder to maintain.
I'm writing microservices using a relatively lightweight, straightforward and explicit web framework (sparkjava, which is more of a library than a framework really), modern language features and no AOP...
Enterprise... pah!
(Oh, unless you mean the Spring crowd are enterprisey, in which case I take that all back.)
The people that use Spring love it, well, at least they did when I was last working with them. They also loved talking about patterns, often didn't have enough traffic to stress the systems built on these relatively heavy object hierarchies, and were relatively comfortable (the only competition was the other half of the company using .NET.)
lol. In my experience, in the enterprise world, yes, people liked Java/Spring, but were also comfortable giving it a ribbing. Everyone loved Python but didn't get to use it for main-business code, and the Go and Scala projects have been very conflicted, with both lovers and haters. In my ~20-person division, one person quit and another transferred with the main reason being their dislike of Scala!
Stuff like that always makes me feel like someone has turned off their logical reasoning and they are just basically creating code that fills in some personal mental gap that somehow 'completes the set' of the things they are working on at the moment.
I actually knew a guy that argued that one method per class is ideal. He never quite got that at that point you're really just writing a function with a manual closure (the constructor).
Was it tiny objects or tiny messages, or both? And I wonder how much of the Smalltalk system was made that way. I think at some point your class needs to handle some complexity that isn't easily broken into smaller classes.
But that’s just OOP in general. You can choose a language that defaults to not using OOP-style abstractions, but then people will question why you chose Python/Node/Ruby over a proper language like Go (or whatever else).
The OOP paradigm itself requires rather heavy use of abstraction. You can deviate from those patterns to an extent, you can add more polymorphism if you want to, you can be selective in using some design patterns like DTOs. But most of the complaints I hear about Java would apply equally to most of the C# projects I’ve worked on, because those are just the annoying bits of OOP. I’d agree the Java is a bit more inflexible, but it’s not like the other OOP languages aren’t also full of boilerplate and layer upon layer of abstractions.
Am I a weird person if that one doesn't seem very egregious to me?
The class loader is an important entity for a Java app. And you can have more than one (although this is advanced voodoo already). So, an instrumented class loader.
I wouldn't mind frameworks if they actually had what they claim to offer. Nowadays so many frameworks don't even deliver, they have those nicely looking webpages and come with advertisement after advertisement of their features (even if they aren't sold!), and then when you look closely at the source code or API docs, half of those features are vaporware.
This is more than infuriating, it can actually cost you a lot of time if you're not careful about evaluating the framework.
I can’t even figure out what Spring is claiming to offer. The only benefit of Spring is that you can move what ought to be compile-time exceptions into the runtime.
"Right. Fuck you. Fuck your lack of hammers. Fuck your factory factory factories. And fuck your store. If you decide to pull your head out of your ass, I'll be over here, duct-taping a rock to a stick so I can actually hammer a actual nail into a actual piece of wood."
I'm reminded of the HGttG scene regarding the display department.
> duct-taping a rock to a stick so I can actually hammer a actual nail into a actual piece of wood.
Note to anyone actually trying this: if you have a drill, drilling a hole in a piece of wood and sticking the stick (preferably dowel) through works better; if you need more weight, attach the rock to one side of the head, and a coin or other patch of metal to the other.
Also (if you have the space for it): adopt a lathe/smelter/set of chemical-refining glassware/etc today!
I prefer to read articles like the one you linked. They feel honest. If you don't like something, you don't have to say "but yeah, for this specific case it's fine, or if Google is behind it then it's alright..." just to be more correct and avoid comments from haters. I don't mind if you are a junior dev or Joel Spolsky. You should always write with confidence, especially when it's an article about an opinion.
I remember the Rails people making fun of the Spring stack traces vs the Rails ones (being much shorter). Guess what: a couple of years later, Rails has the same layers of abstraction.
It was always an argument about fashion. It just wasn't in fashion to admit that out loud at the time.
The conclusion was "the rails-not-java club is best" and then everyone worked backwards to find believable reasons why. At the impromptu pitch meetings "minimalism vs. complexity" took off.
Now that the Rails club is out of fashion and the k8s club is in, we'll invent reasons that over-engineering is better than deployment automation that "just" sticks a .jar in a .deb and apt-get upgrades a handful of non-virtualized servers colocated at a local datacenter.
Spring is not a good example. If you're a seasoned Java developer you look at an `AbstractDestinationResolvingMessageTemplate` and you just know what it does. I'm not kidding.
Spring used to be a bad example yes, but not anymore. They really put the thought and work into fixing its design. Right now I can write an app __completely__ independent of Spring. In fact that's what I'm usually doing. You can do Clean Architecture with Spring now. Please keep your facts straight.
> If you're a seasoned Java developer you look at an `AbstractDestinationResolvingMessageTemplate` and you just know what it does. I'm not kidding.
This claim is so bad.
A seasoned developer could look at the name AbstractDestinationResolvingMessageTemplate, and the only useful information they would gain is that it's a message template. They would still not know how to use it; they'd still have to take an additional step like checking the documentation on its usage. The ratio of useful information to text is small, and it doesn't help.
I didn't say that you would know how to use it. You don't need to know, the framework uses it. What I said is that you would know _what_ it is if you ever get an exception from that class.
I'm 95% sure that all those people complaining about Spring never read the reference documentation. I did, and whenever I bump into a problem I have an idea about how to solve it. Besides, reading it only takes a few days and you're set for years.
Why does anybody expect to be able to work with a technology if they don't bother reading the documentation? This is like complaining about not being able to drive a car when you sit down in the seat without training.
What's your definition of 'know'? It's very common to have surface-level knowledge of a class, only to dive in and see that it does something else. So do you 'know' the code as well as you thought? Probably not. Fact of the matter is, you can't magically 'know' everything about a class based off its name. And this includes your 'seasoned Java developers'.
You're throwing out some very large assumptions as well.
That’s amazing, thank you. Is Google Sheets still missing the table functionality from Excel? It’s the first time I’ve seen this and it looks great. Normally I’d be forced to create a new “sheet”. On a side note, Joel is using Excel for Mac and I was wondering at what point it would beachball, which it does around 42 minutes.
IMHO, the framework vs library debate is similar to the inheritance vs composition.
Inheritance: You must predict how people will use your class. Make a mistake and they're screwed and need to resort to all kinds of hacks. (Like frameworks)
Composition: People will use your class if it fits their needs. They can easily wrap the code, tweak or simply discard it. (Like libraries)
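A tiny, hypothetical Python illustration of the difference (made-up `Stack`/`BoundedStack` classes):

```python
class Stack:
    """A library-style class meant to be composed, not subclassed."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()
    def __len__(self):
        return len(self._items)

class BoundedStack:
    """Composition: wrap the class, tweak what you need, ignore the rest.
    Stack's author never had to predict this use case; with inheritance,
    they'd have had to design extension points for it up front."""
    def __init__(self, limit):
        self._inner = Stack()
        self._limit = limit
    def push(self, x):
        if len(self._inner) >= self._limit:
            raise OverflowError("stack full")
        self._inner.push(x)
    def pop(self):
        return self._inner.pop()

s = BoundedStack(limit=2)
s.push(1)
s.push(2)
```

The wrapper depends only on `Stack`'s public interface, so the library can change its internals freely, which is exactly the property frameworks give up when they require you to subclass their base classes.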
I have a vague memory of a similar parody article which I've never been able to find again. IIRC it was in Ruby and he imported all libraries into a "god" object that could do absolutely anything.
Pretty sure murder was taboo back then also. The only difference is people are much more eager to virtue signal these days by finding offence wherever they can.
Edit: to explain my position a bit, what I observe with people that are easily offended is that they can't seem to separate what the character is saying from what the overall piece is saying.
Just for example, think of The Office. Michael Scott says a lot of horribly sexist and homophobic stuff right? And we as the audience frequently laugh at it. But I don't think anyone in their right mind would say that The Office is sexist or that the audience that watches it is. Because we understand that the humor is that Michael Scott is saying something really inappropriate but he's also clueless and naive.
In this piece, the salesman character makes a dark joke. That is part of the character of the salesman, along with other aspects that are revealed. If you personally don't find it funny -- maybe you just don't like dark humor -- hey, I won't tell you you're wrong, humor is always subjective, but I don't see anything offensive about this. It's not advocating for violence it's simply adding a weird dark personality quirk to an already very weird character.
What I find with the easily offended crowd is all they look for is "bad thing involving a protected group in here? OFFENSIVE!" and completely ignore the actual context of everything else. It'd be like if you took a "that's what she said!" joke from The Office and completely ignored the context of Michael Scott's character.
It feels increasingly odd to work in exclusively male teams when there is no obvious biological reason why that would be. Possibly the casual “kill the ex-girlfriend” jokes contribute to that. I mean, some people really think like that, so it isn’t a purely innocent joke.
I don't have as much faith as you. I believe being "woke" is one of many things filling a void left behind by religion as more and more people find that void needing to be filled. I can't see that changing anytime soon.
I think the same thing every time reading this otherwise-excellent post. I don't think we should attempt to erase history, but it is super jarring, at least for me. Was it ever actually funny?
Just read this to my non-programmer girlfriend and she could barely stop laughing, especially at the ex-girlfriend jokes. I really feel sorry for all the people over-analyzing these things. Like, do you think the author ever killed his ex-girlfriend? Do you think he even knows anyone who killed their ex-girlfriend? It's a joke!
I think it depends on how absurd you think it is. But it is in line with the absurdity of a factory factory and adds to the setting of "why would a developer ever extend the tool to that?" that is at least a common thought for me.
But I recently saw "Don't F**k With Cats" on Netflix, and if it had been a joke about animals I'd also have been jarred (because in my mind it would no longer be absurd).
So I guess it depends on what media makes us fearful of right now.
The interesting thing is that Joel is actually gay, so he probably specifically went out of his way to not offend the sensibilities of his readers and the culture at large at that time by using "girlfriend" as opposed to "partner".
Moreover dark humor has its place. Indeed 2 of the biggest comedies of the last 20 years used exactly this kind of humor (Seinfeld and The Office). Even to this day some of the most acclaimed comedies have humor in this vein (Veep and Curb Your Enthusiasm for eg).
You’ve got it backwards. No “psycho” would consider it funny; they would just consider murder something normal one could conceivably do with a hammer. There is nothing funny about it, to them; it’s just normal.
The murder joke was only funny to those people who consider murder something utterly unthinkable. It is its very outlandishness which creates the humor.
If it is no longer considered funny today, it can mean a few things: Either more people now believe that there are more “psychos” than they thought previously, or an extraordinary number of people have turned into “psychos”. Or possibly both.
You can only mask outrageous things as humor if they are so utterly outrageous as to be virtually unthinkable. If they become only slightly plausible, they become no longer funny. Therefore, if jokes about murder are no longer funny, it’s because people no longer believe that murder is as likely to be unthinkable as it used to be.
This is why I think that masking outrageous things as humor is a losing strategy. Either it is funny, in which case the thing being masked with humor is so outrageous as to be completely niche, making the secret signal irrelevant, since so few people receive it. Or the thing is a relatively common enough stance to make it no longer absolutely unthinkable, which makes it no longer funny, which removes the mask.
(Also, how do “white supremacists” fit into this? The original joke was about murdering an ex-girlfriend. This might possibly be generalized into misogyny, but I fail to see the connection to white supremacy. Please don’t fall into thinking that all people who hold opinions which you don’t like also hold all other opinions which you don’t like.)
I was expecting a one-sided rant. Instead, I got a reasoned distinction between two different approaches, the trade-offs of each, and a discussion about the cases where one approach (frameworks) can be beneficial but also identifying the added risks of going that route. I agree with the conclusion, and why.
I wish even more people were willing to write down a discussion of alternatives and their trade-offs. Instead of "Google does it this way, so it's obviously the way everyone should do it"... a reasoned discussion of the pros and cons. Good to see.
Yeah, I was expecting something a lot less nuanced after that warning at the beginning. I both love using full-featured frameworks (I reach for Django over Flask) and totally agree with the article.
This was the blog equivalent of someone sincerely saying “Pardon my language, but gosh darn it that man is a jerk!”
I like to use this approach with all team decisions. It is how you get real buy in from people, because after seeing pros and cons, they chose A instead of B.
Agreed. In one team I managed, we artificially created "red teams" to take opposing view points if we were getting into traps with too much group think. Leaves the teams with confidence in their choices and real buy in, as you said.
The more I program, the more I am convinced that owning flow of control is one of my primary jobs as a programmer.
If I surrender this to a framework, there are a lot of decisions I can't make with regard to performance, and I have a lot less certainty about when and in what order exactly things are executed.
There are of course some exceptions, but in general I want libraries to provide me simple, synchronous functions, and it's my job to figure out how to spread them out over the hardware.
On the level of practical risk: security-wise, libraries are much riskier imo. A good framework, for most projects I've seen, will lead to a more structured and secure codebase. Probably precisely because there is less room for creativity.
For instance, Laravel is very structured: ORM, auth routes, controllers system etc., where Express.js lets you do whatever.
And almost all Express.js or other Node.js projects I've joined were a mess and had security issues; something as simple as CSRF tokens, or not allowing redirects to external domains, is often forgotten or not fully implemented.
Doesn't mean I don't sometimes prefer a library approach, but the developer(s) need to be a lot more capable.
oh, Laravel is actually an excellent example. It has a lot of hidden "magic" knowledge inside, and it takes many months to get a good understanding of how it works. Until then it is trivial to make bad decisions based on wrong assumptions. Lack of proper IDE support (because of Laravel's dynamic nature) doesn't help. And yes, I know about laravel-ide-helper. It has its issues too.
I’ve been working in some largish Rails codebases lately and this is exactly the problem I have. When I’m looking at a screenful of code I have no idea who is calling what, what’s in scope and from where, and what the shape of any of my data is.
In my opinion the productivity benefits of these kinds of frameworks evaporate and go negative pretty quickly once you move very far beyond what they provide out of the box.
This hits home for me right now. I'm currently battling some performance problems related to a lot of "magic" that happens when using the Apollo GraphQL framework and am strongly considering ripping it apart and taking back control of what happens and when.
Would be super awesome if you could share what you find! Having the same issues here - and I have a hard time accepting that stuff becomes slow when you wanna return over 1k records. Sometimes way less for advanced things. Read: nested objects. But that slow? Come on.
I also use NestJS, and there it seems updates to how a field is resolved have made it tons faster, purely looking at the trace.
And I know that it won't be as fast as regular old JSON due to the checking, but that slow? Must be something someone can do :)
I think it helped us to go really fast at first. Observable queries and subscriptions made it relatively easy to make the site feel very live and collaborative. Not sure how much help it is now as some of the magic is causing performance challenges
In a similar vein: boilerplate code is fine in the main control flow. It’s okay if it takes twenty lines of code to set up your library, as long as it’s not the same 20 lines for everybody in production.
Frameworks tend to avoid any boilerplate in the imperative core but then multiply it throughout the rest of the project. Possibly due to chronology.
How can a library providing only sync functions be a good thing? Lots of IO is inherently async, and it makes little sense for a library developer to go out of their way to make it synchronous only for the callers to make things asynchronous again. If something is async, it's async. You deal with that.
Also, I don't see why one would even consider using a framework if squeezing every bit of performance out of your hardware is that important. I can't imagine writing a web app without some framework, in the interest of development time. Then write the same stuff over again a hundred times? You just have to ensure you don't use a bloated framework, which is akin to not choosing a bloated library. Unless you are working on a device layer, it's not our job to spread things over the hardware; unless you have direct access to memory, you can't do this anyway with most of the languages designed for app development. I think the primary goal of an app developer is to implement the business logic correctly and securely in a reasonable time. I agree with the OP that libraries should be preferred, but I don't see a problem, especially with performance, in the domains where frameworks are used.
Good luck using any sort of GUI toolkit then. Virtually every single one requires you to sacrifice the very top level of your program to some sort of "MainLoop()" function.
(I agree with your main point, and the above fact causes a quasi-allergic reaction in me which causes me to avoid GUI programming - despite graphics being one of my favorite topics.)
Sadly even imgui cannot get around the fact that many operating systems (macOS/iOS/Android) force you to give up your main thread. Linux and, perhaps surprisingly, Windows are good guys in this regard.
The issue is that it's not exactly possible, especially on the Mac, where your initial thread is sacred and for OS use only.
Windows and Linux UI libs need a main loop somewhere, but it can live on any thread and in fact doesn't need to be structured the same.
The one thing I do value from frameworks - especially in an unfamiliar domain - is that they teach me how to structure things.
I can usually patch functionality together from various sources but understanding how to break a process down if you haven't done it before is often very challenging.
For example - Django taught me how to structure server-side code that needed to handle input from both a database and a browser and potentially an API in a way that keeps things nicely layered.
I'd done all this before with PHP, but my architecture was a dog's dinner. I could structure my own code nicely now, but only because of the experience I gained from using an opinionated framework.
I think the key difference is that frameworks usually hijack your control flow, while libraries do not.
But is a framework necessarily bad, or inferior to a library? I would like to present React as an example. React heavily regulates control flow for the developer, leaving several specific hooks that let you control when your code triggers.
But React is an excellent piece of software. And in a perfect world, if everything we interact with were more or less functional or self-contained, then timing wouldn't really matter, and uncertainty would thus be controlled.
In the end, the lesson here seems to be that making good frameworks is much harder than making good libraries. For a library, you need to make sure the building blocks are solid; for a framework, you are defining a whole world for people to visit.
I would argue React is actually a great example why the article title is on point.
Idiomatic React has completely changed several times over the course of a few years. This shows that it did not anticipate people's needs well enough. And React is in the fairly enviable position of receiving corporate backing to the tune of a million dollars per year.
Now consider tools like Redis or Postgresql. You can interface with them from just about any language and you can go years without your usage of them ever becoming unidiomatic.
Framework <-> library is a spectrum, of course, but as an obvious indicator of React’s location on that spectrum—you can feasibly use it inside, say, an Angular app to render a particular third-party widget. At its core, React is a super-efficient unopinionated render() and tools for constructing its input. On its landing page it establishes right away (IMO, correctly) that it’s a library.
The reverse, bringing a single Angular component into an otherwise React-based webapp, is basically an oxymoron. Angular will generate e2e test scaffolding for your entire project, prescribe where to query data, expect you to follow a particular file layout in your codebase, and pull in enough machinery to rule out any auxiliary uses.
Just because you can do something doesn't mean that it makes sense to, or that it's idiomatic to do it. The use case of bringing in a single 3rd party React component into an Angular app is far into edge case territory. Realistically, you would never consider doing it unless there was absolutely no Angular-only alternative in any way, shape or form.
FWIW, it _is_ also possible to render Angular code inside of React [1] (but again, this falls well into non-idiomatic territory for both Angular and React)
Idiomatically, React is the only thing dealing w/ DOM in an app, and the way it's used is rather frameworky ("don't call us, we'll call you"). And when it's used that way - which is the vast majority of the time - the used idioms do vary heavily depending on when the code was written.
Tying this back to what I mentioned about postgres: _within_ a React codebase, you need to be aware of the _semantics_ of the system: performance is achieved through reasoning about the semantics of things like shouldComponentUpdate/memo and even object identity, one needs to understand the semantics of stale closures and useCallback, etc. This is similar to having to understand the semantics of how various ORM idioms translate to underlying SQL queries. By comparison, the cost of any given jQuery idiom tends to correlate more directly with the underlying semantics, just as the cost of a raw SQL query does - i.e. it's pretty obvious without context how expensive `$('html').html(html)` is, just as it's more obvious what is the performance profile of a CTE vs a subquery, in comparison to whatever the high-level ORM idiom is (assuming, of course, that you know SQL)
The same holds true for Postgres: idiomatically, it’s the data store and thus comes with its own set of instructions. I’m not really sure what you’re arguing here. Redis gets even worse: here are a few datasets we support, now go architect your use cases around them.
Of course, none of these reasons make them bad tools. Quite the contrary. We are about to ship a React + Postgres + Redis product. They all work great together for the very fact that, like good libraries, we can borrow from each where needed.
Just because idiomatically React is the only thing interacting with the DOM isn’t an indictment...it just means React is really good at that and most people would rather use it exclusively for DOM manipulation. The same way that Postgres is often the only data store, not because it forces it upon you, but because it’s really really good at it.
I don't think that with React as distributed (and per the tutorial) you can avoid writing that first render() call, so no, it's "you call us" in this case.
The existence of third-party skeletons and templates that impose some structure or another and pre-write the initial render() call isn't making React a framework.
It's a feature, of course: being mostly on the library end of the spectrum opens up innovative uses of the efficient rendering. A framework by definition is narrow about what one can achieve without fighting it, otherwise (as I have to agree with TFA) its developers would drown in features and complexity.
As to running Angular in React, I stand corrected, last time I built on Angular when it was at version 5. I stand by the rest of my argument, though.
DBs, IMO, are neither libraries nor frameworks. They have their own DSLs, after all, which are close to programming languages.
Programming languages are hard to categorize as either library or framework. They are like libraries with a full set of control-flow primitives, or like frameworks that don't assume any particular flow at all.
I don't consider Redis to have a DSL, and similarly, I was thinking about Postgres in the context of its client APIs rather than the SQL language. FWIW, there's another good example there: a client API centered around connections, queries, etc tends to be far more usable in terms of expressiveness and transparency than an overarching ORM framework.
With Postgres that's because SQL has been a standard for decades now. And with idiomatic React the changes were largely not because of what you claim: not anticipating needs well enough. The changes in what is idiomatic were largely around discovering better ways of doing things, along with JavaScript itself changing. You can still write a React component using idioms from five years ago and it's still easy to read the code. React is very focused, and very simple. The main motivation for moving to the newer idioms is that you can make the code even shorter and more readable. It seems to me that they anticipated the need to keep up with the JavaScript language evolving quite well.
Could you point out which React code has completely changed?
I believe React has remained the same; there are deprecations, hooks, etc., but the basic premise is the same.
React went from React.createClass -> class xxx extends Component -> Hooks.
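The same counter across those three eras looks roughly like this. The snippet uses local stand-ins for React's APIs so it's self-contained; the real APIs are React.createClass, React.Component, and the useState hook:

```javascript
// createElement stand-in so the snippet runs without React installed.
const h = (type, props, ...children) => ({ type, props, children });

// 1. The React.createClass era (pre-ES6 classes): a spec object.
const CounterV1 = {
  getInitialState() { return { count: 0 }; },
  render() { return h('span', null, this.state.count); },
};

// 2. ES6 class components (stand-in for React.Component).
class Component { constructor(props) { this.props = props; } }
class CounterV2 extends Component {
  constructor(props) { super(props); this.state = { count: 0 }; }
  render() { return h('span', null, this.state.count); }
}

// 3. Hooks in a function component (useState stubbed for illustration).
function useState(initial) { return [initial, () => {}]; }
function CounterV3() {
  const [count] = useState(0);
  return h('span', null, count);
}
```

All three produce the same tree from the same state; what changed each time is how you declare the component and where its state lives.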
But I don't think that means... change? The fundamental concept of React, that the UI is a derivative of the state, stays the same. Class-based or hooks (really just a nicer way to write a functional component), it stays the same.
FCs and hooks force you to think in an entirely different manner, though. I would say it's an entirely different paradigm, and a much steeper learning curve.
For instance, I see a lot of developers and engineers with six months or more of experience with functional components still misunderstanding the point of all of this. And these are smart people at high-tier companies.
The main thing being the meaning of having dependencies for useEffect and other hooks.
You're right. But this is the natural progression for React. React emphasizes stateful behavior in components. UseEffects with specific dependencies are actions which depend on the specified variable-set alone, while all the other infinite variables can change however they like. This concept already exists when we talk about state machines.
That said, it isn't a smooth transition going from explicit lifecycle methods in class based components to useEffect hooks in functional components. But it refines the quality of the thinking, and the quality of the code overall, imo.
Six months? Something is wrong. I learned hooks in a half day. I occasionally ask someone with more experience if I'm using the right hook to solve a specific problem, but that's a one minute question and the rest I can do on my own.
Specifically, how are useEffect dependencies complex? You list the variables that, when changed, should trigger the hook to run. What am I missing? And I'm not claiming no one struggles with hooks. But people also struggled with the lifecycle methods of React classes. In almost every case I've seen this, I've asked the developer to read the React docs. That's usually enough. I'm guessing there is some other, more fundamental issue if someone is still struggling with hooks after six months of hand-holding.
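For what it's worth, the mental model fits in a few lines of plain JS. This is an illustration of the comparison a dependency list implies, not React's actual implementation:

```javascript
// A minimal model of a useEffect-style dependency list: the effect re-runs
// only when some listed dependency changed since the previous "render".
function makeEffectRunner() {
  let prevDeps; // deps captured on the previous render
  return function runEffect(effect, deps) {
    const changed =
      prevDeps === undefined || deps.some((d, i) => !Object.is(d, prevDeps[i]));
    prevDeps = deps;
    if (changed) effect();
  };
}

// Simulate three renders of a component using the same effect.
const runs = [];
const runEffect = makeEffectRunner();
runEffect(() => runs.push('ran'), [1]); // first render: always runs
runEffect(() => runs.push('ran'), [1]); // same deps: skipped
runEffect(() => runs.push('ran'), [2]); // a dep changed: runs again
```

The subtle part in real code isn't this comparison; it's that a stale closure can capture old values if a dependency is omitted from the list.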
> The main thing being the meaning of having dependencies for useEffect and other hooks.
This is 100% a result of how react (ab)uses the host language semantics to implement their DSL to describe components and has nothing to do with the conceptual nature of functional components themselves.
It's relatively easy to imagine a design that doesn't require the dependency lists when capturing a callback closing over props or state.
Not saying that design would be better or worse, just that it could exist and still leverage concepts like FCs, hooks, effects etc.
I think React is somewhat of a special case, because a lot of what makes React good is that it mitigates a lot of the issues which make the web painful to deal with: "the browser" is a highly inconsistent target, and JavaScript doesn't have built-in facilities for modularizing code, and React gives you a sensible way to get consistent behavior and implement reusable components in this type of environment.
Another way to put it is that for the web you're already pretty heavily invested in the "framework" of HTML/CSS/JS whether you like it or not. React is essentially just choosing another baseline framework to replace it, not adding a framework where none exists.
> But is framework necessarily bad or inferior to library?
The point is that more people, projects and even frameworks can use the library you write. If you write a framework, people have to drop whatever it is they're using and go with your framework. That's bad, unless your framework is the one to trump them all, which it probably isn't.
> I think the key difference is that frameworks usually hijack your control flow, while libraries do not.
Which is what should be expected.
The point of a framework is to not only provide higher level abstractions to rapidly put together a certain type of application but to also provide a certain approach or set of approaches (a "framework") to follow as a guide during the process.
This means some generalization should exist, but if a framework is too abstract it's really no longer a framework... it's a library.
A library is supposed to simply provide useful utilities/lego blocks to assemble however you want and not necessarily give specific guidance or patterns.
Certain common patterns of usage may arise as a sort of emergent property of fundamental library choices/structure, but it's overall pretty flexible.
it's a rough rule of thumb at best though – higher-order-functions or other things that take a callback blur the line a lot (IoC, what a sibling comment said). or say a library has an event-loop, or maybe does networking and you only provide some handlers – would that make it a framework?
i guess it's about frameworks being "the main thing happening". i.e. a framework controls the whole program's execution, occasionally yielding control to your code. which is a bit wordier to be sure :)
When something is a framework, it calls functions on your code AND it does a lot of the heavy lifting for you.
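The inversion is easy to see in a toy contrast (all names here are invented for illustration):

```javascript
// Library style: you own the control flow and call in when you need help.
const slugify = s => s.toLowerCase().replace(/\s+/g, '-');
const slug = slugify('Hello World'); // 'hello-world'

// Framework style: you hand over callbacks, and the "framework" decides
// when (and how often) your code runs — "don't call us, we'll call you".
function miniFramework(handlers) {
  const results = [];
  ['start', 'request', 'stop'].forEach(event => {
    if (handlers[event]) results.push(handlers[event](event));
  });
  return results;
}

// You only fill in the hooks you care about; the loop is not yours.
const results = miniFramework({ request: e => `handled ${e}` });
```

The second shape is where the "heavy lifting" lives: the lifecycle, ordering, and error handling all belong to the framework, not to you.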
I studied Frameworks in my Software Engineering degree, but really had no appreciation for them until developing large software in WebObjects.
Until you've extensively used a well-written Framework, libraries seem great. After, you realize a library just helps you here and there, but a Framework is so much more.
This discussion reminds me of one of my favorite quotes: "form is liberating". I always associated this with the sculptor Henry Moore, but the beauty of the idea is that it can be applied to all sorts of domains, whence it's also associated with Brooks's The Mythical Man-Month, about software development.
The association here is that frameworks are "form". One might intuitively think that form/frameworks is/are constraining rather than liberating, but per the sculptural analogy the roughed-out shape provided by the framework liberates you to concentrate on the details rather than suffer from analysis-paralysis when there is too much freedom of approach (just a large block of marble from which you might sculpt literally anything).
But it depends on the scope and nature of what you are trying to do ...
If your problem fits within the scope/form of the framework, and your value-add is in the details, then using a framework, all else considered, makes sense.
Alternatively, when you are trying to do something outside of the norms of gentrified frameworks, then the flexibility of libraries (or ultimately a blank sheet of paper) is what is called for.
Of course, the devil is in the details... a poorly designed framework may be TOO constraining and not provide enough flexibility to be widely applicable, while a poorly designed library may be overly restrictive due to APIs that don't play nice with other libraries or the data structures you'd like to use.
One more plus for the form of frameworks: people on your team must be on the same page and work within that form. If everyone is left to their own devices, you get a lot of friction when going into parts of the code you did not create yourself.
I like to think of frameworks like a kitchen. The chef sets the menu and everyone works to produce how the chef thought it should be.
Sure, sometimes you'll run into things where you fight the framework, but a good framework normally allows a "raw" way of doing things "by hand" instead of the sugary framework way that "just works (most of the time)". But most of the time it does work out fine, and you can hop between others' projects and parts of the code with instant familiarity and a clear sense of intent.
The devil is sometimes in the large as well. Form changes — code that used to run while generating a web page now runs in a REST service. Code that used to run in a REST service now runs as a FAAS. Code that used to run in a FAAS now runs in a stream processor.
Your framework could tie you to a specific form, forcing you to implement your FAAS by posting each request to a REST service, or forcing you to implement your stream processor by posting each event to a FAAS. Or you can liberate your code from form by making sure that nothing important about it depends on a form-specific framework.
True, but all code ages out, and I wouldn't expect the average LAMP stack site to have adjusted to the REST-powered React world any better than a Rails app would. When use cases change, huge parts of a code base should be rewritten (otherwise it was duplicated work to begin with).
That said: good, modular, well-developed frameworks evolve alongside their use cases. Django with DjangoRestFramework is an excellent choice for many API use cases, and the migration path is about as painless a major architectural overhaul can be.
It shouldn't need to be rewritten. The same Python code can be utilized in all of those different forms, unless you've tied yourself to a framework for functionality that needs to be used in the new context, i.e., for functionality that is not form-specific.
If you've fallen into the trap of thinking a specific form is eternal -- for example, if you've subconsciously identified "our back end == our REST services" and therefore "data serialization/deserialization, metrics, and data storage access should and will be done only in the REST services" then you might have relied on a REST service framework for that functionality. Then when you want to move functionality to an AWS Lambda or a Spark streaming job, you find yourself forced to choose between running a REST framework inside the Lambda or Spark streaming job (which will be hacky, if it works at all, since frameworks want to own the application lifecycle and be configured via specific mechanisms) or writing and maintaining a duplicate implementation of all your data serialization/deserialization code, metrics configs, S3 configs, etc.
I don't mean to say it's a disaster if you end up doing that, but it's a cost that will tend to hold you back from doing the right thing and make you persist with hacks and band-aids longer than you would have if you were free to change.
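Sketching what that framework-free core looks like in practice (the handler shapes below are simplified stand-ins for illustration, not any particular framework's API):

```javascript
// Form-agnostic module: no framework imports, so the same code can back a
// REST handler, a FAAS handler, or a stream-processing step.
const orders = {
  serialize: o => JSON.stringify({ id: o.id, total: o.total }),
  deserialize: s => JSON.parse(s),
};

// REST-ish handler (shape simplified: req with a body, res with send()).
const restHandler = (req, res) => res.send(orders.serialize(req.body));

// FAAS-style handler (shape simplified: event in, response object out).
const lambdaHandler = event => ({
  statusCode: 200,
  body: orders.serialize(event.order),
});

// Stream-processor step: same deserialization logic, no HTTP anywhere.
const processEvent = msg => orders.deserialize(msg.value);
```

The form-specific glue stays thin and disposable; the logic worth keeping lives in the plain module.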
> good, modular, well-developed frameworks evolve alongside their use cases. Django with DjangoRestFramework is an excellent choice for many API use cases
That's a very modest change. It's still a long-lived service doing request/response over HTTP.
> Do you introduce a domain-specific language? You're now responsible for part of the build chain, and for editor integration.
> Now, if there's a major organization backing the framework, maybe the calculus works out. Google can back Angular...
In my experience, even large orgs don't necessarily have the capacity for building out full-featured editor integration and build tools. Google has a great team managing Angular and it still has no tool for performing codeshifts. The editor integration to date still leaves much to be desired.
With the number of years and man-hours dedicated to this point, I'd imagine these things would've happened by now, or they may not happen at all. Lately, Angular's push has been to 'make it faster', which helps with developer productivity, but there hasn't been much movement on the developer-experience (better editor integration) front.
Agree with the thesis, but it's still understated how expensive creating your own DSL is.
One option is using XML with a complicated enough XML Schema. Good editors like IntelliJ IDEA support editing such XML and provide good autocompletion and on-the-fly validation. XML libraries will parse that XML into a tree and validate it against the schema without any additional effort.
While XML certainly deserves some blame, the amount of tooling around it is unmatched.
XML's main limitation derives from its base primitives: Strings and hierarchies of ordered strings labelled with strings. Starting from that premise adds a great degree of schematic overhead to convey semantics, which tends to overflow into syntactical boilerplate.
Traditional parsers start from one string and evaluate that a character at a time. JSON starts from a few JavaScript primitives. SQL has primitives that vary with the engine you're using. These are other points on the spectrum that are appealing in the details - a plaintext string is accessible to all text editors and simple string processing tools. JSON covers a selection of common data primitives. And SQL has a grip on automatic constraint enforcement, which helps in defining user types with high data integrity.
If I had to pick one that's really going underused, SQL would be it. A string source file could be dissected into a number of tables and edited like any other CRUD system, then re-serialized as a string. That we don't do it this way is more an accident of our current beliefs about which means of implementation are appropriate for various tasks - queries on relational data are "slow" compared to bodging buffers and recursing through DAGs. But if we want nice developer tools, we should probably start acting like data integrity is more important. Slopping dependencies around the file system and relying on a separate bespoke tool chain to reconstruct their relationships as part of a built artifact is a thing we ought to outgrow at some point, but it's also so intrinsically accepted that the paradigm is hard to escape.
It's somewhat unfortunate that XML arrived at this perception of being complex, when SGML (= superset of XML, HTML, and everything angle-bracket markup) very much is a technique to give structure to ordinary plain text. For example, SGML could parse .INI file syntax such as the following (and present it as a fully-tagged, canonical markup ie. XML):
If you look at how popular markdown and other wiki syntaxes are, and at the same time see folks hit markdown's limitations fairly quickly (for example, markdown has no way to pull in content from other files, or to do any document composition whatsoever), then I hope you can see that SGML is a technique designed for broad application in this area, being able to handle standard markdown and custom markdown extensions (not to mention that SGML is the only ISO markup meta-language able to handle HTML with all its tag omission rules).
In XML-land, this has been re-discovered as "invisible markup" a couple years ago, going full-circle in the "dumbing down" of SGML into XML (which was seen as progress and win for simplicity at the time when XML was subset from SGML).
I would also disagree that the defining characteristic of SGML/XML is nesting of tags so much as it is regular content models. A content model such as
is what drives SGML's tag inference in the above example.
But yeah, SGML/XML is made for markup, not necessarily config files and service payloads. And I agree SQL could be improved by treating it like an ordinary programming language, with the same focus on SQL artifacts wrt syntax highlighting, static checking, and testing (though probably not in the way you suggested, by reading-in SQL script files as a primary means to serialize DBs :).
> While XML certainly deserves some blame, the amount of tooling around it is unmatched
That tooling was created as a result of XML being difficult to use otherwise, not the other way around; in much the same way that some languages like Java seem to depend on an IDE to be even usable, far more than others.
I disagree. The amount of tooling comes from the combination of a well-written specification and enterprise jumping on it. Also, basic XML is quite easy to grasp even for people with really limited or no technical skills, which is also a big bonus for adoption. Granted, some later XML developments indulged in the factory-factory pattern, but XML got a lot of undeserved blame, especially when the purpose is to advocate another, less readable format that doesn't handle comments.
Yeah, the outcomes vary for sure. For a counter-example you can look to JSX which has excellent support in most major editors, but it's definitely the exception to the norm.
Agreed, the size and scope of the DSL will impact the amount of man-hours necessary for developer experience related work. Resourcing that effort is a separate concern, but one that should be considered for the DSL's viability.
JSX, being a sugared syntax for the React.createElement API, likely doesn't need a team of 20 for editor integration, whereas Angular templating language has more to it: pipes, directives (shorthand and non-shorthand syntax), property bindings, expressions, etc -- and templates need to be evaluated in context of their corresponding TypeScript Component.
I think all the major frontend frameworks are overly complex beasts: they're too big and can't be used as libraries. For example, you can't replace some functionality out of the box without making a fork. I also couldn't find any other reactive frontend framework that could fill this requirement, so I decided to write my own. [0]
It is the simplest thing I could conceive (to build, not to use; I plan to provide more conveniences, but the selling point is the ability to customize everything). It's template-based and has no virtual DOM: during render, each variable used is listened to, and when it changes, the element that uses it is updated directly, without re-comparing the whole tree. It's still in progress.
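For readers curious what "updated directly, without re-comparing the whole tree" can look like, here's a rough signal-style sketch of the general technique. It's an illustration, not the project's actual code, and the element is stubbed as a plain object so it runs outside a browser:

```javascript
// A signal tracks which render functions read it, and re-runs only those.
function signal(initial) {
  let value = initial;
  const listeners = new Set();
  return {
    get() { if (currentEffect) listeners.add(currentEffect); return value; },
    set(v) { value = v; listeners.forEach(fn => fn()); },
  };
}

let currentEffect = null; // the render currently collecting dependencies

// Bind an element to a render function: the first run records which
// signals the render reads, so later updates touch only this element.
function bind(el, render) {
  const run = () => { el.textContent = render(); };
  currentEffect = run;
  run();
  currentEffect = null;
}

// "Element" stubbed as a plain object so this runs outside a browser.
const el = { textContent: '' };
const count = signal(0);
bind(el, () => `count: ${count.get()}`);
count.set(1); // only this element re-renders; nothing is diffed
```

No tree comparison ever happens; the dependency graph built during the first render routes each change straight to the affected elements.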
It's 35.9kB gzipped, although "too big" is subjective. I may be unimaginative, but over my last 5 years or so with it, there isn't much I've wanted to change. As a library consumer, I don't personally mind if a VDOM is used or not, so long as performance is good enough not to ship a noticeably lagging UI to my users. Having experienced both React and Angular --one uses VDOM, the other not-- I haven't found this implementation detail to meaningfully impact performance, but I spent my time building straightforward CRUD apps.
I would consider sharing with Ari an example of something that's hard to do in React/Vue/Angular, and how it's made simple with AriJS. That might help elucidate where it shines, and the key problem being solved.
Thank you! Yes, I've tried it. You can think of me as someone who prefers an old model of the Porsche 911 to the latest model: too much technology that you can't fix yourself. And that's my primary concern with the major frameworks: they're not simple to reimplement!
React implements the component lifecycle, calling into the component to do it, stores all the component state, and reconciles DOM changes on behalf of the component. It's the very definition of a framework: it calls the component; the component doesn't call it.
It has been said that it's a library, but IMO it is not, because of how it forces you to write code according to its rules (for instance, functional components). Most of the code you write is written to fit React, so for this reason I would say it's a framework.
Knockout was the last time I felt I really understood the js ecosystem. Since then it’s a mess of npm, webpack, and long complicated front end builds to do simple things.
What's interesting is that of all the JS code I've written, the code that lasted the longest was pure business logic written in vanilla JS, separated from any library or framework. It's gone through Knockout, Angular, and React versions of the site, and people always try to rewrite it the new way, but it just doesn't work as well, and they end up writing a thin shim to bridge the old and new code.
You can use React like a library. You can even go without JSX, though nobody really does. React is also kind of odd because it includes a bunch of other stuff like hooks, even though most people bring their own state management to it. But it's definitely the least-framework-y of all the major frameworks.
For years I used react without jsx - with a few minor conventions to make it shorter to write elements via JS (`_div(...)` being equivalent to the jsx `<div>{...}</div>`, and some minor variant for elems with attrs) it's quite easy to build trees using plain old JS syntax.
The advantages of doing that were better tooling support, since everything under the sun supported JS, but not necessarily JSX very well, and jsx isn't really all that important anyhow. Obviously that's not a factor anymore today ;-).
React by itself is just a templating language; yeah, I get it.
But I think for most devs out there, like 99% of them (yeah, I made that up), React is actually React + react-dom, and the latter, per the account of the official team, is a runtime, which perfectly fits the description of a framework.
I understand where you're coming from - it is advertised as the 'view layer'. But I think that templates are more similar to the native browser language, HTML, so they're easier to grasp. They're also simpler and easier to implement than a JS+JSX parser.
Nobody uses React without JSX because that's the worst of both worlds, but if React had template strings (not necessarily built in, but at least as a first-party plugin) it would be a more viable option.
Similar to the case of writing, this isn't absolute. But when you decide to use a framework in your project, you are accepting a set of limitations and should do it with your eyes open.
Unfortunately, framework authors make this difficult. It's understandable but frameworks spend a lot of effort promoting their "pros" to developers and none on clarifying their "cons" (well, typically).
When you adopt a third-party framework for a project, you want to do an analysis of the longer-term implications of the framework's limitations. Sometimes that analysis is easy. E.g., if I am making an "event" app, like an app for "XYZ's 2021 Comedy Tour", there isn't a long term (though there, it makes sense to take it further and see if there's an even more restrictive "template" app that will still nicely meet requirements).
But if you are hoping to extend and build-out your app/project/solution over time, especially if you don't know details of how, up-front, it's tough to accept the limitations of a framework.
> A framework, usually, must predict ahead of time every kind of thing a user of it might need to do within its walls.
Well, when I'm building something opinionated like something resembling a framework, I usually aim for something that handles 80% of the use-cases, and then make sure that you can go around the framework for the other 20%.
Your "thing" should always have a way to get out of the way and fall back to the un-abstracted way of doing things. Then you don't have to predict the future.
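A tiny sketch of that escape-hatch shape (all names here invented for illustration):

```javascript
// A small wrapper that covers the common case but always hands back the
// raw underlying object, so callers can go around the abstraction.
function createClient(baseUrl) {
  const raw = {
    baseUrl,
    request: (method, path) => `${method} ${baseUrl}${path}`,
  };
  return {
    get: path => raw.request('GET', path), // the 80% convenience path
    raw,                                   // the 20% escape hatch
  };
}

const client = createClient('https://api.example.com');
client.get('/users');                    // convenient, abstracted
client.raw.request('PATCH', '/users/1'); // un-abstracted when needed
```

Because `raw` is a first-class part of the API rather than a private detail, nothing the wrapper didn't anticipate forces a fork or a rewrite.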
This was eerie to read. I've made some of the exact same points, word for word to others. It would be fascinating to do a survey/quiz for developers to see who's tempted to write tiny custom frameworks, and who knows better. This rant would also make a good first day read in an enterprise software engineering class.
Thinking over my career, I've seen a lot of libraries and tools out there that are really trying to be tiny frameworks. One example is [ngOptions](https://docs.angularjs.org/api/ng/directive/ngOptions) from angular 1. As a web developer, you're already using javascript, html and css at the very minimum. Yet ngOptions went ahead and created a new configuration language just for rendering a drop down. It was ok for simple stuff, but it rarely felt right for more complex widgets.
I think java bean mappers kind of fall into this trap as well. On the surface, bean mappers look like libraries. But once you start customizing the mappings, you see that each library has created a complex configuration language. The problem is that configuring custom mappings takes about the same amount of space as creating a tiny method to manage a specific field mapping. The difference is that I have to read a bunch of documentation to understand the bean mapper, but I already know how to write java.
At the end of the day, library vs framework boils down to imperative vs declarative. Libraries and imperative languages are composable and relatively easy to understand but can sometimes require a lot of work to accomplish difficult tasks. Frameworks and declarative languages require a lot of work to get right and are really only appropriate for problems that are well understood or formally described. They take on a tremendous amount of responsibility, but when done well they are incredibly valuable. There's a reason the world runs on SQL.
> The difference is that I have to read a bunch of documentation to understand the bean mapper, but I already know how to write java.
I think this is spot-on, and sums up one of my points in a way that's a bit closer to the heart of the issue. However:
> At the end of the day, library vs framework boils down to imperative vs declarative.
I don't think this is true at all. There's a correlation, maybe. Particularly in languages like Java that don't have great support for declarative stuff themselves, frameworks like Spring have served as a kind of workaround for that shortcoming. But if anything I would say pure functions lend themselves even more to library-thinking than imperative code does.
I’m pretty sympathetic to this, as my experience with Spring has been “it’s convenient when it works, but awful to deal with when it’s not working”.
However, I think to push this line of thought, it would help if we had examples of how to build significant applications without a framework. Ideally someone could provide a walkthrough of an app like that. Unfortunately I can’t volunteer myself: the last time I built without a framework, it was a mess.
If you're trying to build an entire app, you are not building a library. An app uses libraries, and frameworks to stitch them together. A library brings functions, and a framework brings connectivity and logical flows. An app without a framework is an app with a hidden framework. An app without libraries is an app with a hidden framework and a monolithic, catch-all, single-purpose library.
I think you're right that people - even (especially?) those who use frameworks - should know how the sausage gets made. Though it's hard for me to address your specific wish, since the answer varies widely depending on what kind of "app" we're talking about.
Any kind of app where you might reach for a framework. The point is to have a reasonable contrast between "here's how you do it with a framework, here's how you do it without one". I think size is important, as I can imagine that the benefits of a framework might show when there's more to do.
Personally I'd think of a web app, since this is one place the discussion comes up a lot, but I don't know that's a requirement (I also think the idea that you can build command line tools out of a handful of libraries with no framework is pretty widely accepted).
It has occurred to me in the past that frameworks are often just a way of publicizing a pattern, and that there could be room for something like "open source patterns" that people could reach for and implement themselves without getting locked into an actual dependency and loading a bunch of extra code. That sounds kind of like what you're suggesting.
I'd love to see this as well. Web apps seem really tricky when it comes to not using frameworks though. You can use JQuery - it's tried and true - but it also seems to be on the way out. Great if you know it, but is it worth learning if you don't?
I had the unfortunate experience of building a static site using Nuxt recently which included Jupyter Notebook exports as html files. I ran into so many problems that the complexities of the framework quickly overshadowed the conveniences. Makes me wish I tried doing everything in vanilla JS instead - I might at least have learned something transferable to other projects that way.
> Great if you know it, but is it worth learning if you don't?
The main thing with a lot of these small web libraries (I'd consider JQuery a library) is that they are incredibly easy to learn. You can be productive with JQuery within an afternoon. You can be productive with React within a weekend or two. These are not Java frameworks with Hibernate, XML configuration, database drivers, migration paths, etc etc. You pick one up, do what you need to do, and forget it within a week if you so desire.
That said, if you use JQuery in 2020, with a new project, the vast majority of frontend devs I know would consider that a serious code smell (as in, you don't know JS and that's the reason you have to resort to JQuery). Check [1] for some info about vanilla JS.
I think Dropwizard is a great alternative to Spring. If you want to draw a spectrum between framework and "curated libraries + best practices", it's much closer to the latter. But then I guess that also depends on how much you consider the libraries it uses like guice and jndi to be "mini-frameworks". Ultimately, all that matters is how easy it makes your job, in both the best case and worst case scenarios, so I'll leave that to your own judgement as an engineer.
Dropwizard was amazing when it came out and is still much better than deploying to Tomcat, say. But it's showing its age and I'd consider it too frameworky these days. My current preference is http4k, which is more library and less framework. Admittedly, that's Kotlin and thus not a fair comparison, but then, newer things can take advantage of other newer things.
1. trying hard to do one thing and do it well (and usually failing), but ...
2. there are many dependencies that you should allow your user to pass explicitly ...
3. and that’s the problem - you have zero control over how the library is integrated but receive full responsibility for how well it works. Even libc could be very different and will not behave the way you’d expect it to.
The big problem with making everything a small, well-thought-out library is the large integration surface + having to be flexible and accommodate all potential cases of integration.
That’s why we mostly have hundreds of overlapping “fat” libraries - too much trouble and too much integration complexity to split them up.
This is why I love writing libraries in Rust: the language provides so many ways of protecting the users of your library from shooting themselves in the foot.
Combine this with the built-in, easy-to-use testing and I often find myself breaking out the core functionality of my applications into libraries, just because it is a sane way to keep things decoupled.
This is far harder in languages without a strict type system: you can do everything right and your users still abuse it.
Also, to destroy your point on Rust making it safe for your end users:
A library that doesn’t have a stable C API is next to useless outside of its original language/community/etc., and I have been bitten by this a few times already myself (my great libraries in D’s std cannot easily be exported outside of D).
Do not repeat my mistakes - provide a stable C API (and ABI by extension) as soon as you can. You cannot make it safe, though; that’s simply not accounted for in the land of linker “technology”.
This has generally been the approach in the Clojure community. Of course, lacking standard frameworks, every library gets abandoned every few years so you have to learn a new even cleverer one. And wherever you might benefit from a group of very cohesive abstractions (e.g. in data science) you’re instead left with a bunch of random incompatible things (that again, will be abandoned in due course).
What are you talking about? Every Clojure library does not get abandoned. It's just a process of survival of the fittest, like with any language. The popular libraries in Clojure are (mostly) actively maintained like with any other language.
It's great that this has been your experience, but I'm on my fourth SQL library at this point. I've been through several HTML rendering libraries, same with authentication, same with config management. The zipper and component libraries I used are abandoned. The data science story is sad compared to the R or Python ecosystems: Incanter dying, clj-ml unmaintained, Fastmath is good for some random stuff but not comprehensive, tech.ml on the rise but still not standard to the point anything actually integrates with it, multiple competing viz libraries, some of which require a browser. Even core stuff like spec hasn't stabilised yet.
I have used Clojure full time for 11 years and it's nonsense to pretend that the ecosystem doesn't have problems that most Ruby, R, Python or .NET developers don't have at all or as often, by virtue of them having strong frontrunning frameworks. I'm not saying they're necessarily more happy or productive than Clojurists, but there is constant churn everywhere in the Clojure ecosystem, and it increases the cognitive load on every project I have ever done in the language.
You're not being rude, but you are being patronising.
Since you've been using Clojure full time for 11 years, don't you think there's a chance that you might be biased yourself? I mean, you've basically used it since it was only just released, and the library ecosystem is bound to be in flux just after a new language gets any traction.
I BTW never said Clojure was immune to churn. I just said it is comparable to any other language... which it is. You misrepresented my original reply which I find really disingenuous. It might not have been comparable to other languages a decade ago, but it sure is now. And the situation is much better than the most popular language at the moment, JavaScript.
...
I've been using Clojure/ClojureScript as my only language for more than 3 years. Currently on my third Clojure job. Prior to that, I was a heavy user of mainly Python, PHP and Java... all of them in anger (that's such a weird expression, but whatever).
I think your issues must be very tied to the data science domain? I've been following, though not actively using, the data science libraries in Clojure and it's true that the data science ecosystem seems quite immature compared to Python. Incanter was already DOA when I started using Clojure, Cortex came and went, Dragan's libraries are a sort of foundation that hasn't been tapped into yet, and now there's all this Python/R integration going on. Data visualisation seems to have converged heavily on Vega as of late.
But I'm doing web development and parsing. I find that the ecosystem is very stable for that sort of usage. Reagent is the default for frontend, Re-frame the most popular state-management library. There's an ecosystem around these two libraries that many tap into. There are basically only two popular SQL libs available (hug and honey, both using jdbc at their core), so you just pick whichever one fits your use case the best. It sucks that you went through 4 different ones, but I don't think that's very representative. Ring is the standard protocol for most web backends. There seems to be a convergence towards Datomic-style Datalog for newer Clojure databases. Instaparse is the only game in town for parsing using grammars, and Clojure spec is widely used (although it's alpha) and can also be used for certain forms of parsing.
The only thing I've really been changing has been my tooling, moving from Leiningen/figwheel to Clojure CLI/Shadow-cljs. And I didn't have to do that, it was only done out of an interest to explore these newer tools.
We’ve also made the same migration, working with cljsjs and lein-npm was horrible previously and Shadow is indeed nicer. We also use re-frame, and I have no real complaints with it, beyond Om having had a superior GraphQL like approach to querying but we managed to swerve that bullet because Om never looked finished or really intuitive. Many others did commit to it though.
Are you on the now largely deprecated clojure.java.jdbc or next.jdbc? Fun to rewrite all those serialisers to and from complex Postgres types. Also Instaparse is nice but dog slow compared to clj-antlr and I’ve had to write projects with both. There is no end to my safari park of pet peeves.
Anyway, I love Clojure and you’re obviously allowed to be happy with your tools. I wish you a long and productive career without grumpy folks like me sniping. But please don’t tell me my experience of this ecosystem is wrong or an outlier, because it’s really not. It is a core part of a culture based on granting power and flexibility to the smartest programmers. That comes at the cost of stability and productivity for other classes of programmer. You can’t optimise for everything.
I didn't say your experience was an outlier, just that you're surely a bit biased towards a certain point of view having taken up Clojure right after it was birthed. But yes, your experience is bound to be an outlier compared to people starting to use it in a much more mature state, which will be nearly everyone in the long run.
As for jdbc, I don't use it currently, but did at my old job (along with honey sql). If you think you have to rewrite some DB access code just because there's a newer library out, maybe the real problem is not Clojure, but your mindset. Perhaps something to think about.
The old library has always been painfully slow on large datasets, and is no longer actively maintained so that isn't getting fixed. And because Clojure people expect you to compose libraries not use frameworks, these sorts of upgrades are never transparent. Thus, a whole new library, the Clojure way.
And again, this is every year I've used Clojure (e.g. the Shadow migration you yourself went through on what I can only assume was one tiny web app). I'll happily stop patronising your 3 years experience if you stop gaslighting mine. You'll be telling people you never get type errors and you find Clojure's stacktraces really readable next, all the greatest hits. :)
Not sure what you think your many straw men are going to accomplish. Pretty sure almost no one but the two of us will be reading this...?
I guess you think your attitude is a mark of superiority, but in reality you sound about as intellectual as Holden Caulfield in The Catcher in the Rye.
I'm not trying to sound intellectual, I'm just trying to convince you my experience is real. I posted a very simple response to the article, detailing one direct consequence of a preference for libraries over frameworks. You're the one that jumped in to tell me I was making it up.
You can take personally any criticism of your favourite language and its proponents (I'm still one of them, honest!), or you can approach technology with a degree of scepticism. Really no interest of mine, and it's clear the feeling is mutual. Good luck in your future endeavours (and for what it's worth, I used to work for a Clojure NLP startup so I do genuinely mean that - wouldn't it be nice if there was a nice cohesive framework for NLP tasks in Clojure?)
> The trouble with frameworks is that to use it, you have to match your application to the language of the API. If you’re not careful, you end up coupling your system to the framework. This means that you create a cycle between yourself and the framework by architecting your component as an interpreter from F_o to F_i.
You can avoid this cyclic dependency by marginalizing the framework at the edges of your application, as suggested in the post, but that strikes me as much easier to do with libraries.
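As a rough sketch of what "marginalizing the framework at the edges" can look like (all names here are made up for illustration, not any specific framework's API): the core defines its own port interface and never imports framework types, and only the outermost layer adapts the framework to that port.

```java
// The core owns the port and stays framework-free.
interface OrderRepository {               // port, defined by the core
    String findCustomerName(long orderId);
}

class OrderService {                      // pure business logic
    private final OrderRepository repo;
    OrderService(OrderRepository repo) { this.repo = repo; }
    String greetingFor(long orderId) {
        return "Hello, " + repo.findCustomerName(orderId);
    }
}

// At the edge, an adapter implements the port. In a real app this
// might wrap Spring Data; here a plain in-memory stub stands in.
class InMemoryOrderRepository implements OrderRepository {
    public String findCustomerName(long orderId) {
        return "customer-" + orderId;
    }
}
```

With this shape, the cycle is broken: the framework depends on your interfaces rather than your logic depending on the framework's.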
> Frameworks' key trait is that they impose limitations on the programmer
You could say that a type system also imposes limitations on the programmer. Is a type system a framework? Would you rather use a library than a type system?
I think the point is rather that making a framework is much much harder than making a library. So it makes sense to first create a library to solve a problem if that is feasible. Once you are reasonably comfortable with your choices during creating your library, you might think about turning your solution into a framework. You should do that only if there is an actual benefit in doing so.
I think frameworks live on a higher abstraction level than libraries. If you need that higher level, libraries are a poor fit. If you don't need it, why make your life harder in going there?
I have noticed 2 kinds of developers: Vanilla vs Frameworks
the latter seem a lot more in demand by the market, but the former usually produce better/more unique applications, with much better perf compared to a framework-based app
2 weeks ago a customer asked me to optimize a slow Angular SPA frontend for a simple CRUD app. It was taking 30 seconds to load nearly 15MB of js for the login screen!
I simply trashed everything: 250MB of code and dependencies, ALL replaced by 15KB of Vanilla HTML/CSS/JS. The customer just could not believe how fast and efficient the new frontend was! His first question was which magical framework I used ...
This is my core experience with Tensorflow vs Torch for deep learning -- Tensorflow keeps trying to predict how users will want to structure their research code, while Torch just sticks to making really nice building blocks.
That's why you can use several libraries at once, but you can only use one framework at a time.
That's also why I refrain from using things like Unity, Unreal Engine, Godot, etc.
Once you've written something, a framework prevents you from using another framework, or the cost of porting is way too high. Libraries are more easily interchangeable.
You can do so to some extent in PHP. I'll wait for the "PHP has a bad rep" comments to slow down....
In PHP, there is the Framework Interop Group (FIG), which has created a few standards to which many frameworks implement their components. The interfaces are designed by FIG, and frameworks implement them (or rather, use libraries that implement them).
Containers, HTTP message objects, request handlers, middleware, caches, they all can be interchanged if they implement these interfaces.
Frameworks can now give the choice to the user to pick their own preferred libraries.
There is Laravel, which is on the far end of opinionated frameworks, but Symfony and Slim, for example, provide a lot of flexibility.
The frameworks that people really like are things like emacs and xmonad. That is to say, the framework is opinionated enough that you are writing configuration not applications but also flexible enough that your configuration can be completely arbitrary.
Many frameworks are too close to being libraries while still being frameworks, so you lose your control for a dubious return (you don't start with working software and tweak it, rather you have to build from scratch inside of someone else's paradigm).
This feels like an apples vs oranges comparison. Libraries tend to be more focused and single purpose (e.g., provide an interface for technology X).
When you begin composing libraries together to do a multitude of things you tend to end up with a defacto framework whether you call it that or not.
Frankly, I'd rather use a framework that has been battle tested and hardened through open source and is well documented, instead of an in-house one that is incomplete and lacks conventions.
React is a great example of a library that provides enough structure that you typically don’t need a framework. I don’t think you always need a framework for this reason. And it’s definitely battle tested as are all the extras like router etc.
Aside from the fact that create-react-app itself is pretty much a framework, even if you don't use that and roll it all yourself you end up with a codebase that has many of the same hallmarks of a framework such as directory conventions, common class patterns, a bootstrapping mechanism, etc. There's plenty of messy React codebases out there that screw this up.
Yeah, but nobody gets to have their own conference for a library. Frameworks are a product. Frameworks have cults. Libraries are parts. There's no money in parts.
You hit another good word there, magic. Frameworks IMO get to be magical or at least extra magical ... but with that extra magic comes responsibility (documentation, support, etc).
A good example of "magic" that is 100% worthwhile is the MobX library (I may actually write a post on it soon). A few key things that make this the case:
- It accomplishes something that would be dramatically messier and more error-prone in the host language without the use of magic
- It introduces a very small number of "magical" concepts, and their behavior WRT the outside world is very intuitive
- While every abstraction has leaks, the ones in MobX gravitate towards performance rather than correctness. I.e. it virtually always does what you expect, and usually does it efficiently.
- It provides trap-doors and hooks for dealing with every possible abstraction leak you might run into
Definitely. Like frameworks themselves, magic isn't necessarily bad, but it is an added liability that shouldn't be taken lightly, and should only be introduced when it's really worth it.
I totally get what must have gone into coming up with this. Often frameworks are released and their versions bumped with major changes, without thought for the struggles/emotions developers might face. Speaking from experience, just recently I had to sit down fixing and upgrading my codebase to the latest version of a framework I was using. These tasks take more time than the development process itself, and the developer frustrates himself going over Stack Overflow and reading through comments on GitHub issues to figure out the point he is missing.
More importantly, I think framework writers need to think of their developers as their users and make sure everyone is notified about the updates happening and is on the same level. I know it is a lot to ask given that these projects are open source, but it's a huge gap in the way we leverage open-source products.
I think the article leaves a bit of a blind spot where the dreaded "in-house framework" lives. Those are always the worst of frameworks when evaluated like you would evaluate a "proper" framework, but they can make up a lot by being used exclusively by their original authors or at least under their personal supervision (then it all depends on people skills). The biggest drawback is that they can cut you off from a lot of upstream innovation, so maybe don't go that way in client side front-end dev... (but sometimes even that can be good, a properly fossilized pure servlet source tree can be nicer to work with than one that has gone through every hype since struts2)
Good article. A while ago I worked a little with Angular and I thought to myself that probably 80% of it could easily be in a reusable library instead of tying it to the framework.
That way you could write a lightweight framework with a powerful library under it.
Framework and libraries are both useful. Besides those, languages are also useful.
The problem is there are too many frameworks instead of libraries. People write frameworks by default, and frameworks make strong assumptions, making things inflexible, thus frequently burning people.
When tackling a new domain, it's better to write libraries first. Then write frameworks if they can indeed improve things. On top of that, once a framework works, there's a chance a language may work even better.
In brief, frameworks are overused, while libraries and languages are overlooked.
As I see it, a good framework is an extraction of commonly used patterns and a set of libraries. Sometimes people make the mistake of not dogfooding their frameworks, or of turning that process around: starting with a framework, hoping that it will make life easier. Someone from their ivory tower saying, "this is how it should academically work", like the early .NET enterprise blocks or ASP.NET. The frameworks that are painful to use are usually extracted too soon, or do not accommodate the 80% for that specific use case.
Frameworks exist to prevent you from having to do work that has already been done, to a much greater extent than libraries do. The interfaces between libraries are the part that wastes the most time for little reward. For web development in particular, reinventing the wheel will waste a huge amount of time. Looking at frameworks as limiting rather than accelerating is an idealist dev's mistake that ignores the realities of business.
While you're busy gluing those components together, your competitors who used a framework already made it to market. While you're reinventing authentication, they've got their first customers. After launch, while you're figuring out new ways to add features that come with the framework, they are able to focus on scaling and growing their customer base.
Sure, this works out differently if you're one of the big tech companies, with tons of bodies to throw at any problem. But if you're on a small to medium sized team, use a framework. There's a reason so many successful startups and mid-sized teams (and large teams) start and grow with Rails or Django compared to non-framework paradigms - and it's not that there are more people doing it. Node-based non-framework app attempts seem to be far more popular and far less successful on average.
I don’t recall where I heard it, but I’ve heard libraries described as proper tools where you get to take their facilities and (to an extent) use them as you wish, versus frameworks, which are like an old text adventure game: you have to work within the constraints (and hopefully the framework is both sane and well documented, otherwise “You are standing in an open field west of a white house, with a boarded front door.”)
I would like to share my view as the author of a JS UI framework. It largely depends on the project, but sometimes it can be very difficult to get many libraries to play nicely together. To build an admin web application with some data entry, a developer needs a set of standard widgets like inputs and menus, a nice data table implementation, a form validation engine, a router and a state management library. For more serious applications, charts may also be needed, along with multi-lingual support and some advanced widgets like maps, an HTML editor, a calendar, or diagrams. If each of those is a separate library with separate rules for styling, state management, selection, number and date formatting, context menus, and tooltips, the project quickly becomes very, very complex. If all things are integrated into a framework, you get a sane development experience. There are always some sacrifices and some features may be missing, but frameworks usually cover 80-90% of all requirements. The problem is that requirements change, and frameworks usually have a hard time catching up while preserving backward compatibility.
It's primarily a matter of taste, but I suppose that if you use only libraries instead of a framework, you will end up copy/pasting the same boilerplate to tie the libraries to one another in every project.
You could make a library that standardizes a library format and takes care of that boilerplate, then you can focus on what makes your project different, but that's not very different from a framework, is it?
I find that this type of duplication is incidental: I happen to need to do the same thing, but if one project has special requirements the other projects don’t need to change. Hence the boilerplate is fine and doesn’t need to be dragged into a library.
This is good advice and of course, because of the nature of computer science and software engineering, it WLOGs its way to many other topics.
For example: I’ve always enjoyed the tools that come as a pile of small Unix utilities, rather than as one single application (GUI or not, but usually GUI.)
BitTornado in the early 2000s. Git of course, where you build your own workflows that work for you using the different command line tools (fetch and rebase, pull and merge, cherry-pick if you want to.)
Much of the Debian helper script suite is designed to be reusable in small components in a makefile, rather than enforcing you do it all their way in one big invocation. And of course the current systemd vs bash-plus-lsb-shell-functions comparison.
I found I couldn’t use ZFS without systemd recently, on an Ubuntu install. ZFS shipped with init scripts that only worked with systemd, but systemd meant my console booted blank. I decided to fix the former rather than the latter and it was pretty easy to write my own ZFS init scripts using the LSB init shell functions — a library, not a framework.
What about the compatibility issue for libraries? Do you think it will create issues if dependent libraries are not compatible?
Back in the day, half of a developer's time went into figuring out what version of Hibernate worked with what version of Spring MVC.
Later on, the BOM (bill of materials) came into existence.
Then everything was packaged into extensible auto-configuration, and Spring Boot came along.
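For readers who haven't seen one, importing a BOM in Maven looks roughly like this (the coordinates shown are Spring Boot's published BOM; treat the exact version as illustrative). You import one curated list of versions instead of pinning each dependency by hand:

```xml
<!-- Importing a BOM pins a compatible set of library versions,
     so individual dependencies can omit their <version> tags. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-dependencies</artifactId>
      <version>2.3.4.RELEASE</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```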
Whenever tools a platform/infra team provides have felt like working with my hands tied - easy within some predicted boundaries, everything else hard - this seems to be the culprit. I never realized it until a teammate redesigned an infra tool from a framework into a library, explicitly pointing out this distinction. Now I explicitly think about it as a fuzzy spectrum when I'm writing what's roughly analogous to higher-order code, or really anything that feels like "investment" code I want to leverage to help myself down the line. Being aware of the spectrum has let me be intentional about my choices in a way that may be the biggest improvement in my programming since I learned about unit tests.
Recently, pytorch-lightning has been interesting to me because it seems to be relatively close to the boundary between these two ideas.
This makes no sense, both have their place and limits are good! Ossification has a purpose!
I often use the metaphor of a tree:
leaf = library
stem = framework
root = platform
Write platforms so you can hot-deploy everything including the database and change the turnaround of the iteration to as close to zero as you can!
In my MMO engine client I have a couple of hundred milliseconds of turnaround from saved source to running in the engine (platform + framework + library), without any reloading of assets.
Same thing on the Server.
Only use the OS + language (C++ for the client (only string/stream, and classes for structure only) and Java SE for the server) and build everything else yourself, except where there are libraries built by one person (personally I use JSON and DNS4J for Java, and GLEW + SoftAL for C; all of those are at the edges!). Everything else is a recipe for long-term disaster!
One way to improve both frameworks and libraries (but is especially needed with frameworks) is if they create 'safety valves' - places where it's possible for the programmer to just break free of what the framework/library is doing and do their own thing. With this approach, and sufficiently modular architecture, you get ease of use at the highest level, but can drop down to as a low a level as needed to work around limitations (a common example is ORMs just letting you write raw SQL when you want to, but most of the time you can get the benefit of their abstractions).
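To make the safety-valve idea concrete, here is a toy sketch (a made-up builder, not any real ORM's API): a tiny query abstraction that covers the common case but lets the caller drop down to the raw SQL when the abstraction runs out.

```java
// Toy illustration of a "safety valve": the builder handles the
// common case, but toRawSql() hands control back to the caller.
// (Illustration only: a real builder would use bind parameters,
// never string concatenation, to avoid SQL injection.)
class QueryBuilder {
    private final StringBuilder sql = new StringBuilder("SELECT * FROM ");

    QueryBuilder from(String table) {
        sql.append(table);
        return this;
    }

    QueryBuilder where(String column, String value) {
        sql.append(" WHERE ").append(column)
           .append(" = '").append(value).append("'");
        return this;
    }

    // The escape hatch: expose the lower level instead of trying
    // to predict every query shape a user might need.
    String toRawSql() { return sql.toString(); }
}
```

The design choice is the same one ORMs make when they allow raw SQL: the abstraction handles 90% of cases, and the remaining 10% doesn't require abandoning the tool.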
I have to admit that I never use frameworks, only libraries. I feel everytime I use a framework there is some kind of mismatch between what the author of the framework had in mind and the problem at hand.
The general principle the author is talking about (i.e. not being overly opinionated, and asking yourself “Could I use this in combination with something else like it?”) could be applied to even something like a programming language.
For example, Elm 0.19+ would be the counter-example in this case. It makes it very hard, almost impossible, to interact with JS code. A more flexible web front-end language wouldn’t force such a strong opinion upon its users. It would let the users decide, and leave flexibility/room in various aspects of the language.
A common objection to this is the proposition that eventually engineering will produce a framework, even if it's not intentional, so why not just choose an existing one and adapt/patch as necessary?
I tend to agree that good engineering teams will produce a useful framework, eventually--but this framework will be very specific to the domain of the problem it solves. It will be extensible and flexible to add new features to. It won't be something that can be generalized well to other problems, though.
There is also Write Libraries, not language features.
The progression language -> library -> framework maps to semantics -> a vocabulary of expression -> ideology.
I think the OP has made an error in considering a framework as a sort of 'maximalist' construct that must cater to any and all possibilities.
Let's consider a few (known) rigorously and formally specified 'framework' approaches and see if they need to "predict ... every kind of thing ... [possible to accomplish] within its [constraints]":
A framework design team is tasked with facilitating a technical goal:
RoR: "MVC for the world!".
Java Beans: "Tool makers of the world, unite!"
J2EE: "Look ma, my IT commodity programmer can do distributed component oriented programming and not even know it!".
At its most abstract, RoR is an ideological take on writing MVC: "This is how you do it". Beans nails down an approach for wirable components that can be supported at both design time and runtime with tooling and automation. J2EE lays down 4 canonical and formally boxed contexts of reference (naming), transaction, persistence, and life-cycle.
None of the above had to predict ahead of time "everything". Frameworks are almost always really about top-down considerations.
[p.s. follows]:
Keep in mind that we're always building on top of the 2 fundamental layers of language semantics and libraries. The proverbial "standard library" is the reminder. So that just leaves the final layer, and the critical consideration of "more libraries" or "manifesto on how to use libraries".
The actual core issue is composition limitations. It is far easier to compose libraries (vocab-a + vocab-b + ..) than it is to compose frameworks (which may not even be possible).
A framework is a 'final consideration' of the best approach for a (set of) niche technical concern(s).
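To make the composition point concrete, here is a toy sketch in plain Python (all names are made up for illustration): two libraries compose trivially because the caller owns the control flow, while a framework inverts control and wants to own the program.

```python
# Two "libraries": each exposes functions and leaves control flow to the caller.
def slugify(title):           # vocab-a
    return title.lower().replace(" ", "-")

def truncate(text, n):        # vocab-b
    return text if len(text) <= n else text[:n] + "..."

# Composing them is trivial: the application owns the control flow.
def post_url(title):
    return "/posts/" + truncate(slugify(title), 20)

# A toy "framework" inverts control: you hand it callbacks, it owns the loop.
class ToyFramework:
    def __init__(self):
        self.routes = {}

    def route(self, path):
        def register(handler):
            self.routes[path] = handler
            return handler
        return register

    def run(self, path):
        # The framework, not you, decides when handlers get called.
        return self.routes[path]()

app = ToyFramework()

@app.route("/posts")
def list_posts():
    return "listing posts"
```

Two such frameworks cannot both "run" your program, whereas libraries make no such claim — which is why vocab-a + vocab-b composes and framework-a + framework-b often cannot.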
Could this also be stated as: a DSL, or code that seeks to abstract away most code in the host language, is more overhead to maintain, grow, and build on top of than building mostly on top of the host language itself?
In which case, are we merely saying that adding abstraction layers is - at the extremes, where you need to add a lot of custom code - more overhead than working on top of the bare language?
I found the blog post to be just another take on the micro-services vs monolithic application debate, where micro-services = library and monolithic = framework. As a full-time coder I take a different approach - documentation is everything.
I couldn't care less if a piece of code is a library, a framework, is made up of micro-services, or is monolithic. It all means diddly-squat if the documentation sucks and you can't figure out how to use the damn thing. It really doesn't matter if a big company sponsors the code either, because they usually have the worst documentation. Take the Mongoose Application Framework by Infor (https://mongoose.infor.com), which is supposed to help developers quickly build applications. The API has virtually no documentation. It has tons of libraries, but you don't know what they do or what they are capable of unless you dig into the code. They re-implement tons of the standard .NET API with their own twist on it.
That experience changed the way I develop for fun. I have taken to doing a lot of my free-time coding in Python because the documentation, for the most part, is very well done. Most Python developers seem to take a documentation-first approach to their code, which makes it easy to take on new concepts and branch out into different areas such as IoT or AI.
Django’s docs are world class. It’s by far the example I always reach for when talking to people about how I think “great” docs should be written and served.
I liked this piece, but came out of it more on the side of frameworks, oddly, than libraries. I went into it with a bias ostensibly shared by the author based on the title but here we are. Probably different priorities or values in terms of what constitutes 'shipping' I would guess based on our respective backgrounds.
It's the most honourable way, but kiss goodbye to recognition and money. Somebody will slap a UI/app on top of it and get all the credit and get paid for it. It's one of the paradoxes of free software/open source.
AGPL fixes this somewhat but it just makes it practically not profitable for everybody involved.
One nice benefit of framework restrictions is standardization. For example a Django developer can jump into a lot of Django projects and instantly know a lot about them. Things like the builtin admin are often very similar between projects. This can ease onboarding a great deal, to everyone's benefit.
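As a rough illustration of why that convention helps onboarding, here is a toy model of the registration pattern Django's admin uses (plain Python, no Django required; the real API is `admin.site.register(...)` in each app's admin.py, but the class below is entirely hypothetical):

```python
# Toy version of a central registry like Django's admin site.
class AdminSite:
    def __init__(self):
        self._registry = {}

    def register(self, model, options=None):
        # Every project makes the same call in the same place,
        # so a newcomer knows exactly where to look.
        self._registry[model.__name__] = options or {}

    def registered_models(self):
        return sorted(self._registry)

site = AdminSite()

class Article: pass
class Author: pass

# In a real Django project, these lines live in each app's admin.py.
site.register(Article, {"list_display": ["title"]})
site.register(Author)
```

The standardization benefit is exactly this: the framework dictates one place and one shape for the wiring, so every project's admin setup looks alike.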
Phoenix is the only web framework I like, because I can understand and control its control flow in a way that I can't with, say, Rails. Plus running on BEAM is great - the whole Erlang ecosystem has always felt like what OOP hype-men promised us in the 90s and 00s, but in reality.
> Google can back Angular, Pivotal can back Spring
Funny, when you talk about frameworks that create more problems than they solve, these two are the first two that pop into my head. In fact, I’ve been trying for years to find any problems that either of these two actually do solve.
Do we need frameworks? If you think a framework is a layer of abstraction, then an OS is a framework and a browser is a framework. You don't need Angular to write a web app, but you do need a browser and an OS. So someone needs to write frameworks.
First you write libs, then you get tired of always doing the same scaffolding work, so you build boilerplates to cover the most common flows - and before you realize it, you end up with a framework... resistance is futile.
I think the best way to make a framework is as a collection of libraries that are designed to work well together, but can also be used independently. Then users can pick and choose which parts they need.
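A rough sketch of that shape in plain Python (all names hypothetical): independent modules that each stand alone, plus an optional glue layer that wires them together for users who want the batteries.

```python
# parser-style module: fully usable on its own.
def parse_csv_line(line):
    return [field.strip() for field in line.split(",")]

# validator-style module: also usable on its own.
def non_empty(fields):
    return all(field != "" for field in fields)

# Optional "framework" layer: convenience glue over the pieces above.
# Users who want finer control can skip it and call the modules directly.
def load_row(line):
    fields = parse_csv_line(line)
    if not non_empty(fields):
        raise ValueError("empty field in: " + line)
    return fields
```

The point is that the glue layer is additive: dropping it costs you convenience, not capability.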
I guess I should weigh in with my experience in this area, as author of both libraries and frameworks, in a lot of different domains, over a number of years.
I feel that it depends on what needs to be done.
For the most part, I prefer writing fairly atomic libraries. I like to use a “LEGO Block” approach, and [relatively] small, focused, modules are ideal for this.
I like each module to have an independent project identity and lifecycle. Makes for much higher quality and usefulness.
Sometimes, though, I need a really fundamental infrastructure. In that scenario, a framework is the best approach. In a lot of cases, we may have considerable control over the implementation of the framework, so that resolves a number of the downsides (like prediction of how it will be employed).
I’ve written a number of frameworks. It’s a huge PITA, and I avoid it, if at all possible. It is my experience that the biggest pain point is quality. I find that proper testing of a large infrastructure is damn near impossible.
I consider writing a framework to be the “nuclear option,” to be only employed when nothing else will do. Even then, my frameworks tend to be layered affairs, with coupling between layers as loose as possible.
Writing any reusable software module is like having a child. I can’t just walk away from it when I get bored.
A framework is a huge responsibility. Once it is written and out there, I need to take care of it, in all its contexts, and for all its life. It is a lot easier to deprecate a small module, than a large infrastructure.
The same goes for libraries, but the scope of a library is a great deal more manageable and flexible than a framework. I find flexibility to be a crucial aspect of long-term software (I have written software that has lasted decades).
When I design software, I am constantly looking to the future. It’s become force of habit, and, as Niels Bohr once said: ”Prediction is very difficult, especially about the future.” The less control I exert on the future, the better.
If I can accomplish the same job as a framework with a set of independent modules, even if it is more work to do the modules, that’s what I do. A framework is a big, fat atomic blob that tends to age badly (in my experience).
However, I am obsessed with quality; which is not always an acceptable posture in today’s fast-paced, competitive workplace.
Engineering, at its most fundamental level, is always about finding the most practical approach.
I like microframeworks because they do one thing well and they don't get in the way - like a library. When I use batteries-included frameworks, I end up not using a lot of what they offer.
I don't see Chefs complaining about the difference between getting better kitchen utensils versus working in a more functional kitchen. They go hand in hand.
Most of the time you need one framework (to give a best practice structure to your project) and many additional libraries to add functionality. So it should be obvious that the number of frameworks this world needs is quite a bit lower than the number of libraries.
In addition, you are always free to go with your own implementation and that is true for the structure dictating frameworks as well as for the libraries.
Here are a few claims framework authors make that are usually short-sighted:
1. Frameworks promote code reuse
However, frameworks don't promote reuse even between different versions of the same framework, which makes the code-reuse claim hypocritical at best.
Where is the code reuse between Angular 1 and Angular 2?
Libraries also promote code reuse through shared object files (C) or modules (Python, Node.js, ...), and that reuse is usually longer-lasting.
2. Frameworks are easier for beginners to use
Frameworks get difficult quickly once you outgrow the initial use case. They appear deceptively simple because they hide complex code under the rug, but any time you need functionality similar to, but different from, that initial complex code, you will have a problem.
With a library, you can inspect the code, copy it, and edit it to do what you want; it is not as simple with frameworks, especially when you have to deal with 100 levels of inheritance.
In a way frameworks encourage code illiteracy.
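A toy illustration of that "similar but different" problem, in plain Python (names are made up): framework behavior is spread across a class hierarchy, so customizing means reverse-engineering its hook points, whereas a library function can simply be copied and edited.

```python
# Framework style: behavior is spread across a hierarchy of hook methods.
class BaseExporter:
    def export(self, rows):
        return self.header() + "\n".join(self.format_row(r) for r in rows)

    def header(self):
        return "export\n"

    def format_row(self, row):
        return ",".join(map(str, row))

# To change one detail you must first learn which hook to override.
class TabExporter(BaseExporter):
    def format_row(self, row):
        return "\t".join(map(str, row))

# Library style: one visible function. To do something "similar but
# different", you copy it and edit it - no hierarchy to reverse-engineer.
def export_csv(rows):
    return "export\n" + "\n".join(",".join(map(str, r)) for r in rows)
```

With two hooks this is harmless; with 100 levels of inheritance, finding the right hook is the hard part.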
3. Frameworks are not composable
Using two or more frameworks is next to impossible because of all the assumptions frameworks make. Frameworks are totalitarian in this regard.
4. Object Relational mismatch is real
By forcing the object model on everything, frameworks make it very difficult to understand non-OO models for data, which is most data.
The relational model is more generic, as it supports references, and it is arguably more efficient.
>Libraries aren't everything, but they should be preferred.
The story is much, much more complicated than this. Every single one of you is using a framework every day, unless you program in assembly language.
All programming languages are highly opinionated frameworks on top of assembly language.
When you know this, then you know that you can't just say something simplistic like libraries are "preferred."
If all programming languages are frameworks on top of assembly language, then frameworks are not only preferred, they are required for humans to make sense of complexity.
When you write a framework like Django within the framework of Python, you are putting a framework within a framework. It's similar to writing a compiler that compiles some PL into Python. You are building layers and layers of interfaces between your logic and the machine.
For every extra layer you add, two things, in general, happen:
1. Complexity decreases
2. Restriction increases
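A minimal sketch of that trade-off in plain Python (function names are hypothetical): a thin layer over the `re` module decreases complexity (no regex syntax to learn) while increasing restriction (only whole-word lookups are expressible through it).

```python
import re

# Underlying layer: full regex power, full complexity.
def raw_search(pattern, text):
    return re.search(pattern, text) is not None

# Extra layer: complexity decreases (no regex syntax to learn),
# restriction increases (only whole-word lookups can be expressed).
def contains_word(word, text):
    return raw_search(r"\b" + re.escape(word) + r"\b", text)
```

Anything `contains_word` can do, `raw_search` can also do, but not vice versa - exactly the complexity-down, restriction-up exchange described above.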
So when someone writes an article like this, what they are actually saying is this:
"The complexity and restriction of the current framework/programming language strike the right balance, so in my opinion (keyword) writing another framework on top of the current one will increase the restriction to the point where the trade-off is not worth it."
When writing a library as opposed to a framework, what people are essentially doing is augmenting the capabilities of the current framework without creating an extra layer of abstraction. Another good way to think about it is that frameworks are equivalent to extra layers of abstraction and therefore the benefits and downsides of adding more layers of abstraction are equivalent to adding more frameworks. So it's really not as simple to say that "Libraries are preferred."
The story actually gets even more complicated than this. I spoke previously as if there is a simple trade-off between complexity and restriction. While a decrease in complexity is always good, an increase in restriction is not always bad.
Take Elm, for example. Elm is a language (or framework) that compiles to JavaScript and HTML. It is more restrictive than JavaScript due to type checking. Yes, type checking is a framework that decreases freedom and increases restriction. But people think type checking is a good thing. Why? Because the Elm type checker/framework is so powerful that it restricts you from ever writing a runtime error. You absolutely do not have the freedom to write code with certain classes of bugs in Elm, while in JavaScript you absolutely have the freedom to pollute your code with thousands of obscure bugs.
Restriction can be good, really good. In fact, restriction is, outside of testing, our greatest tool for combating the bugs that arise from too much complexity.
In short, I don't like this article because it doesn't tell the full story. It is biased against frameworks. The author did not think deeply enough; he thinks of frameworks in terms of things like Rails or Django, but not about what a framework is from a more general perspective.
Keep in mind that React, or how React is used, is basically a framework as well. Nobody uses it as a library; people prefer the component abstraction to take over the entire notion of the DOM.
However, if you write a framework, write a framework.
Don't cripple the community by doing only the IoC part (the framework calls you) while creating a fragmented ecosystem of a million ways to do state, routing, and so on.
Looking at you, React. Vue was so much easier when I had a Vue project, because there were a lot fewer choices.
I see the author is in the "they is not a singular pronoun" camp, which means we cannot be friends.
Anyway I don't know how it is in other parts of the stack, but based on my experiences on the front-end, to build a large-scale project you're going to need a:
- Router
- Way to manage forms & validation
- Way to communicate with the backend
- Set of tools for testing
- Set of commonly used components - that includes layout
- i18n support
- Convention where to put certain files and how to name them
- Store
So it would make sense to have all this in one coherent framework. That being said, not all projects are large-scale projects; normally you'll only need a subset of the above.
The one thing not mentioned in the article is that the above line of thinking is almost guaranteed to lead to an insane level of abstraction, which was parodied in this classic article from nearly 15 years ago:
http://web.archive.org/web/20141018110445/http://discuss.joe...
(Sadly, that site is gone, but the memories live on...)
Addendum: to see this in practice, one needs to look no farther than Spring, the famous Java framework:
https://docs.spring.io/spring-framework/docs/current/javadoc...
https://docs.spring.io/spring-framework/docs/current/javadoc...
https://docs.spring.io/spring-framework/docs/current/javadoc...