The one thing not mentioned in the article is that the above line of thinking is almost guaranteed to lead to an insane level of abstraction, which was parodied in this classic article from nearly 15 years ago:
(Sadly, that site is gone, but the memories live on...)
Addendum: to see this in practice, one needs to look no farther than Spring, the famous Java framework:
The way you use the core inversion-of-control framework people often just call "Spring" is: you take your homegrown, doesn't-depend-on-Spring business logic, you make a config file, and if you're not executing it in a servlet container, you make one entry-point class to start it all up.
There are other projects under the Spring umbrella, and some of them are meant to be called from your code, like Spring JDBC for database access, whose framework-like nature is apparent yet whose usage pattern resembles a library.
 https://docs.spring.io/spring/docs/current/spring-framework-...  https://docs.spring.io/spring/docs/current/spring-framework-...
Part of the blame, I think, falls on Spring's DI container, which is "too good". It makes it easy to pull in any bean, even the wrong one, which means developers have to be disciplined about managing how beans depend on each other, or you end up with controllers returning entities. Of course, any codebase turns into a mess if you're not disciplined, but my impression is that developers tend to be less careful when working within the confines of a framework like Spring, because it gives them the false sense that they can do no wrong.
Recently I started a side project with Spring Boot (after all I like the framework), trying to organise it following Uncle Bob's Clean Architecture approach, which aims to isolate the core business logic of an app from its implementation details like database code and external interfaces. The core turned out pretty well isolated, but the rest is all Spring. Database access? Spring Data. External interface? Spring MVC. Communication with external services? Spring RestTemplate. I'm not saying it's bad, it's actually awfully convenient, but it's not so easy to swap out - say - Spring MVC for http4k, so you end up with Spring being everywhere. Which again, not a bad thing, just something to consider.
External interfaces are just isolated interfaces, which end up being tied to a framework as frameworks do most if not all of the heavy lifting.
Just because you picked Spring Data to implement your persistence layer doesn't mean you are bound to Spring to add a web API or a web app, though.
If for some reason your frameworks are leaking out of any of your external interfaces then that's an issue with how you designed your app, not the frameworks you used.
As an example, some Clean Architecture examples using ASP.NET Core decide to implement their persistence layer passing around Entity Framework classes as their interface. That, obviously, tightly couples the whole app to Entity Framework in particular and ASP.NET Core in general. This coupling in turn is completely eliminated by passing a generic repository interface, but using Entity Framework's convenience often forces us to ignore that.
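To illustrate the generic-repository idea (sketched in Java rather than C#; all names are hypothetical), the application core can depend on a small port interface while the adapters carry the framework dependency:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical domain type; the id field stands in for real order data.
class Order {
    final String id;
    Order(String id) { this.id = id; }
}

// The application core depends only on this port, never on an ORM type.
interface OrderRepository {
    void save(Order order);
    Optional<Order> findById(String id);
}

// One adapter among many; a JPA- or JDBC-backed implementation could be
// swapped in without the core noticing.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Order> store = new HashMap<>();
    public void save(Order order) { store.put(order.id, order); }
    public Optional<Order> findById(String id) { return Optional.ofNullable(store.get(id)); }
}
```

The convenience cost mentioned above is real: the ORM's query features stop at the interface boundary, which is exactly the point.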
I agree, you're definitely not bound to, but again, in theory. From what I've observed in practice, factors like convenience and the friction of going off the beaten path mean that when you pick Spring, your app ends up being 90% Spring stuff. Which is not necessarily bad; there are certainly many good things about using the well-supported, robust, and predictable set of Spring components. But I wouldn't say that Spring is the kind of framework that merely helps you tie together your app, otherwise stays out of the way, and can be swapped out at any time.
You can choose to import facade-style Spring libraries, and write your code against them -- like Spring JDBC, Spring JMS, TaskScheduler -- if you want something that does some heavy lifting for you, but doesn't tie you to a vendor implementation directly.
Meanwhile, Spring Boot is a fully opinionated framework based around the combination of defaults and on-the-fly auto-configuration, giving you a single runnable uber-JAR at the end. And you're right about it: if you're using Spring Boot, it makes sense to depend on other Spring libraries, because (1) the docs guide you into them, and (2) the two will interact to auto-configure. For example, you can just run your DB code, and it can auto-configure an in-memory instance as you're developing. Then, when you've got the real database, just specify the real config.
Spring Boot really shines in bootstrapping a greenfield project to get going quickly, especially if you're willing to compile its annotations into your code. You can then go back and incrementally override behavior and configs once you realize you want them a certain way.
To be fair, at least Spring and Lombok don't require me to maintain loads of yaml/XML files to describe the DI stuff, but annotations in java aren't always the most intuitive thing.
Requiring a class representation of literally everything is a huge limitation, and I wish there were other alternatives to this style of DDD that abstracts everything to an absurd degree.
The Service class holds an instance of the Caching class and the Database class, each of those hold an instance of clients with connections, for example.
Can you explain how language support would remove this pattern?
I'm currently working on Scala services, and we don't have any DI, which is actually kinda nice, but it just means we "new-up" all these classes manually in our Main class and pass them through as arguments one to the next.
So far I like how there is no "spring magic"; however, in the end the pattern of dependencies is the same.
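For illustration, the manual "new-up" wiring described above might look like this in Java (the `Service`/`Caching`/`Database` names echo the earlier comment; all behavior here is made up):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A stand-in database client; a real one would hold a connection pool.
class Database {
    String query(String sql) { return "row-for:" + sql; }
}

// A stand-in cache; a real one would hold a client with a connection.
class Caching {
    private final Map<String, String> cache = new HashMap<>();
    String getOrCompute(String key, Function<String, String> compute) {
        return cache.computeIfAbsent(key, compute);
    }
}

// The service holds its collaborators, received via the constructor.
class Service {
    private final Caching caching;
    private final Database database;
    Service(Caching caching, Database database) {
        this.caching = caching;
        this.database = database;
    }
    String lookup(String id) { return caching.getOrCompute(id, database::query); }
}

class Main {
    public static void main(String[] args) {
        // "New up" the graph leaf-first and pass dependencies as arguments:
        // the same object graph a DI container would build, made explicit.
        Database database = new Database();
        Caching caching = new Caching();
        Service service = new Service(caching, database);
        System.out.println(service.lookup("users/1"));
    }
}
```

No magic, but as noted, the dependency pattern is identical to what a container would wire.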
There was maybe one instance where I had to deal with obtuseness like that. Definitely not the norm.
Real CPU instructions
I think one could argue that containers aren't much of an abstraction for the running program itself, though, as you say. The place where containers provide an abstraction is at deployment time/process management time; the program itself has (essentially) the same API whether it's running in a container or not.
The only question is whether you can afford to hire a few gods for the task, or whether you're satisfied with whatever falls off a truck and is in reality completely incapable of inventing the universe (or even of baking).
In the end, it's the customers' digestive problems that matter. And you can get sued for food poisoning.
I personally love apple pies and am just not satisfied with the plastic taste of supermarket frozen, microwave-heated "products".
“When you want to hurry something, that means you no longer care about it and want to get on to other things.”
― Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance
I actually "finished" it to the point of a working system for the customer with real customers in about 5 weeks by ignoring most of what they had done - required a handful of classes and JSPs. Probably distinctly under-architected but it worked and was built on for a number of years after.
It was quite complex (message queues, multi-threading, async, etc.) so that it would be 'scalable', yet it buckled under the slightest load; customers were experiencing delays during peak hours.
We removed about a quarter of the code (some intermittent queues & components), nothing changed in terms of functionality. Now we're removing more code and switching over to less complex data structures to fix the delays.
Maybe a complete rewrite would have been better after all, who knows.
I'm not a very experienced developer. I like to ship stuff and get projects done in reasonable time, with a "YAGNI" attitude. Let's just rewrite stuff and introduce more abstractions when it actually hurts and a refactor is in scope.
On the other hand, I see a tendency that if you give projects (a greenfield project or some maintenance) to a team of, say, 6 developers, they will just 'create work' for themselves, filling up the todo list with items, which may lead to an over-engineered project that could easily have been done by 3 people within the same timeframe.
If you have some good reads/stories on this topic, patterns to watch for, I'd appreciate if you shared them.
This is a very good attitude, but I would like to tack on a second part to it: Instead of designing your code to be infinitely extensible in all directions, make it easily replaceable instead.
It's impossible to anticipate every future requirement, but certain choices will make it easier to deal with a new requirement as it pops up. One of these things is unit tests. If you have some gnarly business logic with tons of edge cases, you really really want a good test suite because it enables you to add new edge cases with confidence.
It does require a lot more thought and discipline, because it's not just a case of constructing an epic hierarchy of abstractions, or stubbing/mocking the hell out of your code in the tests to swap things around.
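As a toy illustration of edge-case-driven tests (the discount rule and its thresholds are entirely made up):

```java
// Gnarly business logic in miniature: thresholds, stacking, and a cap,
// each of which is one assertion in the test suite below.
class Pricing {
    static int discountPercent(int orderTotalCents, boolean loyalCustomer) {
        if (orderTotalCents <= 0) return 0;            // edge case: empty/invalid order
        int base = orderTotalCents >= 10_000 ? 10 : 0; // bulk discount over $100
        if (loyalCustomer) base += 5;                  // stacking rule
        return Math.min(base, 15);                     // cap, another edge case
    }
}
```

Each new edge case becomes one more assertion, which is what makes changing the rule later feel safe rather than terrifying.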
I've been on too many projects which tried to build the perfect abstraction right out of the box - as if the problem domain were fully known.
That's because people in general write to think.* The level of deep thought achieved through writing is harder to achieve beforehand.
In the same sense, the level of deep thought achieved through writing code is hard to achieve beforehand. It's actually worse, because your engineers, tangled in this web of classes, won't be able to think about the big picture for themselves.
An architect thinks in terms of what sounds good, what is beautiful in OOP-land, not in terms of what is easy to write and what's performant to implement.
One poignant example is the interviewer who wants you to implement chess pieces as classes, which sounds good in the architect's world but is blatantly insane if you think about the actual code and the actual challenges you're facing.
* Writing to think and writing to be read are conflicting goals; that's why editors exist.
I stopped the work, broke the team up (and let a few of them go), re-assigned the goals of the project to 3 people who, working half-time, from scratch, and in Python, rebuilt the entire thing in 3 months, delivered it and it worked (and continues to work). "Builds" are now an scp to a VM, and a restart of a flask application.
My wife recalls a similar experience, where a complex application she worked on (which ran reliably on very little infrastructure, with a team of 4, and serviced thousands of simultaneous users) was replaced, when a new CTO came in, by a giant, consultant-driven Java project that required approximately 6x the infrastructure and 3x the staff to keep it going, and at launch could only serve 9 to 12 users at once. It was a debacle. But the sunk cost fallacy forced her company to throw millions of dollars of consultants at it until it achieved some minimal level of barely available service. She and her boss left around that time.
She heard that a couple years later the CTO was let go, the consultants fired and it was all replaced by a team of 5 rewriting it all again from scratch (in Java) but like you also without all the architecture barf dropped in from orbit, and immediately went back to servicing the thousands of users again.
I'm convinced that this is the fault of some kind of consultant industry that seems to infest enterprise software circles that is really only good at inventing ways of designing systems that require more of their services, but never seem to actually deliver working systems.
So it's not always "those architecture astronaut guys used the company's resources to study their webscale fantasies and I came to save the day", but rather, "they did as well as they could using the choices and resources they stuck with and I happened upon the project with 20/20 hindsight".
Where I think I part ways with you is with this "they did as well as they could using the choices and resources they stuck with and I happened upon the project with 20/20 hindsight". One of the big lessons we've learned as an industry involved in software development is to start small, iterate often, get feedback from users. What I see time and again with these types of overblown space elevator projects is a kind of fundamental...immaturity -- the people who run these don't seem to understand how to achieve results with the minimum required to do it.
They've never started small and iterated to large. The projects they've worked on have all been so enormous, and have taken so long, that they have very few data points to draw lessons from. Each iteration and growth cycle in a mature project creates lots of information that can be reapplied elsewhere. But for people who've only ever grown up in large enterprise software projects, that's the only approach they know, and there's only a handful of lessons they've learned.
It's not only architecture astronauts, but the entire ecosystem that supports "enterprise" software engineering: vendors, consultants, scaled agile experts, design tools -- even university programs that churn out enterprise ready engineers. The appearance of these things, and how widespread they are in certain circles, demonstrates how those circles have regressed and tossed into the rubbish bin the lessons that we've learned. The penalty that's being paid is that these same lessons have to be continuously relearned again and again, but instead of pushing against outside models of how to do things (e.g. trying to adopt physical engineering approaches to software), we have to push against our own industry.
When you go against billion dollar enterprise software industries, you end up sounding like a heretic. It's really only when you get a long portfolio of success stories can you succeed. But that's very hard to get in today's climate where people spend their entire careers trying to kill ants with nuclear weapons dropped from the orbit of Mars.
I just didn't put the blame on the previous team, maybe I should have for sticking with it for two entire iterations, still not entirely convinced. I think they couldn't help it and the decisions were already made for them by large faceless corporations. I guess what's the point of calling themselves engineers if they don't feel like they are in a cockpit pressing lots of buttons right from the start?
By profession, I call myself a computer programmer, not an engineer or architect in order to remind myself that I should weigh problems by starting from fundamentals such as a state machine, an integer constraint system or a breadth first search, before setting up service discovery or training neural networks.
Return a ClassLoader that supports instrumentation
through AspectJ-style load-time weaving based on
In fact, any cross-cutting concern - like transactions - can be done this way. So you can write code that doesn't care about transactions, but behind the scenes it all runs within a single transaction, and transaction errors are handled centrally.
It adds runtime costs, of course. And the more you add this way, the more complicated it becomes. And it can certainly be abused.
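A minimal sketch of this kind of interception, using a JDK dynamic proxy rather than load-time weaving (the interface, implementation, and in-memory "transaction log" are all made up for illustration; Spring's `@Transactional` support uses the same underlying idea, plus bytecode-level options):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// The business interface knows nothing about transactions.
interface AccountService {
    void transfer(String from, String to, int amount);
}

class PlainAccountService implements AccountService {
    public void transfer(String from, String to, int amount) {
        System.out.println("moving " + amount + " from " + from + " to " + to);
    }
}

class TransactionProxy {
    // Stands in for a real transaction manager; records what happened.
    static StringBuilder log = new StringBuilder();

    @SuppressWarnings("unchecked")
    static <T> T wrap(Class<T> iface, T target) {
        InvocationHandler handler = (proxy, method, args) -> {
            log.append("begin;");                      // open transaction
            try {
                Object result = method.invoke(target, args);
                log.append("commit;");                 // success path
                return result;
            } catch (Exception e) {
                log.append("rollback;");               // error handled centrally
                throw e;
            }
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }
}
```

Every call through the wrapped reference is transactional; the target class stays oblivious, which is both the appeal and the danger discussed above.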
How anyone can look at Java as it's actually used and say Scala is too complex is beyond me.
AspectJ is a language; these aspects are visible and have their own language abstractions, just separate from the application code. Just like you have monad implementations separate from their uses.
So writing aspects is also "plain code". How the weaving happens (at compile time or load time, on source code or on bytecode, etc.) is an implementation detail.
Also, AspectJ isn't Java, and the problems AspectJ tries to solve, and its approach, apply to a bunch of languages; variants for functional languages also exist.
Code containing aspects is certainly not plain Java code, because the aspect annotations (or, worse, invisible string-name based pointcuts) change the behaviour of the code they're on, breaking the normal rules of the language. You'll see "impossible" behaviour at runtime: call a method and the call stack jumps to a completely different method. A method throws an exception that's thrown nowhere in its body. If you want to say that AspectJ is a language in its own right, then it's a language that breaks all the rules of good language design: it essentially contains COMEFROM as a core language feature. Whereas monad implementations are (generally) plain old code that follows the normal rules of the language.
> Also, AspectJ isn't Java
It (or other libraries that do much the same thing) is the vast majority of Java as it actually exists in the real world. I've yet to see a substantial Java codebase that wasn't using some form of AOP or reflection (even if only indirectly via these big frameworks).
I didn't say it was. AspectJ is AspectJ; as you surmised correctly, it's "a language in its own right", and so when you define an `aspect`, that's its own thing, not a `class`. You seem to demand that the language be Java, but that's not really a criticism of AspectJ, the language; it's rather a denial that it is a language.
It's like saying a C++ class and some implicit behaviors that change behaviour of user code (via a custom assignment operator override) is crap because it's not C. Maybe it's crap because you don't like it and like C better but it's still not quite fair to demand it be "plain code" when it is plain code.
Please look more deeply into https://en.m.wikipedia.org/wiki/AspectJ. It has its own syntax and is not just some annotations plus a framework you have to run.
FWIW, your arguments against AspectJ work just as well against any language with macros or metaprogramming in general (although you probably don't like those either, and that's fair!).
> If you want to say that AspectJ is a language in its own right, then it's a language that breaks all the rules of good language design
Frankly, the kind of things that AspectJ offers or proposes are IMO no less sane than some of the things languages like Ruby (e.g. monkey patching, middleware decorating/nesting calls via including a module, defining methods being a dynamic call that can be intercepted and customized, etc.), Python (e.g. decorators), JS (e.g. messing with prototype behaviors), or even Prolog (e.g. metainterpreters that use metalogical predicates to implement different resolution strategies) typically encourage.
People have called monads the "programmable semicolon", and so you can do all kinds of things that don't at all represent what a user of the monad would anticipate unless they read the docs/implementation. It could enforce some form of order of evaluation, or store additional hidden state, etc. Sure, this is not the same thing as aspects or metaprogramming/macros, but what they have in common is that some external definition defines the exact semantics of an application/user piece of code and has potentially a whole lot of freedom to stray from what looks "obvious".
The goal of AOP is to find a solution to challenges where modularization by typical means hasn't worked. And the typical examples are usually valid. Is it great? Is it needed? I'm not sure either, but your criticism sounds a lot like you only look at it through the lens of a pure statically typed functional programmer. From that POV a lot of language designs probably look impure and crap, whether justified or not.
> It (or other libraries that do much the same thing) is the vast majority of Java as it actually exists in the real world.
Are you specifically talking about AOP or AspectJ? I believe the former. And I don't think it's AOP; you are talking about various frameworks implementing a wild set of changes to the core language via reflection, code generation, or other such things tailored to their specific use cases.
Say what you will, but that, while it might fall into the AOP class of doing things, is quite different from AspectJ's goals. The idea is/was that you don't need all these different frameworks with their own conflicting custom implementations. Rather, AspectJ should be the common language in which these metaprogrammatic aspect-oriented concerns are defined (for instance by such a framework that's currently using its own custom implementation with no common set of rules), which can then be analyzed and given static tool support (e.g. "who" is amending, wrapping, or early-aborting behavior for this particular method).
So, again, I don't think the criticism of AspectJ is fair (and that's what I addressed, not the modern state of Java and frameworks in general). If anything, the problem is that AspectJ itself isn't actually used; instead its superficial ideas have been co-opted and turned into a metaprogramming wild west. Maybe it's AspectJ's fault, but I think it's just a coincidence. That state of affairs is not unique to Java at all, either.
By building on Java AspectJ tries to have it both ways, which I think is a mistake. It's normal, and I'd even say semi-encouraged, to use non-AspectJ-aware tools when working on an AspectJ codebase - and of course an upstream library maintainer may not even know their library is being used in an AspectJ codebase. Which causes real problems because they will refactor according to a different set of rules.
> FWIW, your arguments against AspectJ work just as well against any language with macros or metaprogramming in general (although you probably don't like those either, and that's fair!).
> Frankly, the kind of things that AspectJ offers or proposes to use are IMO no less sane than some of the things languages like Ruby (e.g. monkey patching, middleware decorating/nesting calls via including a module, defining methods is a dynamic call that can be intercepted and customized, etc.), Python (e.g. decorators), JS (e.g. messing with prototype behaviors), or even Prolog (e.g. metainterpreters that use metalogical predicates to implement different resolution strategies) typically encourage.
Yes and no. You list a bunch of things that are bad to a lesser or greater extent, but the method-name-pattern-based stuff I've seen done with AspectJ is the most cryptic form I've ever encountered. Not only is nothing visible at the declaration site or the call site (not only no decorator but no magic import either), you can't even grep for the method name to find the thing that's messing with that method.
> People have called monads the "programmable semicolon", and so you can do all kinds of things that don't at all represent what a user of the monad would anticipate unless they read the docs/implementation. It could enforce some form of order of evaluation, or store additional state that is hidden, etc. Sure, this is not the same thing as aspects or metaprogramming/macros, but what they have in common is that some external definition defines the exact semantics of an application/user piece of code and has potentially a whole lot of freedom to stray from what may look like it's "obvious".
That's not really true in my experience; monadic composition is programmable only in the sense that an abstract method in an interface is programmable (indeed monad usually literally is an interface with abstract methods, or the closest equivalent in the language you're working in). Syntactically you can see that monadic composition isn't regular composition and so you know that some effect is being invoked, and as with any abstract method you might be able to click through to a specific implementation or you might have to accept that it's just "some unknown implementor of this interface". But in an important sense there's no magic: all your values are just values, all your functions are just functions, all the normal rules of the language still apply. I do object to things like thoughtworks each where a seemingly normal assignment gets magically rewritten into something monadic.
> The goal of AOP is to find a solution to challenges where modularization by typical means hasn't worked. And the typical examples are usually valid. Is it great? Is it needed? I'm not sure either, but your criticism sounds a lot like you only look at it from the lens of a pure statically typed functional programmer. From that POV a lot of language designs probably look impure and crap, whether justified or not.
Well I came to that precisely because of experience with that problem. It was seeing the bugs introduced by AOP that made me look for a better way to do things, and that was how I got into functional programming.
> The idea is/was that you don't need all these different frameworks with their own conflicting custom implementations. Rather, that using AspectJ should be the common language in which these metaprogrammatic aspect-oriented concerns are defined (for instance by such a framework that's currently using its own custom implementation with no common set of rules), which can be analyzed and provide static tool support (e.g. "who"'s amending, wrapping, early aborting behavior for this particular method).
That's fair. But in that case you have to see AspectJ as a failure in terms of today's Java ecosystem. Even codebases that use AspectJ don't manage to avoid using all those other metaprogramming frameworks as well, and I don't think I've ever heard of a framework deciding to move away from a custom AOP implementation toward doing something via standard AspectJ. (And I do think that AspectJ offers too much flexibility to ever make for a comprehensible codebase, even with the help of better tooling support.)
> That state of affairs is not unique to Java at all, either.
It's not unique, but it's embraced to an unusually high extent in Java (Ruby would be another example). And I can't help thinking it's because of Java's deliberate, advertised simplicity, because a lot of the aspect-based stuff seems to be there to paper over missing language features (or, even more tragically, to paper over language features that are now there as of newer versions of Java, but that Java programmers have got used to having to step out of the language for).
More importantly, nothing about standard Java development requires the use of AOP.
You don't need macros to achieve what grandparent was talking about - handling cross-cutting concerns like logging or transaction boundaries without being too intrusive. In a language that lets you do a half-decent monad implementation (which is pretty much any language that has higher-kinded types and first-class functions, although do notation can be a useful enhancement), you can comfortably write this kind of thing in plain old code. (I do those very things - transaction boundaries and logging - in Scala all the time).
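For illustration, here's roughly what that looks like as plain old code, sketched in Java rather than Scala (a Scala version would typically use a monad; this loan-style helper with a made-up `Conn` type is the nearest simple analogue):

```java
import java.util.function.Function;

class Tx {
    // A stand-in connection that records operations; a real version would
    // wrap java.sql.Connection and issue actual BEGIN/COMMIT/ROLLBACK.
    static class Conn {
        final StringBuilder ops = new StringBuilder();
        void exec(String sql) { ops.append(sql).append(";"); }
    }

    // The transaction boundary is an explicit, greppable higher-order
    // function: callers pass in the work to run inside the transaction.
    static <R> R inTransaction(Function<Conn, R> body) {
        Conn conn = new Conn();
        conn.exec("BEGIN");
        try {
            R result = body.apply(conn);
            conn.exec("COMMIT");
            return result;
        } catch (RuntimeException e) {
            conn.exec("ROLLBACK");                // errors still handled centrally
            throw e;
        }
    }
}
```

Usage is just `Tx.inTransaction(conn -> { conn.exec("UPDATE ..."); return result; })`: the boundary is visible at the call site and testable as an ordinary function, with no weaving involved.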
> More importantly, nothing about standard Java development requires the use of AOP.
And yet, in real-world Java development, people feel the need to use AOP. In my 10+ years of JVM work in companies large and small, I don't think I saw a single one that didn't use some form of AOP. That should tell you something.
In my experience that covers almost all the use cases, while it means that library code can still behave predictably and be understood and tested (e.g. you don't have to worry that a library upgrade is going to break your application, which you do if you've used AOP to define pointcuts deep in its internals).
In Ruby, for example, "alias_method", "define_method" and "send" are sufficient to implement cross-cutting in a handful of lines of code, and at least one generic gem exists that provides basic cross-cutting in a generic way.
But it's very rarely done because it's generally cleaner to pass down code for the library to explicitly call.
I used to be very excited about AOP, but in effect while AOP can be great as a debugging facility (e.g. ability to inject logging at any point for example), when you have control over the design it tends to be better to explicitly build code so that it is designed to accommodate user provided functionality. In Ruby that tends to involve e.g. the ability to pass in blocks a lot of places.
To be fair, there's something to be said about a seasoned/documented library versus the cooked up micro-framework/function that a random coworker invented in Scala to do the same thing, and which might be implemented differently in each project in your company!
Source: I'm about a year into Scala after many years of Java. I like a lot about Scala, but 4/5 places where we have "plain old code" where in Java we'd have used an open source library, the "plain old code" is painful to read/reason about.
But to answer the question, I'd use a monad, something akin to treelog ( https://github.com/lancewalton/treelog ). Apply the wrappers at the points where you want them (and they'll be visible in the type), and then the tracing is naturally threaded through in a way that's visible but not intrusive (basically just the difference between <- and =, so when you're reading the code you can see it but it doesn't get in the way of reading the business logic). Maintaining it is easy because it's just part of the type of everything, so automated refactoring in your IDE will do the right thing; testing is easy because everything's still just a function and a profiled computation is still just a value, but you have access to the profiling trace in a normal way as a first-class value in the language.
I don't have a problem with AOP that doesn't change the functioning of the code. I have a problem with AOP that changes the functioning of the code, which is the overwhelming majority of AOP that I've seen in real life.
> Now, certainly if you use the wrong tools for the wrong job, you'll shoot yourself in the foot. But there are many powerful tools that we shouldn't eschew just because we fear using them wrong.
But you don't need any of the dangerous functionality to do profiling. So that's not a valid argument.
Advice/AOP is a pattern that can fit (if you want it) in dynamic languages as well.
Look at https://python-aspectlib.readthedocs.io/en/latest/ for an example of using aspects with Python.
You say that like it's a good thing. When there's a bug to look into, grepping strings from the log is one of the first steps (the first step if you don't have a stack trace), and I want that string to be where the problem is. Having the logging in with the rest of the code is an inevitability anyway, unless you only want logging at function boundaries.
> In fact, any cross-cutting concerns - like transactions, can be done this way.
That's another thing I want to be explicit: if I have some code that is going to do a bunch of updates, I want it to demand a transaction as part of its interface. I don't want it hoping it gets executed in the scope of a magic layer that will handle the transaction.
The common theme with AOP is that it turns stupid-simple code into an opaque mess spread across a dozen classes that's far harder to maintain.
The Java people are describing here is alien to me, and I code Java for a living at the moment.
you're looking at enterprise java, not regular java
I'm writing microservices using a relatively lightweight, straightforward and explicit web framework (sparkjava, which is more of a library than a framework really), modern language features and no AOP...
(Oh, unless you mean the Spring crowd are enterprisey, in which case I take that all back.)
- paid per class
- sought to reduce the number of executable lines per class to 1.
Nope, that's rather OOP taken to extremes. Java does that by enforcing everything to be a class.
It's the unruly mind coming up with generalised words.
The class loader is an important entity for a Java app. And you can have more than one (although this is advanced voodoo already). So, an instrumented class loader.
This is more than infuriating, it can actually cost you a lot of time if you're not careful about evaluating the framework.
The conclusion was "the rails-not-java club is best" and then everyone worked backwards to find believable reasons why. At the impromptu pitch meetings "minimalism vs. complexity" took off.
Now that the rails club is out of fashion and the k8s club is in, we'll invent reasons that over-engineering is better than deployment automation that "just" sticks a .jar in a .deb and apt-get upgrades a handful of non-virtualized servers colocated at a local datacenter.
"Right. Fuck you. Fuck your lack of hammers. Fuck your factory factory factories. And fuck your store. If you decide to pull your head out of your ass, I'll be over here, duct-taping a rock to a stick so I can actually hammer an actual nail into an actual piece of wood."
I'm reminded of the HGttG scene regarding the display department.
Note to anyone actually trying this: if you have a drill, drilling a hole in a piece of wood and sticking the stick (preferably dowel) through works better; if you need more weight, attach the rock to one side of the head, and a coin or other patch of metal to the other.
Also (if you have the space for it): adopt a lathe/smelter/set of chemical-refining glassware/etc today!
Spring used to be a bad example, yes, but not anymore. They really put the thought and work into fixing its design. Right now I can write an app __completely__ independent of Spring. In fact, that's what I'm usually doing. You can do Clean Architecture with Spring now. Please keep your facts straight.
This claim is so bad.
You could look at the name AbstractDestinationResolvingMessageTemplate, and the useful information you'd gain is that it's a message template. You would still not know how to use it; you'd still have to do an additional step like checking the documentation on its usage. The ratio of useful information to text is small, and it doesn't help.
I'm 95% sure that all those people complaining about Spring never read the reference documentation. I did, and whenever I bump into a problem I have an idea about how to solve it. Besides reading it only takes a few days and you're set for years.
Why does anybody expect to be able to work with a technology if they don't bother reading the documentation? This is like complaining about not being able to drive a car when you sit down in the seat without training.
You're throwing out some very large assumptions as well.
Inheritance: You must predict how people will use your class. Make a mistake and they're screwed and need to resort to all kinds of hacks. (Like frameworks)
Composition: People will use your class if it fits their needs. They can easily wrap the code, tweak or simply discard it. (Like libraries)
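A minimal sketch of that contrast in JavaScript (the class names here are invented for illustration): with composition, a caller who needs different behavior just wraps the class, rather than depending on the author having predicted the right extension points.

```javascript
// Library-style class: does one thing, predicts nothing about its callers.
class Greeter {
  greet(name) {
    return `Hello, ${name}`;
  }
}

// Composition: wrap it to tweak behavior without touching its internals
// or needing the author to have left a hook for this.
class ShoutingGreeter {
  constructor(inner) {
    this.inner = inner;
  }
  greet(name) {
    return this.inner.greet(name).toUpperCase() + '!';
  }
}
```

If `ShoutingGreeter` ever stops fitting, it can be discarded without unwinding an inheritance hierarchy.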
Anyone know what I mean?
Edit: to explain my position a bit, what I observe with people that are easily offended is that they can't seem to separate what the character is saying from what the overall piece is saying.
Just for example, think of The Office. Michael Scott says a lot of horribly sexist and homophobic stuff right? And we as the audience frequently laugh at it. But I don't think anyone in their right mind would say that The Office is sexist or that the audience that watches it is. Because we understand that the humor is that Michael Scott is saying something really inappropriate but he's also clueless and naive.
In this piece, the salesman character makes a dark joke. That is part of the character of the salesman, along with other aspects that are revealed. If you personally don't find it funny -- maybe you just don't like dark humor -- hey, I won't tell you you're wrong, humor is always subjective, but I don't see anything offensive about this. It's not advocating for violence it's simply adding a weird dark personality quirk to an already very weird character.
What I find with the easily offended crowd is all they look for is "bad thing involving a protected group in here? OFFENSIVE!" and completely ignore the actual context of everything else. It'd be like if you took a "that's what she said!" joke from The Office and completely ignored the context of Michael Scott's character.
You don’t have to be a bad person to have a dark sense of humor and you don’t have hate women to find the joke in the hammer factory story funny.
But I recently saw "Don't F*ck With Cats" on Netflix, and if it had been a joke about animals I'd also have been jarred (because in my mind it would no longer be absurd).
So I guess it depends on what the media makes us fearful of right now.
An AbstractEmotionSensorSensorFactory would be required to unwrap that.
Moreover dark humor has its place. Indeed 2 of the biggest comedies of the last 20 years used exactly this kind of humor (Seinfeld and The Office). Even to this day some of the most acclaimed comedies have humor in this vein (Veep and Curb Your Enthusiasm for eg).
It's that kind of joke that at best will make you look like a moron and ruin the mood... It's especially sad since the post is otherwise interesting.
You’ve got it backwards. No “psycho” would consider it funny; they would just consider murder something normal one could conceivably do with a hammer. There is nothing funny about it, to them; it’s just normal.
The murder joke was only funny to those people who consider murder something utterly unthinkable. It is its very outlandishness which creates the humor.
If it is no longer considered funny today, it can mean a few things: Either more people now believe that there are more “psychos” than they thought previously, or an extraordinary number of people have turned into “psychos”. Or possibly both.
This is why I think that masking outrageous things as humor is a losing strategy. Either it is funny, in which case the thing being masked with humor is so outrageous as to be completely niche, making the secret signal irrelevant, since so few people receive it. Or, the thing is such a relatively common stance to make it no longer absolutely unthinkable, which makes it no longer funny, which removes the mask.
(Also, how do “white supremacists” fit into this? The original joke was about murdering an ex-girlfriend. This might possibly be generalized into misogyny, but I fail to see the connection to white supremacy. Please don’t fall into thinking that all people who hold opinions which you don’t like also hold all other opinions which you don’t like.)
I wish even more people were willing to write down a discussion of alternatives and their trade-offs. Instead of "Google does it this way, so it's obviously the way everyone should do it"... a reasoned discussion of the pros and cons. Good to see.
This was the blog equivalent of someone sincerely saying “Pardon my language, but gosh darn it that man is a jerk!”
I’ve spent the past 4 months learning rails and it makes me realize how much I didn’t appreciate about Django and its magic compared to rails magic.
If I surrender this to a framework, there are a lot of decisions I can't make with regard to performance, and I have a lot less certainty about when and in what order exactly things are executed.
There are of course some exceptions, but in general I want libraries to provide me simple, synchronous functions, and it's my job to figure out how to spread them out over the hardware.
For instance, Laravel is very structured: ORM, auth, routes, controllers, etc., where Express.js lets you do whatever.
And almost all Express.js or other Node.js projects I've joined were a mess and had security issues; something as simple as CSRF tokens, or not allowing redirects to external domains, is often forgotten or not fully implemented.
Doesn't mean I don't sometimes prefer a library approach, but the developer(s) need to be a lot more qualified.
In my opinion the productivity benefits of these kinds of frameworks evaporate and go negative pretty quickly once you move very far beyond what they provide out of the box.
I also use NestJS, and there it seems updates to how a field is resolved have made it tons faster, purely looking at the trace.
And I know that it won't be as fast as regular old JSON due to the checking, but that slow? Must be something someone can do :)
Frameworks tend to avoid any boilerplate in the imperative core but then multiply it throughout the rest of the project. Possibly due to chronology.
Also, I don't see why one would even consider using a framework if squeezing out every bit of performance from your hardware is that important. I can't imagine writing a Web app without some framework in the interest of time it takes to develop. Then writing a hundred of the same stuff again? You just have to ensure you don't use a bloated framework which is akin to not choosing a bloated library. Unless you are working on a device layer, it's not our job to spread things over the hardware. Unless you have direct access to memory, you can't do this anyway with most of the languages designed to develop apps. I think the primary goal of an app developer is to implement the business logic correctly and securely in a reasonable time. I agree with the OP that libraries should be preferred but I don't see a problem especially with performance in the domains frameworks are used.
(I agree with your main point, and the above fact causes a quasi-allergic reaction in me which causes me to avoid GUI programming - despite graphics being one of my favorite topics.)
I can usually patch functionality together from various sources but understanding how to break a process down if you haven't done it before is often very challenging.
For example - Django taught me how to structure server-side code that needed to handle input from both a database and a browser and potentially an API in a way that keeps things nicely layered.
I'd done all this before with PHP but my architecture was a dogs dinner. I could structure my own code nicely now - but only because of the experience I gained from using an opinionated framework.
But is a framework necessarily bad or inferior to a library? I would like to present React as an example. React heavily regulates the control flow for its developers, leaving several specific hooks to let you control the timing of when your code will trigger.
But React is an excellent piece of software. And assuming, in a perfect world, everything we interact with is more or less functional or self-contained, then timing doesn't really matter, and uncertainty is thus controlled.
In the end, the lesson here seems to me to be that making good frameworks is much harder than making good libraries. For libraries, you need to make sure the building blocks are solid, but for frameworks you are defining a whole world for people to visit.
Idiomatic React has completely changed several times over the course of a few years. This shows that it did not anticipate people's needs well enough. And React is in the fairly enviable position of receiving corporate backing to the tune of a million dollars per year.
Now consider tools like Redis or Postgresql. You can interface with them from just about any language and you can go years without your usage of them ever becoming unidiomatic.
Framework <-> library is a spectrum, of course, but as an obvious indicator of React’s location on that spectrum—you can feasibly use it inside, say, an Angular app to render a particular third-party widget. At its core, React is a super-efficient unopinionated render() and tools for constructing its input. On its landing page it establishes right away (IMO, correctly) that it’s a library.
The reverse, bringing a single Angular component into an otherwise React-based webapp, is basically an oxymoron. Angular will generate e2e test scaffolding for your entire project, prescribe where to query data, expect you to follow a particular file layout in your codebase, and pull in enough machinery to rule out any auxiliary uses.
FWIW, it _is_ also possible to render Angular code inside of React  (but again, this falls well into non-idiomatic territory for both Angular and React)
Idiomatically, React is the only thing dealing w/ DOM in an app, and the way it's used is rather frameworky ("don't call us, we'll call you"). And when it's used that way - which is the vast majority of the time - the used idioms do vary heavily depending on when the code was written.
Tying this back to what I mentioned about postgres: _within_ a React codebase, you need to be aware of the _semantics_ of the system: performance is achieved through reasoning about the semantics of things like shouldComponentUpdate/memo and even object identity, and one needs to understand the semantics of stale closures and useCallback, etc. This is similar to having to understand how various ORM idioms translate to underlying SQL queries. By comparison, the cost of any given jQuery idiom tends to correlate more directly with the underlying semantics, just as the cost of a raw SQL query does - i.e. it's pretty obvious without context how expensive `$('html').html(html)` is, just as it's more obvious what the performance profile of a CTE vs a subquery is, in comparison to whatever the high-level ORM idiom is (assuming, of course, that you know SQL).
Of course, none of these reasons make them bad tools. Quite the contrary. We are about to ship a React + Postgres + Redis product. They all work great together for the very fact that, like good libraries, we can borrow from each where needed.
Just because idiomatically React is the only thing interacting with the DOM isn’t an indictment...it just means React is really good at that and most people would rather use it exclusively for DOM manipulation. The same way that Postgres is often the only data store, not because it forces it upon you, but because it’s really really good at it.
I don't think with React as distributed (and per tutorial) you can avoid writing that first render() call, so no, “you call us” in this case.
The existence of third-party skeletons and templates that impose some structure or another and pre-write the initial render() call isn't making React a framework.
It's a feature, of course: being mostly on the library end of the spectrum opens up innovative uses of the efficient rendering. A framework by definition is narrow about what one can achieve without fighting it, otherwise (as I have to agree with TFA) its developers would drown in features and complexity.
As to running Angular in React, I stand corrected, last time I built on Angular when it was at version 5. I stand by the rest of my argument, though.
Programming languages are hard to categorize as either library or framework. They are libraries with a full set of control flow primitives, or frameworks that don't assume a particular flow at all.
React went from React.createClass -> class xxx extends Component -> Hooks.
But I don't think that means...change? The fundamental concept of React, that UI is a derivative of the state, stays the same. Class-based or Hooks (or just a really nice way to write a functional component), it stays the same.
For instance, I see a lot of developers and engineers with six months and more of experience with functional components still misunderstanding the point of all of this. And these are smart people at high-tier companies.
The main thing being the meaning of having dependencies for useEffect and other hooks.
That said, it isn't a smooth transition going from explicit lifecycle methods in class based components to useEffect hooks in functional components. But it refines the quality of the thinking, and the quality of the code overall, imo.
Specifically, how are useEffect dependencies complex? You list the variables that, when changed, should trigger the hook to run. What am I missing? And I'm not claiming no one struggles with hooks. But people also struggled with the lifecycle methods of React classes. In almost every case I've seen this, I've asked the developer to read the React docs. That's usually enough. I'm guessing there is some other underlying, more fundamental issue if someone is still struggling with hooks after six months of hand holding.
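For intuition, the dependency check amounts to a shallow element-wise comparison of the previous and current arrays (React documents this as an Object.is comparison). A rough sketch of that idea - this is an illustration, not React's actual source:

```javascript
// Rough sketch of how a deps array gates re-running an effect:
// the effect re-runs only if some entry changed, compared with Object.is.
function depsChanged(prev, next) {
  if (prev === null) return true;          // first render: always run
  if (prev.length !== next.length) return true;
  return next.some((dep, i) => !Object.is(dep, prev[i]));
}
```

This is also why a fresh object or array literal in the deps list re-triggers the effect every render: each literal is a new identity, even if its contents are equal.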
This is 100% a result of how react (ab)uses the host language semantics to implement their DSL to describe components and has nothing to do with the conceptual nature of functional components themselves.
It's relatively easy to imagine a design that doesn't require the dependency lists when capturing a callback closing over props or state.
Not saying that design would be better or worse, just that it could exist and still leverage concepts like FCs, hooks, effects etc.
The point is that more people, projects and even frameworks can use the library you write. If you write a framework, people have to drop whatever it is they're using and go with your framework. That's bad, unless your framework is the one to trump them all, which it probably isn't.
Which is what should be expected.
The point of a framework is to not only provide higher level abstractions to rapidly put together a certain type of application but to also provide a certain approach or set of approaches (a "framework") to follow as a guide during the process.
This means some generalization should exist, but if a framework is too abstract it's really no longer a framework... it's a library.
A library is supposed to simply provide useful utilities/lego blocks to assemble however you want and not necessarily give specific guidance or patterns.
Certain common patterns of usage may arise as a sort of emergent property based on fundamental library choices/structure, but it's overall pretty flexible.
Another way to put it is that for the web you're already pretty heavily invested in the "framework" of HTML/CSS/JS whether you like it or not. React is essentially just choosing another baseline framework to replace it, not adding a framework where none exists.
When something is a library, your code calls functions on it.
When something is a framework, it calls functions on your code.
which has a strong association with "framework" thinking, but I think it's still useful to treat them as distinct concepts
From this perspective it would be interesting to see some rationale for why libraries not frameworks.
I would say that libraries are not the opposite of frameworks at all. You can have a library which contains a set of framework-classes.
i guess it's about frameworks being "the main thing happening". i.e. a framework controls the whole program's execution, occasionally yielding control to your code. which is a bit wordier to be sure :)
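That control inversion is easy to show in a toy sketch (everything here - slugLib, runFramework, the handlers - is invented for illustration):

```javascript
// Library: your code drives, calling into the library when you choose.
const slugLib = {
  slugify: (s) => s.toLowerCase().replace(/\s+/g, '-'),
};
const page = slugLib.slugify('Hello World'); // you call it

// Framework: it owns the main loop and calls your code at points it
// defines; your handler is a plug-in, not the driver.
function runFramework(handlers, requests) {
  return requests.map((req) => handlers[req.route](req));
}
const results = runFramework(
  { '/hi': (req) => `hi ${req.user}` },
  [{ route: '/hi', user: 'ada' }]
);
```

In the second half you never decide when your handler runs - `runFramework` does, which is the "don't call us, we'll call you" part.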
When something is a framework, it calls functions on your code AND it does a lot of the heavy lifting for you.
I studied Frameworks in my Software Engineering degree, but really had no appreciation for them until developing large software in WebObjects.
Until you've extensively used a well-written Framework, libraries seem great. After, you realize a library just helps you here and there, but a Framework is so much more.
The association here is that frameworks are "form". One might intuitively think that form/frameworks is/are constraining rather than liberating, but per the sculptural analogy the roughed-out shape provided by the framework liberates you to concentrate on the details rather than suffer from analysis-paralysis when there is too much freedom of approach (just a large block of marble from which you might sculpt literally anything).
But it depends on the scope and nature of what you are trying to do ...
If your problem fits within the scope/form of the framework, and your value-add is in the details, then using a framework, all else considered, makes sense.
Alternatively, when you are trying to do something outside of the norms of gentrified frameworks, then the flexibility of libraries (or ultimately a blank sheet of paper) is what is called for.
Of course, the devil is in the details... a poorly designed framework may be TOO constraining and not provide enough flexibility to be widely applicable, while poorly designed libraries may be overly restrictive due to poorly designed APIs that don't play nice with other libraries or the data structures you'd like to use.
I like to think of frameworks like a kitchen. The chef sets the menu, and everyone works to produce it the way the chef envisioned.
Sure, sometimes you will run into things where you fight the framework, but a good framework normally allows a "raw" way of doing it "by hand" instead of the sugary framework way that "just works (most of the time)". But most of the time it does work out fine, and you can hop between others' projects/parts of the code and have instant familiarity with the intent.
Your framework could tie you to a specific form, forcing you to implement your FAAS by posting each request to a REST service, or forcing you to implement your stream processor by posting each event to a FAAS. Or you can liberate your code from form by making sure that nothing important about it depends on a form-specific framework.
That said: good, modular, well-developed frameworks evolve alongside their use cases. Django with DjangoRestFramework is an excellent choice for many API use cases, and the migration path is about as painless a major architectural overhaul can be.
If you've fallen into the trap of thinking a specific form is eternal -- for example, if you've subconsciously identified "our back end == our REST services" and therefore "data serialization/deserialization, metrics, and data storage access should and will be done only in the REST services" then you might have relied on a REST service framework for that functionality. Then when you want to move functionality to an AWS Lambda or a Spark streaming job, you find yourself forced to choose between running a REST framework inside the Lambda or Spark streaming job (which will be hacky, if it works at all, since frameworks want to own the application lifecycle and be configured via specific mechanisms) or writing and maintaining a duplicate implementation of all your data serialization/deserialization code, metrics configs, S3 configs, etc.
I don't mean to say it's a disaster if you end up doing that, but it's a cost that will tend to hold you back from doing the right thing and make you persist with hacks and band-aids longer than you would have if you were free to change.
> good, modular, well-developed frameworks evolve alongside their use cases. Django with DjangoRestFramework is an excellent choice for many API use cases
That's a very modest change. It's still a long-lived service doing request/response over HTTP.
> Now, if there's a major organization backing the framework, maybe the calculus works out. Google can back Angular...
In my experience, even large orgs don't necessarily have the capacity for building out full-featured editor integration and build tools. Google has a great team managing Angular and it still has no tool for performing codeshifts. The editor integration to date still leaves much to be desired.
With the number of years and man-hours dedicated to this point, I'd imagine these things would've happened by now, or they may not happen at all. Lately, Angular's push has been to 'make it faster', which helps with developer productivity, but not much movement on the developer experience (better editor integration) part.
Agree with the thesis, but it's still understated how expensive creating your own DSL is.
While XML certainly deserves some blame, the amount of tooling around it is unmatched.
If I had to pick one that's really going underused, SQL would be it. A string source file could be dissected into a number of tables and edited like any other CRUD system, then re-serialized as a string. That we don't do it this way is more an accident of our current beliefs about which means of implementation are appropriate for various tasks - queries on relational data are "slow" compared to bodging buffers and recursing through DAGs. But if we want nice developer tools we should probably start acting like data integrity is more important. Slopping dependencies around the file system and relying on a separate bespoke toolchain to reconstruct their relationships as part of a built artifact is a thing we ought to outgrow at some point, but it's also so intrinsically accepted that the paradigm is hard to escape.
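A toy version of the dissect/edit/re-serialize idea (function names invented for illustration; a real system would presumably use an actual relational store rather than arrays):

```javascript
// Dissect a source string into a "lines" table of row objects...
function toTable(source) {
  return source.split('\n').map((text, i) => ({ line: i + 1, text }));
}

// ...edit rows like any other CRUD system...
function updateLine(table, line, text) {
  return table.map((row) => (row.line === line ? { ...row, text } : row));
}

// ...and re-serialize the table back to a string.
function toSource(table) {
  return table.map((row) => row.text).join('\n');
}
```

The round trip is lossless, and any query or edit in between operates on rows with integrity guarantees instead of on an opaque string.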
In XML-land, this has been re-discovered as "invisible markup" a couple years ago, going full-circle in the "dumbing down" of SGML into XML (which was seen as progress and win for simplicity at the time when XML was subset from SGML).
I would also disagree that the defining characteristic of SGML/XML is nesting of tags so much as it is regular content models. A content model such as
configparam: configparam-name, configparam-value
But yeah, SGML/XML is made for markup, not necessarily config files and service payloads. And I agree SQL could be improved by treating it like an ordinary programming language, with the same focus on SQL artifacts wrt syntax highlighting, static checking, and testing (though probably not in the way you suggested, by reading-in SQL script files as a primary means to serialize DBs :).
That tooling was created as a result of XML being difficult to use otherwise, not the other way around; in much the same way that some languages like Java seem to depend on an IDE to be even usable, far more than others.
And this conversation happening on web platform... Is it usable? Browser complexity is enormous.
I'd argue XML tooling is a joke. Can it make a closure?
JSX, being a sugared syntax for the React.createElement API, likely doesn't need a team of 20 for editor integration, whereas Angular templating language has more to it: pipes, directives (shorthand and non-shorthand syntax), property bindings, expressions, etc -- and templates need to be evaluated in context of their corresponding TypeScript Component.
It is the simplest thing I could conceive (to build, not to use - I plan to provide more convenience, but the selling point is the ability to customize everything). It's template-based and without a virtual DOM: during render, each variable used is observed, and when it changes, the element that uses it is updated directly, without recomparing the whole tree. It's still in progress.
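A stripped-down sketch of that fine-grained approach, with a plain object standing in for a DOM element (everything here - observable, bindText - is invented for illustration): the render binds the variable once, and a later write updates only the element bound to it, with no tree diffing.

```javascript
// Minimal fine-grained reactivity: no virtual DOM, no reconciliation.
function observable(value) {
  const listeners = [];
  return {
    get() { return value; },
    set(next) {
      value = next;
      listeners.forEach((fn) => fn(next)); // update bound elements directly
    },
    subscribe(fn) { listeners.push(fn); },
  };
}

// "Render" once: bind the variable to an element's text content.
function bindText(element, obs) {
  element.textContent = obs.get();
  obs.subscribe((v) => { element.textContent = v; });
}
```

After `bindText`, calling `set` touches exactly the subscribed elements; nothing else re-renders.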
const elem = React.createElement('div', null, 'Hello World');
It's 35.9kB gzipped, although "too big" is subjective. I may be unimaginative, but over my last 5 years or so with it, there isn't much I've wanted to change. As a library consumer, I don't personally mind if a VDOM is used or not, so long as performance is good enough not to ship a noticeably lagging UI to my users. Having experienced both React and Angular --one uses VDOM, the other not-- I haven't found this implementation detail to meaningfully impact performance, but I spent my time building straightforward CRUD apps.
I would consider sharing an example to Ari of something that's hard to do in React/Vue/Angular, and how it's made simple with AriJS. That might help elucidate where it shines, and the key problem being solved.
Because it is, in fact, just a library.
What's interesting is, of all the JS code I've written, the code that lasted the longest was pure business logic written in vanilla JS, separated from any library or framework. It's gone through knockout, angular, and react versions of the site, and people always try to rewrite it the new way, but it just doesn't work as well and they end up writing a thin simple shim to bridge the old and new code.
The advantages of doing that were better tooling support, since everything under the sun supported JS, but not necessarily JSX very well, and jsx isn't really all that important anyhow. Obviously that's not a factor anymore today ;-).
But I think for most devs out there, like 99% (yeah, I made this up) of them, React is actually React + ReactDOM; the latter, per the account of the official team, is like a runtime thing, which perfectly fits the description of a framework.
Nobody does React without JSX because that is the worst of both worlds, but if React had template strings (not necessarily built in, but at least as a first-party plugin) it would be a more viable option.
*Use* libraries, not frameworks
Unfortunately, framework authors make this difficult. It's understandable, but frameworks spend a lot of effort promoting their "pros" to developers and none on clarifying their "cons" (well, typically).
When you adopt a third-party framework for a project, you want to do an analysis of the longer-term implications of the limitations of the framework. Sometimes that analysis is easy. E.g., if I am making an "event" app -- like an app for "XYZ's 2021 Comedy Tour" -- there isn't a long term (though there, it makes sense to take it further and see if there's an even more restrictive "template" app that will still nicely meet requirements).
But if you are hoping to extend and build-out your app/project/solution over time, especially if you don't know details of how, up-front, it's tough to accept the limitations of a framework.
Well, when I'm building something opinionated, resembling a framework, I usually aim for something that handles 80% of the use cases, and then make sure that you can go around the framework for the other 20%.
Your "thing" should always have a way to get out of the way and fall back to the un-abstracted way of doing things. Then you don't have to predict the future.
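One hypothetical shape for that escape hatch (the function and option names here are invented): the opinionated helper covers the 80% case, and a raw callback lets the caller bypass the abstraction entirely.

```javascript
// Opinionated helper for the common case...
function formatReport(rows, options = {}) {
  // ...with an escape hatch: if the caller opts out, hand them the raw
  // rows and get out of the way entirely.
  if (options.raw) return options.raw(rows);
  return rows.map((r) => `${r.name}: ${r.value}`).join('\n');
}
```

Usage: `formatReport(rows)` for the sugary path, or `formatReport(rows, { raw: (rows) => /* anything */ })` when the 80% path doesn't fit - so the tool never has to predict the remaining 20%.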
However, I think to push this line of thought, it would help if we had examples of how to build significant applications without a framework. Ideally someone could provide a walkthrough of an app like that. Unfortunately I can’t volunteer myself: the last time I built without a framework, it was a mess.
It’s not that I disagree with you, necessarily. I’m just trying to extract a more specific claim from this post.
Personally I'd think of a web app, since this is one place the discussion comes up a lot, but I don't know that's a requirement (I also think the idea that you can build command line tools out of a handful of libraries with no framework is pretty widely accepted).
I had the unfortunate experience of building a static site using Nuxt recently which included Jupyter Notebook exports as html files. I ran into so many problems that the complexities of the framework quickly overshadowed the conveniences. Makes me wish I tried doing everything in vanilla JS instead - I might at least have learned something transferable to other projects that way.
The main thing with a lot of these small web libraries (I'd consider jQuery a library) is that they are incredibly easy to learn. You can be productive with jQuery within an afternoon. You can be productive with React within a weekend or two. These are not Java frameworks with Hibernate, XML configuration, database drivers, migration paths, etc. etc. You pick one up, do what you need to do, and forget it within a week if you so desire.
That said, if you use jQuery in 2020 with a new project, the vast majority of frontend devs I know would consider that a serious code smell (as in, you don't know JS and that's the reason you have to resort to jQuery). Check  for some info about vanilla JS.
1. trying hard to do one thing and do it well (and failing usually), but ...
2. there are many dependencies that you should allow your user to pass explicitly ...
3. and that’s the problem - you have 0% control over how the library is integrated but receive full responsibility for how well it works. Even libc could be very different and will not behave the way you’d expect it to.
The big problem of making everything a small well thought-out library is large integration surface + having to be flexible and accommodate for all potential cases of integration.
That’s why we mostly have hundreds of overlapping “fat” libraries - too much trouble and too much integration complexity to split them up.
Combine this with the built-in and easy-to-use testing, and I often find myself breaking out core functionality of my applications into libraries just because it is a sane way to keep things decoupled.
This is far harder in languages without a strict type system: you can do everything right and your users still abuse it.
A library that doesn’t have a stable C API is next to useless outside of its original language/community/etc., and I have been bitten by this a few times already myself (my great libraries in D’s std cannot be easily exported outside of D).
Do not repeat my mistakes - provide a stable C API (and ABI by extension) as soon as you can. You cannot make it safe, though; safety is simply not accounted for in the land of linker “technology”.
It all helps but never eliminates the problem completely.
I think Java bean mappers fall into this trap as well. On the surface, bean mappers look like libraries. But once you start customizing the mappings, you see that each library has created a complex configuration language. The problem is that configuring custom mappings takes about the same amount of space as writing a tiny method to handle a specific field mapping. The difference is that I have to read a bunch of documentation to understand the bean mapper, but I already know how to write Java.
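To make that concrete, here's a minimal sketch of the "tiny method" alternative. The entity and DTO classes are made-up names for illustration, not from any particular mapper library:

```java
public class MapperDemo {
    // Hypothetical persistence-side class.
    static class UserEntity {
        String firstName;
        String lastName;
        UserEntity(String first, String last) { firstName = first; lastName = last; }
    }

    // Hypothetical API-facing class with a differently shaped field.
    static class UserDto {
        String fullName;
    }

    // The "tiny method": plain Java, no mapper DSL or annotations to learn.
    static UserDto toDto(UserEntity e) {
        UserDto dto = new UserDto();
        dto.fullName = e.firstName + " " + e.lastName;
        return dto;
    }

    public static void main(String[] args) {
        UserDto dto = toDto(new UserEntity("Ada", "Lovelace"));
        System.out.println(dto.fullName); // Ada Lovelace
    }
}
```

Whether this beats a mapper's configuration language depends on how many mappings you have; for a handful of custom fields, the plain method is hard to argue with.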
At the end of the day, library vs framework boils down to imperative vs declarative. Libraries and imperative languages are composable and relatively easy to understand but can sometimes require a lot of work to accomplish difficult tasks. Frameworks and declarative languages require a lot of work to get right and are really only appropriate for problems that are well understood or formally described. They take on a tremendous amount of responsibility, but when done well they are incredibly valuable. There's a reason the world runs on SQL.
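The imperative/declarative contrast shows up even inside a single language. As a sketch (plain Java, illustrative values), the same computation can be spelled out step by step or stated as a pipeline that reads more like SQL's `SELECT SUM(x) ... WHERE x > 2`:

```java
import java.util.List;

public class StyleDemo {
    public static void main(String[] args) {
        List<Integer> xs = List.of(3, 1, 4, 1, 5, 9);

        // Imperative: spell out every step; easy to trace, but more
        // work as the query gets more complicated.
        int sumImperative = 0;
        for (int x : xs) {
            if (x > 2) sumImperative += x;
        }

        // Declarative: describe what you want and let the machinery
        // decide how to compute it.
        int sumDeclarative = xs.stream()
                .filter(x -> x > 2)
                .mapToInt(Integer::intValue)
                .sum();

        System.out.println(sumImperative + " " + sumDeclarative); // 21 21
    }
}
```

The declarative form hands responsibility to the stream machinery, which is exactly the framework-like trade the parent describes: less control over the "how" in exchange for a more direct statement of the "what".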
I think this is spot-on, and sums up one of my points in a way that's a bit closer to the heart of the issue. However:
> At the end of the day, library vs framework boils down to imperative vs declarative.
I don't think this is true at all. There's a correlation, maybe. Particularly in languages like Java that don't have great support for declarative stuff themselves, frameworks like Spring have served as a kind of workaround for that shortcoming. But if anything I would say pure functions lend themselves even more to library-thinking than imperative code does.
I have used Clojure full time for 11 years and it's nonsense to pretend that the ecosystem doesn't have problems that most Ruby, R, Python or .NET developers don't have at all or as often, by virtue of them having strong frontrunning frameworks. I'm not saying they're necessarily more happy or productive than Clojurists, but there is constant churn everywhere in the Clojure ecosystem, and it increases the cognitive load on every project I have ever done in the language.
Since you've been using Clojure full time for 11 years, don't you think there's a chance that you might be biased yourself? I mean, you've basically used it since it was only just released, and the library ecosystem is bound to be in flux just after a new language gets any traction.
I've been using Clojure/ClojureScript as my only language for more than 3 years. Currently on my third Clojure job. Prior to that, I was a heavy user of mainly Python, PHP and Java... all of them in anger (that's such a weird expression, but whatever).
I think your issues must be very tied to the data science domain? I've been following, though not actively using, the data science libraries in Clojure and it's true that the data science ecosystem seems quite immature compared to Python. Incanter was already DOA when I started using Clojure, Cortex came and went, Dragan's libraries are a sort of foundation that hasn't been tapped into yet, and now there's all this Python/R integration going on. Data visualisation seems to have converged heavily on Vega as of late.
But I'm doing web development and parsing. I find that the ecosystem is very stable for that sort of usage. Reagent is the default for frontend, Re-frame the most popular state-management library. There's an ecosystem around these two libraries that many tap into. There are basically only two popular SQL libs available (hug and honey, both using jdbc at their core), so you just pick whichever one fits your use case the best. It sucks that you went through 4 different ones, but I don't think that's very representative. Ring is the standard protocol for most web backends. There seems to be a convergence towards Datomic-style Datalog for newer Clojure databases. Instaparse is the only game in town for parsing using grammars, and Clojure spec is widely used (although it's alpha) and can also be used for certain forms of parsing.
The only thing I've really been changing has been my tooling, moving from Leiningen/figwheel to Clojure CLI/Shadow-cljs. And I didn't have to do that, it was only done out of an interest to explore these newer tools.
Are you on the now largely deprecated clojure.java.jdbc or next.jdbc? Fun to rewrite all those serialisers to and from complex Postgres types. Also Instaparse is nice but dog slow compared to clj-antlr and I’ve had to write projects with both. There is no end to my safari park of pet peeves.
Anyway, I love Clojure and you’re obviously allowed to be happy with your tools. I wish you a long and productive career without grumpy folks like me sniping. But please don’t tell me my experience of this ecosystem is wrong or an outlier, because it’s really not. It is a core part of a culture based on granting power and flexibility to the smartest programmers. That comes at the cost of stability and productivity for other classes of programmer. You can’t optimise for everything.
As for jdbc, I don't use it currently, but did at my old job (along with honey sql). If you think you have to rewrite some DB access code just because there's a newer library out, maybe the real problem is not Clojure, but your mindset. Perhaps something to think about.
And again, this is every year I've used Clojure (e.g. the Shadow migration you yourself went through on what I can only assume was one tiny web app). I'll happily stop patronising your 3 years experience if you stop gaslighting mine. You'll be telling people you never get type errors and you find Clojure's stacktraces really readable next, all the greatest hits. :)
I guess you think your attitude is a mark of superiority, but in reality you sound about as intellectual as Holden Caulfield in The Catcher in the Rye.
You can take personally any criticism of your favourite language and its proponents (I'm still one of them, honest!), or you can approach technology with a degree of scepticism. Really no interest of mine, and it's clear the feeling is mutual. Good luck in your future endeavours (and for what it's worth, I used to work for a Clojure NLP startup so I do genuinely mean that - wouldn't it be nice if there was a nice cohesive framework for NLP tasks in Clojure?)
> The trouble with frameworks is that to use it, you have to match your application to the language of the API. If you’re not careful, you end up coupling your system to the framework. This means that you create a cycle between yourself and the framework by architecting your component as an interpreter from F_o to F_i.
You can avoid this cyclic dependency by marginalizing the framework at the edges of your application, as suggested in the post, but that strikes me as much easier to do with libraries.
You could say that a type system also imposes limitations on the programmer. Is a type system a framework? Would you rather use a library than a type system?
I think the point is rather that making a framework is much, much harder than making a library. So it makes sense to first create a library to solve a problem, if that is feasible. Once you are reasonably comfortable with the choices you made while creating your library, you might think about turning your solution into a framework. You should do that only if there is an actual benefit in doing so.
I think frameworks live on a higher abstraction level than libraries. If you need that higher level, libraries are a poor fit. If you don't need it, why make your life harder in going there?