Don't put it together. Learn all this stuff, understand it deeply. Then use it when you find it improves whatever you are working on.
If you start putting it all together you will most likely only confuse other developers, especially juniors. Don't do it. The point of gathering all this knowledge should be making the application simpler and easier to maintain. So introduce stuff gradually and only when you find the added complexity pays off.
As a junior who previously got thrown into the deep end on one project, where I was essentially useless for way too long, but who now has actual guidance and does one part at a time: I can only second that.
Also: train your juniors, please. Getting thrown into the deep end REALLY hurts the speed of learning.
Juniors should not be learning the same things as senior developers. Imagining juniors as "mini-seniors" is IMO wrong.
There needs to be a progression: you go from focusing on low-level stuff to focusing on high-level stuff.
For any single thing, at least at the beginning of your career, you first start with copying, then you imitate, then you get proficient, then you hopefully come to understand it on a deep level -- mastery.
Junior devs -- learn to use the tools. The focus of juniors should be on getting productive within the team, and this means being able to do a significant volume of tasks that require the least high-level knowledge -- repetitive/routine tasks, low-level tasks, localised changes. As far as the high-level concepts go, juniors are not expected to contribute to or even understand them -- they are expected to be able to work within the project structure. It is the tech leadership's job to make sure that juniors can be productive even without understanding high-level concepts, strategy, etc.
Senior devs -- are expected to know the tools and learn the high-level design concepts. They are also expected to have enough experience to be able to tell right from wrong. The focus of senior developers is to understand and be able to productively use high-level development concepts. This means the ability to design single applications, modules, interfaces, etc. Work productively with stakeholders. Work productively with juniors. Etc. The senior level is when you should start understanding programming patterns and paradigms, learning when to use them, etc. Seniors are not expected to be able to solve all problems and still require some supervision. Seniors are also not expected to be able to form or understand strategic concepts.
Tech leader / principal developer -- are expected to know the tools, know high-level development concepts AND also be strategic about it. The tech leader, however intelligent he/she is, is also expected to have immense experience across various types of projects, so that they can immediately tell right from wrong and use that experience to put forward solutions that worked in the past and critique solutions that are known not to work. The tech leader is the person that needs to be able to understand and debug the entirety of the technical situation, form a vision of where he/she wants to be in the future and put together a coherent strategy. For example, the tech leader is the one responsible for understanding blockers to individual contributor productivity and finding a strategy to improve and resolve these issues. The tech leader IMO should also be a person that is able to solve ANY problem, technical or otherwise.
I would say a tech leader does not always know right from wrong, but needs to be able to make the decision and own the consequences, especially when their decisions affect the happiness of other developers.
I like your distinction between senior devs and tech leads. I think a lot of rewrites come from good senior devs faced with an existing system with problems and knowing that they wouldn't have created those problems if they wrote the system themselves. For example, if the system has code quality issues or a poor internal architecture, the senior devs think, we write good code and make good design decisions, so why should we struggle with this bad code and bad design? We should rewrite it so that it reflects our quality.
I was certainly guilty of this at an early stage in my career. As soon as I could write good code and make good design decisions, I thought the future was going to be easy. I wasn't going to be struggling with poorly written, badly designed systems forever. I'd be working on awesome systems that reflected my standards, and everything would make sense.
That was a hard dream to give up.
Now I look around and think, what one thing can this team do to make this better, while still delivering software for the people who need it?
In my experience rewrites typically start with senior devs (or tech leads who are not really leads but rather senior devs with better pay). The rewrite starts when devs are able to force/guilt/persuade the manager to do it.
And also in my experience rewrites rarely succeed. There are a multitude of reasons, but the best way to put it is that devs don't usually know what they are getting into (they only have part of the picture) and they run out of stamina somewhere in the middle of the project. They also never learned what caused the previous project to fail (they have too limited a view to understand it) and so they tend to repeat the same mistakes.
One project I joined had lost its entire development team. New devs came and demanded a rewrite. The manager allowed it. The rewrite failed (of course). The cause: the internal customer was very intrusive and demanded creative control over every part of the development process, including approving code reviews, etc. The lack of expected progress on the features the customer demanded only made the situation worse and led to more arguments, and in the end higher management decided to kill the rewrite.
Again -- devs had only a partial understanding of what caused the failure of the previous project. They looked at badly written code and surmised the previous developers were incompetent. The reality was those guys were competent but were completely demotivated by the inability to get anything done with the internal customer, and so did not care about quality one iota.
My solution to rewrites is to avoid them at all costs and only do rewrites under exceptional circumstances, when it is absolutely clear that refactoring is pointless.
So what do I do?
1. When setting out on a project to improve your system/codebase, it is important to think about your ability to finish the project. This is going to be dependent on the willingness of your various stakeholders to pour in money (or tolerate a slowdown in feature development). The best way is to get a credit of trust early on, and the best way to get this is by showing some early results that the stakeholders care about, especially if it is something they wanted for a long time but could not get.
2. No results will be worth anything if you don't get some visibility. Put up metrics for everything that can be reasonably measured and the customer cares about -- reliability, performance, turnaround time for defects/changes, etc.
3. Those early results can be anything, but when I come into a project I try to find out what the biggest issues are and to locate one where it is possible to get a substantial improvement quickly. This might be something like fixing unreliable behaviour, improving performance, or delivering a particular feature that has been asked for for a long time. It is important to select carefully -- you are working on an alien codebase with a new team. The worst that can happen is you promise a lot and deliver nothing.
4. Once you get a credit of trust, you spend it on improving development efficiency. Overall, this is the one thing that is most important to get done early, but at least early on it is completely invisible to stakeholders -- and so you need to use up a bit of your credit.
5. Development efficiency is highly dependent on the project. Automated builds? Faster build times? Automated end-to-end functional testing? Being able to set up your private development environment quickly? Getting rid of some stupid hoops you have to jump through to modify the app or get your piece of code through the process? Refactoring a couple of things that are causing a lot of additional work for every change? The key is to look at the actual process and understand what really is driving inefficiency -- as tech lead I always pair program with developers to get an understanding of what the situation really is like.
5.1 On one project I noticed developers spent a lot of time on internal requests from the customer that were nothing else than changes in configuration. I wrote a couple of modules and a small UI to let the users self-service, and suddenly eliminated about 1/5th of the development effort (with about a week of effort on my part). I also spent some time with the team talking about the importance of self-service and how it helps reduce unplanned work that interrupts their development.
5.2 Pair programming is ABSOLUTELY the best way to get to know your team and for the team to get to know you. You want some respect as tech lead? No better way than to actually stick with them and show you can do stuff. This is going to be very important for you later.
6. At this time you should be thinking about establishing a basic improvement process -- at the very least get your team to understand what is wrong and right, and have retrospectives with the team to figure out what the problems are and how to fix them. You don't want to bog your team down with full agile, but you want to start building it from the ground up by introducing a basic improvement loop, transparency, openness, etc.
7. By improving development efficiency you create additional development throughput, which you then spend on more improvements, but now those improvements should start providing visible results to stakeholders. Faster turnaround time on changes, more reliable and predictable deployments, a more reliable and faster system, etc. Here you want to be tracking how much of your resources you spend on internal improvements (code refactorings, development tooling, etc.) vs things that the customer cares about. It is important that the customer is always satisfied, because this is what gives you the freedom to make whatever changes you want.
And if anybody wants it, I am always happy to help with a problematic project:)
One critical point I forgot to include is that cognitive load of your team members should be treated as a precious resource. Whatever happens, you need to make sure you don't waste this resource on unneeded stuff -- cognitive load more than anything else will determine how quickly stuff can be changed.
Remember, if you introduce anything, people will need to take time to learn, adjust and then understand it. You can also count on experiencing at least a temporary slowdown.
So don't waste time on things that only marginally help with development. You definitely don't want to switch your project from Java to Kotlin if your goal is to get something done quickly! (Yeah, I have seen this happen in real time -- a new guy came to a project, "listened" to devs and switched the project from Java to Kotlin, with the predictable outcome that he was fired half a year later after disappointing progress on actual work.)
A learning project with lots of fast iterations where concepts get added over time. With feedback on each iteration from experienced devs where you are able to discuss why you chose x and what the tradeoffs are. At least in my case.
In my experience, when people look for a way to include something in the project it almost always results in more complexity.
A question like "how do I put it all together" signals focus on tools rather than focus on the product. When developers learn new things they frequently look for an opportunity to use them. This preoccupation with the tool they just learned usually comes at the cost of the product. They are looking for a way to add things to the project rather than looking for ways to remove unnecessary things.
The right state of mind for an architect/designer is to learn and understand paradigms, tools, technologies, etc. in order to have a library of solutions for when they are needed, and to be able to recognise when the right time to use them has come -- not to try to find a way to use something they just learned.
These are established and well-known patterns. The sooner other people get to learn them, the better. The seasoned developers have probably experienced the problems they solve, and the juniors might be saved from making the same mistakes as others have before them.
The argument that juniors will have a hard time understanding it can be made about anything you don't like / don't want. And then you can begin to lay it on thick. It's a convenient scapegoat for sure.
This is probably a syndrome of overhyped lead developers trying to "engineer" a solution without consulting with the team beforehand. They read a bunch of articles, decide that it's the "best" solution and blend everything together in a soup of tools that don't make sense. I've seen that happening many times with bad results every time.
Yeah... for some reason developers think that a higher position/salary requires them to put forward more complicated solutions. Nothing could be further from the truth.
My goal in life is to learn so that I can engineer systems and write code that is as simple as possible. It takes a bit of courage to put forward code that looks like everybody could write it -- which is exactly the point of getting more mature in software development.
One problem I have always had is that managers seem to reward people based on how complicated their projects are, rather than reward the effort put into making things simple. This creates a bit of a conflict of interest, but I personally resolved to do what is right and feel good about myself rather than play the silly game of who can overcomplicate the project the most and still get away with it. Half the time the benefits of my approach become apparent after some time on the project -- the other half I don't get enough time or get a manager who doesn't want to understand where the productivity comes from.
They exist. They are locked behind the firewalls of corporate America and Europe, and their engineers are generally forbidden to discuss it in public. OSS developers don't build software this way. Primarily because most generally useful (and therefore successful) OSS is not business domain-centric, and does not benefit from these patterns.
As one of those engineers that's forbidden to talk in public, I can assure you these patterns are extremely useful to keep valuable software viable, extensible, and maintainable for 15+ years. I just can't drop any specifics because IP lawyers are assholes.
I can't show you, but if you travel in Bangkok and use LINE Pay, then you're using one.
A combination of CQRS, event sourcing and Kafka.
And I agree with falkensmaz3, they definitely do exist and they work much better in terms of defining the business domains and the way(s) the business "Nouns" interact with each other.
The same things you do in a code base 3 years old. We've just made it fashionable to evolve a code base until we hit a wall, toss it in the trash, and start over with the next "modern" tech that will eliminate the problems of the previous code base. Rinse/repeat.
If the business still exists and is continuing to evolve 15 years later, why shouldn't the code base?
Most places I've ever worked? They just might not call it that.
Onion architecture, vertical slice, ports & adapters, hexagonal architecture, clean architecture, functional core imperative shell. These are all very similar, and tend to get reinvented over and over again.
> Can anybody point an important and successful system (preferably open source) that is built following Hexagonal, Onion, or Clean architecture?
Most of the projects I ever worked with either started off as DDD + Clean/Hexagonal architecture, or were refactored to comply with those architecture styles.
It's easy to understand why once you realize that those buzzwords are basically tags for a group of fundamental best practices that lead projects to become clear and easy to change.
To expand on this: I have not looked for it, and it may exist, but a deep dive into the high-level design criticisms, the benefits, and an explanation of its evolution in real open source projects would be really great.
I should have probably gone to Google for this before posting, because it is one of those things that should exist, so it probably does.
> - First, the system can’t neatly be tested with automated test suites because part of the logic needing to be tested is dependent on oft-changing visual details such as field size and button placement;
> - For the exact same reason, it becomes impossible to shift from a human-driven use of the system to a batch-run system;
> - For still the same reason, it becomes difficult or impossible to allow the program to be driven by another program when that becomes attractive.
Is any of the above a real pain in the project that you are building in 2022?
I very much doubt it is. And if it's not, then introducing such an architecture will bring more harm than benefit.
> I asked this question, because I do have doubts they exist. Especially, any modern one.
It sounds like you don't do much, if any, professional work.
> These architectures were introduced around 20 years ago to address technical shortcomings of enterprise systems at that time.
No, not really, and your personal assertion completely misses the whole point of DDD or any software architectural style you mentioned.
The whole point is to keep software flexible and accommodating of change, and in the process reflect a set of very basic software design principles that avoid/eliminate problems such as circular dependencies or poor testability.
> Is any of the above a real pain in the project that you are building in 2022?
Again, you don't seem to have much if any professional experience. UI testing, or any type of testing involving UI work either directly or indirectly such as accessibility/localization or any end-to-end test in general, is still an unsolved problem and a bane of software engineers.
Nevertheless, your comment reads like a strawman. The main benefit, and the whole point, of layered architectures is to make the bits that change very frequently easy to modify, while keeping the fundamental bits that only rarely need to be touched stable and minimally exposed. UIs tend to be the part of an application that is updated the most, so it makes all the sense in the world to adopt a software architecture which handles the UI as a separate external component, one which minimizes or eliminates fan-in and turns fan-out into a non-issue by depending on abstract interfaces whose concrete implementations are injected. These are not fads from the 1990s, these are fundamental software engineering concerns.
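To make the "abstract interfaces whose implementations are injected" part concrete, here is a minimal sketch (all names made up for illustration, not anyone's production code): the UI is an external component that depends only on an abstraction owned by the core, and the concrete implementation is injected into it.

    from abc import ABC, abstractmethod

    class PlaceOrder(ABC):                      # abstraction owned by the core
        @abstractmethod
        def execute(self, customer_id: str, sku: str) -> str: ...

    class PlaceOrderService(PlaceOrder):        # concrete implementation in the core
        def execute(self, customer_id: str, sku: str) -> str:
            return f"order placed for {customer_id}/{sku}"   # domain logic lives here

    class OrderScreen:                          # UI: depends only on the abstraction
        def __init__(self, place_order: PlaceOrder):
            self.place_order = place_order      # concrete implementation is injected

        def on_submit(self, customer_id: str, sku: str) -> None:
            print(self.place_order.execute(customer_id, sku))

    # Tests, batch jobs, or another program can drive PlaceOrderService directly,
    # without the UI in the loop.
    OrderScreen(PlaceOrderService()).on_submit("c-1", "widget-42")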
Alright, I think I have found the main point of our misunderstanding:
> The main benefit, and the whole point, of layered architectures is to make the bits that change very frequently easy to modify, while keeping the fundamental bits that only rarely need to be touched stable and minimally exposed.
The thing is - "onion/hexagonal/clean" architectures and "layered architecture" are not the same things.
Of course layers are fundamental - nobody questions them. What I do question are the 3 architectures mentioned.
For example MVC is also a layered architecture, and in many cases it works great. It allows the same level of testability as hexagonal architecture, but with less boilerplate and indirection. It is great for building websites like this forum.
To this day, automated test suites against the UI tend to be flaky. So testing against a code API is indeed better.
Most of this stuff is just developing against interfaces, and having that interface owned by the business code.
All infrastructure code implements those interfaces. Business code doesn't get polluted by technical infrastructure stuff. It's easy to swap out databases, etc.
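A tiny sketch of the shape (hypothetical names, just to show who owns what): the business code declares the interface it needs, and the infrastructure side provides implementations.

    from abc import ABC, abstractmethod

    # Port: owned by the business code, expressed in business terms.
    class CustomerRepository(ABC):
        @abstractmethod
        def find_email(self, customer_id: str) -> str: ...

    # Business code depends only on the port, never on a concrete database.
    def build_welcome_message(customer_id: str, repo: CustomerRepository) -> str:
        return f"Welcome, {repo.find_email(customer_id)}"

    # Adapter: infrastructure code implements the port (this one is for tests).
    class InMemoryCustomerRepository(CustomerRepository):
        def __init__(self, emails: dict[str, str]):
            self.emails = emails
        def find_email(self, customer_id: str) -> str:
            return self.emails[customer_id]

    # A Postgres- or Dynamo-backed repository would implement the same port,
    # so swapping databases never touches the business code.
    print(build_welcome_message("42", InMemoryCustomerRepository({"42": "a@b.c"})))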
Always a good idea.
Functional core, imperative shell is also a modern term for this.
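For example, roughly (made-up names; the stubs stand in for real I/O):

    # Functional core: pure decision logic, trivially unit-testable.
    def apply_discount(total_cents: int, loyalty_years: int) -> int:
        rate = 0.10 if loyalty_years >= 5 else 0.0
        return round(total_cents * (1 - rate))

    # Imperative shell: all the I/O lives at the edges and stays thin.
    def load_order(order_id: str) -> tuple[int, int]:
        return 10_000, 6                    # stub for a real database read

    def save_total(order_id: str, total_cents: int) -> None:
        print(order_id, total_cents)        # stub for a real database write

    def checkout(order_id: str) -> None:
        total_cents, loyalty_years = load_order(order_id)
        save_total(order_id, apply_discount(total_cents, loyalty_years))

    checkout("order-1")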
All of these things are very much real problems. You just have to be building microservices in a corporation that's still running thousands of batch jobs on mainframes and midrange servers. It's super fun let me tell you.
It's so common that I was shocked to briefly work at a successful company with maybe 100 engineers and a code base that followed no consistent architectural rules and principal engineers that had never heard of the things listed in the article.
I won't name and shame, but I shudder at the thought of all that organically-grown spaghetti.
Not a huge fan of trying to abstract out your ACID database, but other than that, I'm generally a fan and most of our systems roughly follow these sorts of architectures.
When adding small features you end up jumping all over the place in all the "layers" -- a little change in controllers, services, repositories, entities. So I am not sure?
This post mentions package by feature/component, but it still seems to have this strict layered approach.
Because there's no mixing of applying changes to the state of the system and retrieving the state. Testing becomes much easier and reads like a business DSL on top of the system (it also connects well with functional core, imperative shell). Also, for read-heavy systems you can easily introduce performance improvements, as the read path is separated. Out of curiosity, what issues did you have with CQRS? How was it handled? I would like to know, because maybe it was applied in an inconvenient way. Some developers that have worked with the classical layered architecture for years have a problem with CQRS, DDD and FCIS, but when it clicks it's an "Oh wow, it makes a lot of things simpler" moment for them. There are also drawbacks of course, and the approach fits business solutions well, so you won't find many applications of it in OSS, which is mostly technical solutions.
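A minimal sketch of the split, with made-up names -- no read models or event sourcing, just the command/query separation itself and how a test reads afterwards:

    from dataclasses import dataclass, field

    @dataclass
    class AccountState:
        balances: dict[str, int] = field(default_factory=dict)

    # Command side: changes state, returns nothing.
    @dataclass
    class DepositCash:
        account_id: str
        amount_cents: int

    def handle(state: AccountState, cmd: DepositCash) -> None:
        state.balances[cmd.account_id] = state.balances.get(cmd.account_id, 0) + cmd.amount_cents

    # Query side: reads state, changes nothing. It can later get its own read
    # model (cache, denormalized view) without touching the command side.
    def balance_of(state: AccountState, account_id: str) -> int:
        return state.balances.get(account_id, 0)

    # A test then reads like a little business DSL:
    state = AccountState()
    handle(state, DepositCash("acc-1", 500))
    assert balance_of(state, "acc-1") == 500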
Oh I just prefer functional programming. Perfect testability.
Also, Django, while arguably not really FP, is still perfectly testable, and doesn't deal with any of that. Why would I want to manage at minimum three classes for every endpoint?
Maybe this has some justification in a situation with a couple hundred developers on software handling millions of dollars worth of stuff... but otherwise??