As an Infrastructure guy, the pattern I've seen time and time again is Developers thinking the previous generation had no idea what they were doing and they'll do it way better.
They usually nail the first 80%, then hit a new edge case not well handled by their architecture/model (but was by the old system) and/or start adding swathes of new features during the rewrite.
In my opinion, only the extremely good developers seem to comprehend that they are almost always writing what will be considered the "technical debt" of 5 years from now when paradigms shift again.
As Programmers we deal with numbers a lot more often, so this effect is minimized. But still there.
I think (at least, I hope) what OP was trying to get at was the relative irrationality of something worth 1/8000th the cost of the very expensive item taking a very long time to debate (especially when $100,000 is most probably far less than many of those board members earn in salary every year).
If you want a good result don’t skimp on the tools. Buy good quality brushes, rollers, filler, throws and paint. Also buy an edger to cut in the walls and ceilings. One final tip buy some of the disposable plastic liners for the roller tray so you don’t have to spend time washing out the tray at the end of the day.
1 - there are very good professionals, but they aren't cheap and have all the work they need
2 - an amateur can always decide to take economically irrational amounts of time on a project; a professional can't.
So the result is that as a careful amateur you can end up with a job you probably wouldn't be able to convince yourself to pay for.
We can probably all agree that the worst case is the careless amateur....
Even on the financial level tradesmen here in Australia are so overpaid (compared to everyone else) that it makes sense even for me to do it myself. My house was painted by "professionals" just before we moved in (purple??) and it cost the previous owner $20,000. My wife and I repainted it white (this required 5 coats of paint) and it took us two weeks including the time I had to spend getting the purple paint off the windows and floors and patching all the holes that had been painted over.
Tip: if you are selling your home, don't paint it some unusual color. I managed to buy my home $200K under its actual market value, and quite a bit of this was due to the crazy way the place had been painted.
I have used gray before to go from red to yellow as it seems to work in fewer steps.
I am not sure that description is apt. Underdelivering is only rational for the painter because the client won't be on the market long enough to gather sufficient information. The client is getting cheated, and it creates a lemons market, the fact that it is a Nash equilibrium does not make avoiding the entire thing irrational.
And the entire thing has parallels in software development...
An amateur painter will often spend far more time than any reasonable estimation of the market value of that job would justify. Part of this is lack of efficiency and experience, but another part is doing things with sharply diminishing returns. For example, you might apply expensive techniques to inexpensive materials, in a way that would not make for a viable business.
Put it another way - amateurs can easily arrive at a finished job (of painting, in this case) that they would never be able to convince themselves to pay market rate for. This is irrational in certain restricted senses.
For what it's worth I don't believe that "lemons market" is accurate for painters in general, but I'm guessing there is a segment of it that meets your description.
If you're looking at it from an economic standpoint, for doing it yourself to be considered irrational, the value of the time spent would have to be greater than the cost of hiring the professional.
I always hear this compared to "what is your hourly rate in your job" or "my time is worth more than that", but I think for most people this just isn't a fair comparison. Just because I can spend 10 hours painting my room and I make $x/hour and it would cost <$x to pay a professional, does not mean I would be able to actually generate an _incremental_ $x per hour by not painting and hiring the professional.
For most people on a salary (where your pay is fixed no matter how much time you put in), their time outside of the job is, in a very real sense from an economic standpoint, valueless, and it would be perfectly rational to spend that time yourself, no matter how cheaply a professional could do it.
> If you're looking at it from an economic standpoint, to be considered irrational, the tradeoff between the value of time vs the cost of hiring a professional would have to assume that the value of the time is greater than the value of the professional.
For what it’s worth, I don’t think we are really disagreeing much.
On the other hand, coming from a country that takes its trades extremely seriously, I was shocked by what I observed every time I visited the UK. It seems the expectations vary drastically between different countries (though I'm sure there are quality professionals in the UK too).
House price / rent value is basically dependent on location. Money spent on fitting is generally wasted.
There are a lot of great craftsmen in the UK, but they tend to work on a subset of jobs - restoration, passion projects, really high-end stuff.
PS: This is also a direct result of ever-rising house prices. If land/house prices are consistently rising, it often makes more sense to let lots stay empty, rather than building (which is always a massive risk). It also makes sense to do any repair or upkeep work as cheaply as possible - since in a rising market, the only way you can lose money is by expensive development costs.
On the sandpaper front, make sure to spend money there. Cheap sandpaper lasts two nanoseconds and does a terrible job.
My parents (not in any way experts on that field) painted their entire house themselves, except for two rooms that were painted by a professional painter, and the professional painter left much worse corners than my parents. This was the paid-for result (ignore the dark corner at the bottom, that’s caused by the flash): https://i.imgur.com/s1VHV2W.jpg
Actually good software engineering is a much larger skill difference.
Based on my years across various companies, the difference between "hobbyist" and "mediocre professional" developers/programmers is close to nil.
Also, note that this thread started with a comment about how many developers don't appreciate what the actual hard problems in their own line of work are, and how that produces poor results because of that. I expect that to be true of any profession that doesn't have some kind of rigorously enforced standard.
Painting isn't a profession with a particularly high return on experience. You hire a painter for comparative-advantage reasons. It may take him just as long as it takes you, and several thousand dollars more, but chances are you can make more than several thousand dollars in the time you would've spent painting your house. (If you can't, you may want to re-think hiring a professional, and instead go into business as a painter yourself...)
I find I can do most such works better than (average) professionals, but it takes a lot of time to learn, prepare, execute (+ sometimes re-execute) and clean. It all depends on when you are satisfied with the result.
For UK natives it's actually three steps:
"sheetrock" == drywall == Plasterboard
I'm not certain whether the water-resistant, mold-resistant variety typically hung behind tiles in bathrooms is ever referred to as "wetwall" or if they still call it "drywall".
Nor am I aware of whether anyone calls the foil-backed, glass-reinforced, fire-resistant variety "firewall" instead of "drywall".
I wouldn't be surprised if obsolete and inapplicable names still attach to items with the same function. If we ever move to polyethylene film panels sandwiched with phase-change material for our walls, it might still be called "plasterboard" somewhere.
I remember showing up on a site, and the drywall guys had just started carting in a ton of drywall. By the time we left in the afternoon, they had cut, hung, taped and mudded over 3,000 sq feet of home, it was insane.
I managed to paint my walls, but it took me much more time than expected and my flat was chaos for some weeks thereafter.
Painting is no problem for me and many others who know how to use their hands.
It's amazing when you talk to people and they're like: Did you do that? How do you know how to do that?
Learn, try, and you can do anything. As people did at every step.
You can just follow all the same steps as you usually do:
1. Define the problem.
2. Determine the desired outcome.
3. Measure the existing state.
4. Note the foreseeable failure modes.
5. Plot the path from existing state to desired outcome as a series of reasonable steps, avoiding the failure modes.
6. Recursively analyze the steps, breaking them down into smaller steps if necessary.
7. Rework your plan as additional failure modes become evident.
In my experience, yes, you can do anything, as long as you ignore costs. You can't, for instance, justify buying a specialized tool to finish just one job. The hardest part is really step 2.
You must be new to DIY :)
When I see a team of 7 deciding to go with microservices for a new project I know they're gonna be in for a world of unnecessary pain.
Do you know if there are some well chronicled cases of this happening? I find it very believable, but you know, would love to read something "actual".
A kinda-random example off the top of my head of "shipping the org chart" would be the historical gaps between Windows, Development, and Office at Microsoft. I.e., Office is getting a new ribbon, but Development can't supply those icons or any components because they're an "Office thing", or the internal API/VBA battles along the same lines.
On the opposite side, as reported in the press, Amazon has used this effect to create manageable teams and build up their own SOA: the two-pizza rule for teams means that teams can only really make targeted self-contained services. In this case Amazon worked backwards and re-structured their teams so that 'shipping their org chart' created the desired architecture.
Microsoft has an app store built into every Windows 10 computer worldwide. And of course, you cannot download Microsoft Office from it. However, it does helpfully tell you to get Office by heading to 'MicrosoftStore.com'.
It's all sort of head scratching. A normal person might ask a lot of totally reasonable questions about this, like:
- Why does Microsoft's App Store not have Microsoft's own Apps in it? Office isn't the only missing app, Visual Studio is missing too (even the free 'Code' electron editor).
- Why does the phrase 'Microsoft Store' refer to something 100% different (in products and functions) than 'MicrosoftStore.com'?
- Why does the Office team have their own app updating utility, when there's already supposed to be one 'blessed' place for App Updates inside the Store? (Same question for Visual Studio Code).
Anyway, I know a lot of the above is org chart related, or enterprise needs / backwards compatibility related. But they are all reasonable questions despite that.
And stuff like this is part of the reason why reasonable people still fall for phishing schemes. The above sounded to me like some weird popup advertisement trick, until I saw it first hand.
(Isn't it the only way to get Apple software for your Apple devices, even?)
Not that this actually happens - or would be a particularly good idea if it did, for other reasons (like breaking up teams, etc.).
It does have a single point of failure that is also a bottleneck (the router) and it's a lot of work to load balance / failover it.
But other than that, it provides a fantastic way to make microservices:
- Anything can be a client of the Crossbar router. Your server code is a client. The DB can be a client. The web page's JS code is a client. And they all talk to each other.
- A client can expose any function to be called remotely by giving it a name. Another client can call this function by simply providing the name. Clients don't need to know each other. Routed RPC completely decouples the clients from each other.
- A client can subscribe to a topic at any time and be notified when another client publishes a message on that topic. Quick, easy and powerful PUB/SUB with wildcards.
- A client can be written in JS (Node and browser), Python, C#, Java, PHP, etc. Clients in different languages talk to each other transparently. They all just receive JSON/msgpack/whatever, and errors are propagated and turned into the native language's error-handling mechanism transparently.
- The API is pretty simple. If you've tried SOAP or CORBA before, this is way, way, way simpler. It feels more like using Redis.
- It uses WebSockets, which means it works anywhere HTTP works, including in a web page, behind a NAT, or with TLS.
- Everything is async. The router handles 6000 msg/s to 1000 clients on a small Raspberry Pi.
- The router can act as a process manager and even a static / WSGI web server.
- You can load balance any clients, hot swap them, ban them, get meta events about the whole system, etc.
- You can assign clients permissions, authentication, etc. Sessions are attached to each connection, so clients know what other clients can do.
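To make the routed RPC + pub/sub idea concrete, here is a toy in-process sketch. This is NOT the real Crossbar/WAMP API (which goes over WebSockets, with async clients in many languages); every name here is made up purely to illustrate how a router decouples callers from implementers:

```python
# Toy sketch of a routed RPC + pub/sub broker (illustrative only,
# not the Crossbar/WAMP API).
class ToyRouter:
    def __init__(self):
        self.procedures = {}     # procedure name -> callable
        self.subscriptions = {}  # topic -> list of callbacks

    def register(self, name, func):
        # A client exposes a function under a name; callers never
        # need to know which client implements it.
        self.procedures[name] = func

    def call(self, name, *args):
        # Routed RPC: the caller only knows the name.
        return self.procedures[name](*args)

    def subscribe(self, topic, callback):
        self.subscriptions.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Fan out to every subscriber of the topic.
        for callback in self.subscriptions.get(topic, []):
            callback(message)


router = ToyRouter()

# One "client" registers a procedure; another calls it by name only.
router.register("math.add", lambda a, b: a + b)
assert router.call("math.add", 2, 3) == 5

# Pub/sub: a subscriber is notified when anyone publishes on the topic.
received = []
router.subscribe("sensor.temp", received.append)
router.publish("sensor.temp", 21.5)
```

The real thing adds the network layer, wildcards, authentication, and sessions on top, but the decoupling model is the same.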
Service Lookup - Hashicorp's Consul.
Authorization/Authentication - I've played with Hashicorp's Vault. It seems overly complicated but the advantage is that it doesn't tie you to a single solution and it can use almost anything as a back end. I haven't had to solve that problem yet.
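For the Consul piece, registration and discovery are just HTTP calls against the local agent. A hedged sketch (the agent address, service name, and health-check path below are all example values; the endpoints `/v1/agent/service/register` and `/v1/health/service/<name>` are Consul's):

```python
import json

def registration_payload(name, port):
    # Body to PUT to the local agent's /v1/agent/service/register endpoint.
    return json.dumps({
        "Name": name,
        "Port": port,
        # HTTP health check the agent polls (path is an example).
        "Check": {
            "HTTP": "http://localhost:%d/health" % port,
            "Interval": "10s",
        },
    })

# Register against the local agent, then discover only healthy
# instances via the health endpoint with ?passing.
REGISTER_URL = "http://localhost:8500/v1/agent/service/register"
DISCOVER_URL = "http://localhost:8500/v1/health/service/{name}?passing"
```

Failed health checks drop an instance out of the `?passing` result set, which is what gives you the "recovery" part for free.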
Had to implement a micro-service architecture in Python about a year ago and was jealous of my Java colleagues who (so I heard) have a great ecosystem for enterprise service discovery, messaging, etc.
Faced this a couple of years ago, and I was the lone dissenting voice suggesting this was not going to go well. Then I learned that "microservice" in reality just meant everything was going to be one nodejs process running endpoints, with ios and android clients hitting it, which... didn't really fit my understanding of "microservice"; that's just "service".
Also, the same team spent several hours in meetings deciding whether or not to allow email signup, or facebook signup, or both, or neither, in the mobile app. Then had the same discussions/arguments a few weeks later when a couple new people joined the team.
I realize I sound a bit bitter. I got pushback because I'd used the 'microservice api' in a "we don't like that" language. consuming the api (which I'd understood to be part of the reason of having a central API vs just hitting db tables directly) by anything that wasn't also node was outside the groupthink, and caused problems.
I left the project.
They've got their microservice architecture, but no userbase (yet?) to be concerned about scaling issues.
I understand it's reasonable to be concerned about potential scaling problems, but the team/project spent far too much time chasing architectural perfection (and really... 'shiny new stuff') vs executing a marketing plan. It's easier for a group that is tech-folk-heavy to focus on that; I get it. But it didn't solve any problems at hand. But when the mythical "2 million users in an hour" problem happens, it'll probably hold up, unless it doesn't.
I chose this approach because the developers who were already there were relatively new to C#, and I knew we were going to have to ramp up contractors relatively fast.
Our DevOps process revolves around creating build and release processes by simply cloning an existing build and release pipeline in Visual Studio Team Services - the hosted version of Team Foundation Server - and changing a variable. Every service is a separate repo. Each dev is responsible for releasing their own service.
1. All green field development for a new dev. They always start with an empty repo when creating a new service.
2. Maintenance is easier. You know going in all you have to do is use a few documented Postman calls to run your program if you need to make changes. Also, it's easy to see what the program does and if you make a mistake, it doesn't affect too many other people if you keep the interface the same.
3. The release process is fast. Once we get the necessary approvals, we can log on to VSTS from anywhere and press a button.
4. Bad code doesn't infest the entire system. The permanent junior employees are getting better by the month and we are all learning what works and doesn't work as we build out the system. Each service is taking our lessons learned into account. We aren't forced to keep living with bad decisions we made earlier and building on top of it.
A microservice strategy only works if you have the support system around it.
In our case, an easy to use continuous integration, continuous deployment system (VSTS), easy configuration (Consul), service discovery and recovery (Consul with watches), automated unit and integration tests, a method to standardize cross cutting concerns
And finally, Hashicorp's Nomad has been a god send for orchestration. Our "services" are really just a bunch of apps. Nomad works with shell scripts, batch files, Docker containers, and raw executables. It was much easier to set up and configure than kubernetes.
Imagine you have a library that implements the serialization and deserialization of something in your system (or anything else where the library implements two "halves" of some functionality). You might (or might not, depending on the change) need to push out that library to other apps to effect the fix you're working on. That new proposed library change may be in a queue behind another pending change that's in development, etc.
And guess what, you have to do the same thing with microservices. You must ensure that all of a service's clients are pointing to the correct version, and that the new version didn't introduce any bugs in those clients.
Adaptations for new service versions will also get into the release queue, and can also get blocked by other stuff.
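The "two halves" coupling is easy to see with a minimal sketch: the writer half tags each payload with a format version, and the reader half must understand every version still in the wild, or the library (or service) has to be pushed to all clients in lock-step. All names and the v1 layout here are hypothetical:

```python
import json

FORMAT_VERSION = 2

def serialize(record):
    # Writer half: tag the payload with the version it was written under.
    return json.dumps({"v": FORMAT_VERSION, "data": record})

def deserialize(blob):
    # Reader half: must handle every version still in circulation.
    envelope = json.loads(blob)
    if envelope["v"] == 1:
        # v1 stored a bare list of key/value pairs (hypothetical legacy layout).
        return dict(envelope["data"])
    if envelope["v"] == 2:
        return envelope["data"]
    raise ValueError("unknown format version %r" % envelope["v"])
```

Whether these two halves live in a shared library or behind a service endpoint, the coordination problem - old readers meeting new writers during a rollout - is the same.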
YAGNI is almost always the right approach.
There have been plenty of times where I've taken a project out of a monolith and created a separate shippable package to be consumed by another monolith in a separate repo.
There were also occasions where I had to rip out a feature in a monolithic API that either needed to scale independently or be released independently and then just created a facade in the originally API that proxies the "microservice".
I mostly tell my people "We aren't going to do that yet - or possibly ever - but keep in mind that we might have to if we ever win the user growth hockey-stick graph lottery, so when choosing between options of how to architect things, prefer very narrow focus that _could_ be pulled apart into microservices more easily if practical"
I've also been known to stand up duplicated instances on a monolith with path routing on the load balancer as a "pretend microservices" scaling or upgrade trick.
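The "pretend microservices" trick is just path routing at the load balancer, with each "service" being another copy of the same monolith. A hedged nginx-style sketch (upstream names, addresses, and paths are all made up for illustration):

```nginx
# Each upstream is a duplicate instance of the same monolith,
# scaled or upgraded independently behind a path prefix.
upstream monolith_api  { server 10.0.0.10:8000; }
upstream monolith_jobs { server 10.0.0.11:8000; }

server {
    listen 80;
    location /api/  { proxy_pass http://monolith_api; }
    location /jobs/ { proxy_pass http://monolith_jobs; }
}
```

If a path prefix later becomes a real separate service, only the `proxy_pass` target changes; the clients never notice.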
Decomposing and fragmenting an established service into smaller clones of itself should be reasonably straightforward. In most languages with a component story (i.e. Java), library reuse across microservices should make your service architecture orthogonal to your application logic. This kind of approach also eases a lot of the pain of cross-service issues.
These days I kinda wince at "I run 8 services myself."
If it takes 1 person to run 1 microservice, we are all doomed.
Higher levels of abstraction makes it easier to get something up and running fast, but at some point you need to be able to look under the hood and understand what's going on, and many programmers today can't do that.
That being said I think the drop in average skill is mostly a product of the growth in the number of programmers. I imagine that if the ability to sculpt a basic statue suddenly became really valuable, the skill level of the average working sculptor would plummet.
More layers of indirection in a system and more dependencies on external libraries and tooling does not necessarily get you any abstractions. To take a contemporary example, there is no "abstraction" in being driven to use Docker because your dependencies have gotten unmanageable otherwise.
I also feel that ceteris paribus, the meetings got longer, project management tools now consume a lot of input from programmers, and I need to communicate with a lot more people to get something done.
Which seems to end up meaning productivity has gone down when measured by "things end users of websites can do", even though modern FE devs end up creating much more code and html and css than "the old days". (Admittedly, if you include privacy invasion, user tracking, and various other requirements of surveillance capitalism, dev productivity has probably skyrocketed...)
Complex? We still call a function with a return value on a stack machine.
Sorry for the negativity.
You are correct: for the end user, the complexity has absolutely not resulted in better web pages, but worse ones.
seq -w 21 | while read port; do
    python2 -m SimpleHTTPServer 80$port &
done

seq -w 21 | while read port; do
    python3 -m http.server 80$port &
done
So if Netflix has a "User logon" service and a "payment processing" service used across their clients, you might be looking at a couple of "microservices" with hundreds of related employees. Imagine services for Google's search autocomplete, ML, or analytics...
As the article states, the "micro" aspect is mostly in terms of deployment responsibility, freeing those 10-1000 employees from thinking about the totality of Google's/Netflix's operations before rolling out a patch :)
It’s 8 or so but it’s possible for me to handle it all. If we add features they’re going to be new services, so adding big features to the services I manage is unlikely. It is more likely that I get a new service on my plate in 6 months time than getting additional members of the engineering team to work on already completed services.
I’m obviously not entirely alone... I’ll ask for help if I need it, and I help out with other people’s stuff too, but I am primarily in charge of them and I am responsible for keeping everything working well.
If it takes 10 people to manage one service, it is not a microservice by definition. It is more like a 10x-microservice or a macroservice.
The definition isn't small teams, per se, it's a small area of responsibility with singular focus. A lego block instead of duplo. That lends itself very well to small teams, but you could reasonably have 100 people working on a service and call it "micro".
Reason being: if those 100 people weren't working on their scoped 'microservice' they would be part of a much, much, much larger pool working on the shared 'product', 'platform', or 'service' that contains that exact same functionality, only without the clarity/scalability of application boundaries surrounding the individual service components.
That's not to say microservices are ideal, just that the size of "micro" is highly relative to ongoing operations.
One developer can certainly be responsible for coding many microservices and even maintaining them depending on the scope, number of users, etc.
Sometimes people might even rotate with a core architect coordinating how they all interact and if they are consistent with each other.
Discussing microservices can be very confusing if people are thinking about different things, and, as always, when a good idea from a specific context gets sold as a silver bullet to people who don't use it, we end up with a huge backlash like in this article.
And it's just one of many.
Developers want to own their thing. The desire for microservices springs from a lack of communication culture and a desire for siloification in a company's organization, to keep various interests from bothering the developers. Those almost always point to a failure of management in my mind, rather than a technical failure.
If a few things are true, I could see this as a win:
* I can isolate my developers from outside interests using microservices.
* My developers are more effective in each dimension (quality, retention/happiness, velocity) because they are isolated from outside interests.
* My software is easier to operate and more reliable because it is a microservice.
If any of these three things aren't true, then I agree. But I'm not sure that a "communication culture" can scale to a large organization and I'd like to see a truly large company (1000+ developers) successfully doing so. I've seen more success come from separation of concerns and well-deployed microservices seem to be fairly effective to this end.
Conway's Law might as well be renamed "The Law of Microservices". Per Wikipedia, it states:
> "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations."
"Microservices" are on a tear because they make a perfect cover for the blatant and bare expression of unbridled Conway's Law.
Such unbridled expression is much easier in the course of greenfield development, because it lets the core team of 2-3 people per service go about their development work without consulting anyone external. It lets them throw away any overriding convention or cultural concerns, and it avoids the difficulties of cross-group coordination. But it leads to a completely unmaintainable wreck when things transition into production.
This is not to say that no one will have a successful microservice deployment or that it's always a bad choice, but it usually goes way off the rails.
The parent post I was replying to seemed to simplify things down to "let your developers communicate and they'll build a more coupled system that works, instead of a morass of microservices that don't." That's another thing that looks good on paper but doesn't scale, at least in my experience.
I think in many cases microservice architectures appeal to engineering organizations with poor communication and cooperation skills where developers desire to be strongly independent because of the lack of management creating a cooperative and coordinated dev and work environment. I think that's actually saying something very similar to the Conway Law idea brought up by the other poster.
Microservices will not help you if your developers have the same level of skill and foresight as whoever wrote the monolith, which is probably true if those devs were selected by the same hiring process that your company has today, subject to the same organizational effectiveness, etc.
Indeed! Good developers/architects spot repetition or weaknesses in current techniques and can often devise solutions that can be added to the stack or shop practices with minimal impact. You don't necessarily need paradigm or language overhauls to improve problem-areas spotted. Poor developers/architects will screw up the latest and greatest also.
I've also seen really bad developers with that attitude: it's all crap, so just ship whatever already.
The good developers write code that can be replaced, rewritten, or rescaled later. Though, charitably, both monolithic service and microservice people are trying to do exactly that. It's just what sort of scale they're thinking about and what part of the software development lifecycle they think will be especially difficult going forward.
Good developers create code that's prepared for the possibility of being modified repeatedly and becoming foundational; on the other hand, preparing for code/components to be thrown away is a no-op.
You might be agreeing, but I find that the bad code that sticks around is bad code that is hard to get rid of. So in that sense, code that is removable isn't a no-op. It takes some effort, for instance, to keep a clear dividing line between two components so that either may be replaced someday.
You make a very good point.
Over the years, I learnt that almost nobody except the developer and maybe one or two peer developers cares about good quality code. Management just wants to ship services/products. They don't care how good the code is. All they care about is whether they can meet their deadlines. Of course, good quality code can increase the chance of meeting deadlines, but working long hours can also increase the chance of meeting deadlines. Management does not understand code, but they understand working long hours.
If I ignore this and still care enough to write good quality code, in nearly all projects, I am not going to be the only one to work on the code. There is going to be a time when someone else has to work on the code (because responsibilities change, or because I am busy with some other project). As per my anecdata, the number of people who do not care about code quality far exceeds those who do. So this new person would most likely start developing their features on the existing clean code in a chaotic manner, without sufficient thought or design. So any carefully written code also tends to become ugly in the long term.
In many cases, you know that you yourself would move out the project/organization to another project/organization in a year or so, and the code would become ugly in the long term no matter what you do, so why bother writing good code in the first place!
It is very disappointing to me that the field of programming and software development that I once used to love so much out of passion has turned out to be such a commercial, artless, and dispassionate field. How do you retain your motivation to be a good developer in such a situation?
No, but they do care that this seemingly (to them) trivial change a year later takes two weeks instead of half a day.
> How do you retain your motivation to be a good developer in such a situation?
I honestly don’t know. I’m currently taking a little time out from work, but I’m dreading going back in a month because right now, I am completely disinterested in computers and especially software, things I used to love and be incredibly passionate about. Now with stuff like Meltdown and Spectre, technology just seems like this dumb house of cards and I have no energy for this BS. I’m pretty sure it’s burnout and it will pass eventually, but I just hope it does so soon as I don’t know how else to pay the bills.
On the plus side, I’ve spent a lot less time at the computer these last few weeks and spent time on other interests, including learning sleight of hand and card magic. :-P
No, that's not true, except in a very superficial sense. Yes long hours _can_ increase the chance of meeting deadlines... but often, it doesn't - or more precisely, it only works if the code quality is decent.
"Code quality" is not about following whatever patterns are en vogue today, or using the latest dev language. It is mostly about simplicity - dealing with few things at a time, and making those things explicit. If you need to understand the entire solution, and the entire domain model, and all the edge cases before making the tiniest of modifications - long hours are not going to help you.
To go back to microservices: many companies claim to build microservices, but actually build a distributed monolith. This doesn't help productivity, it actively harms it.
<rant target="not you">
And why should management be able to evaluate technical decisions and code quality in the first place? That's our job! The problem is that we have given management the false choice of lower quality + faster shipping vs. higher quality + slower shipping, when in fact we should not have given them any say in the matter. And before you hit me with the "but we have to be first to market and fix it later" line, I need to point out that companies don't die because they were not first to market; they die because their operations and development became so slow and costly that they could not compete anymore.
What we should do as developers is to stop talking about code quality to our managers! When we are asked for an estimate, we give them as accurate an estimate as possible with the code quality that we feel is sufficient for long-term maintainability of the system. And we don't negotiate on quality anymore, and especially we don't negotiate on estimates! Only functionality (MVP and all that). Then we don't need to ask for refactoring time, rewrite time, code polish time, stabilization time on our systems (that we hardly ever get anyway), because it is all in there in the original estimate.
Management expects stable productivity, they base their estimates of operational costs and investment costs on the number of people working on the system, not on the age of the codebase (why should an old codebase cost more to work with? they ask). If we give them false hope on the productivity of the team by producing crap fast in the beginning, the whole business case may collapse when we produce the same crap slower and slower and slower later. Management is in no position to evaluate the effects of bad code on the business case, because they don't understand that. We do. The only thing we can do as developers, is to remove the option of low quality code altogether.
And we say that it is so slow to create quality code? And estimation is hard?
We learn it. We can write high quality code as fast as the usual junk we see in most systems. We keep track of our estimates and evaluate how well we did, and improve. But it takes effort. All I can say is that it is our responsibility.
On a positive note, the solution to this on a personal level is to find a place to work where technical excellence is built in the development culture (and there are such places), and cultivate that culture especially with the new hires (mentoring, pairing, etc.).
That only works until you encounter something for which there was no rational reason in the first place.
That is to say - most people are acting rationally, but their context to an outsider may not appear rational. Maybe there's something to be said for a general relativity of rationality?
There's legitimate arguments for looking at these patterns, the big one being "isolation of concerns". The biggest counterargument is that the ops cost is much higher than assumed, of course.
The idea that the existing code base could have problems shouldn't be a surprise to anyone. Amazon almost fell over because of their code base. Twitter too. And it's not even a matter of not doing it right, but simply that scales change. Or patterns change.
And in new companies, it _could_ be that people don't get it.
"Microservices as mass delusion" discounts a lot of people who are really thinking hard about how to handle the pros and the cons of things.
Yes. That's been my experience.
> The idea that the existing code base could have problems shouldn't be a surprise to anyone. Amazon almost fell over because of their code base. Twitter too.
Most organisations aren't Amazon. Most organisations aren't Twitter. And even these web-scale organisations aren't as all-in as the microservice advocates. (I worked at last.fm for a time and while we did many things that could be classed as "microservices" from a certain perspective, we didn't blindly "microservice all the things")
> "Microservices as mass delusion" discounts a lot of people who are really thinking hard about how to handle the pros and the cons of things.
Most fads start from a core of sensible design. The web really did revolutionise commerce, but many "x, but on the web" companies of the late '90s really were dumb.
As you say there are a lot of pros and cons to any architecture or paradigm, which is why we're still talking about it and saying things like "right tool for the job" and not just using the One True Method(TM).
I legit think that a lot of people are using the new hotness as a form of cargo cult programming, with no understanding of the methods they're considering or how they apply to the problems they're trying to solve.
It's not just microservices that are improperly applied. I've been in the industry long enough to see dozens of languages, technologies, paradigms, processes, and everything else hailed as the second coming of Christ and applied inappropriately all over the place until the shininess wore off.
And I mean... When we start talking about developing for Amazon scale, we're already talking about situations that don't apply to 99% of developers. Not a great argument that their cases aren't inappropriate applications of the pattern.
There will come a point where you can't scale a monolith, sure, but that point is thousands rather than hundreds of engineers.
If some companies go completely nuts and deploy 5 microservices per developer, then yes, that is madness. If all of those microservices are REST/JSON microservices, that's madness squared because you're wasting half your time in serialization/deserialization. If you're managing all of this stuff with Kubernetes because it's trendy rather than because it's actually necessary for your use case, then you're probably making a bad call.
But ultimately, services give development teams ownership over features in a complete, end-to-end way that is fundamentally impossible with monoliths once you're past a certain size.
The impression I get - and this could be totally wrong - is that the difference is how the services relate to each other.
In this post's example, 5 of the 6 microservices look like something that would be exposed to the user in some way. I would call this a microservice architecture; they're networked together in a way where no one thing orchestrates the others. They likely all touch the same data storage, but the overall structure is mostly flat.
In a service-oriented architecture, there'd be a tree-like or graph-like hierarchy. To an end user, it would look monolithic, but the monolith would, behind the scenes, delegate to the "user-facing" services such as Upload and Download. Upload and Download would then use the Transcode service as appropriate. But the important part is that this would all be one way: Transcode isn't allowed to contact Upload or Download, just return the result to whatever called it.
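A rough in-process sketch of that one-way delegation, with ordinary classes standing in for networked services and entirely hypothetical names: the gateway calls the user-facing Upload service, Upload calls Transcode, and the leaf service only ever returns a result to its caller, never contacting anything "above" it.

```python
class TranscodeService:
    """Leaf service: returns results to its caller, never calls Upload/Download."""
    def transcode(self, video: bytes, fmt: str) -> bytes:
        return video + b" as " + fmt.encode()

class UploadService:
    """User-facing service; delegates downward to Transcode, one way only."""
    def __init__(self, transcoder: TranscodeService):
        self.transcoder = transcoder
        self.store = {}  # stands in for blob storage

    def upload(self, name: str, video: bytes) -> None:
        self.store[name] = self.transcoder.transcode(video, "h264")

class Gateway:
    """Looks monolithic to the end user; orchestrates behind the scenes."""
    def __init__(self):
        self.upload_service = UploadService(TranscodeService())

    def handle_upload(self, name: str, video: bytes) -> None:
        self.upload_service.upload(name, video)

gw = Gateway()
gw.handle_upload("cat.mp4", b"raw")
```

The point of the shape is that dependencies form a tree: removing Transcode breaks its callers, but Transcode itself never needs to know who called it.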
Perfect, I've seen this happen many times as well. I think you're generous on the 80% part. Usually they nail the first 50%, but by the time they get to 80% it's getting just as messy, and some of the developers are planning another rewrite again.
This is why a team needs access to a good architect who's seen the paradigms shift, or even cycle. You're almost never starting from scratch, so you really need someone who's able to incorporate better or more suitable tech without throwing out the baby.
If you're microservices-based, that last part is easier, even if it falls into one of the described pitfalls, e.g., system-of-systems.
"Here’s the thing, most of the time we do something that incurs technical debt, we know it then. It’d be nice if there’s a way for us to log the decisions we made, and the debt we incurred so we can factor it into planning and regular development work, instead of trying to pay it off when there’s no other alternative."
So how would this ledger work? Well, for starters it has to track what we can’t do because of current technical debt. It should also be updatable to note any complications to subsequent work or things you can’t do yet because of old design decisions. At this point, you’re tracking the “principal” (the original design decision causing technical debt) and the interest (the future work that was impacted by it).
This was an eye opener for me – thanks for sharing!
I found that a lot of the time developers start moving towards microservices when they find that a monolithic app becomes too difficult to work on. For example, multiple teams working on the same codebase will often have accidental code conflicts. Plus, scaling a monolithic app because one part of it is under load isn't always cost effective or logical. So, teams will start to break off components into microservices to make development easier and less painful. Naturally this has to be weighed up as microservices bring a different set of challenges, 'gotchas', etc, etc but in my experience the teams have done a proper job discussing the pros and cons.
It's also reflected in how we manage code at the micro-level: collecting related logic into a module until it becomes unwieldy and then separating out independent sub-functionality into their own modules and dependencies as they grow...
There is no right size for a class. Smaller is better, but the ideal is "right sizing". What's right? Well, that's tricky, but whatever doesn't hurt is pretty ok.
There's no right size for a service. Smaller is better, but...
I've come to view microservices in the context of Conway's Law. If you have a team of developers working on a project who don't like to communicate or work with each other, do not understand version control, and all have different programming styles and technology choices, the only feasible architecture is one service per person.
I have no trouble believing that this is what's really behind Netflix's adoption of microservices. From what I've heard it's a sociopathogenic work culture, and if I worked there I would probably want to just disappear from everybody too.
To me the big benefit of microservices is scaling out components into flexible independent release cadences but the trouble comes with employing them too early.
1. There is a belief that component isolation (taken to the extreme by microservices) enables better productivity of the development department.
That is: more features, more prototypes, and more people who can be moved in and out of a given role, so that those 5 crusty programmers are not a bottleneck for the 'next great idea' that a Product manager or CIO reads up on.
2. There is a constant battle for the crown of "I am modern" (e.g. data science, microservices, big data) going on in every development or technology organization.
The closer your 'vision' is to Google or Netflix, the more 'modern' you are.
The rest of the folks are 'legacy'. So you get budgets, you get to hire, you get to 'lead'.
Microservices are an enabler that helps win this battle (although, probably, only for the short term).
I personally do not believe that microservices bring anything new compared to previously used methods of run-time modularization:
Abstractly spoken, I don't care whether you call f(x) directly, via IPC, RPC or as a microservice. In my preferred programming languages there is not much of a difference anyway.
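A toy illustration of that point, with illustrative names and a fake "wire" in place of a real network: the caller sees the same f(x) contract whether the work happens in-process or behind a serialize/send/deserialize hop.

```python
import json

def f_local(x: int) -> int:
    # Direct in-process call.
    return x * x

def f_remote(x: int) -> int:
    # Pretend RPC: serialize the request, "send" it, deserialize the
    # response. json.dumps/loads stand in for the network transport.
    request = json.dumps({"x": x})
    payload = json.loads(request)  # the "server" side unpacks the request
    response = json.dumps({"result": payload["x"] ** 2})
    return json.loads(response)["result"]

# Identical contract, different transport; the call site can't tell.
for f in (f_local, f_remote):
    assert f(7) == 49
```

The transports differ enormously in failure modes and latency, of course, but the programming-model point stands: the shape of the call is the same.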
Rewrites also serve as thorough code review and security audit.
I am more in the camp of "let's do something useful" than "let's rewrite this because previous guy didn't do it good, or it no longer meets our demands". Because whatever you do it will get rewritten again, and it's imho useful to resist the urge.
Ps: also a googler.
Mixed tabs and spaces, sometimes one space indentation or no indentation at all, 1000+ line Java methods, meaningless variable names, no comments or documentation. SQL transactions aren’t used, the database is just put into a bad state and hopefully the user finishes what they’re doing so it doesn’t stay that way. That’s just the server. The UI is just as bad and based on Flash (but compiled to HTML5 now)
> You don't solve problems. You take the problems you have, and exchange them for a different set of problems. If you're doing your job, the new problems won't be as bad as the old problems. That's all you can really do.
It's not a microservice if you have API dependencies. It's (probably) not a microservice if you access a global data store. A microservice should generally not have side effects. Microservices are supposed to be great not just because of the ease of deployment, but it's also supposed to make debugging easier. If you can't debug one (and only one) microservice at a time, then it's not really a microservice.
A lot of engineers think that just having a bunch of API endpoints written by different teams is a "microservice architecture" -- but they couldn't be more wrong.
They were having performance problems and "needed" to migrate to microservices. They developed 12 separate applications, all in the same repo, each deployed independently in its own JVM. Of course, if you were using microservices, you needed Docker as well, so they had also developed a giant Docker container containing all 12 microservices, which they deployed to a single host (all managed by supervisord). Of course, since they had 12 different JVM applications, the services needed a host with at least 9GiB of RAM, so they used a larger instance. Everything was provisioned manually, by the way, because there was no service discovery or container orchestration - just a Docker container running on a host (an upgrade from running the production processes in a tmux instance). What they really had was a giant monolithic application with a complicated deployment process and an insane JVM overhead.
Moving to the larger instance likely solved the performance issues. In its place they now had multiple over-provisioned instances (for "HA"), and combined with other questionable decisions, were paying ~100k/year for a web backend that did no more than ~50 requests/minute at peak. But hey, at least they were doing real devops like Netflix.
For me, I've become a bit more aware of cargo cult development. I can't say I'm completely immune to cargo cult driven development either (I once rewrote an entire Angular application in React because "Angular is dead") so it really opened my eyes how I could also implement "solutions" without truly understanding why they are useful.
I've dealt with an even worse system, with a dozen separate applications, each in its own repo, then with various repos containing shared code. But the whole thing was really one interconnected system, such that a change to one component often required changes to the shared code, which required updates to all the other services.
It was a nightmare. At least your folks had the good sense to use a single repository.
What source control system?
Also, from the article:
> even though theoretically services can be deployed in isolation, you find that due to the inter-dependencies between services, you have to deploy sets of services as a group
This is the situation we are in, like you were.
Git in our case. And our direction was not to use submodules or anything like that to make life manageable. It was pretty unpleasant.
What's gonna look better on a dev's CV: 'spent a year maintaining a CRUD monolith app' vs 'spent a year breaking a monolith into microservices, with shiny language X to boot'?
We can be a very fashion and buzzword driven industry sometimes.
EDIT: this perverse incentive goes all the way to the top, through to CTO level. Sometimes I wonder if businesses understand just how much money and effort is wasted on pointless rewrites that make life harder for everyone.
This doesn't stop at engineering; open offices, teaser/trick based interviewing, OKR's, ... Even GOOGLE doesn't do some of those things anymore, but the follower sheep still do.
Their "microservices" suffered from the same JVM overhead and to remedy this they are joining their functionalities together (initially they had 30-40).
9 times out of 10 it's because developers don't know how to properly design and index the underlying RDBMS. I've noticed there is a severe lack of knowledge of that for the average developer.
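A minimal illustration of that claim, using SQLite and a hypothetical schema: the same query goes from a full table scan to an index lookup once the obvious index exists, which is exactly the kind of fix that often makes a "we need microservices for performance" migration unnecessary.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN rows carry the human-readable detail in column 3.
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)   # full table scan, e.g. "SCAN orders"

con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)    # index lookup, e.g. "SEARCH ... USING INDEX idx_orders_customer"
```

On a table with millions of rows, that difference is routinely several orders of magnitude in query time.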
Azure Functions are technically a "serverless" product, but using them as y'all intended is a textbook definition of a "microservice" :)
Well, experimentally Oracle solved that problem, somewhat: you can now use CDS and *.so files for some parts of your application.
It probably doesn't eliminate every problem, but it helps a bit at least.
But it would have been easier to just use Apache Felix or the like to start all the applications in an OSGi container.
That would probably have saved something like 5-7 GiB of RAM.
That's plainly wrong. I get the gist of what you are saying and I more or less agree with it but you expressed it poorly.
Having API dependencies is not an issue. As long as the microservices don't touch each others data and only communicate with each other through their API boundaries microservices can and should build on top of each other.
In fact that's one of the core promises of the open source microservices architecture we are building (https://github.com/1backend/1backend).
I think your bad experiences are due to microservice apps which are unnecessarily fragmented into a lot of services. Sometimes, even when you respect service boundaries that can be a problem - when you have to release a bunch of services to ship a feature that's a sign that you have a distributed monolith on your hands.
I like to think of services, even my services, as third party ones I can't touch. When I view them this way the urge to tailor them to the current feature I'm hacking on lessens and I identify the correct microservice the given modification belongs to easier.
I'm not sure what you think side effects are, but I'm using the standard computer science definition you can look up on Wikipedia. If you have a microservice that modifies, e.g. some hidden state, it's a disaster waiting to happen. Having multiple microservices that have database side-effects will almost always end up with a race condition somewhere. Have fun debugging that.
If no one then what's the point of that service's existence?
> In computer science, a function or expression is said to have a side effect if it modifies some state outside its scope or has an observable interaction with its calling functions or the outside world besides returning a value. For example, a particular function might modify a global variable or static variable, modify one of its arguments, raise an exception, write data to a display or file, read data, or call other side-effecting functions.
Note "write data to a display or file". I think we agree that writing to a database falls under this definition, hence using terms like "side effecting" when talking about microservices is misleading.
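A toy version of that definition, with hypothetical names: `total` is pure, while `record_total` has a side effect because it mutates state outside its own scope, the way a service writing to a shared database does.

```python
ledger = {}  # stands in for shared state, e.g. a database

def total(items):
    # Pure: output depends only on input; no observable effect elsewhere.
    return sum(items)

def record_total(user, items):
    # Side-effecting: mutates `ledger`, observable by every other caller.
    # Two services doing this concurrently is where race conditions live.
    ledger[user] = total(items)

assert total([1, 2, 3]) == 6     # calling it changes nothing else
record_total("alice", [1, 2, 3])
assert ledger == {"alice": 6}    # the world has changed
```

Which is the crux of the disagreement above: by the textbook definition almost every useful service is side-effecting, so "microservices should have no side effects" only makes sense as "no *shared, hidden* state".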
This management of API boundaries is likely handled for you by an app, though, so from a user perspective the story is still "open netflix, enter password, watch movie".
I gotta ask, how is this realistic? A salient feature of most of the software I've worked on is that it has useful side effects.
Whether it should consume other microservices is less clear, and gets into the choreography vs. orchestration issue; choreography provides lower coupling, but may be less scalable.
Can we extend that logic to classes or interfaces? Accessing data operations through a well-established API is generally seen as a good thing and is the exact cure for spaghetti...
Service APIs also entail load balancing and decoupled deployments, so they eliminate unclear architecture that arises at the app level when trying to tune the whole for individual components. Particularly when a shared component exist across multiple systems.
For a generalized microservices architecture: layering is a bit of a misnomer as everything is loosely in the same 'service' layer... I'd also point out that in N-tiered applications application services or domain services calling other services at the same layer is seen as the solely approved channel for re-use, not an anti-pattern.
This was one of the main ideas behind the original definition of OOP. The original notion of "object" was very similar to our current notion of "service":
Objects received messages, including messages sent over the network. There was not supposed to be a clear distinction between local and remote services - by design. A lot of inter-computer stuff could be/was handled transparently.
Just like sometimes, in real-world C, “goto” is the right tool, even though arbitrary jumps of that kind are also a “recipe for spaghetti”.
I am not quite sure what you mean. A microservice with REST api that has POST method is not a microservice?
If two black boxes directly contact each other, then that also defeats the purpose. Microservices are not appealing unless they talk via message queues. The whole point of microservices was to handle scale independently for independent functions.
Where do you suggest storing that state if it needs to be persistent? The definition of microservices should not assume anything about how long I need to track my data. If two of your microservices are touching the same database fields, then that's the implementor's mistake.
a bunch of API endpoints written by different teams is a "microservice architecture"
Most people have enough trouble getting three methods in the same file to use the same argument semantics. Every service is an adventure unto itself.
We have a couple services that use something in the vein of GraphQL but some of the fields are calculated from other fields. If you have the derived field but not the source field you get garbage output and they don’t see the problem with this
Just out of curiosity, what alternatives are there to avoid API dependencies? Is it really possible to make non-trivial apps while avoiding internal dependencies?
At some level, is it really possible to have a truly decoupled system?
It matters a great deal how the boundaries are drawn. Generally, the more fragmented the microservices, the more API dependencies.
Also, look at the Bounded Context concept.
And Conway's Law certainly plays a role.
> At some level, is it really possible to have a truly decoupled system?
You cannot avoid all the API dependencies, but you can reduce their number.
APIs calling other APIs is...well, I'm having a hard time understanding how that could be construed as fundamentally wrong.
A large purpose of service oriented architecture is encapsulation. If no other microservices can make requests to your microservice, then you really haven’t encapsulated much.
If and when you need to support mobile devices independently of your web UI, you can have a mobile gateway. Same idea. This gateway is optimized to know how to handle mobile traffic realities like smaller download sizes, etc.
No, you definitely don't want microservices making synchronous requests to other microservices and depending on them that way.
But it still may be necessary for your services to depend on each other, and that's where you can allow that communication through asynchronous eventually consistent communication. Actor communications, queue submission/consumption, caching, etc.
Even in such cases you might want to move the bulk of processing to an asynchronous queue-based system, but part of the logic might need to be executed synchronously. Take authorising a credit card payment: you can process the payment asynchronously later, perhaps in bulk cron jobs like Apple iTunes does it, but the initial authorisation, which decides whether the purchase is successful, must be synchronous.
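A hedged sketch of that split, with illustrative names and an in-memory queue standing in for a real message broker: authorization happens synchronously on the request path, while capture is queued for a later batch worker.

```python
import queue

capture_queue = queue.Queue()  # stands in for a real message broker

def authorize(card: str, amount: int) -> bool:
    # Must be synchronous: the purchase succeeds or fails right now.
    # A toy limit stands in for the card network's response.
    return amount <= 1000

def purchase(card: str, amount: int) -> bool:
    if not authorize(card, amount):
        return False           # user sees the failure immediately
    capture_queue.put((card, amount))  # settle later, e.g. in a cron batch
    return True

def run_capture_batch() -> int:
    # Asynchronous worker: drains queued captures in bulk.
    settled = 0
    while not capture_queue.empty():
        capture_queue.get()
        settled += 1
    return settled

assert purchase("visa-4242", 500) is True
assert purchase("visa-4242", 5000) is False  # declined synchronously
```

The design choice is the same one the comment describes: only the decision the user is waiting on stays on the synchronous path; everything else tolerates eventual consistency.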
I just wish someone with "street cred" (or with a famous, recognizable name I could use for appeal to authority) could create a simple post saying "Hey, if you have a shared data store that all services depend on and are accessing directly, you are not doing microservices". "And you also don't have microservices if you have to update everything in one go as part of a "release"".
That way I could circulate it throughout the company and maybe get the point across. I've tried to argue unsuccessfully. After all, "we are doing K8s, so we have micro-services, each is a pod, duh!" No, you have a monolith, which happens to be running as multiple containers...
The best imagery I know for this picture is a two-headed ogre. It might have multiple heads, but one digestive system. Doesn't matter which head is doing the eating; ultimately you have the same shit. I've heard semi-famous people talk about this at conferences, but few articles.
So yes, you now have an authority that says doing that is bad.
It depends on what you want to debug. It is like unit test vs integration test. If you are finding a bug related to integration between multiple services, you definitely need to debug on multiple services.
I hate to burst your bubble, but you shouldn’t and can’t have truth working alongside systems that access it. Data is messy and tends toward dishonesty. The only way to get clean truth for your organization is by thoughtfully applying rules, cleaning and filtering as you go. The more micro your architecture is, the more this is true. Because there is no way 20 different teams are all going to have the same understanding of the business rules around what constitutes good, clean input data. Even if your company is very clear and well-documented about business and data rules, if you hand the same spec sheet to 20 different teams, you are going to get 20 variations on that spec.
The only way to get usable data that can be agreed upon by an entire company (or even business unit) is by separating your truth from your transactional data. That’s kind of the definition of a data warehouse.
If you let your transactional systems access and update data directly in your warehouse, you are in for a universe of pain.
I strongly agree with this assessment :)
I have posted a bit more on this nearby, but Apache Kafka is well positioned as a compromise to support both of those truths: an orthogonal data warehouse full of sanitized purity and chatty apps writing crappy data to their hearts' content.
By introducing a third system in between the data warehouse and transactional demands, Kafka decouples the communicating systems and introduces a clear separation of concerns for cross-system data communication (be they OLAP, or OLTP).
If your transactional data is crappy (mine is!), and you want your data warehouse pure (I do!), then Kafka can be a 'truthy' middle ground where compromises are made explicit and data digestion/transformation is explicitly mapped, and all clients can feast on data to their hearts' content.
Your data warehouse can suck facts from Kafka (with ETL on either side of the operation, or even integrated into Kafka if you so desire), and you can keep Kafka channels loaded with micro-"Truth"s (current accounts, current employees, etc). That way apps get basically real-time simplified access to the data warehouse while your data warehouse gets a client streaming story that's wicked scalable. And no coupling in between...
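For concreteness, here is an in-memory sketch of that decoupling pattern (no real Kafka, all names illustrative): apps append raw events to an append-only log, and the warehouse consumer applies its own ETL at its own pace via an independent offset, without producers and warehouse ever touching each other directly.

```python
class Topic:
    """A Kafka-ish append-only log with per-consumer-group offsets."""
    def __init__(self):
        self.log = []
        self.offsets = {}

    def produce(self, event: dict) -> None:
        self.log.append(event)

    def consume(self, group: str) -> list:
        # Each group reads from its own offset, independently of others.
        start = self.offsets.get(group, 0)
        self.offsets[group] = len(self.log)
        return self.log[start:]

topic = Topic()

# Chatty app writes crappy data to its heart's content:
topic.produce({"user": " Alice ", "amount": "10"})
topic.produce({"user": "bob", "amount": "5"})

# Warehouse-side ETL sanitizes on the way in, at its own pace:
warehouse = [
    {"user": e["user"].strip().lower(), "amount": int(e["amount"])}
    for e in topic.consume("warehouse")
]
```

The cleanup rules live in exactly one place (the warehouse's consumer), which is the "compromises made explicit" property the comment is describing.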
It's a different approach than some mainstream solutions, but IMO hits a nice goldilocks zone between application and service communication and making data warehousing in parallel realistic and digestible. YMMV, naturally :)